WO2022060370A1 - Responsive actions based on spatial input data - Google Patents

Responsive actions based on spatial input data

Info

Publication number
WO2022060370A1
Authority
WO
WIPO (PCT)
Prior art keywords
pointing device
input data
edge
spatial input
bounds
Prior art date
Application number
PCT/US2020/051792
Other languages
French (fr)
Inventor
Cyrille De Brebisson
Timothy James Wessman
Original Assignee
Hewlett-Packard Development Company, L.P.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett-Packard Development Company, L.P.
Priority to PCT/US2020/051792
Publication of WO2022060370A1

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/08 - Cursor circuits
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04812 - Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842 - Selection of displayed objects or displayed text elements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 - Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423 - Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1431 - Digital output to display device; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display using a single graphics controller
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2354/00 - Aspects of interface with display user
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00 - Aspects of the architecture of display systems
    • G09G2360/04 - Display device controller operating with a plurality of display units
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00 - Aspects of data communication
    • G09G2370/16 - Use of wireless transmission of display information
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00 - Aspects of data communication
    • G09G2370/24 - Keyboard-Video-Mouse [KVM] switch

Definitions

  • Block 602 includes computer-readable instructions that cause processor 670 to determine that cursor 116 has been moved in a direction to reach edge 118 of display 114 based on spatial input data received from pointing device 112 through first channel 120.
  • Block 604 includes computer-readable instructions that cause processor 670 to detect an edge gesture based on out-of-bounds spatial input data received from pointing device 112 through second channel 122. In various examples, the out-of-bounds spatial input data indicates additional spatial input in the direction.
  • Block 606 includes computer-readable instructions that cause processor 670 to perform a responsive action that is associated with the edge gesture.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Examples are described herein for leveraging out-of-bounds spatial input data to trigger various responsive actions. In various examples, a graphical element controlled by a pointing device may be detected at an edge of a display. Based on the detecting, the pointing device may be caused to provide out-of-bounds spatial input data. Based on the out-of-bounds spatial input data, user intent to trigger performance of a responsive action may be determined, and may trigger performance of the responsive action.

Description

RESPONSIVE ACTIONS BASED ON SPATIAL INPUT DATA
BACKGROUND
[0001] Various input devices may be used by users to input spatial data, thereby moving a graphical element such as a cursor or pointer around a display. These input devices are often referred to as “pointing devices” or “motion tracking pointing devices.” Perhaps the most common example of a pointing device is a computer mouse. Another example that is frequently found on laptop computers is a capacitive touchpad, which may be operated with a user’s finger, a stylus, etc.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] The present application may be more fully appreciated in connection with the following detailed description taken in conjunction with the accompanying drawings.
[0003] Fig. 1 illustrates an example environment in which aspects of the present disclosure may be implemented.
[0004] Fig. 2 depicts an example of how different components of Fig. 1 may interact to implement selected aspects of the present disclosure.
[0005] Fig. 3 depicts examples of responsive actions that may be performed based on detected edge gestures.
[0006] Fig. 4 is a flow diagram that illustrates an example method, in accordance with an example of the present disclosure.
[0007] Fig. 5 illustrates an example system with example executable instructions to perform example operations, in accordance with an example of the present disclosure.
[0008] Fig. 6 schematically depicts an example computer-readable medium with a processor, in accordance with an example of the present disclosure.
DETAILED DESCRIPTION
[0009] In many cases it is possible to operate a pointing device to continue inputting spatial data in the same direction after the cursor under its control has reached an edge of a display. If the computer system to which the pointing device is connected is equipped with multiple displays, this may cause the cursor to travel from one display to another. However, if there are not multiple displays, or if there are no more displays beyond the edge at which the cursor is located, the cursor may remain at the edge of the display no matter how much additional spatial input data indicative of the direction is received.
[0010] The position of the cursor on the display may be controlled at the operating system (“OS”) level, based on spatial input data generated by spatial input provided by a user at a pointing device. The OS in turn provides applications access to this cursor position. Once the cursor has reached the display’s edge and can go no farther, additional spatial input data that would otherwise move the cursor farther in the same direction — referred to herein as “out-of-bounds spatial input data” — may be disregarded by the OS and not made available to the applications.
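To make the preceding point concrete, the short Python sketch below shows, purely as an illustration and not as part of the disclosure, how an OS-style cursor update clamps the position to the display bounds so that any portion of a movement delta beyond an edge is simply discarded; the display dimensions are assumed values.

    DISPLAY_WIDTH, DISPLAY_HEIGHT = 1920, 1080   # assumed display size

    def apply_delta(x, y, dx, dy):
        """Apply a pointing-device delta, clamping to the display bounds.

        Any portion of (dx, dy) that would push the cursor past an edge is
        discarded; that discarded portion is the out-of-bounds input
        described above.
        """
        new_x = min(max(x + dx, 0), DISPLAY_WIDTH - 1)
        new_y = min(max(y + dy, 0), DISPLAY_HEIGHT - 1)
        return new_x, new_y

    # The cursor stays pinned at the right edge no matter how large dx is.
    print(apply_delta(1919, 500, dx=300, dy=0))   # -> (1919, 500)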
[0011] Examples are described herein for leveraging out-of-bounds spatial input data to trigger various responsive actions. In various examples, a determination may be made that a graphical element such as a cursor has reached an edge or boundary of a display. Even though the graphical element cannot be moved farther in the same direction, a user may continue to operate the pointing device to input spatial data in the same direction. If the resulting spatial input data received from the pointing device satisfies a criterion, such as a minimum velocity or distance, an “edge gesture” may be detected. This edge gesture may signify an intent of the user to trigger performance of a responsive action.
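One way such a criterion might be evaluated is sketched below. The accumulate-then-threshold structure, the sample format, and the numeric thresholds are assumptions made for illustration; the disclosure only requires that some criterion such as a minimum velocity or distance be satisfied.

    # Hypothetical edge-gesture criterion: accumulate out-of-bounds deltas
    # and test total distance and average velocity against thresholds.
    MIN_DISTANCE = 200      # device counts of out-of-bounds travel (assumed)
    MIN_VELOCITY = 1000     # device counts per second (assumed)

    def is_edge_gesture(samples):
        """samples: list of (dx, dy, dt_seconds) tuples received after the
        cursor reached the edge, all in the direction of that edge."""
        distance = sum(abs(dx) + abs(dy) for dx, dy, _ in samples)
        duration = sum(dt for _, _, dt in samples)
        if duration <= 0:
            return False
        velocity = distance / duration
        return distance >= MIN_DISTANCE and velocity >= MIN_VELOCITY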
[0012] A variety of different responsive actions may be performed in response to detection of an edge gesture. In some examples, the responsive action may be selected from a library of responsive actions based on a characteristic of the edge gesture, such as its location, duration, velocity, etc. For example, an edge gesture detected at one side (vertical boundary) or another of a display may cause a wireless pointing device to be wirelessly coupled to a different computer.
[0013] As another example, an edge gesture detected at a top or bottom of a display may trigger adjustment of a setting of an output device. For example, a speaker’s volume may be turned up or down, or the display’s brightness and/or contrast may be turned up or down. As another example, detecting an edge gesture at particular locations along a display’s boundary may open or launch various files, folders, and/or applications. As another example, detecting an edge gesture may cause a document to advance or retreat by a page, or by a number of pages that may be determined based on a velocity of the edge gesture.
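A library of responsive actions keyed on characteristics of the edge gesture could be as simple as the Python mapping below. The keys, the speed cutoff, the action descriptions, and the page-count rule are all hypothetical choices used to illustrate selecting an action by location and velocity.

    def pages_to_advance(velocity):
        # Faster gestures page farther through a document (assumed scaling).
        return max(1, int(velocity // 1500))

    # Hypothetical library mapping (edge, speed) to a responsive action.
    ACTION_LIBRARY = {
        ("left", "fast"):   lambda g: "couple pointing device to left-hand computer",
        ("right", "fast"):  lambda g: "couple pointing device to right-hand computer",
        ("top", "slow"):    lambda g: "raise speaker volume",
        ("bottom", "slow"): lambda g: "lower display brightness",
        ("right", "slow"):  lambda g: "advance document %d page(s)" % pages_to_advance(g["velocity"]),
    }

    def select_action(gesture):
        speed = "fast" if gesture["velocity"] > 2500 else "slow"
        handler = ACTION_LIBRARY.get((gesture["edge"], speed))
        return handler(gesture) if handler else None

    print(select_action({"edge": "right", "velocity": 1800}))  # advance document 1 page(s)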
[0014] As noted previously, an OS may disregard or not capture out-of-bounds spatial input, and reconfiguring an OS to capture out-of-bounds spatial input data may not be practical for a variety of reasons. Accordingly, in various examples, computer-readable instructions may be provided that implement a software application or module separate from, e.g., “on top of,” an OS. This application may be able to obtain spatial input data directly from the pointing device, e.g., through a side channel.
[0015] This application may detect when a graphical element has reached an edge of a display, and in response, may transmit a request to the pointing device to provide the out-of-bounds spatial input data directly to the application (whereas otherwise the spatial input data is ignored by the OS). Based on this out-of-bounds spatial input data, the application may trigger performance of a responsive action. In some examples, the application may detect additional out-of-bounds spatial input data in a different direction, e.g., signifying that the user has moved the graphical element inward away from the edge of the display. In response, the application may transmit another request to the pointing device to cease providing out-of-bounds spatial input data, avoiding unnecessary exchange of data.
[0016] Fig. 1 illustrates an example environment in which aspects of the present disclosure may be implemented. A computer 100 includes various components, including hardware 102, a hardware interface 104, an OS 106, software application(s) 108 (or “apps”) that execute on top of (e.g., are compiled to be compatible and/or are launched from) OS 106, and an edge gesture module 110. In other examples, all or a portion of edge gesture module 110 may be implemented by circuitry 113 (which may include memory if applicable) that is onboard pointing device 112. In yet other examples, edge gesture module 110 may be implemented as an integral part of OS 106. Computer 100 may take various forms, such as: a desktop computing device, a laptop computing device, a smart appliance such as a smart television (or a standard television equipped with a networked dongle with automated assistant capabilities), glasses of the user having a computing device, a virtual, augmented, and/or mixed reality computing device (collectively, “extended reality”), etc. Additional and/or alternative types of computers may be provided.
[0017] Hardware 102 may include various types of electronic components such as a processor/microprocessor, a graphical processing unit (“GPU”), a motherboard, memory, busses, input/output (“I/O”) ports, etc. Hardware interface 104 may include computer-readable instructions that allow operating system 106 and/or application(s) 108 to interact with hardware 102. Hardware interface 104 may include, for instance, computer-readable instructions sometimes referred to as “firmware” that control various hardware components, device drivers that enable operating system 106 to exchange data with various hardware/peripheral components, and so forth. Applications 108 may include any type of software application that may be executed on a computer, such as productivity applications (e.g., spreadsheet, word processing), communication applications (e.g., email, social media), web browsers, games, and so forth.
[0018] Computer 100 may be operably coupled with a variety of I/O devices, such as a keyboard (not depicted), a pointing device 112, a microphone (not depicted), various sensors, a display 114, etc. Pointing device 112 allows a user (not depicted) to input spatial data to OS 106. “Spatial input” refers to input from a user that identifies a location and/or movement. Pointing device 112 may take a variety of different forms, including but not limited to a computer mouse, a trackball, a joystick, a pointing stick (rubber nub or nipple mounted centrally in a computer keyboard), a finger tracking device, a capacitive touchpad, a touchscreen, a gaze detector integral with an extended reality device, etc. Some pointing devices may be connected to computer 100 using a wire or cable. Other pointing devices may be connected to computer 100 wirelessly, e.g., using technology such as Bluetooth. Yet other pointing devices may be integrated with computer 100. For instance, laptop computers are often equipped with integral pointing devices such as pointing sticks and/or capacitive touchpads.
[0019] Display 114 may be controlled by computer 100 to render graphics, including a graphical element 116 sometimes referred to as a “cursor” or “pointer” that is controlled by pointing device 112. Display 114 may include various edges (generically denoted as 118), such as a top edge 118T, a right edge 118R, a bottom edge 118B, and a left edge 118L.
[0020] Pointing device 112 may be operated by a user to generate spatial input data based on spatial input provided from the user. This spatial input data is generated by pointing device 112 and provided to OS 106, e.g., over a first or “primary” channel 120. The manner in which first channel 120 between pointing device 112 and OS 106 is implemented depends on the type of pointing device 112 that is used. If pointing device 112 is a personal system (“PS”)/2 pointing device, then first channel 120 may be a wired or wireless PS/2 communication pathway in which spatial input data generated at pointing device 112 are communicated to OS 106 through a device driver forming part of hardware interface 104. Similarly, if pointing device 112 is a Universal Serial Bus (“USB”) pointing device, then first channel 120 may take the form of a wired or wireless USB communication pathway in which spatial input data generated at pointing device 112 are communicated to OS 106 through a USB driver for pointing device 112.
[0021] OS 106 then can use this spatial input data to, for instance, alter a location on display 114 at which cursor 116 is rendered. OS 106 can also make this spatial input data available to application(s) 108 operating on top of OS 106. In other words, application(s) 108 often don’t have direct access to the spatial input data generated by pointing device 112, but instead access it through OS 106. Once cursor 116 has reached an edge 118 of display 114 and can go no farther, additional spatial input data that would move cursor 116 farther in the same direction — referred to herein as “out-of-bounds spatial input data” — while still provided by the pointing device 112, may be disregarded by OS 106 and not made available to application(s) 108.
[0022] Edge gesture module 110 may leverage this out-of-bounds spatial input data by, for instance, establishing, activating, or opening a second, “secondary,” or “side” channel 122 directly between pointing device 112 and edge gesture module 110. As used herein in association with channels 120-122, the terms “first,” “primary,” “second,” “secondary,” and “side” do not necessarily imply that one channel is dependent upon another. For example, second channel 122 may be implemented entirely separately from first channel 120, e.g., over a distinct communication medium.
[0023] In some examples, second channel 122 may be implemented using the same physical communication pathway as first channel 120, such as the PS/2 or USB pathway described previously. However, second channel 122 may be used more directly than first channel 120 to pass information between pointing device 112 and edge gesture module 110. In some examples, second channel 122 may bypass OS 106 altogether. In other examples, second channel 122 may be implemented as a logical interface and/or a logical device that may or may not be defined and/or facilitated by OS 106. In other examples, second channel 122 may be implemented using a different physical communication pathway from first channel 120. For example, if first channel 120 is implemented over one set of wire(s), second channel 122 may be implemented over a different set of wire(s), or even wirelessly (e.g., using Bluetooth).
[0024] By receiving out-of-bounds spatial input data through second channel 122, edge gesture module 110 may detect when the user makes an “edge gesture.” An edge gesture may be detected where the user continues to operate pointing device 112 in a direction that would otherwise move cursor 116 beyond edge 118 of display 114. An edge gesture may be associated, e.g., in a lookup table/library, with a variety of different responsive actions. Thus, by providing an edge gesture, a user can signal an intent to trigger performance of one of these various responsive actions.
[0025] Second channel 122 may be implemented in a variety of ways. In some examples, second channel 122 is implemented as a logical channel that piggybacks on the same physical communication channel that connects pointing device 112 to computer 100. In some examples in which pointing device 112 is a USB device, pointing device 112 may be defined logically as a “slave” USB device. A slave USB device may be split into multiple logical units, any of which can be leveraged as second channel 122. For example, a slave USB pointing device may be defined logically as a USB hub with multiple logical sub-devices attached (e.g., one logical sub-device for standard processing, another logical sub-device for extra data processing), as a single USB device with multiple “heads,” or as a human interface device (“HID”) with multiple logical interfaces. For example, second channel 122 may take the form of one logical interface of an HID device that is already used, for example, to perform firmware updates of pointing device 112.
[0026] As an illustrative example, a wireless USB pointing device 112 may operate pursuant to the following stack, starting from the most abstracted layers and ending at a physical layer:
(8) USB HID Interface System
(7) USB HID Protocol
(6) USB Device Protocol
(5) Virtual USB hardware on top of Bluetooth
(4) Bluetooth logical protocol
(3) Bluetooth data protocol
(2) Bluetooth packet protocol
(1) electromagnetic waves.
Any of layers (2)-(8) could be logically split into multiple logical communication pathways, with one of those multiple logical communication pathways being used as second channel 122. In some examples, layer (8) is split into two logical communication pathways, with one USB HID interface being used as second channel 122.
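As an illustration of the HID-interface variant, the sketch below uses the hidapi Python bindings to look for an extra, vendor-defined logical interface on a pointing device and open it as a side channel. The vendor and product identifiers, and the assumption that the extra interface can be recognized by a vendor-specific usage page, are illustrative and not taken from the disclosure.

    import hid  # hidapi bindings; illustrative only, not the disclosed implementation

    VENDOR_ID, PRODUCT_ID = 0x03F0, 0x1234   # hypothetical pointing-device IDs

    def find_side_channel_path():
        """Return the path of a vendor-defined HID interface, if one exists.

        Assumes the pointing device exposes a second logical interface on a
        vendor-specific usage page (0xFF00 or above) for out-of-bounds data.
        """
        for info in hid.enumerate(VENDOR_ID, PRODUCT_ID):
            if info.get("usage_page", 0) >= 0xFF00:
                return info["path"]
        return None

    path = find_side_channel_path()
    if path is not None:
        side_channel = hid.device()
        side_channel.open_path(path)       # plays the role of second channel 122
        side_channel.set_nonblocking(True)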
[0027] As noted previously, in other examples, second channel 122 may be implemented over a different physical communication pathway than first channel 120. For example, pointing device 112 may be equipped with a wireless transceiver (e.g., Bluetooth) that can be used to wirelessly exchange data with a remote device, such as a wireless transceiver of computer 100, even while pointing device 112 is otherwise operably coupled with computer 100 using wire(s).
[0028] In some examples, edge gesture module 110 may be distributed as part of or along with a device driver for pointing device 112. When pointing device 112 is plugged into computer 100 for the first time, computer 100 may install a device driver associated with pointing device 112, e.g., by retrieving the device driver from an online repository or by installing a device driver that is packaged with OS 106. During this process, edge gesture module 110 may also be installed on computer 100. In other examples, edge gesture module 110 may be implemented in whole or in part on pointing device 112.
[0029] As noted previously, while cursor 116 is moved within edges 118 of display, OS 106 may provide its current location to applications 108. Once cursor 116 reaches an edge, however, edge gesture module 110 may cause pointing device 112 to begin providing out-of-bounds spatial input data generated in response to actuation of pointing device 112 to edge gesture module 110. For example, edge gesture module 110 may transmit a request or command to pointing device 112, e.g., using USB or other similar protocols. This request or command may cause pointing device 112 to return out-of-bounds spatial information to edge gesture module 110. Edge gesture module 110 may then trigger performance of a responsive action. The responsive action may be performed by various components, such as by edge gesture module 110, OS 106, and/or an application 108.
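Such a request or command could be as small as a one-byte opcode carried in a vendor-defined report over the side channel. The report ID, opcodes, and function below are invented for illustration; the disclosure does not specify a particular command format.

    # Hypothetical one-byte commands sent to the pointing device over the
    # side channel, e.g., as a vendor-defined output report.
    REPORT_ID            = 0x02  # assumed vendor-defined report ID
    CMD_START_OOB_STREAM = 0x01  # begin sending out-of-bounds deltas
    CMD_STOP_OOB_STREAM  = 0x00  # stop sending out-of-bounds deltas

    def request_out_of_bounds(device, enable):
        """Ask the pointing device to start or stop streaming out-of-bounds
        spatial input data. `device` is any open side-channel handle that
        exposes a write(bytes) method (for example, a hidapi device)."""
        opcode = CMD_START_OOB_STREAM if enable else CMD_STOP_OOB_STREAM
        device.write(bytes([REPORT_ID, opcode]))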
[0030] Fig. 2 depicts an example of how different components of Fig. 1 may interact to implement selected aspects of the present disclosure. At block 202, OS 106 may await spatial input data from pointing device 112. At block 204, OS 106 may receive spatial data from pointing device 112 generated by user operation of pointing device 112. For example, a user may provide spatial input by moving an isotonic mouse across a flat surface, or by operating a movable component of an isometric mouse such as a trackball.
[0031] At block 206, a determination may be made of whether cursor 116 has reached an edge 118 of display 114. In Fig. 2 this determination is depicted as being performed by OS 106, which means the determination is performed by computer 100. However, in other examples this determination may be made by a different component of computer 100, such as by edge gesture module 110 that executes on top of OS 106 (as shown in Fig. 1 ). In yet other examples, the determination of block 206 may be performed in whole or in part on pointing device 112, e.g., based on data about a location of cursor 116 that is obtained by pointing device 112 from OS 106. In some examples, edge gesture module 110 may monitor the location of cursor 116 based on data about its location provided by OS 106. For instance, when edge gesture module 110 is installed, it may create a hook that intercepts mouse event messages from OS 106 before they reach an application 108. Alternatively, edge gesture module 110 may periodically poll OS 106 for a current location of cursor 116.
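A minimal polling approach, assuming a Windows host and the Win32 GetCursorPos and GetSystemMetrics calls, might look like the sketch below; the poll interval and the single-display edge test are illustrative simplifications.

    import ctypes
    import time

    user32 = ctypes.windll.user32          # Windows-only; illustrative sketch

    class POINT(ctypes.Structure):
        _fields_ = [("x", ctypes.c_long), ("y", ctypes.c_long)]

    SM_CXSCREEN, SM_CYSCREEN = 0, 1

    def cursor_at_edge():
        """Return which display edge (if any) the cursor currently touches."""
        pt = POINT()
        user32.GetCursorPos(ctypes.byref(pt))
        width = user32.GetSystemMetrics(SM_CXSCREEN)
        height = user32.GetSystemMetrics(SM_CYSCREEN)
        if pt.x <= 0:
            return "left"
        if pt.x >= width - 1:
            return "right"
        if pt.y <= 0:
            return "top"
        if pt.y >= height - 1:
            return "bottom"
        return None

    while True:                            # poll roughly every 50 ms
        edge = cursor_at_edge()
        if edge:
            print("cursor at", edge, "edge")   # hand off to the edge-gesture logic
        time.sleep(0.05)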
[0032] Whichever component performs the determination at block 206, if the answer is no, control passes back to block 208, and OS 106 renders cursor 116 at its new location. Then, control passes back to block 202, at which point OS 106 awaits additional spatial input data. However, if the answer at block 206 is yes, then control may pass to edge gesture module 110, as depicted in Fig. 2.
[0033] At block 210, edge gesture module 110 requests out-of-bounds spatial input data from pointing device 112. For example, edge gesture module 110 may transmit a request or command to pointing device 112 that causes pointing device 112 to provide new spatial input data it generates (in response to user operation) to edge gesture module 110. In some examples, and as shown at block 211, edge gesture module 110 may open, activate, and/or establish second channel 122 with pointing device 112. In other examples in which second channel 122 is already established, block 211 may be omitted and the request of block 210 may be relayed over this existing second channel 122.
[0034] Whether second channel 122 is newly opened at block 211 or already exists, at block 212, edge gesture module 110 may receive, from pointing device 112, out-of-bounds spatial input data. In some examples, pointing device 112 may provide edge gesture module 110 with updated spatial input data periodically, e.g., every one hundred milliseconds.
[0035] At block 214, edge gesture module 110 may determine whether the out-of-bounds spatial input data indicates movement away from the edge at which cursor 116 was detected at block 206. In other examples, OS 106 may continue to obtain/receive out-of-bounds spatial input data from pointing device 112, and may perform the determination of block 214 based on this out-of-bounds spatial input data.
[0036] If the answer at block 214 is yes, then at block 216, edge gesture module 110 may close second channel 122 that was opened at block 211. For example, to reduce an amount of data that is exchanged, edge gesture module 110 may transmit a command/request to pointing device 112 that causes pointing device 112 to cease providing out-of-bounds spatial input data to edge gesture module 110 through second channel 122. In other examples, second channel 122 may remain open and available for communication between edge gesture module 110 and pointing device 112, in which case block 216 may be omitted.
[0037] However, if the answer at block 214 is no, then at block 218, edge gesture module 110 may determine whether various criteria for detecting an edge gesture are met. For example, edge gesture module 110 may compare a velocity, direction, and/or distance conveyed by the out-of-bounds spatial input data with various thresholds that are to be satisfied in order to detect an edge gesture. Suppose the user moves the cursor to the edge and provides minimal additional spatial input in the same direction — that may not signify an intent of the user to trigger any responsive action. Instead, the user may, for instance, have slightly overshot a side scroll bar of an application 108 such as a web browser. Stronger, faster, and/or more deliberate out-of-bounds spatial input, on the other hand, may constitute an edge gesture.
[0038] If the answer at block 218 is no, then control may pass back to block 212. However, if the answer at block 218 is yes, then at block 220, a responsive action may be triggered. In Fig. 2, the responsive action is triggered by edge gesture module 110, but the action itself may be performed by any component, such as edge gesture module 110, OS 106, and/or an application 108 (not depicted in Fig. 2). A non-exhaustive set of example responsive actions is depicted and described in association with Fig. 3. In various examples, control may then pass back to block 212.
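Tying blocks 210 through 220 together, the edge-gesture side of the Fig. 2 interaction could be structured roughly as follows. The channel object and its start_oob(), stop_oob(), and read_delta() methods are hypothetical stand-ins for second channel 122, and the loop is a sketch rather than the claimed implementation.

    # Sketch of blocks 210-220 of Fig. 2, assuming a `channel` object with
    # hypothetical start_oob(), stop_oob(), and read_delta() methods.
    def handle_edge(channel, edge, is_edge_gesture, trigger_action):
        channel.start_oob()                               # blocks 210-211
        samples = []
        try:
            while True:
                dx, dy, dt = channel.read_delta()         # block 212
                if moves_away_from(edge, dx, dy):         # block 214: user pulled back
                    return
                samples.append((dx, dy, dt))
                if is_edge_gesture(samples):              # block 218
                    trigger_action(edge, samples)         # block 220
                    samples = []
        finally:
            channel.stop_oob()                            # block 216

    def moves_away_from(edge, dx, dy):
        return {"left": dx > 0, "right": dx < 0,
                "top": dy > 0, "bottom": dy < 0}[edge]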
[0039] Fig. 3 depicts non-exhaustive examples of responsive actions that may be performed based on detected edge gestures. In Fig. 3, display 114 is depicted once again with its constituent edges 118T, 118R, 118B, and 118L. The rectangular shape of display 114 is not meant to be limiting, and other display shapes are contemplated, with more or fewer than four edges. Moreover, the responsive actions and locations depicted in Fig. 3 are merely examples — any number of responsive actions may be triggered by edge gestures detected at any number of edge locations, including at corners.
[0040] In Fig. 3, if an edge gesture is detected on the left side of top edge 118T, or even in the top left corner, a favorite music application or playlist 330 may be triggered. If an edge gesture is detected near the center of top edge 118T or of bottom edge 118B, then a responsive action represented by the upwards chevron 332 or the downwards chevron 338 may be performed. For example, the responsive action may be adjusting a setting of an output device, such as the volume of a speaker, the brightness or contrast of display 114, etc. If an edge gesture is detected on the right side of top edge 118T, or even at the top right corner, a favorite file or folder 334 may be opened, launched, brought to the foreground, etc.
[0041] If an edge gesture is detected near the center of right edge 118R or left edge 118L, then a responsive action represented by the respective sideways chevron 336 or 340 may be performed. These responsive actions may include, for instance, adjusting the setting of an output device (as described previously), switching to a different application (e.g., by moving the current foreground application to the background and bringing a background application into the foreground), etc. In some examples, the responsive action represented by sideways chevron(s) 336 and/or 340 may include activation or deactivation of a peripheral device. For example, in a multiple-monitor configuration, moving cursor 116 from a first monitor to a second monitor (e.g., both connected to the same computer) may cause a peripheral associated with the first monitor, such as a webcam, to be deactivated. Similarly, a peripheral associated with the second monitor, such as a webcam, may be activated.
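Which of the Fig. 3 actions is chosen depends on where along the boundary the gesture occurs. A simple way to bucket the cursor's position into such regions is sketched below; the one-third split of each edge is an assumed convention, not something specified by the disclosure.

    def edge_region(edge, x, y, width, height):
        """Bucket a gesture at (x, y) on `edge` into a coarse region, e.g.,
        ('top', 'left') for the favorite-playlist gesture 330 in Fig. 3."""
        along = x / width if edge in ("top", "bottom") else y / height
        if along < 1 / 3:
            return (edge, "left" if edge in ("top", "bottom") else "upper")
        if along > 2 / 3:
            return (edge, "right" if edge in ("top", "bottom") else "lower")
        return (edge, "center")

    print(edge_region("top", x=120, y=0, width=1920, height=1080))   # ('top', 'left')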
[0042] In some examples, the responsive action associated with either chevron 336 or 340 (or chevron 332 or 338 in some examples) may include transitioning pointing device 112 from being wirelessly coupled with computer 100 to being wirelessly coupled with a different computer (with a different OS installed, for instance). For example, a user may configure edge gesture module 110 with information about how multiple computers such as rack servers are spatially arranged relative to each other. Based on this spatial information, edge gesture module 110 can detect when the user moves cursor 116 all the way to one side or the other, and can cause pointing device 112 to be transitioned (or to transition itself) to another computer that is selected based on its relative direction from computer 100.
[0043] Alternatively, in some examples in which the responsive action to be performed in response to an edge gesture is transitioning pointing device 112 from being operably coupled with one computer to being operably coupled with another, one physical communication pathway may be used for the first computer and another physical communication pathway may be used for the second computer. For example, pointing device 112 may be operably coupled with the first computer using wire(s), but may also include an onboard wireless transceiver (e.g., Bluetooth) that is not used when the user operates the first computer. When the user operates pointing device 112 to perform an edge gesture signifying intent to transition to the second computer, pointing device 112 may activate its onboard wireless transceiver to wirelessly couple pointing device 112 with a corresponding wireless transceiver on the second computing device. Thereafter, spatial input data generated by pointing device 112 may be transmitted wirelessly to an OS operating the second computer. If the user later performs an edge gesture to signify an intent to transition back to the first computer, then a similar sequence of operations may be performed in reverse.
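The spatial arrangement mentioned in the preceding examples could be captured as a small per-user configuration that maps a gesture direction to a target computer; the host names and the selection logic below are hypothetical.

    # Hypothetical arrangement configured by the user: which computer sits in
    # which direction relative to computer 100.
    ARRANGEMENT = {
        "left":  "rack-server-01",   # assumed host names
        "right": "rack-server-02",
    }

    def transition_target(edge):
        """Return the computer the pointing device should couple with when an
        edge gesture is detected at `edge`, or None if none is configured."""
        return ARRANGEMENT.get(edge)

    target = transition_target("right")
    if target is not None:
        # In practice the pointing device would be instructed, e.g., over the
        # side channel, to pair its wireless transceiver with `target`; the
        # details are device-specific and outside this sketch.
        print("switching pointing device to", target)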
[0044] Fig. 4 is a flow diagram that illustrates an example method 400, in accordance with an example of the present disclosure. For convenience, operations of method 400 will be described as being performed by a system configured with selected aspects of the present disclosure, such as computer 100 and/or pointing device 112 of Fig. 1. Other implementations may include additional operations beyond those illustrated in Fig. 4, may perform operation(s) of Fig. 4 in a different order and/or in parallel, and/or may omit any of the operations of Fig. 4.
[0045] At block 402, the system may detect a graphical element such as cursor 116 controlled by pointing device 112 at an edge 118 of display 114. For example, edge gesture module 110 may obtain position coordinates of cursor 116 from OS 106, and may determine based on those coordinates that cursor 116 has reached a boundary of a displayable area and can be moved no farther in the direction of the boundary. As noted previously, in some examples, the graphical element (e.g., cursor 116) is controlled by OS 106 until the graphical element reaches the edge of the display, at which point an application such as edge gesture module 110 executing on top of OS 106 may then obtain spatial input data from pointing device 112.
[0046] Based on the detecting at block 402, at block 404, the system may cause pointing device 112 to provide out-of-bounds spatial input data, e.g., to edge gesture module 110. For example, edge gesture module 110 may send a request or command to pointing device 112. Pointing device 112 and/or edge gesture module 110 may then establish second channel 122, e.g., as a USB HID device class, and pointing device 112 may provide, to edge gesture module 110, spatial input data that it generates based on spatial input provided by a user.
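One way to picture block 404 is the handshake sketched below, with the transport abstracted away. The `open_side_channel` helper and the request message are hypothetical stand-ins; an actual implementation might enumerate a USB HID interface exposed by pointing device 112, the details of which are not specified here.

```python
# Minimal sketch of block 404 with the transport stubbed out. The helper names
# and the request format are hypothetical, not a real driver API.
def open_side_channel(device_id: str):
    """Hypothetical stand-in for enumerating second channel 122 (e.g., a USB HID interface)."""
    print(f"side channel opened to pointing device {device_id}")
    return object()  # placeholder channel handle


def request_out_of_bounds_mode(channel) -> None:
    """Ask the pointing device to keep reporting movement while the cursor is pinned."""
    # In practice this could be a feature report or vendor-specific command.
    print("out-of-bounds reporting requested")


channel = open_side_channel("pointing-device-0")
request_out_of_bounds_mode(channel)
```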
[0047] Based on the out-of-bounds spatial input provided by pointing device 112, at block 406, the system, e.g., by way of edge gesture module 110, may determine user intent to trigger performance of a responsive action. In some examples, the determination of block 406 may be based on the out-of-bounds spatial input data indicating additional movement in a direction that caused the graphical element to reach the edge of the display. For example, when a user attempts to drag cursor 116 beyond a particular edge 118 of display 114, that movement may be detected as a particular edge gesture that is associated in a library with a particular responsive action.
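A rough sketch of the block 406 determination follows: continued out-of-bounds movement toward the edge that pinned cursor 116 is treated as an edge gesture. The velocity threshold shown is one possible refinement (compare claim 15); its value is arbitrary.

```python
# Minimal sketch of block 406: an edge gesture is inferred when out-of-bounds
# movement continues toward the edge. Threshold value is illustrative only.
VELOCITY_THRESHOLD = 3  # counts per report


def detect_edge_gesture(edge: str, dx: int, dy: int) -> bool:
    """Return True if out-of-bounds movement continues in the direction of `edge`."""
    toward_edge = {
        "left": -dx,
        "right": dx,
        "top": -dy,
        "bottom": dy,
    }[edge]
    return toward_edge >= VELOCITY_THRESHOLD


print(detect_edge_gesture("right", dx=8, dy=1))   # True: user keeps pushing right
print(detect_edge_gesture("right", dx=-2, dy=0))  # False: movement back into bounds
```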
[0048] Based on the determining of block 406, at block 408, the system, e.g., by way of edge gesture module 110, may trigger performance of the responsive action. For example, edge gesture module 110 may request that OS 106 perform an action such as launching a particular application 108, opening a particular file or folder, and so forth.
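Block 408 might be pictured as a simple dispatch from a detected gesture to a registered action, as sketched below. The registry keys, the example command, and the folder path are illustrative assumptions; in practice edge gesture module 110 may instead request that OS 106 perform the action.

```python
# Minimal sketch of block 408: dispatch a detected edge gesture to an action.
# The gesture keys, command, and path are placeholders, not the disclosure's.
import subprocess


def open_documents_folder() -> None:
    # Illustrative only: the command and path are assumptions.
    subprocess.Popen(["xdg-open", "/home/user/Documents"])


RESPONSIVE_ACTIONS = {
    "right-center": open_documents_folder,
    "top-center": lambda: print("adjusting output device setting"),
}


def trigger_responsive_action(gesture: str) -> None:
    action = RESPONSIVE_ACTIONS.get(gesture)
    if action is not None:
        action()


trigger_responsive_action("top-center")
```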
[0049] In some examples, a user may bring cursor 116 back within the boundaries of display 114 to resume normal operation of pointing device 112. For example, the system, e.g., by way of edge gesture module 110 or OS 106, may detect additional out-of-bounds spatial input data in a different direction. Based on the detected additional out-of-bounds spatial input data, the system may transmit a request to pointing device 112 to cease providing out-of-bounds spatial input data.
[0050] In some examples in which techniques described herein are employed with extended reality devices, the graphical element’s location may correspond to a direction of a user’s gaze. Thus, when the user moves his or her gaze to an edge or boundary of a display of an extended reality device, the graphical element’s relocation to the edge may be detected at block 402. At block 404, the extended reality device may be caused to capture additional movement of the user’s gaze beyond the edge or boundary of the extended reality display. This additional movement may generate out-of-bounds spatial input data that can be used at block 406 to detect the user’s intent to trigger performance of a responsive action, as described herein.
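The resumption behavior of paragraph [0049] could be sketched as follows, where out-of-bounds movement back toward the interior of display 114 triggers a request to stop out-of-bounds reporting. The `send_cease_request` hook is hypothetical.

```python
# Minimal sketch of paragraph [0049]: movement back into bounds ends
# out-of-bounds reporting. The cease-request hook is a placeholder.
def moving_away_from(edge: str, dx: int, dy: int) -> bool:
    """True if the out-of-bounds movement points back into the display."""
    away = {"left": dx, "right": -dx, "top": dy, "bottom": -dy}[edge]
    return away > 0


def send_cease_request() -> None:
    # Hypothetical: e.g., a message sent over second channel 122.
    print("pointing device asked to stop out-of-bounds reporting")


def maybe_resume_normal_operation(edge: str, dx: int, dy: int) -> None:
    if moving_away_from(edge, dx, dy):
        send_cease_request()


maybe_resume_normal_operation("right", dx=-4, dy=0)  # cursor dragged back toward the display
```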
[0051] Fig. 5 illustrates an example system 500 with executable instructions to perform example operations, in accordance with an example of the present disclosure. The system of Fig. 5 includes a processor 570 and memory 572 storing instructions (blocks 502-506) that, when executed, cause the example operations to be performed. In some examples, the instructions of blocks 502-506 are executed as an application such as edge gesture module 110 that is separate from operating system 106, and that may be implemented in whole or in part by computer 100 and/or by circuitry 113 onboard pointing device 112.
[0052] Block 502 includes computer-readable instructions that cause processor 570 to establish second channel 122 with pointing device 112 in response to a determination that, as a result of operation of pointing device 112, a graphical element such as cursor 116 has reached a display boundary (e.g., edge 118). Block 504 includes computer-readable instructions that cause processor 570 to detect, as an edge gesture, additional operation of the pointing device in a direction of the display boundary based on spatial input data received through second channel 122. Block 506 includes computer-readable instructions that cause processor 570 to trigger performance of a responsive action associated with the edge gesture in response to the additional operation.
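Tying blocks 502-506 together, a compressed end-to-end sketch might look like the following. The `SideChannel` stub, the canned reports, and the boundary handling are all illustrative; they are not the instructions of Fig. 5 themselves.

```python
# Minimal end-to-end sketch of blocks 502-506 with the channel and reports stubbed.
class SideChannel:
    """Stub for second channel 122; yields a few canned out-of-bounds reports."""

    def reports(self):
        yield (6, 0)   # continued movement toward the boundary
        yield (-3, 0)  # movement back into bounds


def run_edge_gesture_module(boundary: str = "right") -> None:
    channel = SideChannel()                        # block 502: establish the side channel
    for dx, dy in channel.reports():
        toward = dx if boundary == "right" else -dx
        if toward > 0:                             # block 504: additional operation toward the boundary
            print("edge gesture detected")
            print("responsive action triggered")   # block 506
            break


run_edge_gesture_module()
```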
[0053] Fig. 6 schematically depicts an example computer-readable medium 672 with a processor 670, in accordance with an example of the present disclosure. The computer-readable medium 672 may include instructions 674 which, when executed by processor 670, cause the processor to perform selected aspects of the present disclosure. Processor 670 and/or computer-readable medium 672 may be implemented on computer 100 and/or on pointing device 112.
[0054] Block 602 includes computer-readable instructions that cause processor 670 to determine that cursor 116 has been moved in a direction to reach edge 118 of display 114 based on spatial input data received from pointing device 112 through first channel 120. Block 604 includes computer-readable instructions that cause processor 670 to detect an edge gesture based on out-of-bounds spatial input data received from pointing device 112 through second channel 122. In various examples, the out-of-bounds spatial input data indicates additional spatial input in the direction. Block 606 includes computer-readable instructions that cause processor 670 to perform a responsive action that is associated with the edge gesture.
[0055] It shall be recognized, in light of the description provided, that the elements and procedures described above may be implemented in a computer environment using hardware, computer-readable instructions, firmware, and/or combinations of these. Although described specifically throughout the entirety of the instant disclosure, representative examples of the present disclosure have utility over a wide range of applications, and the above discussion is not intended and should not be construed to be limiting, but is offered as an illustrative discussion of aspects of the disclosure.

Claims

What is claimed is:
1. A method implemented using a processor, comprising:
detecting a graphical element controlled by a pointing device at an edge of a display;
based on the detecting, causing the pointing device to provide out-of-bounds spatial input data;
based on the out-of-bounds spatial input data, determining user intent to trigger performance of a responsive action; and
based on the determining, triggering performance of the responsive action.
2. The method of claim 1, wherein the determining is based on the out-of-bounds spatial input data indicating additional movement in a direction that caused the graphical element to reach the edge of the display.
3. The method of claim 2, comprising:
detecting additional out-of-bounds spatial input data in a different direction; and
based on the detected additional out-of-bounds spatial input data, transmitting a request to the pointing device to cease providing out-of-bounds spatial input data.
4. The method of claim 1, wherein the graphical element is controlled by an operating system until the graphical element reaches the edge of the display, and the determining is performed by an application that executes on top of the operating system.
5. The method of claim 4, wherein the triggering is performed by the application.
6. The method of claim 5, wherein the responsive action is performed by the operating system.
7. The method of claim 1, wherein the responsive action comprises transitioning the pointing device from being wirelessly coupled with a first computer operating a first operating system to being wirelessly coupled with a second computer operating a second operating system that is different from the first operating system.
8. A system comprising a processor and memory storing instructions that, in response to execution of the instructions by the processor, cause the processor to:
in response to a determination that, as a result of operation of a pointing device, a graphical element has reached a display boundary, establish a side channel with the pointing device;
based on spatial input data received through the side channel, detect, as an edge gesture, additional operation of the pointing device in a direction of the display boundary; and
in response to the additional operation, trigger performance of a responsive action associated with the edge gesture.
9. The system of claim 8, wherein the responsive action comprises:
opening a file or folder;
adjusting a setting of an output device; or
activating or deactivating a peripheral device.
10. The system of claim 8, wherein the side channel is established using a sub-device of a Universal Serial Bus (“USB”) hub that is associated with the pointing device.
11. The system of claim 8, wherein the side channel is established over a Universal Serial Bus (“USB”) human interface device (“HID”) interface.
12. The system of claim 8, wherein the instructions are executed as an application that is separate from an operating system that controls the system.
13. A non-transitory computer-readable medium comprising instructions that, in response to execution of the instructions by a processor, cause the processor to:
determine that a cursor has been moved in a direction to reach an edge of a display based on spatial input data received from a pointing device through a first channel;
detect an edge gesture based on out-of-bounds spatial input data received from the pointing device through a second channel, wherein the out-of-bounds spatial input data indicates additional spatial input in the direction; and
perform a responsive action that is associated with the edge gesture.
14. The non-transitory computer-readable medium of claim 13, comprising instructions to select the responsive action from a library of responsive actions based on a location of the edge of the display.
15. The non-transitory computer-readable medium of claim 13, comprising instructions to compare a velocity conveyed by the out-of-bounds spatial input data with a threshold, wherein the edge gesture is detected upon satisfaction of the threshold.
PCT/US2020/051792 2020-09-21 2020-09-21 Responsive actions based on spatial input data WO2022060370A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2020/051792 WO2022060370A1 (en) 2020-09-21 2020-09-21 Responsive actions based on spatial input data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2020/051792 WO2022060370A1 (en) 2020-09-21 2020-09-21 Responsive actions based on spatial input data

Publications (1)

Publication Number Publication Date
WO2022060370A1 true WO2022060370A1 (en) 2022-03-24

Family

ID=80776347

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/051792 WO2022060370A1 (en) 2020-09-21 2020-09-21 Responsive actions based on spatial input data

Country Status (1)

Country Link
WO (1) WO2022060370A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080168403A1 (en) * 2007-01-06 2008-07-10 Appl Inc. Detecting and interpreting real-world and security gestures on touch and hover sensitive devices
US20130249806A1 (en) * 2012-03-20 2013-09-26 Sony Corporation Method and apparatus for enabling touchpad gestures
US20140118281A1 (en) * 2012-10-26 2014-05-01 Cirque Corporation DETERMINING WHAT INPUT TO ACCEPT BY A TOUCH SENSOR AFTER INTENTIONAL AND ACCIDENTAL LIFT-OFF and SLIDE-OFF WHEN GESTURING OR PERFORMING A FUNCTION
US20150268789A1 (en) * 2014-03-18 2015-09-24 Pixart Imaging Inc. Method for preventing accidentally triggering edge swipe gesture and gesture triggering
US20160357388A1 (en) * 2015-06-05 2016-12-08 Apple Inc. Devices and Methods for Processing Touch Inputs Over Multiple Regions of a Touch-Sensitive Surface

Similar Documents

Publication Publication Date Title
US7802202B2 (en) Computer interaction based upon a currently active input device
US10133396B2 (en) Virtual input device using second touch-enabled display
KR102345039B1 (en) Disambiguation of keyboard input
JP5730667B2 (en) Method for dual-screen user gesture and dual-screen device
KR102061360B1 (en) User interface indirect interaction
EP3370140B1 (en) Control method and control device for working mode of touch screen
TWI512601B (en) Electronic device, controlling method thereof, and computer program product
US20120113044A1 (en) Multi-Sensor Device
US20140340308A1 (en) Electronic device and screen content sharing method
WO2018019050A1 (en) Gesture control and interaction method and device based on touch-sensitive surface and display
EP2776905B1 (en) Interaction models for indirect interaction devices
US10890982B2 (en) System and method for multipurpose input device for two-dimensional and three-dimensional environments
US20130293477A1 (en) Electronic apparatus and method for operating the same
KR102198596B1 (en) Disambiguation of indirect input
CN107037874B (en) Heavy press and move gestures
US20100271300A1 (en) Multi-Touch Pad Control Method
US9870061B2 (en) Input apparatus, input method and computer-executable program
WO2022060370A1 (en) Responsive actions based on spatial input data
CN110727522A (en) Control method and electronic equipment
KR20200019426A (en) Inferface method of smart touch pad and device therefor
KR20150122021A (en) A method for adjusting moving direction of displaying object and a terminal thereof
KR20150111651A (en) Control method of favorites mode and device including touch screen performing the same
US10042440B2 (en) Apparatus, system, and method for touch input
KR101371524B1 (en) Mouse Device For Controlling Remote Access
KR102205235B1 (en) Control method of favorites mode and device including touch screen performing the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20954298

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20954298

Country of ref document: EP

Kind code of ref document: A1