CN116888562A - Mapping a computer-generated touch pad to a content manipulation area

Info

Publication number: CN116888562A
Application number: CN202180074044.7A
Authority: CN (China)
Prior art keywords: touch pad, computer, limb, electronic device, location
Legal status: Pending
Other languages: Chinese (zh)
Inventors: A·G·保罗斯, B·R·布拉什尼茨基, N·P·乔, A·R·尤加南丹, A·M·博恩斯
Current Assignee: Apple Inc
Original Assignee: Apple Inc
Application filed by Apple Inc
Priority claimed from PCT/US2021/041928 (published as WO2022051033A1)
Publication of CN116888562A

Abstract

A method is performed at an electronic device having one or more processors, a non-transitory memory, a display, and a limb tracker. The method includes obtaining limb tracking data via the limb tracker. The method includes displaying a computer-generated representation of a touch pad spatially associated with a physical surface. The physical surface is viewable within the display along with a content manipulation area separate from the computer-generated representation of the touch pad. The method includes identifying a first location within the computer-generated representation of the touch pad based on the limb tracking data. The method includes mapping the first location to a corresponding location within the content manipulation area. The method includes displaying an indicator indicating the mapping. The indicator can overlay the corresponding location within the content manipulation area.

Description

Mapping a computer-generated touch pad to a content manipulation area
Technical Field
The present disclosure relates to mapping of visual areas, and in particular to input-driven mapping of visual areas.
Background
An electronic device may enable manipulation of displayed content based on input from an integrated input system, such as limb tracking input. However, manipulating content with input from an integrated input system introduces a number of problems. For example, when a physical object obscures a portion of a user's limb, the reliability of the limb tracking input correspondingly decreases. As another example, content located at a relatively large depth relative to the display (such as computer-generated objects in the scene background) may be difficult for the user to manipulate, introducing tracking inaccuracies.
Disclosure of Invention
According to some implementations, a method is performed at an electronic device having one or more processors, a non-transitory memory, a display, and a limb tracker. The method includes obtaining limb tracking data via the limb tracker. The method includes displaying, on the display, a computer-generated representation of a touch pad that is spatially associated with a physical surface. The physical surface is viewable within the display along with a content manipulation area that is separate from the computer-generated representation of the touch pad. The method includes identifying a first location within the computer-generated representation of the touch pad based on the limb tracking data. The method includes mapping the first location to a corresponding location within the content manipulation area. The method includes displaying, on the display, an indicator that indicates the mapping. The indicator may overlay the corresponding location within the content manipulation area.
According to some implementations, an electronic device includes one or more processors, a non-transitory memory, a display, a limb tracker, and one or more programs. The one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors, and the one or more programs include instructions for performing or causing performance of the operations of any of the methods described herein. According to some implementations, a non-transitory computer-readable storage medium has instructions stored therein that, when executed by one or more processors of an electronic device, cause the device to perform or cause performance of the operations of any of the methods described herein. According to some implementations, an electronic device includes means for performing or causing performance of the operations of any of the methods described herein. According to some implementations, an information processing apparatus for use in an electronic device includes means for performing or causing performance of the operations of any of the methods described herein.
Drawings
For a better understanding of the various embodiments, reference should be made to the following detailed description taken in conjunction with the accompanying drawings, in which like reference numerals designate corresponding parts throughout the figures thereof.
Fig. 1 is a block diagram of an example of a portable multifunction device according to some implementations.
Fig. 2 is a block diagram of an example of a finger wearable device according to some implementations.
Fig. 3A-3W are examples of an electronic device mapping a computer-generated touch pad to a content manipulation area according to some implementations.
FIG. 4 is an example of a flow chart of a method of mapping a computer-generated touch pad to a content manipulation area, according to some implementations.
FIG. 5 is another example of a flow chart of a method of mapping a computer-generated touch pad to a content manipulation area, according to some implementations.
Detailed Description
An electronic device including an integrated input system may manipulate the display of a computer-generated object based on input from the integrated input system. For example, the integrated input system includes a limb tracking input system and/or an eye tracking input system. As one example, based on limb tracking input from a limb tracking input system, an electronic device determines that a corresponding limb of a user meets a proximity threshold relative to a particular computer-generated object. Thus, the electronic device manipulates a particular computer-generated object based on limb tracking input. However, manipulating computer-generated objects with inputs from an integrated input system introduces a number of problems. For example, when a physical object obscures (e.g., blocks) a portion of a user's limb, the reliability of the limb tracking input correspondingly decreases. As another example, limited mobility of the user's eyes and instability of the user's limbs reduce the efficiency associated with manipulating computer-generated objects. As yet another example, computer-generated objects having a relatively high depth relative to the display (such as computer-generated objects located in a scene background) may be difficult for a user to manipulate, introducing limb tracking and eye tracking inaccuracies.
In contrast, various implementations disclosed herein include methods, electronic devices, and systems for mapping between a computer-generated representation of a trackpad and spatially distinct content manipulation regions based on limb tracking data. For example, in some implementations, the electronic device includes a communication interface provided for communicating with the finger wearable device, and the electronic device obtains finger manipulation data from the finger wearable device via the communication interface. The finger manipulation data may be included in the limb tracking data. As another example, in some implementations, the electronic device includes a computer vision system that outputs limb identification data (e.g., object identification relative to image data). The limb identification data may be included in limb tracking data.
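For illustration only, the following is a minimal sketch (in Swift, with assumed type and field names that are not part of this disclosure) of how the two sources of limb tracking data described above might be represented:

```swift
// Hypothetical types sketching the two limb tracking data sources described
// above; the names and fields are illustrative assumptions, not the patent's API.
struct FingerManipulationData {
    var position: (x: Double, y: Double, z: Double)  // from the finger wearable device's position sensor
    var contactIntensity: Double                     // from its contact intensity sensor
}

struct LimbIdentificationData {
    var jointPositions: [(x: Double, y: Double, z: Double)]  // estimated by a computer vision system
    var confidence: Double                                   // confidence of the object identification
}

// Limb tracking data may include either source, per the implementations above.
enum LimbTrackingData {
    case fingerWearable(FingerManipulationData)
    case computerVision(LimbIdentificationData)
}
```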
The electronic device displays an indicator indicating the mapping. For example, based on the finger manipulation data, the electronic device determines that the finger wearable device is hovering over or in contact with a center of the computer-generated representation of the touch pad. Thus, the electronic device displays the indicator at the center of the content manipulation area. In some implementations, by displaying an indication of the mapping, the electronic device provides feedback to the user that characterizes engagement of the finger wearable device with the content manipulation area. This feedback reduces the number of erroneous (e.g., undesired) inputs that the electronic device receives from the finger wearable device, thereby reducing the resource utilization of the electronic device.
Accordingly, various implementations disclosed herein enable a user to efficiently engage (e.g., manipulate) content within a content manipulation area. For example, when the finger manipulation data indicates that the finger wearable device is drawing a circle on the computer-generated representation of the touch pad, the electronic device displays a corresponding representation of the circle within the content manipulation area. Thus, the electronic device provides more control and accuracy when engaged with the content manipulation area than other devices.
The finger wearable device may be worn by a user's finger. In some implementations, the electronic device tracks the finger in six degrees of freedom (6 DOF) based on the finger manipulation data. Thus, even when the physical object obscures a portion of the finger wearable device, the electronic device continues to receive finger manipulation data from the finger wearable device. On the other hand, when the physical object obscures the limb of the user, other devices that utilize limb tracking cannot track the limb. In addition, the electronic device enables object engagement (e.g., disambiguation, manipulation, etc.) based on the finger manipulation data, regardless of the apparent distance between the finger wearable device and the content manipulation area, resulting in better control and accuracy.
Reference will now be made in detail to the implementations, examples of which are illustrated in the accompanying drawings. Numerous specific details are set forth in the following detailed description in order to provide a thorough understanding of the various described implementations. It will be apparent, however, to one of ordinary skill in the art that the various described implementations may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the implementations.
It will also be understood that, although the terms "first," "second," etc. may be used herein to describe various elements in some cases, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first contact may be named a second contact, and similarly, a second contact may be named a first contact without departing from the scope of the various described implementations. The first contact and the second contact are both contacts, but they are not the same contact unless the context clearly indicates otherwise.
The terminology used in the description of the various embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of various described implementations and in the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term "if" is optionally construed to mean "when" or "upon" or "in response to determining" or "in response to detecting," depending on the context. Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" is optionally construed to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]," depending on the context.
A person may interact with and/or perceive a physical environment or physical world without resorting to an electronic device. The physical environment may include physical features, such as physical objects or surfaces. An example of a physical environment is a physical forest that includes physical plants and animals. A person may directly perceive and/or interact with a physical environment through various means, such as hearing, sight, taste, touch, and smell. In contrast, a person may interact with and/or perceive a fully or partially simulated extended reality (XR) environment using an electronic device. The XR environment may include mixed reality (MR) content, augmented reality (AR) content, virtual reality (VR) content, and so forth. With an XR system, some of a person's physical movements, or representations thereof, may be tracked and, in response, characteristics of virtual objects simulated in the XR environment may be adjusted in a manner that complies with at least one law of physics. For example, the XR system may detect movement of a user's head and adjust the graphical content and auditory content presented to the user (similar to how such views and sounds would change in a physical environment). As another example, the XR system may detect movement of an electronic device (e.g., a mobile phone, tablet, laptop, etc.) presenting the XR environment, and adjust the graphical content and auditory content presented to the user (similar to how such views and sounds would change in a physical environment). In some cases, the XR system may adjust characteristics of the graphical content in response to other inputs, such as representations of physical motions (e.g., voice commands).
Many different types of electronic systems may enable a user to interact with and/or perceive an XR environment. A non-exclusive list of examples includes heads-up displays (HUDs), head-mounted systems, projection-based systems, windows or vehicle windshields with integrated display capability, displays formed as lenses to be placed on a user's eyes (e.g., contact lenses), headphones/earphones, input systems with or without haptic feedback (e.g., wearable or handheld controllers), speaker arrays, smartphones, tablets, and desktop/laptop computers. A head-mounted system may have an opaque display and one or more speakers. Other head-mounted systems may be configured to accept an opaque external display (e.g., a smartphone). A head-mounted system may include one or more image sensors for capturing images or video of the physical environment, and/or one or more microphones for capturing audio of the physical environment. Instead of an opaque display, a head-mounted system may have a transparent or translucent display. The transparent or translucent display may have a medium through which light is directed to a user's eyes. The display may utilize various display technologies, such as uLED, OLED, LED, liquid crystal on silicon, laser scanning light sources, digital light projection, or combinations thereof. Optical waveguides, optical reflectors, holographic media, optical combiners, combinations thereof, or other similar technologies may be used for the medium. In some implementations, the transparent or translucent display may be selectively controlled to become opaque. Projection-based systems may utilize retinal projection technology that projects graphical images onto a user's retina. Projection-based systems may also project virtual objects into the physical environment (e.g., as a hologram or onto a physical surface).
Fig. 1 is a block diagram of an example of a portable multifunction device 100 (also sometimes referred to herein as an "electronic device 100" for brevity) according to some implementations. The electronic device 100 includes memory 102 (which optionally includes one or more computer-readable storage media), a memory controller 122, one or more processing units (CPUs) 120, a peripheral interface 118, an input/output (I/O) subsystem 106, speakers 111, a display system 112, an Inertial Measurement Unit (IMU) 130, an image sensor 143 (e.g., a camera), a contact strength sensor 165, an audio sensor 113 (e.g., a microphone), an eye tracking sensor 164 (e.g., included within a head-mounted device (HMD)), a limb tracking sensor 150, and other input or control devices 116. In some implementations, the electronic device 100 corresponds to one of a mobile phone, a tablet, a laptop, a wearable computing device, a head-mounted device (HMD), a head-mounted housing (e.g., the electronic device 100 slides into or is otherwise attached to the head-mounted housing), and so forth. In some implementations, the head-mounted housing is shaped to form a receiver for receiving the electronic device 100 with a display.
In some implementations, peripheral interface 118, one or more processing units 120, and memory controller 122 are optionally implemented on a single chip, such as chip 103. In some other implementations, they are optionally implemented on separate chips.
The I/O subsystem 106 couples input/output peripheral devices on the electronic device 100, such as the display system 112 and other input or control devices 116, to the peripheral device interface 118. The I/O subsystem 106 optionally includes a display controller 156, an image sensor controller 158, an intensity sensor controller 159, an audio controller 157, an eye tracking controller 160, one or more input controllers 152 for other input or control devices, an IMU controller 132, a limb tracking controller 180, a privacy subsystem 170, and a communication interface 190. One or more input controllers 152 receive electrical signals from/transmit electrical signals to other input or control devices 116. Other input control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and the like. In some alternative implementations, one or more input controllers 152 are optionally coupled with (or not coupled with) any of the following: a keyboard, an infrared port, a Universal Serial Bus (USB) port, a stylus, a finger wearable device, and/or a pointing device such as a mouse. The one or more buttons optionally include an up/down button for volume control of the speaker 111 and/or the audio sensor 113. The one or more buttons optionally include a push button. In some implementations, other input or control devices 116 include a positioning system (e.g., GPS) that obtains information regarding the position and/or orientation of electronic device 100 relative to a particular object. In some implementations, other input or control devices 116 include depth sensors and/or time-of-flight sensors that acquire depth information characterizing a particular object.
The display system 112 provides an input interface and an output interface between the electronic device 100 and a user. The display controller 156 receives electrical signals from and/or transmits electrical signals to the display system 112. The display system 112 displays visual output to a user. Visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively, "graphics"). In some implementations, some or all of the visual output corresponds to a user interface object. As used herein, the term "affordance" refers to a user-interactive graphical user interface object (e.g., a graphical user interface object configured to respond to input directed to the graphical user interface object). Examples of user interactive graphical user interface objects include, but are not limited to, buttons, sliders, icons, selectable menu items, switches, hyperlinks, or other user interface controls.
The display system 112 may have a touch-sensitive surface, sensor, or set of sensors that receive input from a user based on haptic and/or tactile contact. The display system 112 and the display controller 156 (along with any associated modules and/or sets of instructions in the memory 102) detect contact (and any movement or interruption of the contact) on the display system 112 and translate the detected contact into interactions with user interface objects (e.g., one or more soft keys, icons, web pages, or images) displayed on the display system 112. In an exemplary implementation, the point of contact between the display system 112 and the user corresponds to a finger or finger wearable device of the user.
Display system 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, but other display technologies are used in other implementations. Display system 112 and display controller 156 optionally detect contact and any movement or interruption thereof using any of a variety of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with display system 112.
The user optionally uses any suitable object or appendage, such as a stylus, finger wearable device, finger, etc., to contact the display system 112. In some implementations, the user interface is designed to work with finger-based contacts and gestures, which may not be as accurate as stylus-based input due to the large contact area of the finger on the touch screen. In some implementations, the electronic device 100 translates the finger-based coarse input into a precise pointer/cursor position or command for performing the action desired by the user.
The speaker 111 and the audio sensor 113 provide an audio interface between the user and the electronic device 100. Audio circuitry receives audio data from the peripheral interface 118, converts the audio data into an electrical signal, and transmits the electrical signal to the speaker 111. The speaker 111 converts the electrical signal into human-audible sound waves. The audio circuitry also receives electrical signals converted from sound waves by the audio sensor 113 (e.g., a microphone). The audio circuitry converts the electrical signals into audio data and transmits the audio data to the peripheral interface 118 for processing. The audio data is optionally retrieved from and/or transmitted to the memory 102 and/or RF circuitry by the peripheral interface 118. In some implementations, the audio circuitry further includes a headset jack. The headset jack provides an interface between the audio circuitry and removable audio input/output peripherals, such as output-only headphones or a headset having both an output (e.g., a monaural or binaural headphone) and an input (e.g., a microphone).
An Inertial Measurement Unit (IMU) 130 includes an accelerometer, gyroscope, and/or magnetometer to measure various force, angular rate, and/or magnetic field information relative to the electronic device 100. Thus, according to various implementations, the IMU 130 detects one or more position change inputs of the electronic device 100, such as the electronic device 100 being rocked, rotated, moved in a particular direction, and so forth.
The image sensor 143 captures still images and/or video. In some implementations, the image sensor 143 is located on the back of the electronic device 100, opposite the touch screen on the front of the electronic device 100, so that the touch screen can be used as a viewfinder for still image and/or video image acquisition. In some implementations, another image sensor 143 is located on the front side of the electronic device 100 such that an image of the user is acquired (e.g., for self-photographing, for video conferencing while the user views other video conference participants on a touch screen, etc.). In some implementations, the image sensor is integrated within the HMD.
The contact strength sensor 165 detects the strength of a contact on the electronic device 100 (e.g., a touch input on a touch-sensitive surface of the electronic device 100). The contact intensity sensor 165 is coupled to an intensity sensor controller 159 in the I/O subsystem 106. The contact strength sensor 165 optionally includes one or more piezoresistive strain gauges, capacitive force sensors, electrical force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other strength sensors (e.g., sensors for measuring force (or pressure) of a contact on a touch-sensitive surface). The contact strength sensor 165 receives contact strength information (e.g., pressure information or a surrogate for pressure information) from the physical environment. In some implementations, at least one contact intensity sensor 165 is juxtaposed or adjacent to a touch-sensitive surface of the electronic device 100. In some implementations, at least one contact strength sensor 165 is located on a side of the electronic device 100.
The eye tracking sensor 164 detects an eye gaze of a user of the electronic device 100 and generates eye tracking data indicative of the user's eye gaze. In various implementations, the eye tracking data includes data indicating a fixation point (e.g., a point of regard) of the user on a display panel, such as a display panel within a head-mounted device (HMD), head-mounted housing, or heads-up display.
The limb tracking sensor 150 acquires limb tracking data indicative of the position of the limb of the user. For example, in some implementations, the limb tracking sensor 150 corresponds to a hand tracking sensor that obtains hand tracking data indicative of the position of a user's hand or finger within a particular object. In some implementations, the limb tracking sensor 150 utilizes computer vision techniques to estimate the pose of the limb based on the camera images.
In various implementations, the electronic device 100 includes a privacy subsystem 170 that includes one or more privacy setting filters associated with user information, such as user information included in limb tracking data, eye gaze data, and/or body position data associated with a user. In some implementations, the privacy subsystem 170 selectively prevents and/or limits the electronic device 100 or portions thereof from acquiring and/or transmitting user information. To this end, the privacy subsystem 170 receives user preferences and/or selections from the user in response to prompting the user for user preferences and/or selections. In some implementations, the privacy subsystem 170 prevents the electronic device 100 from acquiring and/or transmitting user information unless and until the privacy subsystem 170 acquires informed consent from the user. In some implementations, the privacy subsystem 170 anonymizes (e.g., scrambles or obfuscates) certain types of user information. For example, the privacy subsystem 170 receives user input specifying which types of user information the privacy subsystem 170 anonymizes. As another example, the privacy subsystem 170 anonymizes certain types of user information that may include sensitive and/or identifying information independent of user designation (e.g., automatically).
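To make the filtering behavior concrete, below is a minimal sketch with assumed names, and with a precision-coarsening step standing in for anonymization; it is not the privacy subsystem 170 itself:

```swift
// Assumed sketch of privacy-setting filters; the categories and the
// coarsening step are illustrative only.
enum UserInfoKind: Hashable { case limbTracking, eyeGaze, bodyPosition }

struct PrivacyFilter {
    var informedConsent: Set<UserInfoKind> = []  // granted by the user when prompted
    var anonymized: Set<UserInfoKind> = []       // kinds the user chose to obfuscate

    /// Returns nil (blocked), an obfuscated payload, or the payload unchanged.
    func filter(_ kind: UserInfoKind, payload: [Double]) -> [Double]? {
        guard informedConsent.contains(kind) else { return nil }   // block until consent is obtained
        if anonymized.contains(kind) {
            return payload.map { ($0 * 100).rounded() / 100 }      // e.g., coarsen precision
        }
        return payload
    }
}
```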
The electronic device 100 includes a communication interface 190 provided for communication with a finger wearable device, such as the finger wearable device 200 shown in fig. 2 or the finger wearable device 320 in fig. 3A-3W. For example, the communication interface 190 corresponds to one of a bluetooth interface, an IEEE 802.11x interface, a Near Field Communication (NFC) interface, and the like. According to various implementations, the electronic device 100 obtains finger manipulation data from a finger wearable device via the communication interface 190, as will be described further below.
Fig. 2 is a block diagram of an example of a finger wearable device 200. The finger wearable device 200 includes memory 202 (which optionally includes one or more computer-readable storage media), memory controller 222, one or more processing units (CPUs) 220, peripheral interface 218, RF circuitry 208, and input/output (I/O) subsystem 206. These components optionally communicate via one or more communication buses or signal lines 203. Those of ordinary skill in the art will appreciate that the finger wearable device 200 shown in fig. 2 is one example of a finger wearable device, and that the finger wearable device 200 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of these components. The various components shown in fig. 2 are implemented in hardware, software, firmware, or any combination thereof (including one or more signal processing circuits and/or application specific integrated circuits).
The finger wearable device 200 includes a power system 262 for powering the various components. The power system 262 optionally includes a power management system, one or more power sources (e.g., a battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)), and any other components associated with the generation, management, and distribution of power in the portable device and/or portable accessory.
Memory 202 optionally includes high-speed random access memory, and also optionally includes non-volatile memory, such as one or more flash memory devices or other non-volatile solid-state memory devices. Access to memory 202 by other components of finger wearable device 200, such as CPU 220 and peripheral interface 218, is optionally controlled by memory controller 222.
Peripheral interface 218 may be used to couple input and output peripherals of finger wearable device 200 to CPU 220 and memory 202. The one or more processors 220 run or execute various software programs and/or sets of instructions stored in the memory 202 to perform various functions of the finger wearable device 200 and process data.
In some implementations, peripheral interface 218, CPU 220, and memory controller 222 are optionally implemented on a single chip, such as chip 204. In some implementations, they are implemented on separate chips.
The RF (radio frequency) circuitry 208 receives and transmits RF signals, also referred to as electromagnetic signals. The RF circuitry 208 converts electrical signals to/from electromagnetic signals and communicates with the electronic device 100 or the electronic device 310, communication networks, and/or other communication devices via the electromagnetic signals. The RF circuitry 208 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a subscriber identity module (SIM) card, memory, and so forth. The RF circuitry 208 optionally communicates via wireless communication with networks, such as the Internet (also referred to as the World Wide Web (WWW)), an intranet, and/or a wireless network, such as a cellular telephone network, a wireless local area network (LAN), and/or a metropolitan area network (MAN), and other devices. The wireless communication optionally uses any of a plurality of communication standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11ac, IEEE 802.11ax, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), voice over Internet Protocol (VoIP), Wi-MAX, protocols for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
The I/O subsystem 206 couples input/output peripherals on the finger wearable device 200, such as other input or control devices 216, with the peripheral interface 218. The I/O subsystem 206 optionally includes one or more position sensor controllers 258, one or more intensity sensor controllers 259, a haptic feedback controller 261, and one or more other input controllers 260 for other input or control devices. One or more other input controllers 260 receive electrical signals from/transmit electrical signals to other input or control devices 216. Other input or control devices 216 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slide switches, click wheels, and the like. In some implementations, other input controllers 260 are optionally coupled to (or not coupled to) any of the following: an infrared port and/or a USB port.
In some implementations, the finger wearable device 200 includes one or more position sensors 266 that output position data associated with the finger wearable device 200. The position data indicates a position, orientation, or movement of the finger wearable device 200, such as a rotational or translational movement of the finger wearable device 200. For example, the position sensor 266 includes an Inertial Measurement Unit (IMU) that provides 3D rotation data, such as roll, pitch, and yaw information. To this end, the IMU may include a combination of accelerometers, gyroscopes, and magnetometers. As another example, the position sensor 266 includes a magnetic sensor that provides 3D position data and/or 3D orientation data (such as the position of the finger wearable device 200). For example, the magnetic sensor measures a weak magnetic field in order to determine the position of the finger wearable device 200.
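For illustration, the position data described above might take a shape like the following sketch; the type and field names are assumptions:

```swift
// Illustrative shape of the position data output by the position sensor 266.
struct PositionSample {
    var roll: Double, pitch: Double, yaw: Double     // 3D rotation data from the IMU (radians)
    var position: (x: Double, y: Double, z: Double)  // 3D position data, e.g., from the magnetic sensor (meters)
    var timestamp: Double                            // sample time (seconds)
}
```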
In some implementations, the finger wearable device 200 includes one or more contact intensity sensors 268 for detecting the intensity (e.g., force or pressure) of a contact of a finger wearing the finger wearable device 200 on a physical object. The one or more contact intensity sensors 268 output contact intensity data associated with the finger wearable device 200. As one example, the contact intensity data indicates the force or pressure of a tap gesture associated with a finger, wearing the finger wearable device 200, tapping on the surface of a physical table. The one or more contact intensity sensors 268 may include an interferometer. The one or more contact intensity sensors 268 may include one or more piezoresistive strain gauges, capacitive force sensors, electrical force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors.
The finger wearable device 200 optionally includes one or more haptic output generators 263 for generating haptic outputs on the finger wearable device 200. In some implementations, the term "haptic output" refers to a physical displacement of an accessory (e.g., the finger wearable device 200) of an electronic device (e.g., the electronic device 100) relative to a previous position of the accessory, a physical displacement of a component of the accessory relative to another component of the accessory, or a displacement of the component relative to a center of mass of the accessory that is to be detected by the user with the user's sense of touch. For example, in situations where the accessory or a component of the accessory is in contact with a surface of the user that is sensitive to touch (e.g., a finger, palm, or other portion of the user's hand), the haptic output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in a physical characteristic of the accessory or component of the accessory. For example, movement of a component (e.g., the housing of the finger wearable device 200) is optionally interpreted by the user as a "click" of a physical actuator button. In some cases, the user will feel a tactile sensation, such as a "click," even when there is no movement of a physical actuator button associated with the finger wearable device that is physically pressed (e.g., displaced) by the user's movements. While such interpretations of touch by a user will be subject to the individualized sensory perceptions of the user, many sensory perceptions of touch are common to a large majority of users. Thus, when a haptic output is described as corresponding to a particular sensory perception of a user (e.g., a "click"), unless otherwise stated, the generated haptic output corresponds to a physical displacement of the electronic device or a component thereof that will generate the described sensory perception of a typical (or average) user.
Fig. 2 shows a haptic output generator 263 coupled to a haptic feedback controller 261. The tactile output generator 263 optionally includes one or more electroacoustic devices such as speakers or other audio components, and/or electromechanical devices that convert energy into linear motion such as motors, solenoids, electroactive polymers, piezoelectric actuators, electrostatic actuators, or other tactile output generating components (e.g., components that convert electrical signals into tactile output on an electronic device). The haptic output generator 263 receives the haptic feedback generation instruction from the haptic feedback system 234 and generates a haptic output on the finger wearable device 200 that can be perceived by the user of the finger wearable device 200.
In some implementations, the software components stored in memory 202 include an operating system 226, a communication system (or instruction set) 228, a location system (or instruction set) 230, a contact strength system (or instruction set) 232, a haptic feedback system (or instruction set) 234, and a gesture interpretation system (or instruction set) 236. Further, in some implementations, the memory 202 stores device/global internal states associated with the finger wearable device. The device/global internal state includes one or more of the following: sensor status, including information obtained from various sensors of the finger wearable device and other input or control devices 216; a position state including information regarding a position (e.g., position, orientation, tilt, roll, and/or distance) of the finger wearable device relative to an electronic device (e.g., electronic device 100); and location information about the absolute location of the finger wearable device.
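As a rough illustration of the device/global internal state enumerated above (field names and types are assumptions):

```swift
// A sketch of the device/global internal state described above.
struct FingerWearableInternalState {
    var sensorState: [String: Double]                 // readings from sensors and other input or control devices 216
    var position: (x: Double, y: Double, z: Double)   // position relative to the electronic device
    var orientation: (tilt: Double, roll: Double)     // orientation relative to the electronic device
    var distanceToElectronicDevice: Double
    var absoluteLocation: (latitude: Double, longitude: Double)?  // location information, if available
}
```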
The operating system 226 includes various software components, and/or drivers for controlling and managing general system tasks (e.g., memory management, power management, etc.), and facilitates communication between the various hardware and software components.
The communication system 228 facilitates communication with other devices (e.g., the electronic device 100 or the electronic device 310) and further includes various software components (e.g., for processing data received by the RF circuitry 208) adapted to be directly coupled to the other devices or indirectly coupled to the other devices via a network (e.g., the internet, a wireless LAN, etc.).
The location system 230 optionally detects location information about the finger wearable device 200 in combination with location data from one or more location sensors 266. The location system 230 optionally includes software components for performing various operations related to detecting the location of the finger wearable device 200 and detecting changes in the location of the finger wearable device 200 in a particular frame of reference. In some implementations, the location system 230 detects a location state of the finger wearable device 200 relative to the electronic device and detects a change in the location state of the finger wearable device 200 relative to the electronic device. As noted above, in some implementations, the electronic device 100 or 310 uses information from the location system 230 to determine the location status of the finger wearable device 200 relative to the electronic device, and changes in the location status of the finger wearable device 200.
The contact intensity system 232 optionally detects contact intensity information associated with the finger wearable device 200 in conjunction with contact intensity data from the one or more contact intensity sensors 268. The contact intensity system 232 includes software components for performing various operations related to the detection of contact, such as detecting the intensity and/or duration of a contact between the finger wearable device 200 and a desktop. Determining movement of the point of contact, which is represented by a series of contact intensity data, optionally includes determining the speed (magnitude), velocity (magnitude and direction), and/or acceleration (a change in magnitude and/or direction) of the point of contact.
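A minimal sketch of deriving the speed, velocity, and acceleration of the point of contact from successive samples, as described above; the sample type and the finite-difference approach are assumptions:

```swift
// Deriving contact-point motion from a series of samples; strictly increasing
// timestamps are assumed.
struct ContactSample { var x: Double; var y: Double; var t: Double }

func motion(of samples: [ContactSample])
    -> (velocity: (dx: Double, dy: Double), speed: Double, acceleration: (dx: Double, dy: Double))? {
    guard samples.count >= 3 else { return nil }
    let a = samples[samples.count - 3]
    let b = samples[samples.count - 2]
    let c = samples[samples.count - 1]
    let v1 = (dx: (b.x - a.x) / (b.t - a.t), dy: (b.y - a.y) / (b.t - a.t))  // earlier velocity
    let v2 = (dx: (c.x - b.x) / (c.t - b.t), dy: (c.y - b.y) / (c.t - b.t))  // latest velocity (magnitude and direction)
    let speed = (v2.dx * v2.dx + v2.dy * v2.dy).squareRoot()                 // speed (magnitude only)
    let dt = c.t - b.t
    let accel = (dx: (v2.dx - v1.dx) / dt, dy: (v2.dy - v1.dy) / dt)         // change in velocity over time
    return (velocity: v2, speed: speed, acceleration: accel)
}
```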
The haptic feedback system 234 includes various software components for generating instructions for use by the haptic output generator 263 to generate haptic output at one or more locations on the finger wearable device 200 in response to user interaction with the finger wearable device 200.
The finger wearable device 200 optionally includes a gesture interpretation system 236. The gesture interpretation system 236 coordinates with the location system 230 and/or the contact intensity system 232 to determine gestures performed by the finger wearable device. For example, the gesture includes one or more of the following: pinch gestures, pull gestures, pinch and pull gestures, rotation gestures, flick gestures, and the like. In some implementations, the finger wearable device 200 does not include a gesture interpretation system, and the electronic device or system (e.g., gesture interpretation system 445 in fig. 4) determines the gesture performed by the finger wearable device 200 based on finger manipulation data from the finger wearable device 200. In some implementations, a portion of the gesture determination is performed at the finger wearable device 200 and a portion of the gesture determination is performed at the electronic device/system. In some implementations, the gesture interpretation system 236 determines a time period associated with the gesture. In some implementations, the gesture interpretation system 236 determines a contact strength associated with the gesture, such as an amount of pressure associated with a finger (wearing the finger wearable device 200) tap on the physical surface.
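The following sketch illustrates, with assumed thresholds and names, how a gesture might be classified from the position and contact intensity information that the gesture interpretation system 236 coordinates; it is not the system's actual decision logic:

```swift
// An assumed, simplified gesture classifier; thresholds (seconds, meters,
// normalized pressure) are illustrative only.
enum Gesture {
    case tap(pressure: Double)
    case swipe(dx: Double, dy: Double)
    case none
}

func interpretGesture(duration: Double, peakContactIntensity: Double,
                      dx: Double, dy: Double) -> Gesture {
    let travel = (dx * dx + dy * dy).squareRoot()
    if duration < 0.3, travel < 0.005, peakContactIntensity > 0.1 {
        return .tap(pressure: peakContactIntensity)   // brief, nearly stationary, with measurable contact force
    }
    if travel >= 0.005 {
        return .swipe(dx: dx, dy: dy)                 // sustained movement across the physical surface
    }
    return .none
}
```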
Each of the modules and applications identified above corresponds to a set of executable instructions for performing one or more of the functions described above, as well as the methods described in the present disclosure (e.g., computer-implemented methods and other information processing methods described herein). These systems (i.e., sets of instructions) need not be implemented in separate software programs, procedures or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some implementations, the memory 202 optionally stores a subset of the systems and data structures described above. Further, memory 202 optionally stores additional systems and data structures not described above.
Fig. 3A-3W are examples of an electronic device 310 that maps a computer-generated touch pad to a content manipulation area according to some implementations. While pertinent features are shown, those of ordinary skill in the art will recognize from this disclosure that various other features are not shown for the sake of brevity and so as not to obscure more pertinent aspects of the exemplary implementations disclosed herein.
As shown in fig. 3A, the electronic device 310 is associated with (e.g., operates in accordance with) an operating environment 300. In some implementations, the electronic device 310 is similar to and modified from the electronic device 100 of fig. 1. In some implementations, the electronic device 310 generates one of the XR settings described above.
The electronic device 310 includes a display 312 associated with a viewable area 314 of the operating environment 300. For example, in some implementations, the electronic device 310 includes an image sensor associated with a field of view corresponding to the viewable area 314, and the electronic device 310 synthesizes the pass-through image data from the image sensor with computer-generated content. As another example, in some implementations, the electronic device 310 includes a see-through display 312 that enables ambient light to enter from a portion of the physical environment associated with the viewable area 314. The operating environment 300 includes a physical table 302 and a physical light 304 located atop the physical table 302. The viewable area 314 includes a portion of the physical table 302 and the physical lights 304.
The finger wearable device 320 may be worn on a finger of the first hand 52 of the user 50. In some implementations, the finger wearable device 320 is similar to and modified from the finger wearable device 200 shown in fig. 2.
In some implementations, the electronic device 310 includes a communication interface (e.g., the communication interface 190 in fig. 1) provided for communicating with the finger wearable device 320. The electronic device 310 establishes a communication link with the finger wearable device 320, as indicated by the communication link line 322. Establishing a link between the electronic device 310 and the finger wearable device 320 is sometimes referred to as pairing or tethering. Those of ordinary skill in the art will appreciate that the electronic device 310 may communicate with the finger wearable device 320 according to various communication protocols (such as Bluetooth, IEEE 802.11x, NFC, etc.). The electronic device 310 obtains finger manipulation data from the finger wearable device 320 via the communication interface. For example, the electronic device 310 obtains a combination of position data (e.g., output by a position sensor of the finger wearable device 320) and contact intensity data (e.g., output by a contact intensity sensor of the finger wearable device 320).
In some implementations, the second hand 54 of the user 50 holds the electronic device 310. For example, in some implementations, the electronic device 310 corresponds to one of a smart phone, a laptop, a tablet, and the like.
In some implementations, the electronic device 310 corresponds to a Head Mounted Device (HMD) that includes an integrated display (e.g., a built-in display) that displays a representation of the operating environment 300. In some implementations, the electronic device 310 includes a head-mounted housing. In various implementations, the head-mounted housing includes an attachment region to which another device having a display may be attached. In various implementations, the headset housing is shaped to form a receiver for receiving another device (e.g., electronic device 310) that includes a display. For example, in some implementations, the electronic device 310 slides/snaps into or is otherwise attached to the headset housing. In some implementations, a display of a device attached to the headset housing presents (e.g., displays) a representation of the operating environment 300. For example, in some implementations, the electronic device 310 corresponds to a mobile phone that is attachable to a headset housing.
In some implementations, the electronic device 310 includes an image sensor, such as a scene camera. For example, the image sensor obtains image data characterizing the operating environment 300, and the electronic device 310 synthesizes the image data with computer-generated content to generate display data for display on the display 312. The display data may be characterized by an XR environment. For example, the image sensor obtains image data representing a portion of the physical table 302 and the physical lights 304, and the generated display data displayed on the display 312 displays a corresponding representation of the physical table 302 and the physical lights 304 (see fig. 3B).
In some implementations, the electronic device 310 includes a see-through display. The see-through display allows ambient light from the physical environment to pass through it, and the representation of the physical environment is a function of the ambient light. For example, the see-through display is a translucent display, such as glasses with optical see-through. In some implementations, the see-through display is an additive display that enables optical see-through of a physical surface, such as an optical HMD (OHMD). For example, unlike purely compositing with a video stream, the additive display is capable of reflecting projected images off of the display while enabling the user to see through the display. In some implementations, the see-through display includes a photochromic lens. The HMD adds computer-generated objects to the ambient light entering the see-through display in order to enable display of the operating environment 300. For example, the see-through display 312 allows ambient light from the operating environment 300 that includes a portion of the physical table 302 and the physical light 304, and thus the see-through display 312 displays a corresponding representation of the physical table 302 and the physical light 304 (see fig. 3B).
As shown in fig. 3B, the electronic device 310 displays a representation of a portion of the physical table 302 (sometimes referred to hereinafter as a "portion of the physical table 302" or "physical table 302" for brevity) and a representation of the physical light 304 (sometimes referred to hereinafter as "physical light 304" for brevity) on the display 312. Further, the finger wearable device 320 moves into the viewable area 314, and thus the display 312 displays a representation of the finger wearable device 320 (sometimes referred to as "finger wearable device 320" hereinafter for brevity).
Those of ordinary skill in the art will appreciate that in various implementations, the finger wearable device 320 is outside the viewable area 314. However, the electronic device 310 obtains finger manipulation data from the finger wearable device 320 via the communication interface. Thus, the electronic device 310 may perform mapping according to various implementations disclosed herein, regardless of whether the finger wearable device 320 is visible on the display 312 or through any of the image sensors 143.
As shown in fig. 3C, the electronic device 310 displays a computer-generated representation of a touch pad 324 (sometimes referred to hereinafter as the "touch pad 324" for brevity) on the display 312. In some implementations, the electronic device 310 displays the touch pad 324 in response to establishing the communication link with the finger wearable device 320. In some implementations, the electronic device 310 displays one or more touch pad manipulation affordances 326a-326c. The touch pad manipulation affordances 326a-326c are provided for manipulating the touch pad 324 and are described below.
The touch pad 324 is spatially associated with a physical surface. For example, referring to fig. 3C, the electronic device 310 overlays the touch pad 324 on the surface of the physical table 302. To this end, in some implementations, the electronic device 310 identifies the physical surface (e.g., via instance segmentation or semantic segmentation with respect to image data) and overlays the touch pad 324 on a portion of the identified physical surface.
In some implementations, the touch pad 324 is a function of a dimensional criterion. For example, the electronic device 310 determines one or more dimensional characteristics (e.g., length, width, area) associated with the surface of the physical table 302 and generates the touch pad 324 so as to satisfy the dimensional criterion. As one example, referring to fig. 3C, the touch pad 324 is sized and positioned to fit on the surface of the physical table 302.
In some implementations, the touch pad 324 is a function of occlusion criteria associated with a physical object. For example, referring to fig. 3C, the electronic device 310 identifies the physical light 304. Accordingly, the electronic device 310 positions the touch pad 324 on the display 312 such that the physical light 304 does not obscure the touch pad 324.
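As a rough sketch of honoring the dimensional criterion and the occlusion criteria together (the Rect type, the grid search, and the 5 cm step are assumptions, not the disclosed algorithm):

```swift
// A sketch combining the dimensional and occlusion criteria described above.
struct Rect {
    var x: Double, y: Double, width: Double, height: Double
    func intersects(_ other: Rect) -> Bool {
        return x < other.x + other.width && other.x < x + width &&
               y < other.y + other.height && other.y < y + height
    }
}

/// Returns a candidate touch pad rectangle that fits on the identified surface
/// and is not obscured by the occluding object (e.g., the physical light 304).
func placeTouchPad(on surface: Rect, avoiding occluder: Rect,
                   size: (width: Double, height: Double)) -> Rect? {
    let step = 0.05  // 5 cm search grid, an arbitrary choice for this sketch
    var y = surface.y
    while y + size.height <= surface.y + surface.height {
        var x = surface.x
        while x + size.width <= surface.x + surface.width {
            let candidate = Rect(x: x, y: y, width: size.width, height: size.height)
            if !candidate.intersects(occluder) {
                return candidate  // satisfies both the dimensional and occlusion criteria
            }
            x += step
        }
        y += step
    }
    return nil  // no placement satisfies the criteria
}
```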
In some implementations, the touch pad 324 is a function of the location of the finger wearable device 320 on the display 312. For example, in response to identifying the finger wearable device 320 within the viewable area 314, the electronic device 310 displays the touch pad 324 at a location on the display 312 such that the finger wearable device 320 hovers over a portion of the touch pad 324.
The physical surface (e.g., of physical table 302) is visible within display 312 along with a content manipulation area 330 separate from touch pad 324. For example, the content manipulation area 330 includes application content, such as web browser content, word processing content, drawing application content, and the like. Based on the finger manipulation data from the finger wearable device 320, the electronic device 310 determines a mapping between the touch pad 324 and the content manipulation area 330. Details regarding the mapping are provided below.
In some implementations, the electronic device 310 generates the content manipulation area 330. For example, the content manipulation area 330 corresponds to a virtual display screen, such as a virtual tablet.
In some implementations, the content manipulation area 330 is associated with an auxiliary device such as a real world laptop, a real world tablet, or the like. For example, the electronic device 310 is communicatively coupled to an auxiliary device, and the auxiliary device includes an auxiliary display that displays the content manipulation area 330. The electronic device 310 may utilize computer vision to identify the secondary display. As one example, the electronic device 310 transmits instructions to the auxiliary device. The instructions instruct the auxiliary device to modify the content within the content manipulation area 330 based on the mapping on the auxiliary display.
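For illustration, the instruction transmitted to the auxiliary device might carry information shaped like the following sketch; the message fields are assumptions, not a disclosed protocol:

```swift
// An assumed shape for an instruction sent to the auxiliary device, directing
// it to modify content within the content manipulation area 330 based on the mapping.
struct ContentUpdateInstruction {
    var mappedLocation: (x: Double, y: Double)  // corresponding location within the content manipulation area 330
    var gesture: String                         // e.g., "move", "tap", "swipe" (illustrative encoding)
}
```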
The electronic device 310 identifies a first location within the touch pad 324 based on the finger manipulation data. For example, in response to establishing a communication link with the finger wearable device 320, the electronic device 310 identifies the first location as approximately the center point of the touch pad 324.
As another example, as shown in fig. 3D, when the finger wearable device 320 hovers over a portion of the touch pad 324, the electronic device 310 maps a respective location of the finger wearable device 320 to a first location within the touch pad 324 based on the finger manipulation data. The mapping is indicated by hover line 332, which is shown for illustrative purposes only. In some implementations, the electronic device 310 displays a first indicator 334 on the display 312 indicating the first location. Displaying the first indicator 334 provides feedback to the user 50, thereby reducing false (e.g., unintended) inputs from the finger wearable device 320 and reducing resource utilization of the electronic device 310.
In some implementations, the appearance of the first indicator 334 is a function of the distance between the touch pad 324 and the finger wearable device 320. For example, in some implementations, as the finger wearable device 320 moves downward toward the touch pad 324, the electronic device 310 increases the size of the first indicator 334 based on the finger manipulation data. As another example, in some implementations, the electronic device 310 changes the appearance of the first indicator 334 based on location data and contact intensity data indicating that the finger wearable device 320 contacts the physical table 302 at a portion corresponding to the touch pad 324. For example, in response to detecting that the finger wearable device 320 contacts that portion of the physical table 302, the electronic device 310 changes the object type associated with the first indicator 334 (e.g., changes from sphere to cube) or changes an attribute of the first indicator 334 (e.g., makes it brighter).
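As a rough sketch of this behavior, the following Swift snippet grows a hover indicator as the finger wearable device approaches the touch pad and flags contact once the sensed intensity crosses a threshold. The function name, units, and threshold values are illustrative assumptions rather than values from the disclosure.

```swift
import Foundation

// Illustrative appearance model for the hover indicator; names are assumptions.
struct IndicatorAppearance {
    var radius: Double      // rendered size of the indicator, in meters
    var isContacting: Bool  // whether the finger is treated as touching the surface
}

/// Returns an appearance for the hover indicator given the finger's height above
/// the touch pad plane (meters) and the sensed, normalized contact intensity.
func indicatorAppearance(hoverDistance: Double, contactIntensity: Double) -> IndicatorAppearance {
    let contactThreshold = 0.5                      // assumed normalized pressure threshold
    let isContacting = contactIntensity >= contactThreshold
    let minRadius = 0.004, maxRadius = 0.012        // illustrative size range
    let clamped = min(max(hoverDistance, 0.0), 0.10)
    let t = 1.0 - clamped / 0.10                    // 1 when touching, 0 when 10 cm away
    let radius = minRadius + (maxRadius - minRadius) * t
    return IndicatorAppearance(radius: radius, isContacting: isContacting)
}
```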
The electronic device 310 maps the first location within the touch pad 324 to a corresponding location within the content manipulation area 330. As shown in fig. 3D, the touch pad 324 is associated with a first size feature (e.g., a first display area) that is different from a second size feature (e.g., a second display area) associated with the content manipulation area 330. In some implementations, the electronic device 310 maps between the touch pad 324 and the content manipulation area 330 according to a common aspect ratio, despite the difference in the respective dimensional characteristics. For example, the touch pad 324 corresponds to a square of 30 cm x 30 cm, and the content manipulation area 330 corresponds to a rectangle of 160 cm x 90 cm (160 cm wide, 90 cm high). Continuing with the previous example, in response to detecting a 30 cm movement from the left edge of the touch pad 324 to the right edge of the touch pad 324, the electronic device 310 maps the movement to a movement from the left edge of the content manipulation area 330 to its right edge. Thus, the electronic device 310 scales the 30 cm movement (associated with the touch pad 324) to the 160 cm movement (associated with the content manipulation area 330) in order to correctly map the movement associated with the touch pad 324 to the content manipulation area 330.
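A minimal sketch of the mapping described above, assuming a simple normalized (edge-to-edge) correspondence between the two regions. The type and function names are illustrative, and the 30 cm and 160 cm figures mirror the example in the preceding paragraph.

```swift
import Foundation

// Illustrative 2D point and size types; these are not API names from the patent.
struct Point2D { var x: Double; var y: Double }
struct Size2D { var width: Double; var height: Double }

/// Maps a location inside the computer-generated touch pad to the corresponding
/// location inside the content manipulation area by normalizing against each
/// region's dimensions (edge maps to edge, center maps to center).
func mapTouchPadLocation(_ p: Point2D, touchPad: Size2D, contentArea: Size2D) -> Point2D {
    let u = p.x / touchPad.width      // 0 at left edge, 1 at right edge
    let v = p.y / touchPad.height     // 0 at top edge, 1 at bottom edge
    return Point2D(x: u * contentArea.width, y: v * contentArea.height)
}

// The 30 cm x 30 cm touch pad and 160 cm x 90 cm content area from the example:
let pad = Size2D(width: 0.30, height: 0.30)
let area = Size2D(width: 1.60, height: 0.90)
let mapped = mapTouchPadLocation(Point2D(x: 0.30, y: 0.15), touchPad: pad, contentArea: area)
// mapped.x == 1.60: a 30 cm sweep across the pad spans the full 160 cm width
```

Because each axis is normalized independently, a full sweep across the touch pad always spans the full width or height of the content manipulation area, which is the scaling behavior described in the example above.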
As shown in fig. 3D, the first position (indicated by 334) is near the upper edge of the touch pad 324 and is approximately halfway between the left edge of the touch pad 324 and the center vertical line of the touch pad 324. Accordingly, the electronic device 310 maps the first location to a corresponding location within the content manipulation area 330 and displays a second indicator 336 indicating the mapping, as shown in fig. 3E. The second indicator 336 covers a corresponding location within the content manipulation area 330. Displaying the second indicator 336 indicates to the user 50 a mapping between the current location within the touch pad 324 and the corresponding location within the content manipulation area 330. Thus, the second indicator 336 provides position feedback information to the user 50, thereby reducing false (e.g., unintended) inputs from the finger wearable device 320 and reducing resource utilization of the electronic device 310. Further, in contrast to other devices that update the position of the displayed cursor based on detecting a change in position relative to the physical touch pad, the electronic device 310 indicates the current position associated with the touch pad 324 as mapped to the content manipulation area 330 via the second indicator 336 regardless of detecting a change in position relative to the touch pad 324.
As shown in fig. 3F and 3G, the electronic device 310 updates the mapping based on finger manipulation data indicating movement of the finger wearable device 320 on the touch pad 324. As shown in fig. 3F, the finger wearable device 320 moves downward on the surface of the physical table 302, as indicated by the movement line 338 (shown for illustrative purposes only). Thus, based on finger manipulation data (e.g., 3D position data and contact strength data) obtained when the finger wearable device 320 is moved on the physical table 302, the electronic device 310 determines that the finger wearable device 320 is moved down on the touch pad 324. Based on the movement of the finger wearable device 320, the hover position relative to the touch pad 324 changes accordingly. Thus, as indicated by the updated hover line 339 in FIG. 3G (shown for illustrative purposes only), the electronic device 310 determines a second position within the touch pad 324 and moves the first indicator 334 to the second position. Further, the electronic device 310 maps the second location to an updated location within the content manipulation area 330. Accordingly, the electronic device 310 determines an updated location based on the second location and correspondingly moves the second indicator 336 downward, as shown in fig. 3G.
In some implementations, the finger wearing the finger wearable device 320 moves on the surface of the physical table 302 instead of hovering over the touch pad 324. As the finger moves over the physical table 302, the electronic device 310 obtains location data and contact strength data from the finger wearable device 320. For example, based on position data and interferometer data from the finger wearable device 320, the electronic device 310 detects a deformation of the finger as the finger moves on the surface of the physical table 302 and determines, based in part on the deformation, that the finger is moving on the physical table 302. Thus, the electronic device 310 determines an updated location on the touch pad 324 based on the data indicative of finger movement and maps it to a corresponding location within the content manipulation area 330.
Fig. 3H-3L illustrate various examples of manipulating the touch pad 324. As shown in fig. 3H, the electronic device 310 displays a first touch pad manipulation affordance 326a, a second touch pad manipulation affordance 326b, and a third touch pad manipulation affordance 326c on the display 312. Those of ordinary skill in the art will appreciate that in some implementations, the number of touch pad manipulation affordances and the respective corresponding touch pad manipulation operations may vary.
The electronic device 310 receives a first selection input 340 that selects the first touch pad manipulation affordance 326a, as shown in fig. 3H. The first touch pad manipulation affordance 326a is associated with a touch pad movement operation. In some implementations, the first selection input 340 is directed to a spatial location on the display 312 that corresponds to a spatial location of the first touch pad manipulation affordance 326a on the display 312. In some implementations, the particular selection input corresponds to one of a limb tracking input, an eye tracking input, a stylus touch input, a finger manipulation input via the finger wearable device 320, and the like. In response to receiving the first selection input 340 in fig. 3H, the electronic device 310 selects the first touch pad manipulation affordance 326a and a corresponding touch pad movement operation, as shown by the gray overlay displayed within the first touch pad manipulation affordance 326a in fig. 3I.
As shown in fig. 3I, the electronic device 310 receives a first manipulation input 342 associated with the touch pad 324. The first manipulation input 342 corresponds to a left drag of the touch pad 324. In some implementations, the particular manipulation input corresponds to one of a limb tracking input, an eye tracking input, a stylus touch input, a finger manipulation input via the finger wearable device 320, and the like.
In response to receiving the first manipulation input 342 in fig. 3I, the electronic device 310 manipulates the touch pad 324 in accordance with the touch pad movement operation associated with the selected first touch pad manipulation affordance 326a. That is, as shown in fig. 3I and 3J, the electronic device 310 moves the touch pad 324 to the left on the physical table 302 based on the magnitude of the first manipulation input 342. In addition, the electronic device 310 moves the one or more touch pad manipulation affordances 326a-326c to the left so as to maintain the position of the one or more touch pad manipulation affordances 326a-326c relative to the touch pad 324.
As shown in fig. 3J, the electronic device 310 receives a second selection input 344 that selects the second touch pad manipulation affordance 326b. The second touch pad manipulation affordance 326b is associated with a touch pad resizing operation. In response to receiving the second selection input 344 in fig. 3J, the electronic device 310 selects the second touch pad manipulation affordance 326b and a corresponding touch pad resizing operation, as illustrated by the gray overlay displayed within the second touch pad manipulation affordance 326b in fig. 3K.
As shown in fig. 3K, the electronic device 310 receives a second manipulation input 346 associated with the touch pad 324. The second manipulation input 346 corresponds to an expansion of the touch pad 324 toward the bottom edge of the display 312. In response to receiving the second manipulation input 346 in fig. 3K, the electronic device 310 manipulates the touch pad 324 in accordance with the touch pad resizing operation associated with the selected second touch pad manipulation affordance 326b. That is, as shown in fig. 3K and 3L, the electronic device 310 resizes (e.g., expands) the touch pad 324 downward and rightward toward the bottom edge of the display 312 based on the magnitude of the second manipulation input 346. In addition, the electronic device 310 moves the one or more touch pad manipulation affordances 326a-326c downward and to the right so as to maintain the position of the one or more touch pad manipulation affordances 326a-326c relative to the touch pad 324. According to various implementations, based on input from the user 50, the electronic device 310 resizes the touch pad 324 or changes an aspect ratio associated with the touch pad 324 while maintaining a common aspect ratio between the touch pad 324 and the content manipulation area 330.
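The move and resize operations of figs. 3H-3L can be summarized with a small state update such as the following Swift sketch. The enum cases, geometry type, and the convention of returning an affordance offset are assumptions made for illustration, not the device's actual implementation.

```swift
import Foundation

// Illustrative geometry and operation types; names are not from the patent.
struct PadRect { var x: Double; var y: Double; var width: Double; var height: Double }

enum TouchPadManipulation {
    case move(dx: Double, dy: Double)       // touch pad movement operation
    case resize(dw: Double, dh: Double)     // touch pad resizing operation
}

/// Applies the selected manipulation to the touch pad and returns the offset that
/// should also be applied to the manipulation affordances so they keep their
/// position relative to the touch pad.
func applyManipulation(_ op: TouchPadManipulation, to pad: inout PadRect) -> (dx: Double, dy: Double) {
    switch op {
    case let .move(dx, dy):
        pad.x += dx; pad.y += dy
        return (dx: dx, dy: dy)
    case let .resize(dw, dh):
        pad.width += dw; pad.height += dh
        // Affordances anchored near the expanding corner follow the expansion.
        return (dx: dw, dy: dh)
    }
}
```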
Fig. 3M to 3Q illustrate examples of manipulating content displayed within the content manipulation area 330 based on corresponding mappings. As shown in fig. 3M, the content manipulation area 330 includes content including a tree 350. Further, the content manipulation area 330 includes one or more affordances 351 provided for implementing corresponding content manipulation operations with respect to a portion of content. As shown in fig. 3M, one or more affordances 351 correspond to one or more drawing tools, with a pencil drawing tool currently selected. Those of ordinary skill in the art will appreciate that the number and type of affordances may vary in some implementations.
As shown in fig. 3N, the finger wearable device 320 moves within the viewable area 314 and, thus, the display 312 displays the finger wearable device 320. Further, as indicated by tap indicator 352 (shown for illustrative purposes only), finger wearable device 320 begins performing a tap gesture on touch pad 324. When the finger wearable device 320 performs a flick gesture, the electronic device 310 receives finger manipulation data from the finger wearable device 320. For example, the electronic device 310 receives 3D rotation data from IMU sensors integrated in the finger wearable device 320 and 3D position data from magnetic sensors integrated in the finger wearable device 320. As another example, the electronic device 310 also receives contact strength data from a contact strength sensor integrated in the finger wearable device 320. The electronic device 310 maps the corresponding location of the finger wearable device 320 to a third location within the touch pad 324 based on the finger manipulation data. In some implementations, the electronic device 310 displays an indicator corresponding to the third location on the display 312. The electronic device 310 maps the third location within the touch pad 324 to a corresponding location within the content manipulation area 330.
As shown in fig. 3O, the finger wearable device 320 completes the tap gesture. Based on the finger manipulation data, the electronic device 310 determines that the finger wearable device 320 has performed a tap gesture. For example, the electronic device 310 determines that the finger wearable device 320 is performing a tap gesture based on position data indicating that the movement of the finger wearable device 320 is downward and ends within the touch pad 324. As another example, based on the contact intensity data, the electronic device 310 detects a threshold amount of pressure caused by the finger wearing the finger wearable device 320 tapping on the physical table 302. The electronic device 310 displays a fourth indicator 354 indicating the mapping. The fourth indicator 354 provides useful feedback to the user 50, such as the location on the tree 350 where a drawing mark will be displayed, as described below.
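One plausible way to combine position data and contact intensity data into a tap decision is sketched below in Swift. The sample fields, velocity test, and pressure threshold are illustrative assumptions rather than the device's actual detection logic.

```swift
import Foundation

// Illustrative per-frame sample of finger manipulation data; field names are assumptions.
struct FingerSample {
    var verticalVelocity: Double   // m/s, negative when moving down toward the surface
    var height: Double             // meters above the touch pad plane
    var contactPressure: Double    // normalized pressure from the contact intensity sensor
}

/// Heuristic tap detector: the finger moves downward, ends essentially on the touch
/// pad plane, and the sensed pressure exceeds a threshold when it lands.
func detectsTap(samples: [FingerSample], pressureThreshold: Double = 0.6) -> Bool {
    guard let last = samples.last else { return false }
    let movedDownward = samples.contains { $0.verticalVelocity < -0.05 }
    let landed = last.height <= 0.005          // within 5 mm of the surface
    let pressed = last.contactPressure >= pressureThreshold
    return movedDownward && landed && pressed
}
```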
As shown in fig. 3P and 3Q, the finger wearable device 320 moves on the touch pad 324, as indicated by the movement line 356 (shown for illustrative purposes only). Thus, as shown in fig. 3Q, the electronic device 310 updates the position of the fourth indicator 354 within the content manipulation area 330 according to the magnitude of movement of the finger wearable device 320. Further, because the pencil drawing tool is currently selected, the electronic device 310 displays the pencil mark 358 within the content manipulation area 330 according to the magnitude of movement of the finger wearable device 320.
In some implementations, in response to receiving an input directed to a particular one of the one or more affordances 351, the electronic device 310 changes the currently selected affordance to that particular affordance. In some implementations, the input can be one of a limb tracking input, an eye tracking input, a stylus input, an input from the finger wearable device 320, and the like. For example, the finger wearable device 320 moves to a position on the display 312 corresponding to a pen drawing tool affordance. Thus, the electronic device 310 changes the selection from a pencil drawing operation to a pen drawing operation associated with the pen drawing tool affordance. Accordingly, subsequent finger manipulation inputs directed into the touch pad 324 cause the electronic device 310 to display pen marks, rather than pencil marks, at mapped locations within the content manipulation area 330.
Fig. 3R-3W illustrate examples of manipulating the content manipulation area 330 based on determining that the mapping satisfies a proximity threshold relative to the affordance 359. Although in fig. 3R-3W, the finger wearable device 320 is outside of the field of view corresponding to the viewable area 314 (and thus not shown on the display 312), one of ordinary skill in the art will appreciate that in some implementations, mapping occurs when the finger wearable device 320 is within the field of view corresponding to the viewable area 314.
As shown in fig. 3R, the electronic device 310 displays an affordance 359 on the display 312. The affordance 359 is associated with a manipulation operation relative to the content manipulation area 330. For example, in response to determining selection of the affordance 359, the electronic device 310 changes the appearance of the content manipulation area 330 (e.g., resizes or relocates it), replicates the content manipulation area 330, invokes application content displayed within the content manipulation area 330, and the like. The affordance 359 can be positioned outside of the content manipulation area 330. As will be described below, the electronic device 310 displays various indicators to the user 50 to assist the user 50 in selecting the affordance 359.
As further shown in fig. 3R, while the finger wearable device 320 is outside of the field of view corresponding to the viewable area 314, the electronic device 310 identifies a respective location associated with the finger wearable device 320 based on the finger manipulation data. Moreover, the electronic device 310 displays a fifth indicator 360 indicating the respective location, as shown in fig. 3R.
Further, the respective location associated with the finger wearable device 320 is spatially associated with the touch pad 324. That is, the respective location hovers over the touch pad 324, as indicated by the hover indicator 362 (shown for illustrative purposes only). Accordingly, the electronic device 310 maps the respective location to a corresponding location within the content manipulation area 330 and displays a sixth indicator 364 indicating the mapping within the content manipulation area 330.
As shown in fig. 3S, the finger wearable device 320 moves to the left, as indicated by the movement line 366. As the finger wearable device 320 moves to the left, the electronic device 310 obtains finger manipulation data. Based on the finger manipulation data, the electronic device 310 updates the respective location associated with the finger wearable device 320. Notably, as indicated by the updated hover indicator 362 shown in fig. 3T, the updated respective location is not associated with the touch pad 324 (e.g., no longer hovers over the touch pad 324). However, as shown in fig. 3T, the electronic device 310 continues to display the fifth indicator 360 at a location corresponding to the updated respective location in order to provide the user 50 with useful feedback. Further, in some implementations, the electronic device 310 continues to display the sixth indicator 364 based on the updated respective location relative to the touch pad 324. That is, the electronic device 310 moves the sixth indicator 364 to the left, to a position outside the content manipulation area 330. Continuing to display the sixth indicator 364 provides feedback to the user 50 regarding the mapping and thus assists the user 50 in selecting the affordance 359.
As shown in fig. 3U, the finger wearable device 320 is moved further to the left, as indicated by the movement line 368. As the finger wearable device 320 moves further to the left, the electronic device 310 obtains finger manipulation data. Based on the finger manipulation data, the electronic device 310 updates the respective location associated with the finger wearable device 320 and accordingly repositions the fifth indicator 360 further to the left as shown in fig. 3U and 3V. Further, the electronic device 310 maps the updated position of the fifth indicator 360 to the updated position of the sixth indicator 364. Thus, as shown in fig. 3U and 3V, the electronic device 310 moves the sixth indicator 364 further to the left to the updated position. The updated position of the sixth indicator 364 meets the proximity threshold with respect to the affordance 359. For example, the updated position of the sixth indicator 364 satisfies the proximity threshold when the updated position is less than a threshold distance from the affordance 359 or within the affordance.
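The proximity test against the affordance 359 might look like the following Swift sketch, which treats the threshold as a distance to the nearest edge of the affordance's bounds. The geometry types and the exact threshold semantics are assumptions for illustration.

```swift
import Foundation

// Illustrative geometry types; names are not from the patent.
struct MappedPoint { var x: Double; var y: Double }
struct AffordanceBounds { var x: Double; var y: Double; var width: Double; var height: Double }

/// Returns true when the mapped indicator position is inside the affordance or
/// within `threshold` of its nearest edge (the proximity threshold described above).
func satisfiesProximityThreshold(_ p: MappedPoint,
                                 affordance: AffordanceBounds,
                                 threshold: Double) -> Bool {
    // Distance from the point to the closest point on the affordance rectangle
    // (zero when the point is inside the rectangle).
    let clampedX = min(max(p.x, affordance.x), affordance.x + affordance.width)
    let clampedY = min(max(p.y, affordance.y), affordance.y + affordance.height)
    let dx = p.x - clampedX, dy = p.y - clampedY
    return (dx * dx + dy * dy).squareRoot() <= threshold
}
```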
Based on the updated position of the sixth indicator 364 meeting the proximity threshold, the electronic device 310 selects the affordance 359. In response to selection of the affordance 359, the electronic device 310 resizes the content manipulation area 330 to include the affordance 359, as shown in fig. 3W. Those of ordinary skill in the art will appreciate that selection of the affordance 359 may result in different manipulation operations with respect to the content manipulation area 330, as described above.
Although the examples described with reference to fig. 3A-3W relate to mapping based on finger manipulation data from the finger wearable device 320, various implementations include performing similar mapping based on limb identification data from an integrated computer vision system. For example, in some implementations, the electronic device 310 obtains image data and performs computer vision techniques (e.g., semantic segmentation) in conjunction with the image data to identify limbs of the user 50. Thus, using computer vision techniques, the electronic device 310 determines the position of the limb within the touch pad 324 and maps the position to a corresponding position within the content manipulation area 330 accordingly.
FIG. 4 is an example of a flow chart of a method 400 of mapping a computer-generated touch pad to a content manipulation area, according to some implementations. In various implementations, the method 400, or portions thereof, is performed by an electronic device (e.g., the electronic device 100 of fig. 1, the electronic device 310 of fig. 3A-3W). In various implementations, the method 400, or portions thereof, is performed by a head-mounted device (HMD). In some implementations, the method 400 is performed by processing logic (including hardware, firmware, software, or a combination thereof). In some implementations, the method 400 is performed by a processor executing code stored in a non-transitory computer readable medium (e.g., memory). In various implementations, some of the operations in method 400 are optionally combined, and/or the order of some of the operations is optionally changed.
As represented by block 402, the method 400 includes obtaining limb tracking data via a limb tracker. For example, in some implementations, the limb tracker includes a communication interface provided for communicating with the finger wearable device. As another example, in some implementations, the limb tracker includes a computer vision system.
As represented by block 404, in some implementations, the method 400 includes obtaining finger manipulation data from a finger wearable device via a communication interface, wherein the finger manipulation data is included in the limb tracking data. For example, as described with reference to fig. 3A-3W, the electronic device 310 obtains various types of finger manipulation data from the finger wearable device 320. The finger manipulation data may indicate location (e.g., 6 degrees of freedom) and contact strength (e.g., force or pressure) information associated with the finger wearable device. In some implementations, the finger manipulation data indicates a gesture performed by the finger wearable device. According to various implementations, the finger manipulation data corresponds to sensor data associated with one or more sensors integrated within the finger wearable device. For example, as represented by block 406, the sensor data includes position data output from one or more position sensors integrated in the finger wearable device. As one example, the position data indicates rotational motion (e.g., IMU data) and/or translational motion (e.g., magnetic sensor data) of the finger wearable device, such as shown in fig. 3F, 3G, 3P, and 3Q. In some implementations, the magnetic sensor data is output by a magnetic sensor integrated within the finger wearable device, wherein the magnetic sensor senses a weak magnetic field. As another example, as represented by block 408, the sensor data includes contact intensity data output from a contact intensity sensor integrated in the finger wearable device, for example during the tap gesture on the physical table 302 shown in fig. 3N and 3O. As one example, the contact intensity data includes interferometer data indicative of tap pressure associated with a gesture performed by the finger wearable device. The interferometer data may come from an interferometer integrated within the finger wearable device. For example, the interferometer data indicates a pressure level associated with a finger wearing the finger wearable device contacting a physical object. As one example, the finger wearable device senses (e.g., via a contact intensity sensor) deflection of the finger pad when the finger contacts the physical surface. Accordingly, various implementations disclosed herein enable a user to feel the physical surface (and its texture) with which the user is interacting. As yet another example, in some implementations, the sensor data includes a combination of position data and contact intensity data.
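To make the sensor payload concrete, the following Swift sketch bundles the rotational, translational, and contact-intensity readings described in blocks 406 and 408 and classifies a sample as hovering or contacting. The field names, units, and threshold are assumptions rather than the finger wearable device's actual data format.

```swift
import Foundation

// Hedged sketch of the sensor payload described above; names and units are assumptions.
struct FingerManipulationData {
    var rotation: (roll: Double, pitch: Double, yaw: Double)  // IMU-derived rotational motion
    var position: (x: Double, y: Double, z: Double)           // magnetic-sensor-derived 3D position
    var contactIntensity: Double                               // normalized 0...1 pressure reading
}

enum FingerState {
    case hovering(height: Double)
    case contacting(pressure: Double)
}

/// Classifies a single sample: contact intensity (e.g., interferometer-sensed
/// finger-pad deflection) takes precedence over the hover height above the surface.
func classify(_ sample: FingerManipulationData,
              surfaceZ: Double,
              pressureThreshold: Double = 0.5) -> FingerState {
    if sample.contactIntensity >= pressureThreshold {
        return .contacting(pressure: sample.contactIntensity)
    }
    return .hovering(height: max(sample.position.z - surfaceZ, 0))
}
```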
As represented by block 410, in some implementations, the method 400 includes obtaining limb identification data from a computer vision system, wherein the limb identification data is included in limb tracking data. For example, computer vision systems perform computer vision techniques (e.g., object recognition) to identify limbs relative to image data.
As represented by block 412, the method 400 includes displaying on a display a computer-generated representation of a touch pad spatially associated with a physical surface. As represented by block 414, the physical surface is viewable within the display along with a content manipulation area separate from the computer-generated representation of the touch pad. For example, referring to fig. 3C, the electronic device 310 displays a touch pad 324 on the display 312 that is spatially associated with (e.g., overlaid on) the physical table 302. Further, as shown in fig. 3C, the content manipulation area 330 is visible within the display 312 and separate from the touch pad 324. In some implementations, the content manipulation area 330 and the touch pad 324 are substantially orthogonal to each other. For example, referring to fig. 3C, the touch pad 324 is overlaid on a horizontal surface of the physical table 302 with the content manipulation area 330 oriented vertically. As another example, the content manipulation area includes application content, such as web browser content, word processing content, drawing application content, and the like. For example, in some implementations, the application content is displayed within the content manipulation area 330, or within both the content manipulation area and the touch pad. Displaying a touch pad overlaid on a physical surface provides the user with useful feedback, such as tactile feedback caused by the user's finger (wearing a finger wearable device) contacting the physical surface. In some implementations, the method 400 includes sizing the touch pad to fit on the physical surface, to avoid situations where the touch pad is too large or too far away from the user (e.g., has a relatively high depth value) to manipulate effectively.
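A possible realization of the sizing behavior mentioned at the end of the preceding paragraph is sketched below in Swift: it fits the largest touch pad sharing the content area's aspect ratio onto the detected surface while leaving a margin. The margin, minimum side length, and function name are illustrative assumptions.

```swift
import Foundation

struct SurfaceSize { var width: Double; var height: Double }   // meters; illustrative type

/// Chooses touch pad dimensions that fit on the detected physical surface while
/// keeping the aspect ratio of the content manipulation area, with a margin so the
/// pad is not flush with the surface edges. Returns nil if the surface is too small.
func fitTouchPad(onSurface surface: SurfaceSize,
                 contentAspect: Double,            // width / height of the content area
                 margin: Double = 0.02,
                 minSide: Double = 0.10) -> SurfaceSize? {
    let availableW = surface.width - 2 * margin
    let availableH = surface.height - 2 * margin
    guard availableW >= minSide, availableH >= minSide else { return nil }
    // Largest rectangle with the content area's aspect ratio that fits the surface.
    var width = availableW
    var height = width / contentAspect
    if height > availableH {
        height = availableH
        width = height * contentAspect
    }
    return SurfaceSize(width: width, height: height)
}
```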
In some implementations, as represented by block 416, the content manipulation area corresponds to a computer-generated content manipulation area, such as a display screen of a virtual tablet. To this end, in some implementations, the method 400 includes displaying a computer-generated content manipulation area when displaying a computer-generated representation of a touch pad.
In some implementations, as represented by block 418, the content manipulation area corresponds to a real-world content manipulation area, such as a display screen of an auxiliary (e.g., accessory) device. To this end, in some implementations, an electronic device performing the method 400 is communicatively coupled to an auxiliary device (e.g., a tablet, a laptop, a smart phone), and the auxiliary device includes an auxiliary display that displays the content manipulation area.
As represented by block 420, when displaying the computer-generated representation of the touch pad, the method 400 includes identifying a first location within the computer-generated representation of the touch pad based on the limb tracking data. For example, referring to fig. 3D, the electronic device 310 determines a first location (indicated by the first indicator 334) within the touch pad 324 based on location data (e.g., a combination of 3D location data and 3D rotation data) from the finger wearable device 320. As another example, referring to fig. 3N and 3O, in addition to utilizing location data, the electronic device 310 also utilizes contact strength data from the finger wearable device 320 in order to detect when the finger wearable device 320 performs a tap gesture. Continuing with this example, the contact strength data indicates a pressure level associated with a finger wearing the finger wearable device 320 contacting the physical table 302. In some implementations, the electronic device determines, based on the finger manipulation data, that the finger wearable device meets a proximity threshold relative to the computer-generated representation of the touch pad. For example, the finger wearable device contacts or hovers over the computer-generated representation of the touch pad, as shown in fig. 3N-3O and 3D, respectively. As another example, in some implementations, the electronic device identifies a first location within the computer-generated representation of the touch pad based on limb identification data from the computer vision system.
As represented by block 422, when displaying the computer-generated representation of the touch pad, the method 400 includes mapping the first location to a corresponding location within the content manipulation area. For example, referring to fig. 3E, the electronic device 310 maps a first location (indicated by the first indicator 334) within the touch pad 324 to a corresponding location (as indicated by the second indicator 336) within the content manipulation area 330. That is, because the first location is positioned near the upper edge of the touch pad 324, the electronic device 310 determines that the corresponding location is also positioned near the upper edge of the content manipulation area 330.
As represented by block 424, the method 400 includes displaying an indicator indicating the mapping when displaying the computer-generated representation of the touch pad. The indicator can overlay the corresponding location within the content manipulation area. For example, referring to fig. 3E, the electronic device 310 displays a second indicator 336 on the display 312 indicating the mapping. In some implementations, the method 400 includes displaying an indicator when the mapping location is within the content manipulation area and ceasing to display the indicator when the mapping location moves outside of the content manipulation area. In some implementations, the method 400 includes displaying an indicator when the mapped location is outside of the content manipulation area but less than a threshold distance from the affordance. For example, referring to fig. 3T, the electronic device 310 maintains the display of the sixth indicator 364 because the corresponding mapped position outside of the content manipulation area 330 is less than the threshold distance from the affordance 359. Those of ordinary skill in the art will appreciate that the indicators may correspond to any type of content.
As represented by block 426, in some implementations, the location of the indicator is based on a first distance between the representation of the limb and the computer-generated representation of the touch pad. To this end, in some implementations, the method 400 includes determining a position of a representation of a limb within an environment (e.g., an XR environment). In some implementations, the electronic device includes a computer vision system that determines a location of the representation of the limb as represented within image data obtained from the camera. In some implementations, the electronic device determines a location of the representation of the limb based on the finger manipulation data. As one example, referring to fig. 3R, based on the finger manipulation data, the electronic device 310 determines a respective location associated with the finger wearable device 320, as indicated by a fifth indicator 360. In the previous example, the fifth indicator 360 corresponds to a representation of a limb.
According to various implementations, the method 400 includes locating an indicator relative to a content manipulation area based on a first distance.
In some implementations, the method 400 includes displaying the indicator at a second distance from the content manipulation area, where the second distance is based on a function of the first distance. The second distance may be proportional to the first distance. For example, as the representation of the limb moves closer to the computer-generated representation of the touch pad, the indicator correspondingly moves closer to the content manipulation area (e.g., a lower z-value relative to the content manipulation area). Conversely, as the representation of the limb moves away from the computer-generated representation of the touch pad, the indicator correspondingly moves away from the content manipulation area (e.g., a higher z-value relative to the content manipulation area).
As another example, in some implementations, based on the first distance being a nominal value (e.g., the user's finger tapping the table 302, as shown in fig. 3Q), the method 400 includes positioning the indicator to be less than a threshold distance from the content manipulation area. The indicator may be positioned at a nominal distance from the content manipulation area. Continuing with this example, in some implementations, the method 400 includes maintaining the position of the indicator until the distance of the representation of the limb moving away from the computer-generated representation of the touch pad exceeds a threshold distance. Thus, in some implementations, the indicator appears to be stuck to the content manipulation area to some extent.
As represented by block 428, in some implementations, the appearance of the indicator is based on a first distance between the representation of the limb and the computer-generated representation of the touch pad. For example, in some implementations, based on the first distance being a nominal value (e.g., the user's finger tapping the table 302, as shown in fig. 3Q), the method 400 includes setting the size of the indicator to a predetermined (e.g., nominal) size. In some implementations, the method 400 includes adjusting the size of the indicator based on a function of the first distance. For example, the method 400 includes increasing the size of the indicator as the first distance increases, and vice versa. The adjustment of the indicator size may be based on a linear or piecewise function of the first distance. As another example, the method 400 includes changing different features associated with the indicator, such as changing a color, shape, opacity, etc., associated with the indicator based on a function of the first distance.
In some implementations, the method 400 includes modifying both the location of the indicator and the appearance of the indicator based on the first distance. For example, as the first distance increases, the electronic device decreases the z-depth associated with the indicator (e.g., so as to appear to be moving away from the content manipulation area and toward the user) while increasing the size of the indicator.
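For illustration, the following Swift sketch couples the indicator's offset from the content manipulation area and its size to the limb-to-touch-pad distance, as blocks 426 and 428 describe. The proportionality constants and clamping range are assumptions, not values from the disclosure.

```swift
import Foundation

// Hedged sketch: both the indicator's offset from the content manipulation area and
// its size track the limb-to-touch-pad distance.
struct MappedIndicator {
    var offsetFromArea: Double   // second distance: how far the indicator floats toward the user
    var size: Double             // rendered size of the indicator
}

/// Updates the mapped indicator from the first distance (limb to touch pad), using
/// proportional offsets and a linear size ramp over an assumed 15 cm hover range.
func updateIndicator(limbToTouchPadDistance d: Double) -> MappedIndicator {
    let clamped = min(max(d, 0.0), 0.15)
    let offset = 0.5 * clamped              // second distance proportional to the first
    let size = 0.006 + 0.04 * clamped       // grows as the limb moves away from the pad
    return MappedIndicator(offsetFromArea: offset, size: size)
}
```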
As represented by block 430, in some implementations, the method 400 includes manipulating the content manipulation area based on the selection of the affordance. For example, referring to fig. 3R-3W, the electronic device 310 maps respective locations associated with the finger wearable device 320 to corresponding locations associated with the content manipulation area 330 based on the finger manipulation data. The electronic device 310 determines whether the corresponding location associated with the content manipulation area 330 meets a proximity threshold relative to the affordance 359. In response to determining that the proximity threshold is met, the electronic device 310 selects the affordance 359 and manipulates (e.g., resizes) the content manipulation area 330 in accordance with the manipulation operation associated with the affordance 359.
In some implementations, in response to detecting movement of the finger wearable device outside of the touch pad based on the finger manipulation data, the method 400 includes displaying an affordance (e.g., a selectable button) at a corresponding location outside of the content manipulation area. For example, in response to detecting movement of the finger wearable device beyond the upper right corner of the touch pad, the electronic device displays an affordance beyond the upper right corner of the content manipulation area. In some implementations, in response to detecting an input that selects (e.g., spatially points to) the affordance, the method 400 includes expanding the content manipulation area to include a location associated with the affordance. In some implementations, the method 400 includes displaying a focus selector (e.g., a cursor) based on a distance between a limb of the user (e.g., tracked via finger manipulation data or limb identification data) and the affordance. The focus selector indicates the position of the limb. For example, the method 400 includes maintaining the display of the focus selector when the distance of the limb from the affordance is less than or equal to a threshold distance, and ceasing to display the focus selector when the distance of the limb from the affordance is greater than the threshold distance.
FIG. 5 is another example of a flow chart of a method 500 of mapping a computer-generated touch pad to a content manipulation area, according to some implementations. In various implementations, the method 500, or portions thereof, is performed by an electronic device (e.g., the electronic device 100 of fig. 1, the electronic device 310 of fig. 3A-3W). In various implementations, the method 500, or portions thereof, is performed by a Head Mounted Device (HMD). In some implementations, the method 500 is performed by processing logic (including hardware, firmware, software, or a combination thereof). In some implementations, the method 500 is performed by a processor executing code stored in a non-transitory computer readable medium (e.g., memory). In various implementations, some of the operations in method 500 are optionally combined, and/or the order of some of the operations is optionally changed.
As represented by block 502, the method 500 includes obtaining limb tracking data via a limb tracker.
As represented by block 504, the method 500 includes displaying, on a display, a computer-generated representation of a touch pad spatially associated with a physical surface. The physical surface is viewable within the display along with a content manipulation area separate from the computer-generated representation of the touch pad. In some implementations, the content manipulation area includes affordances provided for implementing corresponding content manipulation operations with respect to a portion of the content manipulation area. For example, referring to fig. 3M, the content manipulation area 330 includes one or more affordances 351 respectively associated with one or more drawing tools. As one example, an input directed to a particular affordance results in activation of the corresponding operation. For example, an input pointing to the affordance of the pencil tool selects a pencil drawing operation as the currently active drawing operation.
As represented by block 506, in some implementations, the method 500 includes automatically positioning or automatically resizing a computer-generated representation of the touch pad. To this end, in some implementations, the method 500 includes identifying a physical surface and covering a computer-generated representation of a touch pad on the physical surface. For example, referring to fig. 3C, the electronic device 310 identifies the surface of the physical table 302 and covers the touch pad 324 on that surface. In some implementations, identifying the physical surface includes performing various computer vision techniques (e.g., instance segmentation or semantic segmentation), optionally with the aid of a neural network.
In some implementations, as represented by block 508, the method 500 includes determining one or more dimensional features associated with the physical surface, wherein the computer-generated representation of the touch pad satisfies a dimensional criterion with respect to the one or more dimensional features. Referring back to fig. 3C, the electronic device 310 determines the size and position of the touch pad 324 to fit on the surface of the physical table 302. In some implementations, the method 500 utilizes respective dimensional features associated with the touch pad and the content manipulation area to map between the touch pad and the content manipulation area according to a common aspect ratio, as described above.
In some implementations, as represented by block 510, the computer-generated representation of the touch pad satisfies occlusion criteria with respect to a physical object. For example, referring to fig. 3C, the electronic device 310 determines the position/size of the touch pad 324 such that the physical lamp 304 does not obscure the touch pad 324. As another example, the electronic device 310 determines the position/size of the touch pad 324 such that the hand of the user 50 does not occlude a substantial portion of the touch pad 324.
According to various implementations, as represented by block 512, the method 500 includes manipulating the computer-generated representation of the touch pad based on one or more user inputs. To this end, when displaying the computer-generated representation of the touch pad, the method 500 includes displaying a touch pad manipulation affordance associated with a touch pad manipulation operation. For example, referring to fig. 3C, the electronic device 310 displays one or more touch pad manipulation affordances 326a-326c on the display 312. Further, the method 500 includes receiving a selection input selecting the touch pad manipulation affordance. For example, referring to fig. 3H, the electronic device receives a first selection input 340 selecting the first touch pad manipulation affordance 326a associated with a touch pad movement operation. As another example, referring to fig. 3J, the electronic device receives a second selection input 344 that selects the second touch pad manipulation affordance 326b associated with a touch pad resizing operation. Furthermore, the method 500 comprises: after receiving the selection input, receiving a manipulation input associated with the computer-generated representation of the touch pad, and manipulating the computer-generated representation of the touch pad according to the manipulation input and the touch pad manipulation operation. For example, when the touch pad movement operation is selected, the electronic device 310 receives the first manipulation input 342 in fig. 3I, and the electronic device 310 moves the touch pad 324 accordingly, as shown in fig. 3J. As another example, when the touch pad resizing operation is selected, the electronic device 310 receives the second manipulation input 346 in fig. 3K, and the electronic device 310 resizes the touch pad 324 accordingly, as shown in fig. 3L.
As represented by block 514, when displaying the computer-generated representation of the touch pad, the method 500 includes identifying a first location within the computer-generated representation of the touch pad based on the finger manipulation data.
As represented by block 516, when displaying the computer-generated representation of the touch pad, the method 500 includes mapping the first location to a corresponding location within the content manipulation area. For example, in some implementations, the method 500 includes determining, based on the finger manipulation data, that the finger wearable device corresponds to a respective spatial location hovering over a computer-generated representation of the touch pad. Continuing with the previous example, mapping a first location within the computer-generated representation of the touch pad to a corresponding location within the content manipulation area includes mapping a respective spatial location to the first location, and mapping the first location to a corresponding location within the content manipulation area. In some implementations, the mapping is based on a function of limb identification data from a computer vision system.
As represented by block 518, when displaying the computer-generated representation of the touch pad, the method 500 includes displaying an indicator indicating the mapping. For example, in some implementations, displaying the indicator includes modifying content displayed within the content manipulation area based on the mapping. Modifying the content may include one or more of annotating the content, editing the content, and the like. As one example, referring to fig. 3P and 3Q, in response to receiving finger manipulation data indicating movement of the finger wearable device 320 on the touch pad 324, the electronic device 310 displays a pencil mark 358 within the content manipulation area 330. Continuing with the previous example, the pencil mark 358 may be generated by the electronic device 310 or by an auxiliary device having an auxiliary display that displays the content manipulation area 330 (based on instructions from the electronic device 310). For example, the electronic device 310 transmits instructions to a real-world tablet computer displaying the content manipulation area 330, and the real-world tablet computer displays the pencil mark 358 accordingly. In some implementations, the modified content is further based on a function of a non-networked input vector, as represented by block 522. The non-networked input vector may indicate a combination of eye tracking indicator values (e.g., eye position, eye velocity), limb tracking indicator values (e.g., limb position, limb smoothness), and the like. To this end, the electronic device includes a non-networked input system that receives the non-networked input vector. The non-networked input system may include one or more of an eye tracking system, a limb tracking system, a stylus input system, a voice detection system, and the like.
The present disclosure describes various features, none of which alone are capable of achieving the benefits described herein. It should be understood that the various features described herein may be combined, modified or omitted as would be apparent to one of ordinary skill in the art. Other combinations and subcombinations than those specifically described herein will be apparent to one of ordinary skill and are intended to form part of the present disclosure. Various methods are described herein in connection with various flowchart steps and/or stages. It will be appreciated that in many cases, certain steps and/or stages may be combined together such that multiple steps and/or stages shown in the flowcharts may be performed as a single step and/or stage. In addition, certain steps and/or stages may be separated into additional sub-components to be performed independently. In some cases, the order of steps and/or stages may be rearranged, and certain steps and/or stages may be omitted entirely. In addition, the methods described herein should be understood to be broadly construed such that additional steps and/or stages other than those shown and described herein may also be performed.
Some or all of the methods and tasks described herein may be performed by a computer system and fully automated. In some cases, the computer system may include a plurality of different computers or computing devices (e.g., physical servers, workstations, storage arrays, etc.) that communicate and interoperate over a network to perform the functions described. Each such computing device typically includes a processor (or multiple processors) that execute program instructions or modules stored in a memory or other non-transitory computer-readable storage medium or device. The various functions disclosed herein may be implemented in such program instructions, but alternatively some or all of the disclosed functions may be implemented in dedicated circuitry (e.g., ASIC or FPGA or GP-GPU) of a computer system. Where a computer system includes multiple computing devices, the devices may or may not be co-located. The results of the disclosed methods and tasks may be persistently stored by converting a physical storage device such as a solid state memory chip and/or disk to a different state.
The various processes defined herein allow for the option of obtaining and utilizing personal information of a user. For example, such personal information may be utilized in order to provide an improved privacy screen on an electronic device. However, to the extent such personal information is collected, such information should be obtained with informed consent of the user. As described herein, a user should understand and control the use of his personal information.
Personal information will be used by the appropriate party only for legal and reasonable purposes. Parties utilizing such information will comply with privacy policies and practices that at least comply with the appropriate laws and regulations. Moreover, such policies should be well established, user accessible, and considered to meet or exceed government/industry standards. Furthermore, parties may not distribute, sell, or otherwise share such information, except for reasonable and lawful purposes.
However, the user may limit the extent to which parties can access or otherwise obtain personal information. For example, settings or other preferences may be adjusted so that a user may decide whether their personal information is accessible by various entities. Furthermore, while some of the features defined herein are described in the context of using personal information, aspects of these features may be implemented without the need to use such information. For example, if user preferences, account names, and/or location history are collected, the information may be obscured or otherwise generalized such that the information does not identify the corresponding user.
The present disclosure is not intended to be limited to the specific implementations shown herein. Various modifications to the implementations described in this disclosure may be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of the disclosure. The teachings provided herein may be applied to other methods and systems and are not limited to the methods and systems described above, and the elements and acts of the various implementations described above may be combined to provide further implementations. Thus, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions, and changes in the form of the methods and systems described herein may be made without departing from the spirit of the disclosure. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the disclosure.

Claims (34)

1. A method, the method comprising:
at an electronic device having one or more processors, a non-transitory memory, a display, and a limb tracker:
obtaining limb tracking data via the limb tracker;
displaying on the display a computer-generated representation of a touch pad spatially associated with a physical surface that is viewable within the display along with a content manipulation area separate from the computer-generated representation of the touch pad; and
when displaying the computer-generated representation of the touch pad on the display:
identifying a first location within the computer-generated representation of the touch pad based on the limb tracking data;
mapping the first location to a corresponding location within the content manipulation area; and
displaying, on the display, an indicator indicating the mapping.
2. The method of claim 1, wherein the limb tracker comprises a communication interface provided for communication with a finger wearable device, wherein the limb tracking data comprises finger manipulation data from the finger wearable device via the communication interface, and wherein the first location is identified based at least on the finger manipulation data.
3. The method of claim 2, wherein the finger manipulation data corresponds to sensor data associated with one or more sensors integrated within the finger wearable device.
4. The method of claim 3, wherein the sensor data comprises position data output from one or more position sensors integrated in the finger wearable device.
5. The method of claim 3, wherein the sensor data comprises contact intensity data output from a contact intensity sensor integrated in the finger wearable device.
6. The method of claim 2, wherein the finger manipulation data is indicative of a gesture performed via the finger wearable device.
7. The method of any of claims 1-6, wherein the limb tracker includes a computer vision system that outputs limb identification data, wherein the limb identification data is included in the limb tracking data, and wherein the first location is identified based at least on the limb identification data.
8. The method of any of claims 1-7, wherein the content manipulation area corresponds to a computer-generated content manipulation area, the method further comprising:
displaying the computer-generated content manipulation area on the display while displaying the computer-generated representation of the touch pad.
9. The method of any of claims 1-7, wherein the electronic device is communicatively coupled to an auxiliary device, and wherein the auxiliary device includes an auxiliary display that displays the content manipulation area.
10. The method of any of claims 1 to 9, wherein the content manipulation area comprises affordances provided for effectuating corresponding content manipulation operations relative to a portion of the content manipulation area.
11. The method of any one of claims 1 to 10, further comprising:
identifying the physical surface; and
overlaying the computer-generated representation of the touch pad on the physical surface on the display.
12. The method of claim 11, further comprising determining one or more dimensional features associated with the physical surface, wherein the computer-generated representation of the touch pad meets a dimensional criterion with respect to the one or more dimensional features.
13. The method of any of claims 1 to 12, wherein the computer-generated representation of the touch pad meets occlusion criteria with respect to a physical object.
14. The method of any one of claims 1 to 13, further comprising:
displaying a touch pad manipulation affordance associated with a touch pad manipulation operation on the display while displaying the computer-generated representation of the touch pad;
receiving a selection input for selecting the touch pad manipulation affordance;
after receiving the selection input, receiving a manipulation input associated with the computer-generated representation of the touch pad; and
manipulating the computer-generated representation of the touch pad according to the manipulation input and the touch pad manipulation operation.
15. The method of any one of claims 1 to 14, further comprising:
determining, based on the limb tracking data, that a respective limb corresponds to a respective spatial location hovering over the computer-generated representation of the touch pad;
wherein mapping the first location within the computer-generated representation of the touch pad to the corresponding location within the content manipulation area comprises:
mapping the respective spatial location to the first location; and
mapping the first location to the corresponding location within the content manipulation area.
16. The method of any of claims 1-15, wherein mapping the first location to the corresponding location within the content manipulation area is according to a common aspect ratio.
17. The method of any of claims 1-16, wherein displaying the indicator includes modifying content displayed within the content manipulation area based on the mapping.
18. The method of claim 17, wherein the electronic device comprises a non-networked input system, the method further comprising receiving a non-networked input vector via the non-networked input system, wherein modifying the content is further based on the non-networked input vector.
19. The method of any of claims 1-18, further comprising displaying the content within the computer-generated representation of the touch pad.
20. The method of any one of claims 1 to 19, further comprising:
in response to determining, based on the limb tracking data, that a respective limb moves from the first location to a second location:
in accordance with a determination that the second location is outside of the computer-generated representation of the touch pad:
mapping the second location to a second corresponding location outside of the content manipulation area; and
displaying an affordance at the second corresponding location outside of the content manipulation area.
21. The method of claim 20, further comprising, in response to receiving an input selecting the affordance, expanding the content manipulation area to include the second corresponding location.
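For illustration only: a sketch of the expansion described in claims 20 and 21, where selecting the affordance grows the content manipulation area just enough to contain the second corresponding location.

```swift
import CoreGraphics

/// Grow the content manipulation area to include a mapped location that
/// currently falls outside it; unchanged if the location is already inside.
func expandedRegion(_ region: CGRect, toInclude location: CGPoint) -> CGRect {
    guard !region.contains(location) else { return region }
    // Compute the smallest rectangle containing both the region and the point.
    let minX = min(region.minX, location.x)
    let minY = min(region.minY, location.y)
    let maxX = max(region.maxX, location.x)
    let maxY = max(region.maxY, location.y)
    return CGRect(x: minX, y: minY, width: maxX - minX, height: maxY - minY)
}
```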
22. The method of any one of claims 1-21, wherein the electronic device corresponds to a Head Mounted Device (HMD).
23. The method of any one of claims 1 to 22, further comprising:
identifying a second location outside of the computer-generated representation of the touch pad based on the limb tracking data;
mapping the second location to a second corresponding location outside of the content manipulation area; and
moving the indicator so as to cover the second corresponding location.
24. The method of claim 23, further comprising:
displaying an affordance on the display outside of the content manipulation area, wherein the affordance is associated with a manipulation operation; and
in response to determining that the second corresponding location meets a proximity threshold with respect to the affordance, manipulating the content manipulation area in accordance with the manipulation operation.
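For illustration only: a sketch of the proximity test in claim 24; the threshold value and its units are assumptions that a real implementation would tune (for example, in angular or physical units rather than points).

```swift
import CoreGraphics

/// True when the second corresponding location is within a threshold
/// distance of the affordance's center.
func meetsProximityThreshold(_ location: CGPoint,
                             affordanceCenter: CGPoint,
                             threshold: CGFloat = 24.0) -> Bool {
    let dx = location.x - affordanceCenter.x
    let dy = location.y - affordanceCenter.y
    return (dx * dx + dy * dy).squareRoot() <= threshold
}
```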
25. The method of any of claims 1-24, further comprising determining a first distance between a representation of a limb and the computer-generated representation of the touch pad based on the limb tracking data.
26. The method of claim 25, wherein the indicator is displayed at a second distance from the content manipulation area, and wherein the second distance is based on the first distance.
27. The method of any one of claims 25 or 26, wherein an appearance of the indicator is based on the first distance.
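For illustration only: a sketch of how the indicator's offset (claim 26) and appearance (claim 27) could be derived from the first distance of claim 25; the hover range and scale factors are assumptions.

```swift
/// Derive an indicator offset and opacity from the limb-to-pad distance.
func indicatorParameters(limbToPadDistance distance: Float,
                         maxHoverDistance: Float = 0.10) -> (offset: Float, opacity: Float) {
    // Normalize the distance to [0, 1]: 0 when touching, 1 at the hover limit.
    let normalized = min(max(distance, 0), maxHoverDistance) / maxHoverDistance
    let offset = normalized * 0.02        // e.g., up to 2 cm of visual offset
    let opacity = 1.0 - 0.5 * normalized  // fade slightly as the limb lifts away
    return (offset, opacity)
}
```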
28. The method of any of claims 1-27, wherein the indicator covers the corresponding location within the content manipulation area.
29. An electronic device, the electronic device comprising:
one or more processors;
a non-transitory memory;
a display;
a limb tracker; and
one or more programs, wherein the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for:
obtaining limb tracking data via the limb tracker;
displaying on the display a computer-generated representation of a touch pad spatially associated with a physical surface that is viewable within the display along with a content manipulation area separate from the computer-generated representation of the touch pad; and
when displaying the computer-generated representation of the touch pad on the display:
identifying a first location within the computer-generated representation of the touch pad based on the limb tracking data;
mapping the first location to a corresponding location within the content manipulation area; and
displaying an indicator on the display indicating the mapping.
30. The electronic device of claim 29, wherein the limb tracker includes a communication interface provided for communication with a finger wearable device, wherein the limb tracking data includes finger manipulation data from the finger wearable device via the communication interface, and wherein the first location is identified based at least on the finger manipulation data.
31. The electronic device of any of claims 29 or 30, wherein the limb tracker comprises a computer vision system that outputs limb identification data, wherein the limb identification data is included in the limb tracking data, and wherein the first location is identified based at least on the limb identification data.
32. The electronic device of any of claims 29-31, wherein displaying the indicator includes modifying content displayed within the content manipulation area based on the mapping.
33. The electronic device of claim 32, wherein the electronic device comprises a non-networked input system, the one or more programs comprising instructions for receiving a non-networked input vector via the non-networked input system, wherein modifying the content is further based on the non-networked input vector.
34. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions that, when executed by an electronic device with one or more processors, a display, and a limb tracker, cause the electronic device to:
obtain limb tracking data via the limb tracker;
display, on the display, a computer-generated representation of a touch pad spatially associated with a physical surface that is viewable within the display along with a content manipulation area separate from the computer-generated representation of the touch pad; and
when displaying the computer-generated representation of the touch pad:
identify a first location within the computer-generated representation of the touch pad based on the limb tracking data;
map the first location to a corresponding location within the content manipulation area; and
display an indicator on the display indicating the mapping.
CN202180074044.7A 2020-09-02 2021-07-16 Mapping a computer-generated touch pad to a content manipulation area Pending CN116888562A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/073,758 2020-09-02
US202063107305P 2020-10-29 2020-10-29
US63/107,305 2020-10-29
PCT/US2021/041928 WO2022051033A1 (en) 2020-09-02 2021-07-16 Mapping a computer-generated trackpad to a content manipulation region

Publications (1)

Publication Number Publication Date
CN116888562A true CN116888562A (en) 2023-10-13

Family

ID=88266700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180074044.7A Pending CN116888562A (en) 2020-09-02 2021-07-16 Mapping a computer-generated touch pad to a content manipulation area

Country Status (1)

Country Link
CN (1) CN116888562A (en)

Similar Documents

Publication Publication Date Title
US20230008537A1 (en) Methods for adjusting and/or controlling immersion associated with user interfaces
US11954245B2 (en) Displaying physical input devices as virtual objects
US11429246B2 (en) Device, method, and graphical user interface for manipulating 3D objects on a 2D screen
US20230333650A1 (en) Gesture Tutorial for a Finger-Wearable Device
US11393164B2 (en) Device, method, and graphical user interface for generating CGR objects
CN116888562A (en) Mapping a computer-generated touch pad to a content manipulation area
US20230376110A1 (en) Mapping a Computer-Generated Trackpad to a Content Manipulation Region
US11960657B2 (en) Targeted drop of a computer-generated object
US11966510B2 (en) Object engagement based on finger manipulation data and untethered inputs
CN112136096B (en) Displaying a physical input device as a virtual object
US11693491B1 (en) Tracking a paired peripheral input device based on a contact criterion
US20230297168A1 (en) Changing a Dimensional Representation of a Content Item
US20230333651A1 (en) Multi-Finger Gesture based on Finger Manipulation Data and Extremity Tracking Data
US20230325047A1 (en) Merging Computer-Generated Objects
EP4254143A1 (en) Eye tracking based selection of a user interface element based on targeting criteria
US20230370578A1 (en) Generating and Displaying Content based on Respective Positions of Individuals
US20230162450A1 (en) Connecting Spatially Distinct Settings
CN116802589A (en) Object participation based on finger manipulation data and non-tethered input
CN117980870A (en) Content manipulation via a computer-generated representation of a touch pad

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination