CN117561497A - Touch pad input for augmented reality display device - Google Patents

Touch pad input for augmented reality display device

Publication number: CN117561497A
Application number: CN202280035340.0A
Authority: CN (China)
Prior art keywords: touch input, receiving, touch, user interface, touch pad
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 卡伦·施托尔岑贝格; 大卫·梅森霍尔德; 马蒂厄·埃曼努尔·比尼奥; 萨那·帕克; 孙天异; 约瑟夫·蒂莫西·福捷; 卡韦赫·安瓦里普尔; 丹尼尔·莫雷诺; 凯尔·古德里奇
Current assignee: Snap Inc
Original assignee: Snap Inc
Application filed by: Snap Inc
Priority claimed from: US 17/479,153 (US11880542B2); PCT/US2022/072367 (WO2022246399A1)
Publication of: CN117561497A


Abstract

Methods of receiving and processing content transmission input received by a head mounted device system including one or more display devices, one or more cameras, and a vertically arranged touch pad are disclosed. The method includes: displaying a content item on the one or more display devices; receiving a touch input corresponding to a send instruction on the touch pad; displaying a carousel of potential recipients; receiving a horizontal touch input on the touch pad; scrolling the carousel left or right on the one or more display devices in response to the horizontal touch input; receiving a tap touch input on the touch pad to select a particular recipient; receiving a further touch input; and transmitting the content item to the selected recipient in response to the further touch input.

Description

Touch pad input for augmented reality display device
Data of related applications
The present application claims the benefit of U.S. patent application Ser. No. 17/479,153, filed on September 20, 2021, and U.S. provisional patent application Ser. No. 63/190,662, filed on May 19, 2021, the contents of which are incorporated herein by reference as if explicitly set forth.
Technical Field
The present disclosure relates generally to head mounted devices having a display for augmented or virtual reality, and more particularly to head mounted devices that include a touch pad for navigating content or a user interface provided by the head mounted device or an associated device.
Background
The head-mounted device may be implemented with a transparent or translucent display through which a user of the head-mounted device can view the surrounding environment. Such devices enable the user to see through the transparent or translucent display to view the surrounding environment, and also to see objects (e.g., virtual objects such as 3D renderings, images, videos, text, etc.) that are generated for display and that appear as part of, and/or superimposed on, the surrounding environment. This is commonly referred to as "augmented reality".
The head-mounted device may also completely block the user's field of view and display the virtual environment through which the user may move or be moved. This is commonly referred to as "virtual reality". As used herein, the term "augmented reality" or "AR" refers to both augmented reality and virtual reality as conventionally understood unless the context indicates otherwise.
A user of the headset may access a messaging application or a social networking application to view content or share content with other users of the application. In some cases, real-time content or stored content may be viewed and enhanced or modified by a user. That is, images, video, or other media for enhancement may be captured from a real-time camera device, or may be retrieved from a local or remote data storage device.
As referred to herein, the phrase "augmented reality experience" includes or refers to various image processing operations corresponding to image modification, filtering, media overlays, transformations, and the like, as further described herein. In some examples, these image processing operations provide an interactive experience of a real-world environment in which objects, surfaces, backgrounds, lighting, and so forth in the real world are enhanced by computer-generated perceptual information. In this context, an "augmented reality effect" comprises the collection of data, parameters, and other assets needed to apply a selected augmented reality experience to an image or a video feed. In some examples, augmented reality effects are provided by Snap Inc. under the registered trademark LENSES.
In some examples, the augmented reality effect includes augmented reality (or "AR") content configured to modify or transform image data presented within a GUI of the head-mounted device in some way. For example, complex additions or transformations of the content image may be performed using AR effect data, such as adding rabbit ears to a person's head, adding floating hearts with background coloring, changing the proportions of a person's features, adding enhancements to landmarks in a scene viewed on the head-mounted device, or many other such transformations. This includes both real-time modifications, which modify an image as it is captured using a camera associated with the head-mounted device and which is then displayed by the head-mounted device with the AR effect modifications, and modifications to stored content (e.g., video clips in a gallery) that may be modified using AR effects. Similarly, real-time video capture may be used with an AR effect to show the user of the head-mounted device how video images currently being captured by the device's sensors would be modified by the AR effect. Such data may simply be displayed on the screen and not stored in memory, content captured by the device sensors may be recorded and stored in memory with or without the AR effect modifications (or both), or content captured by the device sensors may be sent with the AR effect modifications over a network to a server or another device.
Thus, AR effects and the associated systems and modules for modifying content using AR effects may involve the detection of objects (e.g., faces, hands, bodies, cats, dogs, surfaces, objects, etc.), the tracking of such objects as they leave, enter, and move around the field of view in video frames, and the modification or transformation of such objects as they are tracked. In various examples, different methods for achieving such transformations may be used. For example, some examples may involve generating a 3D mesh model of one or more objects, and implementing the transformation within the video using transformations and animated textures of the model. In other examples, tracking of points on an object may be used to place an image or texture (which may be two-dimensional or three-dimensional) at the tracked position. In further examples, neural network analysis of video frames may be used to place images, models, or textures in content (e.g., images or video frames). AR effect data thus includes both the images, models, and textures used to create transformations in content, and the additional modeling and analysis information needed to achieve such transformations with object detection, tracking, and placement.
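As a purely illustrative sketch of the tracked-points approach mentioned above, the following snippet derives the position, scale, and rotation of a two-dimensional overlay (e.g., rabbit ears) from two tracked keypoints in a frame. The keypoint names, the 100-pixel normalization, and the anchoring rule are assumptions made for this example and are not taken from the patent.

```python
import math
from dataclasses import dataclass

@dataclass
class Keypoint:
    x: float
    y: float

def overlay_placement(left_eye: Keypoint, right_eye: Keypoint):
    """Return (center_x, center_y, scale, angle_deg) for a 2D overlay texture."""
    dx = right_eye.x - left_eye.x
    dy = right_eye.y - left_eye.y
    eye_dist = math.hypot(dx, dy)
    angle = math.degrees(math.atan2(dy, dx))      # in-plane roll of the tracked face
    scale = eye_dist / 100.0                      # normalize to a nominal 100 px eye distance (assumed)
    center_x = (left_eye.x + right_eye.x) / 2.0
    center_y = (left_eye.y + right_eye.y) / 2.0 - eye_dist   # anchor the overlay above the eyes (assumed rule)
    return center_x, center_y, scale, angle

# Keypoints as they might be reported by a hypothetical per-frame face tracker.
print(overlay_placement(Keypoint(320, 240), Keypoint(420, 250)))
```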
Drawings
To facilitate identification of the discussion of any particular element or act, the most significant digit or digits of a reference numeral refer to the figure number in which that element is first introduced.
Fig. 1 is a perspective view of a head mounted device according to some examples.
Fig. 2 illustrates additional views of the headset of fig. 1 according to some examples.
Fig. 3 is a block diagram illustrating a networking system 300 including details of the headset of fig. 1, according to some examples.
FIG. 4 is a diagrammatic representation of a networking environment in which the present disclosure may be deployed, according to some examples.
Fig. 5 is a perspective view of a head mounted device according to another example.
Fig. 6A-6D illustrate a series of user interface screens displayed by the headset of fig. 1 or 5 according to some examples.
Fig. 7A and 7B illustrate another series of user interface screens displayed by the headset of fig. 1 or 5 according to some examples.
Fig. 8A-8C illustrate another series of user interface screens displayed by the headset of fig. 1 or 5 according to some examples.
Fig. 9A-9F illustrate another series of user interface screens displayed by the headset of fig. 1 or 5 according to some examples.
Fig. 10 illustrates a user interface flow diagram that may be implemented by the headset of fig. 1 or 5, according to some examples.
Fig. 11 is a flowchart of operations performed by the head-mounted device of fig. 1 or 5 in response to receiving user input on a touch pad, according to some examples.
Fig. 12 is a flowchart of operations performed by the head-mounted device of fig. 1 or 5 in response to receiving user input on a touch pad, according to some examples.
Fig. 13 is a block diagram illustrating a software architecture in which the present disclosure may be implemented, according to some examples.
FIG. 14 is a diagrammatic representation of a machine, in the form of a computer system, within which a set of instructions may be executed to cause the machine to perform any one or more of the methodologies discussed herein, according to some examples.
Detailed Description
Known head-mounted devices, such as AR glasses, include a transparent or translucent display through which a user can view the surrounding environment. Additional information or objects (e.g., virtual objects such as 3D renderings, images, video, text, etc.) are shown on the display and appear as part of, and/or superimposed on, the surrounding environment to provide the user with an augmented reality experience. The display may, for example, include a waveguide that receives a light beam from a projector, but any suitable display for presenting augmented or virtual content to the wearer may be used.
Navigation of the information or user interfaces of the head-mounted device may in some cases be provided by voice commands or by inputs to an associated device, such as a smartphone. In the present case, a touch pad is provided on the head-mounted device that may be used to provide x-y touch inputs and tap inputs to the head-mounted device. Because the touch pad may have a limited size, and because the display may also have limited display capabilities due to size, power, or other considerations (e.g., to provide a display through which the user can still view the real world), it may be necessary to simplify interactions between the head-mounted device and the user.
The user interface flows described below use a particular pattern based on the capabilities of the head-mounted device hardware and software, which typically have both different input options and different output capabilities than a mobile phone and its touch screen. For example, a display used in AR glasses may have a narrower field of view than a mobile phone screen, may have a different aspect ratio, and may be capable of presenting less detail. Additionally, the touch interfaces described below are not collocated with, or parallel to, the display. These technical challenges can be addressed by providing simplified user interface screens in which each screen displays fewer elements and has specific responses to received inputs, thereby improving the functionality of the head-mounted device and associated applications.
In one example, the head-mounted devices described herein use absolute, indirect positioning for user interaction. In this example there is no free cursor, nor is there the ability to select any displayed element as on a mobile phone touch screen. Instead, the head-mounted device indirectly translates the motion of the user's finger on a touch pad on the side of the device into display coordinates in the case of absolute motion, provides a single fixed selector that responds to a particular set of touch pad gestures to make item selections, and displays navigation prompts that allow logical transitions between screens to be selected.
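A minimal sketch of such an absolute, indirect mapping is shown below, assuming illustrative touch pad and display dimensions: a finger position on the side-mounted touch pad is scaled directly into display coordinates rather than being accumulated as relative cursor motion.

```python
# Assumed dimensions for illustration only; the patent does not specify them.
PAD_W, PAD_H = 60.0, 30.0        # touch pad surface in mm
DISP_W, DISP_H = 640, 400        # near-eye display resolution in px

def pad_to_display(pad_x: float, pad_y: float) -> tuple[int, int]:
    """Map an absolute touch pad coordinate to an absolute display coordinate."""
    pad_x = min(max(pad_x, 0.0), PAD_W)   # clamp to the pad surface
    pad_y = min(max(pad_y, 0.0), PAD_H)
    return (round(pad_x / PAD_W * (DISP_W - 1)),
            round(pad_y / PAD_H * (DISP_H - 1)))

print(pad_to_display(30.0, 15.0))   # center of the pad maps to the center of the display
```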
In another example, a method for receiving and processing content transmission input, performed by one or more processors in a head mounted device system including one or more display devices, one or more cameras, and a substantially vertically arranged touch pad, is disclosed. An example method includes: displaying a content item on the one or more display devices; receiving a content selection touch input on the touch pad; displaying a carousel of potential recipients in response to the content selection touch input; receiving a first horizontal touch input on the touch pad; and scrolling the carousel of potential recipients left or right on the one or more display devices in response to the first horizontal touch input. The method further includes: receiving a recipient selection touch input on the touch pad to select a particular recipient; receiving a content delivery touch input on the touch pad; and transmitting the content item to the particular recipient in response to the content delivery touch input.
Receiving the content selection touch input on the touch pad may include: receiving a tap touch input on the touch pad to select the content item; displaying a plurality of user interface options in response to receiving the tap touch input; receiving a second horizontal touch input on the touch pad; moving a selection indicator relative to the plurality of user interface options based on the second horizontal touch input; and receiving a user interface selection touch input on the touch pad to select a particular user interface option of the plurality of user interface options. The plurality of user interface options may include a delete option, a content viewer option, and a send option.
After receiving the recipient selection touch input, the method may further include: receiving a second horizontal touch input on the touch pad; scrolling a carousel of potential recipients to the left or right on one or more display devices in response to a second horizontal touch input; and receiving a further recipient selection touch input on the touch pad, the touch input for selecting the further recipient. The method may further include, prior to receiving the content delivery touch input: a vertical touch input on the touch pad is received, the vertical touch input being used to confirm selection of the recipient.
The example method may further include: receiving a vertical touch input on the touch pad prior to receiving the content delivery touch input; and, in response to receiving the vertical touch input, dismissing the display of the carousel of potential recipients.
The method may further comprise: receiving a third horizontal touch input on the touch pad after receiving the recipient selection touch input; scrolling the carousel of potential recipients to the left or right on one or more display devices in response to a third horizontal touch input; and receiving a further recipient selection touch input on the touch pad, the touch input for selecting the further recipient.
The method may further comprise: a vertical touch input on the touch pad is received prior to receiving the content delivery touch input, the vertical touch input being used to confirm selection of the recipient and the further recipient.
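The content-sending flow summarized above can be pictured as a small gesture-driven state machine. The sketch below is only illustrative: the gesture names, the use of a downward swipe as the confirmation input, and the multi-recipient behavior are assumptions chosen to mirror the description rather than the claimed implementation.

```python
class SendFlow:
    """Hypothetical state machine for selecting recipients and sending a content item."""

    def __init__(self, recipients):
        self.recipients = recipients
        self.index = 0                 # recipient currently centered in the carousel
        self.selected = set()
        self.state = "VIEWING_CONTENT"

    def on_gesture(self, gesture: str):
        if self.state == "VIEWING_CONTENT" and gesture == "TAP":
            self.state = "CHOOSING_RECIPIENT"          # content selection: show the recipient carousel
        elif self.state == "CHOOSING_RECIPIENT":
            if gesture == "SWIPE_FORWARD":
                self.index = (self.index + 1) % len(self.recipients)   # scroll the carousel
            elif gesture == "SWIPE_BACKWARD":
                self.index = (self.index - 1) % len(self.recipients)
            elif gesture == "TAP":
                self.selected.add(self.recipients[self.index])         # recipient selection
            elif gesture == "SWIPE_DOWN" and self.selected:
                self.state = "SENT"                                    # content delivery input
                return f"send content item to {sorted(self.selected)}"
        return self.state

flow = SendFlow(["alex", "bea", "chris"])
for gesture in ["TAP", "SWIPE_FORWARD", "TAP", "SWIPE_FORWARD", "TAP", "SWIPE_DOWN"]:
    print(gesture, "->", flow.on_gesture(gesture))
```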
In another example, a headset system includes: one or more cameras, one or more display devices, a generally vertically arranged touch pad, and one or more processors. The head mounted device system further comprises: a memory storing instructions that, when executed by one or more processors, configure the system to perform operations corresponding to the above methods, including, but not limited to: displaying the content items on one or more display devices; receiving a first touch input on the touch pad; in response to the first touch input, displaying a carousel of potential recipients; receiving a first horizontal touch input on the touch pad; scrolling a carousel of potential recipients to the left or right on one or more display devices in response to a first horizontal touch input; receiving a first tap touch input on a touch pad, the touch input for selecting a particular recipient; receiving a second touch input; and transmitting the content item to the particular recipient in response to the second touch input.
In further examples, a non-transitory computer-readable storage medium includes instructions that, when executed by a head-mounted device system including one or more display devices, one or more cameras, and a substantially vertically arranged touch pad, cause the head-mounted device system to perform operations comprising: displaying the content items on one or more display devices; receiving a first touch input on the touch pad; in response to the first touch input, displaying a carousel of potential recipients; receiving a first horizontal touch input on the touch pad; scrolling a carousel of potential recipients to the left or right on one or more display devices in response to a first horizontal touch input; receiving a first tap touch input on a touch pad, the touch input for selecting a particular recipient; receiving a second touch input; and transmitting the content item to the particular recipient in response to the second touch input.
Content selection touch input operations defined by a memory or computer-readable storage medium of the head-mounted device may include: receiving a tap touch input on the touch pad, the touch input for selecting a content item; in response to receiving the tap touch input, displaying a plurality of user interface options; receiving a second horizontal touch input on the touch pad; moving a selection indicator relative to a plurality of user interface options based on the second horizontal touch input; and receiving a user interface selection touch input on the touch pad, the touch input for selecting a particular user interface option of the plurality of user interface options. The plurality of user interface options may include a delete option, a content viewer option, and a send option.
After receiving the recipient selection touch input, the operations may further include: receiving a second horizontal touch input on the touch pad; scrolling the carousel of potential recipients to the left or right on one or more display devices in response to a second horizontal touch input; and receiving a further recipient selection touch input on the touch pad, the touch input for selecting the further recipient. Prior to receiving the content delivery touch input, the operations may further include: a vertical touch input on the touch pad is received, the vertical touch input being used to confirm selection of the recipient.
The operations may further include: receiving a vertical touch input on the touch pad prior to receiving the content delivery touch input; and, in response to receiving the vertical touch input, dismissing the display of the carousel of potential recipients. The operations may further include: receiving a third horizontal touch input on the touch pad after receiving the recipient selection touch input; scrolling the carousel of potential recipients left or right on the one or more display devices in response to the third horizontal touch input; and receiving a further recipient selection touch input on the touch pad to select the further recipient. The operations may further include receiving a vertical touch input on the touch pad prior to receiving the content delivery touch input, the vertical touch input confirming the selection of the recipient and the further recipient. Other technical features may be readily apparent to one skilled in the art from the following figures, descriptions, and claims.
Fig. 1 is a perspective view of a head-mounted device (e.g., glasses 100) according to some examples. The glasses 100 may include a frame 102 made of any suitable material, such as plastic or metal, including any suitable shape-memory alloy. In one or more examples, the frame 102 includes a first or left optical element holder 104 (e.g., a display or lens holder) and a second or right optical element holder 106 connected by a bridge 112. A first or left optical element 108 and a second or right optical element 110 may be provided within the left optical element holder 104 and the right optical element holder 106, respectively. Each of the right optical element 110 and the left optical element 108 may be a lens, a display assembly, or a combination of the foregoing. Any suitable display assembly may be provided in the glasses 100.
The frame 102 additionally includes a left arm or temple piece 120 and a right arm or temple piece 122. In some examples, the entire frame 102 may be formed from a single piece of material to have a uniform or unitary construction.
Glasses 100 may include a computing device, such as computer 118, that may be of any suitable type for being carried by frame 102, and in one or more examples, may be of a suitable size and shape to be disposed at least partially in one of temple piece 120 and temple piece 122. The computer 118 may include one or more processors and memory, wireless communication circuitry, and a power supply. As discussed below, computer 118 includes low power circuitry, high speed circuitry, and a display processor. Various other examples may include these elements being configured differently or integrated together in different ways. Additional details of aspects of computer 118 may be implemented as shown by data processor 302 discussed below.
The computer 118 additionally includes a battery 116 or other suitable portable power supply. In one example, the battery 116 is disposed in the left temple piece 120 and electrically coupled to the computer 118 disposed in the right temple piece 122. The glasses 100 may include a connector or port (not shown), a wireless receiver, transmitter or transceiver (not shown), or a combination of such devices suitable for charging the battery 116.
The glasses 100 include one or more cameras 114. Although two cameras are depicted, other examples contemplate the use of a single camera or of additional (i.e., more than two) cameras. In one or more examples, the glasses 100 include any number of input sensors or other input/output devices in addition to the cameras 114. Such sensors or input/output devices may additionally include biometric sensors, location sensors, motion sensors, and the like.
The glasses 100 may further include a touch pad 124 mounted to, or integrated with, one or both of the left temple piece 120 and the right temple piece 122. The touch pad 124 is generally vertically arranged, and in one example is approximately parallel to the user's temple. As used herein, generally vertically arranged means that the touch pad is at least more vertical than horizontal, although preferably substantially vertical. Additional user input may be provided by one or more buttons 126, which in the illustrated embodiment are provided on the outer upper edges of the left optical element holder 104 and the right optical element holder 106. The one or more touch pads 124 and buttons 126 provide a means by which the glasses 100 can receive input from a user of the glasses 100.
Fig. 2 shows the glasses 100 from the perspective of a wearer. Several elements shown in fig. 1 have been omitted for clarity. As described in fig. 1, the glasses 100 shown in fig. 2 include the left optical element 108 and the right optical element 110 secured within the left optical element holder 104 and the right optical element holder 106, respectively.
The glasses 100 include a forward optical assembly 202 comprising a right projector 204 and a right near-eye display 206, and a forward optical assembly 210 comprising a left projector 212 and a left near-eye display 216.
In one embodiment, the near-eye displays are waveguides. The waveguides include reflective or diffractive structures (e.g., gratings and/or optical elements such as mirrors, lenses, or prisms). Light 208 emitted by the projector 204 encounters the diffractive structures of the waveguide of the near-eye display 206, which direct the light toward the user's right eye to provide an image, on or in the right optical element 110, that overlays the view of the real world seen by the user. Similarly, light 214 emitted by the projector 212 encounters the diffractive structures of the waveguide of the near-eye display 216, which direct the light toward the user's left eye to provide an image, on or in the left optical element 108, that overlays the view of the real world seen by the user.
However, it should be understood that other display technologies or configurations capable of displaying an image to the user in a forward field of view may be provided. For example, instead of a projector 204 and a waveguide, an LCD, LED, or other display panel or surface may be provided.
In use, a wearer of the glasses 100 will be presented with information, content, and various user interfaces on the near-eye displays. As described in more detail below, the user can then interact with the glasses 100 using the touch pad 124 and/or the buttons 126, in addition to providing voice inputs or touch inputs on an associated device such as the client device 328 shown in fig. 3.
Fig. 3 is a block diagram illustrating a networked system 300 including details of the glasses 100 according to some examples.
The networked system 300 includes the glasses 100, a client device 328, and a server system 332. The client device 328 may be a smartphone, tablet, phablet, laptop computer, access point, or any other such device capable of connecting with the glasses 100 using both a low-power wireless connection 336 and a high-speed wireless connection 334. The client device 328 is connected to the server system 332 via the network 330. The network 330 may include any combination of wired and wireless connections. The server system 332 may be one or more computing devices that are part of a service or network computing system. Any elements of the client device 328, the server system 332, and the network 330 may be implemented using the details of the software architecture 1304 or the machine 1400 described in fig. 13 and fig. 14.
The glasses 100 include a data processor 302, a display 310, one or more cameras 308, and additional input/output elements 316. The input/output elements 316 may include microphones, audio speakers, biometric sensors, additional sensors, or additional display elements integrated with the data processor 302. Examples of the input/output elements 316 are discussed further with respect to fig. 13 and fig. 14. For example, the input/output elements 316 may include any of the I/O components 1406, including the output components 1428, the motion components 1436, and so forth. An example of the display 310 is discussed with reference to fig. 2. In the particular examples described herein, the display 310 includes a display for each of the user's left and right eyes.
The data processor 302 includes an image processor 306 (e.g., a video processor), a GPU and display driver 338, a tracking module 340, an interface 312, low power circuitry 304, and high speed circuitry 320. The components of the data processor 302 are interconnected by a bus 342.
The interface 312 refers to any source of user commands provided to the data processor 302. In one or more examples, the interface 312 is a physical button that, when pressed, sends a user input signal from the interface 312 to the low power processor 314. The low power processor 314 may process the pressing and immediate release of such a button as a request to capture a single image, or vice versa. The low power processor 314 may treat pressing such a button for a first period of time as a request to capture video data while the button is pressed, and to stop video capture when the button is released, with the video captured while the button was pressed stored as a single video file. Alternatively, pressing the button for an extended period of time may capture a still image. In other examples, the interface 312 may be any mechanical switch or physical interface capable of accepting user inputs associated with a request for data from the cameras 308. In other examples, the interface 312 may have a software component, or may be associated with a command received wirelessly from another source, such as from the client device 328.
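The press-duration behavior described for the interface 312 can be sketched as follows; the half-second boundary between a short press and a hold is an assumed value used only for illustration.

```python
SHORT_PRESS_MAX_S = 0.5   # assumed boundary between a tap-like press and a hold

def classify_button_press(press_time_s: float, release_time_s: float) -> str:
    """Interpret a button press as an image capture or a video capture request."""
    held_for = release_time_s - press_time_s
    if held_for <= SHORT_PRESS_MAX_S:
        return "capture_single_image"
    # Video spans the whole hold and is stored as a single file on release.
    return f"capture_video({held_for:.1f}s)"

print(classify_button_press(10.0, 10.2))   # -> capture_single_image
print(classify_button_press(10.0, 14.5))   # -> capture_video(4.5s)
```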
The image processor 306 includes circuitry for receiving signals from the camera 308 and processing those signals from the camera 308 into a format suitable for storage in the memory 324 or suitable for transmission to the client device 328. In one or more examples, the image processor 306 (e.g., a video processor) includes a microprocessor Integrated Circuit (IC) tailored to process sensor data from the camera device 308, and a volatile memory that the microprocessor uses in operation.
The low power circuitry 304 includes a low power processor 314 and low power wireless circuitry 318. These elements of the low power circuitry 304 may be implemented as separate elements or may be implemented on a single IC as part of a single system-on-chip. The low power processor 314 includes logic for managing the other elements of the glasses 100. As described above, for example, the low power processor 314 may accept user input signals from the interface 312. The low power processor 314 may also be configured to receive input signals or instruction communications from the client device 328 via the low power wireless connection 336. The low power wireless circuitry 318 includes circuit elements for implementing a low power wireless communication system. Bluetooth™ Smart, also known as Bluetooth™ Low Energy, is one standard implementation of a low power wireless communication system that may be used to implement the low power wireless circuitry 318. In other examples, other low power communication systems may be used.
The high-speed circuitry 320 includes a high-speed processor 322, memory 324, and high-speed wireless circuitry 326. The high-speed processor 322 may be any processor capable of managing the high-speed communications and operation of any general computing system needed by the data processor 302. The high-speed processor 322 includes the processing resources needed to manage high-speed data transfers over the high-speed wireless connection 334 using the high-speed wireless circuitry 326. In some examples, the high-speed processor 322 executes an operating system such as a LINUX operating system or another such operating system, such as the operating system 1312 of fig. 13. In addition to any other responsibilities, the high-speed processor 322, executing the software architecture of the data processor 302, is used to manage data transfers with the high-speed wireless circuitry 326. In certain examples, the high-speed wireless circuitry 326 is configured to implement the Institute of Electrical and Electronics Engineers (IEEE) 802.11 communication standard, also referred to herein as Wi-Fi. In other examples, other high-speed communication standards may be implemented by the high-speed wireless circuitry 326.
The memory 324 includes any storage device capable of storing camera data generated by the cameras 308 and the image processor 306. Although the memory 324 is shown as integrated with the high-speed circuitry 320, in other examples the memory 324 may be a separate, stand-alone element of the data processor 302. In certain such examples, electrical wiring may provide a connection from the image processor 306 or the low power processor 314 to the memory 324 through a chip that includes the high-speed processor 322. In other examples, the high-speed processor 322 may manage addressing of the memory 324 such that the low power processor 314 will boot the high-speed processor 322 any time a read or write operation involving the memory 324 is needed.
The tracking module 340 estimates the pose of the glasses 100. For example, the tracking module 340 uses image data and corresponding inertial data from the cameras 308 and the positioning component 1440, together with GPS data, to track a location and determine the pose of the glasses 100 relative to a frame of reference (e.g., the real-world environment). The tracking module 340 continually gathers and uses updated sensor data describing movements of the glasses 100 to determine updated three-dimensional poses of the glasses 100 that indicate changes in relative position and orientation with respect to physical objects in the real-world environment. The tracking module 340 allows the glasses 100 to visually place virtual objects relative to physical objects within the user's field of view via the display 310.
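The following is a heavily simplified, hypothetical illustration of the kind of inertial-plus-visual fusion such a tracking module might perform: a one-axis complementary filter that propagates heading from gyroscope rates and nudges it toward a drift-free camera- or GPS-derived estimate. It is not the actual tracking algorithm of the glasses 100.

```python
def fuse_heading(prev_heading_deg: float,
                 gyro_rate_dps: float,
                 dt_s: float,
                 visual_heading_deg: float,
                 blend: float = 0.02) -> float:
    """One-axis complementary filter: inertial prediction plus a small visual correction."""
    predicted = prev_heading_deg + gyro_rate_dps * dt_s   # propagate from inertial data
    return (1.0 - blend) * predicted + blend * visual_heading_deg

heading = 0.0
for _ in range(5):
    heading = fuse_heading(heading, gyro_rate_dps=10.0, dt_s=0.01, visual_heading_deg=0.4)
    print(round(heading, 3))
```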
The GPU and display driver 338 may use the pose of the glasses 100 to generate frames of virtual or other content to be presented on the display 310 when the glasses 100 are operating in a conventional augmented reality mode. In this mode, the GPU and display driver 338 generate updated frames of virtual content based on the updated three-dimensional pose of the glasses 100, reflecting changes in the position and orientation of the user relative to the physical objects in the user's real environment.
One or more of the functions or operations described herein may also be performed in an application resident on the glasses 100 or on the client device 328 or on a remote server. For example, one or more of the functions or operations described herein may be performed by the application 1306, such as one of the messaging applications 1346.
Fig. 4 is a block diagram illustrating an example messaging system 400 for exchanging data (e.g., messages and associated content) over a network. The messaging system 400 includes multiple instances of the client device 328, each of which hosts several applications including a messaging client 402 and other applications 404. Each messaging client 402 is communicatively coupled to other instances of the messaging client 402 (e.g., hosted on respective other client devices 328), a messaging server system 406, and a third party server 408 via a network 330 (e.g., the internet). The messaging client 402 may also communicate with locally hosted applications 404 using an Application Program Interface (API).
The messaging client 402 is capable of communicating and exchanging data with other messaging clients 402 and with the messaging server system 406 via the network 330. The data exchanged between the messaging clients 402 and the messaging server system 406 includes functions (e.g., commands to activate the functions) as well as payload data (e.g., text, audio, video, or other multimedia data).
The messaging server system 406 provides server-side functionality to a particular messaging client 402 via the network 330. Although certain functions of the messaging system 400 are described herein as being performed either by the messaging client 402 or by the messaging server system 406, the location of certain functionality within the messaging client 402 or within the messaging server system 406 may be a design choice. For example, it may be technically preferable to initially deploy certain technology and functionality within the messaging server system 406 but later migrate this technology and functionality to the messaging client 402 where the client device 328 has sufficient processing capacity.
The messaging server system 406 supports various services and operations provided to the messaging client 402. Such operations include sending data to the messaging client 402, receiving data from the messaging client 402, and processing data generated by the messaging client 402. As examples, the data may include message content, client device information, geolocation information, media enhancements and overlays, message content persistence conditions, social network information, and live event information. The exchange of data within the messaging system 400 is activated and controlled by functionality available via a User Interface (UI) of the messaging client 402.
Turning now specifically to the messaging server system 406, an Application Program Interface (API) server 410 is coupled to the application server 414 and provides a programming interface to the application server 414. The application server 414 is communicatively coupled to a database server 416, which facilitates access to a database 420 that stores data associated with messages processed by the application server 414. Similarly, a web server 424 is coupled to the application server 414 and provides a web-based interface to the application server 414. To this end, the web server 424 processes incoming network requests over the hypertext transfer protocol (HTTP) and several other related protocols.
The Application Program Interface (API) server 410 receives and transmits message data (e.g., commands and message payloads) between the client device 328 and the application server 414. Specifically, the Application Program Interface (API) server 410 provides a set of interfaces (e.g., routines and protocols) that can be called or queried by the messaging client 402 in order to invoke functionality of the application server 414. The Application Program Interface (API) server 410 exposes various functions supported by the application server 414, including: account registration; login functionality; the sending of messages, via the application server 414, from a particular messaging client 402 to another messaging client 402; the sending of media files (e.g., images or video) from a messaging client 402 to the messaging server 412, for possible access by another messaging client 402; the setting of a collection of media data (e.g., a story); the retrieval of a list of friends of a user of a client device 328; the retrieval of such collections; the retrieval of messages and content; the addition and deletion of entities (e.g., friends) to an entity graph (e.g., a social graph); the locating of friends within a social graph; and the opening of application events (e.g., relating to the messaging client 402).
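Purely as an illustration, the exposed functions listed above might be organized along the lines of the following route table; the paths and their grouping are invented for this sketch and are not specified by the patent.

```python
# Hypothetical endpoint names paired with the functions described above.
API_ROUTES = {
    "POST /accounts/register":    "register an account",
    "POST /accounts/login":       "login functionality",
    "POST /messages/send":        "send a message to another messaging client via the application server",
    "POST /media/upload":         "send a media file (image or video) to the messaging server",
    "PUT  /collections":          "set a collection of media data (a story)",
    "GET  /friends":              "retrieve the friends list of the client device's user",
    "GET  /collections":          "retrieve such collections",
    "GET  /messages":             "retrieve messages and content",
    "POST /graph/entities":       "add or delete entities (e.g., friends) in the social graph",
    "GET  /graph/friends/nearby": "locate friends in the social graph",
    "POST /app-events":           "open an application event",
}

for route, purpose in API_ROUTES.items():
    print(f"{route:29} {purpose}")
```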
The application server 414 hosts several server applications and subsystems, including, for example, a messaging server 412, an image processing server 418, and a social network server 422. The messaging server 412 implements several message-processing technologies and functions, particularly related to the aggregation and other processing of content (e.g., textual and multimedia content) included in messages received from multiple instances of the messaging client 402. As will be described in further detail, text and media content from multiple sources may be aggregated into collections of content (e.g., called stories or galleries). These collections are then made available to the messaging client 402. Other processor- and memory-intensive processing of data may also be performed server-side by the messaging server 412, in view of the hardware requirements for such processing.
The application server 414 also includes an image processing server 418, which image processing server 418 is dedicated to performing various image processing operations, typically with respect to images or video within the payload of messages sent from the messaging server 412 or received at the messaging server 412.
The social network server 422 supports various social networking functions and services and makes these functions and services available to the messaging server 412. To this end, the social network server 422 maintains and accesses an entity graph within the database 420. Examples of functions and services supported by the social network server 422 include the identification of other users of the messaging system 400 with whom a particular user has a relationship or whom the particular user is "following", as well as the identification of other entities and interests of a particular user.
Returning to the messaging client 402, features and functions of external resources (e.g., applications 404 or applets) are made available to a user via the interface of the messaging client 402. In this context, "external" refers to the fact that the application 404 or applet is external to the messaging client 402. The external resource is typically provided by a third party, but may also be provided by the creator or provider of the messaging client 402. The messaging client 402 receives a user selection of an option to launch or access features of such an external resource. The external resource may be an application 404 installed on the client device 328 (e.g., a "local app"), or a small-scale version of an application (e.g., an "applet") hosted on the client device 328 or remote from the client device 328 (e.g., on the third-party server 408). The small-scale version of the application includes a subset of the features and functions of the application (e.g., the full-scale, local version of the application) and is implemented using a markup-language document. In one example, the small-scale version of the application (e.g., an "applet") is a web-based, markup-language version of the application and is embedded in the messaging client 402. In addition to using markup-language documents (e.g., an .*ml file), an applet may incorporate a scripting language (e.g., a .js file or a .json file) and a style sheet (e.g., a .*ss file).
In response to receiving a user selection of an option to initiate or access a feature of an external resource, the messaging client 402 determines whether the selected external resource is a web-based external resource or a locally installed application 404. In some cases, the application 404 locally installed on the client device 328 may be launched independently of the messaging client 402 and separately from the messaging client 402, for example, by selecting an icon corresponding to the application 404 on a home screen of the client device 328. As used herein, an icon may include one or both of a text element and a graphical element. A small-scale version of such an application may be launched or accessed via messaging client 402, and in some examples, no portion or limited portion of the small-scale application may be accessed outside of messaging client 402. The small-scale application may be launched by the messaging client 402 receiving markup language documents associated with the small-scale application, for example, from the third-party server 408 and processing such documents.
In response to determining that the external resource is a locally installed application 404, the messaging client 402 instructs the client device 328 to launch the external resource by executing locally stored code corresponding to the external resource. In response to determining that the external resource is a web-based resource, the messaging client 402 communicates with, for example, a third party server 408 to obtain a markup language document corresponding to the selected external resource. The messaging client 402 then processes the obtained markup language document to render the web-based external resource within the user interface of the messaging client 402.
The messaging client 402 may notify the user of the client device 328 or other users (e.g., "friends") related to such user of the activity occurring in one or more external resources. For example, the messaging client 402 may provide notifications to participants in a conversation (e.g., chat session) in the messaging client 402 regarding current or recent use of external resources by one or more members of the user group. One or more users may be invited to join an active external resource or initiate a recently used but currently inactive external resource (in the friend group). The external resources may provide each participant in the conversation, using the respective messaging client 402, with the ability to share items, conditions, states, or locations in the external resources with one or more members in the group of users entering into the chat session. The shared items may be interactive chat cards with which members of the chat may interact, for example, to initiate a corresponding external resource, to view specific information within the external resource, or to bring members of the chat to a specific location or state within the external resource. Within a given external resource, a response message may be sent to the user on the messaging client 402. The external resource may selectively include different media items in the response based on the current context of the external resource.
Fig. 5 is a perspective view of a head-mounted device (e.g., glasses 500) according to another example. It can be seen that in this example, touch pad 502 is integrated into frame 504, with the front portion of frame 504 surrounding the eyes of the user. The presence of the touch pad 502 in the glasses 500 is thus less obtrusive and the overall appearance of the headset is more aesthetically pleasing.
Navigating the user interface on the glasses 500 using the touch pad 502 is performed by presenting some user interface elements (e.g., icons or a gallery of content) in a carousel arrangement, illustrated in fig. 5 using a cylindrical metaphor. As seen by the wearer of the glasses 500, user interface elements (e.g., text 508) are presented in a curved or apparently curved arrangement, for example on the surface of a cylinder 506. Forward and backward sliding inputs 510 received by the glasses 500 from the user along the touch pad 502 translate into rotation 512 of the user interface elements about the cylinder 506. More specifically, a forward swipe on the right-hand touch pad 502 will cause forward movement of the right-hand side of the cylinder 506, such that the user interface elements are perceived to rotate in a counterclockwise direction (as viewed from above) around the surface of the cylinder 506, and vice versa.
An up or down swipe gesture on the touch pad 502 typically translates into a selection of, or a transition to, a different user interface screen, although such an action may also translate into scrolling up or down, respectively, in the current user interface screen, or into zooming in or out of a zoomed image presented in the near-eye display 216 of the glasses 100.
As forward or backward swipe gestures continue to be received by the glasses 500, user interface elements or content will typically disappear off the right or left side, respectively, while additional user interface elements or content appear from the left or right side, respectively. Visual cues may be provided to illustrate the cylindrical metaphor as user interface elements or content move toward the edges of the display. For example, movement of a user interface element away from the center toward an edge may cause a reduction in the size of that element, and vice versa. Further, content or user interface elements located closer to the center may gradually overlap adjacent elements toward the outside, becoming more stacked as they approach the edges of the cylinder. Depending on the implementation, the content or user interface elements may or may not rotate about their own central axes as they move toward the edges. That is, in some cases, elements remain facing the user as they decrease in size or become stacked.
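The cylindrical metaphor can be sketched with simple trigonometry: each element sits at an angle on an imaginary cylinder, a swipe changes the rotation offset, and elements shrink as they rotate toward the edges. The spacing, radius, and display width used below are assumed values for illustration.

```python
import math

ITEM_SPACING_DEG = 20.0     # angular spacing between carousel items (assumed)
CYLINDER_RADIUS = 300.0     # px (assumed)
DISPLAY_CENTER_X = 320      # px (assumed)

def carousel_layout(num_items: int, scroll_deg: float):
    """Return (x, scale) per item; item 0 sits in the center position at scroll 0."""
    placements = []
    for i in range(num_items):
        angle = math.radians(i * ITEM_SPACING_DEG - scroll_deg)
        x = DISPLAY_CENTER_X + CYLINDER_RADIUS * math.sin(angle)
        # Shrinking items as they rotate toward the edge gives the visual cue described above.
        scale = max(math.cos(angle), 0.0)
        placements.append((round(x), round(scale, 2)))
    return placements

print(carousel_layout(5, scroll_deg=0.0))
print(carousel_layout(5, scroll_deg=20.0))   # a forward swipe rotates the next item into the center
```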
Fig. 6 illustrates a series of user interface screens displayed by the head-mounted device of fig. 1 or 5 according to some examples. More particularly, fig. 6A is an example of a user interface screen 602 shown on one or both of the displays 310/216 of the glasses shown in fig. 1 or fig. 5 in some examples. As can be seen from the figure, the user interface screen 602 includes a carousel 610 of icons, which in the illustrated embodiment includes AR effect icons 612, 614, 616, 618, and 620 occupying left-to-right positions in the user interface screen 602. Also shown is an empty icon 622 occupying a center position 624 in the carousel 610. The circle marking the center position 624 is larger, indicating in this example that the icon in this position is available for selection, although it may also indicate that the icon in this position has been selected or is active. The empty icon 622, which in this example is a solid color (e.g., white or black), indicates, when in the center position 624 of the carousel 610, that no icon or action is available or has been selected.
Further, it can be seen that the icons in carousel 610 overlap by progressively greater amounts in a direction from the center location 624 to the edge of user interface screen 602 to provide the cylindrical metaphor described above with reference to fig. 5. For example, AR effect icon 612 slightly overlaps AR effect icon 614, AR effect icon 614 overlaps AR effect icon 616 more, and AR effect icon 616 in turn overlaps AR effect icon 618 even more. In this example of a cylindrical metaphor, the icons are shown right side up throughout carousel 610 and are the same size (except for the icons in center location 624). In other implementations, the icons may also or alternatively be reduced in size or rotated away, as discussed above.
Sliding input forward or backward on touch pad 124/502 will cause the AR effect icons in carousel 610 to move to the left or right, as discussed in more detail below with reference to FIGS. 8A and 8B. The particular direction of carousel movement will depend on whether the left or right touch pad of the glasses 100/500 is used as discussed above. When the AR effect icon occupies the center position, the AR effect icon may be selected by tapping on touch pad 124/502.
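The tap, long-press, and directional swipe inputs used throughout these screens could be classified from raw touch pad strokes along the following lines; the thresholds are assumptions for illustration and are not values given in the patent.

```python
TAP_MAX_S = 0.3          # assumed
LONG_PRESS_MIN_S = 0.8   # assumed
SWIPE_MIN_MM = 10.0      # assumed minimum travel for a swipe

def classify_stroke(dx_mm: float, dy_mm: float, duration_s: float) -> str:
    """Classify a completed touch pad stroke into the gestures used by these screens."""
    travel = max(abs(dx_mm), abs(dy_mm))
    if travel < SWIPE_MIN_MM:
        if duration_s <= TAP_MAX_S:
            return "TAP"            # e.g., select the icon in the center position
        if duration_s >= LONG_PRESS_MIN_S:
            return "LONG_PRESS"     # e.g., open the overflow options carousel
        return "NONE"
    if abs(dx_mm) >= abs(dy_mm):
        return "SWIPE_FORWARD" if dx_mm > 0 else "SWIPE_BACKWARD"   # scroll the carousel
    return "SWIPE_DOWN" if dy_mm > 0 else "SWIPE_UP"                # transition between screens

print(classify_stroke(0.5, 0.2, 0.15))    # TAP
print(classify_stroke(25.0, 3.0, 0.4))    # SWIPE_FORWARD
print(classify_stroke(1.0, -18.0, 0.3))   # SWIPE_UP
```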
The user interface screen 602 also includes navigation prompts to provide the user with an indication of the type and direction of input needed to transition between different screens or modes. The navigation prompts provide an informational display that includes both orientation and context prompts for system navigation. The navigation prompts show both the input action to be taken and the destination or result of that input action.
For example, navigation prompt 626 includes a content gallery icon 630 and a pointer 632. The content gallery icon 630 provides an informational illustration of the content gallery that is accessible, and the downward pointer 632 provides an informational illustration that sliding down on the touch pad 124/502 will initiate a transition to the gallery screen as indicated by the content gallery icon 630. The navigation prompt 626 is located in a position in the user interface screen 602 at the beginning of the direction of the desired user input. That is, the navigation prompt 626 is located at the top of the user interface screen 602 while the corresponding input action on the touchpad is downward toward the bottom of the touchpad.
Similarly, navigation prompt 628 includes pointer 634 and set icon 636. The setup icon 636 provides an informative indication of the setup screen that can be accessed, and an upward pointer 634 provides an informative indication that an upward sliding on the touch pad 124/502 will initiate a transition to the setup screen as indicated by the setup icon 636. The navigation prompt 628 is located in a position in the user interface screen 602 at the beginning of the direction of the desired user input. That is, the navigation prompt 628 is located at the bottom of the user interface screen 602, while the corresponding input action on the touchpad is directed upward toward the top of the touchpad 124/502.
Fig. 6B is an example of a transition screen between the user interface screen 602 and the content gallery user interface screen 606 of fig. 6C. As prompted by the content gallery icon 630 and pointer 632 discussed above, when the user interface screen 602 is displayed, the user interface screen 604 begins to be displayed in response to receiving a downward swipe on the touch pad 124/502.
As can be seen, in the user interface screen 604 the carousel 610, the navigation prompt 626, and the navigation prompt 628 all move downward, consistent with the downward swipe received on the touch pad 124/502. Additionally, the carousel 610 curves downward at the center position 624 as an additional visual confirmation of the downward sliding input. The content gallery icon 630 also transitions to a square as the sliding input is received. Since the received user input is in the direction opposite to the sliding motion to which the navigation prompt 628 corresponds, the settings navigation prompt 628 may also be reduced in size or grayed out to de-emphasize that option.
When a continued downward swipe is received on touch pad 124/502 beyond a particular predetermined point, user interface screen 604 will transition to user interface screen 606 discussed below. If the downward sliding terminates before reaching the predetermined point, the user interface screen 604 will quickly return to the user interface screen 602.
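The "predetermined point" behavior (commit the transition if the swipe travels far enough, otherwise snap back) can be sketched as follows; the 60% threshold and the pad height are assumed values.

```python
COMMIT_FRACTION = 0.6        # assumed fraction of the pad the swipe must cover
PAD_HEIGHT_MM = 30.0         # assumed touch pad height

def resolve_transition(swipe_travel_mm: float) -> str:
    """Decide whether a released downward swipe commits the screen transition."""
    progress = swipe_travel_mm / PAD_HEIGHT_MM
    if progress >= COMMIT_FRACTION:
        return "transition_to_next_screen"      # e.g., screen 604 -> screen 606
    return "snap_back_to_current_screen"        # e.g., screen 604 -> screen 602

print(resolve_transition(22.0))   # swiped far enough: commit
print(resolve_transition(8.0))    # released early: snap back
```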
FIG. 6C is an example of a content gallery user interface screen 606 as prompted by content gallery icon 630 and pointer 632 and shown in response to receiving a slide down on touch pad 124/502 when user interface screen 602 is displayed as discussed above.
The user interface screen 606 includes a carousel 646 of content items, which may be, for example, images or videos saved to a content gallery associated with the user and stored on the client device 328, glasses 100/500, or remotely. In one example, the content item includes an image or video that has been captured by the user using the client device 328 or glasses 100/500 while the selected AR effect has been applied.
In the illustrated example, it can be seen that carousel 646 includes content item 650 in a central location in user interface screen 606, as well as partially occluded content item 648 and content item 652. As before, a forward or backward sliding input on touch pad 124/502 will cause the content items in carousel 646 to move left or right, wherein the content items will sequentially replace the current content item in the center position as content item 650 moves left or right. The particular direction in which the content item 650 moves will depend on whether the left or right touch pad of the glasses 100/500 is used as discussed above. The content item 650 occupying the center position may be selected by tapping on the touch pad 124/502. The results of such a selection will be discussed below with reference to fig. 9A.
The user interface screen 606 also includes a navigation prompt 638, which navigation prompt 638 includes an AR carousel icon 640 and a pointer 642. As before, the navigation prompt 638 provides an informational indication that the AR effect carousel 610 shown in the user interface screen 602 may be accessed/returned, and the down pointer 642 provides an informational indication that sliding down on the touch pad 124/502 will initiate a transition to the user interface screen 602 indicated by the AR carousel icon 640. Further, as before, the navigation prompt 638 is located in a position in the user interface screen 606 at the start of the direction of the requested user input. That is, the navigation prompt 638 is located at the top of the user interface screen 606 while the corresponding input action on the touchpad is downward toward the bottom of the touchpad.
Also included is a menu indicator or overflow indicator icon 644, which indicates that additional user interface options are available in the user interface screen 606. The lack of a pointer indicates that these options are accessed not by a vertical slide on the touch pad 124/502 but by a long press on the touch pad 124/502. In one example, such a long press opens a carousel of selectable UI icons that can be navigated with a scroll input, selected with a tap input, and dismissed with a downward swipe. In one example, the carousel of selectable UI icons includes a zoom icon 912, a send icon 914, and a delete icon 916, discussed below with reference to fig. 9B.
Fig. 6D is an example of a transition user interface screen 608 displayed between user interface screen 606 and user interface screen 602 of fig. 6A. As prompted by AR carousel icon 640 and pointer 642 discussed above, upon display of user interface screen 606, user interface screen 608 begins to be shown in response to receiving a downward swipe on touch pad 124/502.
It can be seen that in user interface screen 608, carousel 646, navigation prompts 638 and icons 644 all move downward, consistent with the downward swipe received on touch pad 124/502. Additionally, the content item 650 in the center position 624 moves further down and reduces in size as an additional visual confirmation of the down-slide input. When a slide input is received, the content gallery icon 630 also increases in size.
When the continued downward sliding received on touch pad 124/502 exceeds a particular predetermined point, user interface screen 608 will transition to user interface screen 602 discussed above. If the downward sliding terminates before reaching the predetermined point, the user interface screen 608 will quickly return to the user interface screen 606.
Fig. 7A and 7B illustrate another series of user interface screens displayed by the head mounted device of fig. 1 or 5 according to some examples. More specifically, fig. 7A and 7B illustrate a series of user interface screens shown when an upward swipe is received on the touch pad while the user interface screen 602 is displayed. Upon receiving such an upward swipe, prompted by navigation prompt 628 that includes settings icon 636 and pointer 634, settings user interface screen 702 is displayed. It can be seen that the user interface screen 702 includes various information about the glasses 100/500 or the client device 328, such as battery level, speaker volume, Wi-Fi network identification and signal strength, display brightness, user name and avatar, time, date, and so forth.
While user interface screen 702 is displayed, receipt of a downward swipe on touch pad 124/502, as indicated by pointer 704, returns the display to user interface screen 602.
Fig. 8A-8C illustrate another series of user interface screens displayed by the headset of fig. 1 or 5 according to some examples. More specifically, fig. 8A-8C illustrate example user interfaces depicting selection and application of AR effects using carousel 610 in fig. 6A.
Fig. 8A shows a user interface screen 602 with a hollow icon 622 occupying a center position 624 in the carousel 610. Upon receipt of an appropriate forward or backward swipe action on touch pad 124/502, the icons in carousel 610 rotate or scroll to the position shown in user interface screen 802, with an AR effect icon (e.g., AR effect icon 612) occupying center position 624. It can be seen that the AR effect icon 612 is made more prominent not only by being displayed in the central location but also by being increased in size compared to its previous position shown in the user interface screen 602. When the AR effect icon 612 occupies the center position 624, related information (bibliographic information) 806, such as the name of the AR effect corresponding to the AR effect icon 612 and its author, may also be displayed in the user interface screen 802.
The AR effect icon 612 shown in the center position 624 in the user interface screen 802 is now available for selection or activation. In one example, as illustrated by user interface screen 804 of FIG. 8C, a tap by the user on touch pad 124/502 is received, which causes the AR effect corresponding to AR effect icon 612 to be applied. In this example, the real-time scene captured by one or more of the cameras 114 includes a building 808 that has been enhanced by several separate AR effects, such as a 3D text AR effect 810 and a floating sphere AR effect 812. As will be appreciated, the AR effect corresponding to any AR effect icon may include several separate AR effects.
The user interface screen 804 also includes a navigation prompt 814, the navigation prompt 814 including an "exit" text 816 and a downward pointer 818. Upon receiving a downward swipe on touch pad 124/502 as indicated by navigation prompt 814, user interface screen 804 will dismiss and return to user interface screen 802.
When the glasses 100/500 are in the state shown in fig. 8C and described above, capture of the scene and its various applied AR effects may be initiated using one of the buttons 126 on the glasses. In one example, a short press of the left button 126 is received to initiate video capture of the scene, followed by a long press of the left button to terminate the ongoing video capture. If no video capture is in progress, a long press of the left button is received to initiate still image capture. Of course, various combinations of button presses and press durations may be used to initiate and terminate capture of content items. Upon completion of content capture initiated by receipt of user input on button 126, user interface screen 902 described below is displayed.
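The following TypeScript sketch is offered only as an illustration of the button handling just described, under the stated assumption that a short press starts video capture, a long press ends an ongoing video capture, and a long press with no capture in progress takes a still image; CaptureController, CaptureDevice, and onButtonPress are hypothetical names.

```typescript
// Hypothetical sketch: route short/long presses of a capture button to
// start-video, stop-video, or capture-still actions.
type PressKind = "short" | "long";

interface CaptureDevice {
  startVideo(): void;
  stopVideo(): void;
  captureStill(): void;
}

class CaptureController {
  private videoInProgress = false;

  constructor(private readonly device: CaptureDevice) {}

  onButtonPress(kind: PressKind): void {
    if (kind === "short" && !this.videoInProgress) {
      this.device.startVideo();     // short press: begin video capture
      this.videoInProgress = true;
    } else if (kind === "long" && this.videoInProgress) {
      this.device.stopVideo();      // long press: end the ongoing video capture
      this.videoInProgress = false;
    } else if (kind === "long") {
      this.device.captureStill();   // long press with no video in progress: still image
    }
  }
}
```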
Fig. 9A-9F illustrate another series of user interface screens displayed by the headset of fig. 1 or 5 according to some examples. More specifically, fig. 9A-9F illustrate example user interfaces depicting forwarding of content items. Shown in fig. 9B is a content preview user interface screen 902, which is reached by receiving a tap on touch pad 124/502 (to select content item 918 in the center position in carousel 646) while user interface screen 606 is displayed as shown in fig. 9A, or after content capture terminates as described above with reference to fig. 8C. For clarity, the remaining user interface screens of fig. 9 will be described with reference to content item 918 in content gallery user interface screen 606, but it should be understood that these examples apply equally to just-captured content.
The user interface screen 902 shown in FIG. 9B includes a preview of content item 918, a zoom icon 912, a send icon 914, and a delete icon 916. In the illustrated example, the send icon 914 is highlighted as the initial default for the input to be received in the user interface screen 902. In this state, receipt of a tap on touch pad 124/502 will cause a transition to recipient selection user interface screen 904. However, after an associated forward or backward swipe input is made on touch pad 124/502, zoom icon 912 or delete icon 916 may instead be highlighted (and thus selectable).
Upon selection of the zoom icon 912, receipt of a tap on the touch pad 124/502 will transition into a content viewer user interface (not shown) in which the content item 918 will be played back (in the case where the content item is video or gif or has a dynamic AR effect), or displayed in a larger format if the content item 918 is a still image. As before, receipt of a slide down in the content viewer user interface will return to the user interface screen 902.
When the delete icon 916 is selected, receipt of a tap on the touch pad 124/502 will discard the content item 918 (which may be after confirmation is requested), and return to the previous user interface, which may be the user interface screen 606 or the user interface screen 804.
When the send icon 914 is selected, receipt of a tap on touch pad 124/502 will transition to recipient selection user interface screen 904 of FIG. 9C, in which a carousel 920 of potential recipients, such as recipient 922 and recipient 924, is shown. In this context, a recipient may be a single recipient, a group of recipients (e.g., the "breakfast club" recipient 924 or "best friends"), or a non-human recipient (e.g., the "my story" recipient 922). For example, selection of a non-human recipient may cause the content item to be added to a social media or messaging application feed, or to a map indicating the location of the scene and identifying the user, provided that the appropriate permissions to do so have been obtained.
As before, a forward or backward swipe input on touch pad 124/502 will cause the recipients in carousel 920 to move left or right, with the recipients sequentially replacing the recipient in the center position as carousel 920 scrolls left or right. The particular direction in which carousel 920 moves in response to a forward or backward swipe will depend on whether the left or right touch pad of the glasses 100/500 is used, as discussed above with reference to fig. 1.
The recipient occupying the center position of carousel 920 may be selected by tapping on touch pad 124/502. The recipient occupying the central location may be highlighted, for example by being enclosed in a frame 928, or by using another highlighting technique such as an increase in size or an adjustment of color relative to adjacent recipients.
While in user interface screen 904, receipt of a tap input on touch pad 124/502 will select (or deselect, if already selected) the recipient 924 in the central location, as shown in FIG. 9D. As shown, a recipient that has been selected is indicated using a check mark 930. Additionally, once a recipient has been selected, a navigation prompt 932 including a send icon 934 and pointer 936 is displayed. The navigation prompt 932 is located at a position in the user interface screen corresponding to the start of the direction of the desired user input. That is, the navigation prompt 932 is located at the bottom of the user interface screen, while the corresponding input action on the touch pad is upward, toward the top of the touch pad 124/502.
As shown in fig. 9E, after one recipient has been selected, additional recipients may be selected by continuing to scroll carousel 920 in response to forward or backward touch inputs on touch pad 124/502 to bring another recipient 922 into center location 624, the selection of the other recipient 922 also being shown using a check mark 938. After one or more recipients have been selected, as shown in user interface screen 906 or user interface screen 908, receipt of an upward swipe on touch pad 124/502, as prompted by navigation prompt 932, will cause a transition to user interface screen 910.
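As a purely illustrative sketch, the following TypeScript shows one way a set of selected recipients could be maintained while the carousel scrolls, with a tap toggling the recipient in the center position; RecipientPicker and its method names are hypothetical.

```typescript
// Hypothetical sketch: multi-select recipients from a scrolling carousel;
// a tap toggles selection of the recipient in the center slot.
interface Recipient {
  id: string;
  name: string; // a friend, a group, or a non-human recipient such as "my story"
}

class RecipientPicker {
  private readonly selected = new Set<string>();

  constructor(
    private readonly recipients: Recipient[],
    private centerIndex = 0,
  ) {}

  scroll(step: number): void {
    const n = this.recipients.length;
    this.centerIndex = ((this.centerIndex + step) % n + n) % n;
  }

  // Tap input: select the center recipient, or deselect it if already
  // selected (shown in the UI with a check mark).
  toggleCenterRecipient(): void {
    const id = this.recipients[this.centerIndex].id;
    this.selected.has(id) ? this.selected.delete(id) : this.selected.add(id);
  }

  selectedRecipients(): Recipient[] {
    return this.recipients.filter((r) => this.selected.has(r.id));
  }
}
```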
As shown in fig. 9F, the user interface screen 910 includes a send icon 934 and a cloud or list 940 of the selected recipients. Also included is a text prompt 942 indicating "tap send". Receipt of a tap input on the touch pad 124/502 will send the content item 918 to the selected recipients. The glasses 100/500 will then return to displaying the user interface screen 804 of fig. 8C or the user interface screen 606 of fig. 9A and 6C, depending on how the user interface screen 902 was reached. In other examples, the glasses may return to another user interface screen, such as user interface screen 602.
As shown by the downward pointer 926, the user interface screens shown in fig. 9B-9F will be dismissed when a downward swipe is received on the touch pad 124/502. In one example, a downward swipe received when user interface screen 910 is displayed returns to user interface screen 908; a downward swipe received when user interface screen 904, user interface screen 906, or user interface screen 908 is displayed returns to user interface screen 902; and a downward swipe received when user interface screen 902 is displayed returns to user interface screen 606.
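For illustration only, the example dismissal behavior just described can be summarized as a lookup from the current screen to the screen shown after a downward swipe, as in the following TypeScript sketch; the ScreenId labels are hypothetical shorthand for the numbered screens.

```typescript
// Hypothetical sketch: map each screen to the screen displayed when a
// downward swipe (dismiss) is received, per the example behavior above.
type ScreenId = "602" | "606" | "902" | "904" | "906" | "908" | "910";

const backOnDownSwipe: Partial<Record<ScreenId, ScreenId>> = {
  "910": "908", // send confirmation back to the recipient list with selections
  "908": "902", // recipient selection screens back to the content preview
  "906": "902",
  "904": "902",
  "902": "606", // content preview back to the content gallery
  "606": "602", // content gallery back to the AR effect carousel (per FIG. 10)
};

function onDownSwipe(current: ScreenId): ScreenId | undefined {
  return backOnDownSwipe[current];
}
```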
FIG. 10 illustrates a user interface flow diagram 1000 that may be implemented by the glasses 100/500 according to some examples. Flowchart 1000 begins with the glasses 100/500 in a sleep state. The glasses 100/500 wake upon receiving a touch input 1004 on the touch pad 124/502 or a press of one of the buttons 126. In response, with the glasses 100/500 in a locked state, a user interface screen 1002 is presented prompting entry of a PIN code. It can be seen that user interface screen 1002 includes a keyboard display 1006 and an input field 1008.
As illustrated by tap and swipe input 1010, keyboard display 1006 is traversed by forward and backward swipe inputs received on touch pad 124/502, and upon receipt of a tap input on touch pad 124/502, the highlighted digit is selected for inclusion and display in input field 1008. Upon receipt of the correct PIN code, the glasses 100/500 will transition to the user interface screen 602 showing the AR effect carousel 610 described above.
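The following TypeScript sketch, provided only as an illustration, models PIN entry in which forward and backward swipes move a highlight across the digits and a tap appends the highlighted digit; PinEntry, its method names, and the four-digit length are assumptions.

```typescript
// Hypothetical sketch: navigate a digit keypad with swipes, select the
// highlighted digit with a tap, and unlock when the entered PIN matches.
class PinEntry {
  private highlighted = 0; // index of the highlighted digit, 0-9
  private entered = "";

  constructor(
    private readonly correctPin: string, // assumed to be provisioned elsewhere
    private readonly pinLength = 4,      // assumed PIN length
  ) {}

  swipe(step: number): void {
    this.highlighted = ((this.highlighted + step) % 10 + 10) % 10;
  }

  tap(): "locked" | "unlocked" | "wrong" {
    this.entered += String(this.highlighted);
    if (this.entered.length < this.pinLength) return "locked";
    const ok = this.entered === this.correctPin;
    this.entered = ""; // reset after a complete attempt
    return ok ? "unlocked" : "wrong";
  }
}
```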
Receipt of a downward swipe input 1012 transitions from user interface screen 602 to user interface screen 606 corresponding to the content gallery, and receipt of a further downward swipe input 1012 transitions from the content gallery user interface screen 606 back to user interface screen 602.
Within the user interface screen 606, a swipe input 1014 may be used to scroll between content items, as discussed above with reference to FIG. 9A. As illustrated by input 1016, receipt of a tap input within user interface screen 606 transitions to user interface screen 902 showing a preview of a content item, as discussed in more detail above, while receipt of a downward swipe in user interface screen 902 returns to user interface screen 606. Within the user interface screen 902, a swipe input 1018 may be used to select the zoom icon 912, the send icon 914, or the delete icon 916, as discussed above with reference to FIG. 9.
As illustrated by input 1020, receipt of an upward swipe input transitions from user interface screen 602 to the settings or system information user interface screen 702, while receipt of a downward swipe input transitions from user interface screen 702 back to user interface screen 602.
Similarly, as shown by input 1022, from user interface screen 602 with an AR effect icon in center position 624, receipt of a tap input causes user interface screen 804 to be displayed with the corresponding AR effect applied, while receipt of a downward swipe input transitions from user interface screen 804 back to user interface screen 602. Additional functions or screens associated with fig. 10 are described above with reference to fig. 6-9.
FIG. 11 is a diagram illustrating operations performed by the glasses 100/500 in response to receiving user input on a touch pad, according to some examples. For purposes of illustration, the operations of flowchart 1100 are described herein as occurring serially or linearly. However, multiple operations of flowchart 1100 may occur in parallel. Additionally, the operations of flowchart 1100 need not be performed in the order shown and/or one or more blocks of flowchart 1100 need not be performed and/or may be replaced by other operations.
The operations shown in FIG. 11 will typically be performed by the data processor 302 and related hardware in or associated with the glasses 100/500. For clarity, flowchart 1100 is discussed herein with reference to such an example. Various implementations are of course possible, in which some of the operations occur in an application in the client device 328, such as in the messaging application 1346, on the server system 332, or in which one application on the client device 328 invokes another application or SDK to obtain the required functionality. In one example, the messaging application 1346 running on the client device 328 performs the operations in conjunction with the data processor 302 and related hardware in or associated with the glasses 100.
The method begins at operation 1102, where the glasses 100/500 display a user interface screen 602 showing the AR effect carousel 610, with navigation prompt 626 above the AR effect carousel 610 and navigation prompt 628 below the AR effect carousel 610. As mentioned, the navigation prompts include icons indicating the result or destination of a swipe input on the touch pad and pointers indicating the direction of the swipe input, with the navigation prompts located at positions in the user interface screen 602 corresponding to the start of the direction of the desired user input.
Upon receiving a vertical (i.e., up or down) touch input on touch pad 124/502 in operation 1104, the glasses 100/500 display, in operation 1106, a corresponding user interface screen as indicated by navigation prompt 626 or navigation prompt 628, e.g., user interface screen 606 or user interface screen 702 as described above with reference to fig. 6 and 7, respectively. Upon receiving a downward touch input in that user interface screen at operation 1108, the glasses return, in operation 1110, to displaying the AR effect carousel 610 at operation 1102.
During display of the user interface screen at operation 1106, receipt of a horizontal (e.g., forward or backward) touch input at operation 1120 causes scrolling of the displayed items (if appropriate, e.g., for user interface screen 606 instead of for user interface screen 702) at operation 1122, followed by return to displaying the relevant user interface screen at operation 1106.
During display of the user interface screen at operation 1106, receipt of a tap input at operation 1112 causes a corresponding user interface screen to be displayed at operation 1114 (if appropriate, e.g., from user interface screen 606 but not from user interface screen 702), in which the item selected by scrolling as described in operations 1120 through 1122, for example content item 918 as presented in user interface screen 606, is presented. Additional touch inputs may then be received at operation 1116 and corresponding actions taken at operation 1118. Examples of such inputs and corresponding actions are discussed herein with reference to fig. 9, 10, and 13.
During the display of the AR effect user interface screen 602 by the glasses 100/500 at operation 1102, receipt of a horizontal (e.g., forward or backward) touch input at operation 1126 causes scrolling of the AR effect icon in operation 1128, as discussed above with reference to fig. 6. With the AR effect icon in the center position, upon receipt of a tap input at operation 1130, a user interface screen 804 is displayed, in which user interface screen 804 the corresponding AR effect is applied to the scene viewed by the camera 114 in operation 1132. The additional touch input received in operation 1134 will then cause a corresponding action to be taken in operation 1136. Examples of such inputs and corresponding actions are discussed herein with reference to fig. 8, 9, 10, and 13.
Upon completion of the corresponding action in operation 1118 or operation 1136, the glasses 100/500 generally return to user interface screen 602 or user interface screen 606 for additional operations as described herein.
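Purely as an illustrative sketch of the dispatch logic of flowchart 1100, the following TypeScript routes vertical, horizontal, and tap inputs to simplified states corresponding to the screens described above; the state names and the function signature are hypothetical, and details such as which screens accept taps are collapsed into comments.

```typescript
// Hypothetical sketch: dispatch touch inputs per flowchart 1100, moving
// between the AR effect carousel, a secondary screen, and an item screen.
type UiState = "arCarousel" | "secondaryScreen" | "itemScreen" | "effectApplied";
type TouchInput =
  | { kind: "vertical"; direction: "up" | "down" }
  | { kind: "horizontal"; direction: "forward" | "backward" }
  | { kind: "tap" };

function dispatch(state: UiState, input: TouchInput): UiState {
  switch (state) {
    case "arCarousel":
      if (input.kind === "vertical") return "secondaryScreen"; // ops 1104-1106 (e.g., 606 or 702)
      if (input.kind === "horizontal") return "arCarousel";    // scroll AR icons, ops 1126-1128
      return "effectApplied";                                  // tap with an icon centered, ops 1130-1132
    case "secondaryScreen":
      if (input.kind === "vertical")
        return input.direction === "down" ? "arCarousel" : "secondaryScreen"; // ops 1108-1110
      if (input.kind === "horizontal") return "secondaryScreen"; // scroll items, ops 1120-1122
      return "itemScreen";                                       // tap selects an item, ops 1112-1114
    case "itemScreen":
    case "effectApplied":
      return state; // further inputs handled as described above (ops 1116-1118, 1134-1136)
  }
}
```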
FIG. 12 is a flow diagram 1200 illustrating content item selection and transmission operations performed by the glasses 100/500 in response to receiving user input on a touch pad, according to some examples. For purposes of illustration, the operations of flowchart 1200 are described herein as occurring serially or linearly. However, multiple operations of flowchart 1200 may occur in parallel. Additionally, the operations of flowchart 1200 need not be performed in the order shown and/or one or more blocks of flowchart 1200 need not be performed and/or may be replaced by other operations.
The operations shown in FIG. 12 will typically be performed by the data processor 302 and related hardware in or associated with the glasses 100/500. For clarity, flowchart 1200 is discussed herein with reference to such an example. Various implementations are of course possible, in which some of the operations occur in an application in the client device 328, such as in the messaging application 1346, on the server system 332, or in which one application on the client device 328 invokes another application or SDK to obtain the required functionality. In one example, the messaging application 1346 running on the client device 328 performs the operations in conjunction with the data processor 302 and related hardware in or associated with the glasses 100.
The method begins at operation 1202, where the glasses 100/500 display the content preview user interface screen 902. As discussed above, this screen may be reached by scrolling and tapping to select a content item in user interface screen 606, or after capture of content initiated by user input received in user interface screen 804. The user interface screen 902 includes three options, one of which may be selected in operation 1204 by scrolling in response to a horizontal touch input (forward or backward) and then a tap input received on the touch pad 124/502, as discussed above with reference to fig. 9B.
Upon receipt of the tap input in operation 1204 while the delete icon 916 is selected, the content item shown in the user interface screen 902 is deleted in operation 1206 and the glasses 100/500 return, in operation 1208, to the previous user interface screen displayed before the user interface screen 902. In one example, this would be user interface screen 606 or user interface screen 804.
Upon receipt of a tap on the touch pad 124/502 in operation 1204 while the zoom icon 912 is selected, a content viewer user interface is displayed by the glasses 100/500 in operation 1210, in which the content item shown in the user interface screen 902 is played back (in the case of a video, a gif, or content with a dynamic AR effect) or displayed in a larger format (in the case of a still image). Receipt of a downward swipe on touch pad 124/502 in operation 1212 will cause a return, in operation 1214, to displaying the content preview user interface screen 902 at operation 1202.
When the send icon 914 is selected, upon receipt of a tap on the touch pad 124/502 in operation 1204, an address book or recipient selection user interface screen 904 is displayed in operation 1216, in which a carousel 920 of potential recipients is shown.
As before, a forward or backward swipe input on touch pad 124/502 in operation 1218 will cause the recipients in recipient selection carousel 920 to move left or right in operation 1220, with the recipients sequentially replacing the recipient in the center position as the carousel scrolls left or right.
In operation 1224, the recipient occupying the center position in carousel 920 may be selected, or deselected if already selected, in response to a tap input on touch pad 124/502 in operation 1222. As discussed above with reference to fig. 9D and 9E, a recipient that has been selected is indicated with a check mark, and once a recipient has been selected, a "send" navigation prompt is displayed.
If a touch input corresponding to the transmission instruction is not received at operation 1226, the method returns to operation 1216 to allow selection of additional recipients in operations 1218 through 1224. After one or more recipients have been selected, as shown in user interface screen 906 or user interface screen 908, receipt of a send input at operation 1226, such as an upward swipe on touch pad 124/502 as prompted by navigation prompt 932, will cause a transition to a confirmation user input screen, such as user interface screen 910, in operation 1228. A confirmation user input is received at operation 1230, for example a tap input in response to the send icon 934, causing the glasses 100/500 or associated client device 328 to send the content item to the selected recipients in operation 1232.
Upon completion of the transmission of the content item to the selected recipients in operation 1232, the glasses 100/500 generally return, in operation 1234, to displaying the user interface screen 804 of fig. 8C or the user interface screen 606 of fig. 9A/6C, depending on how the content preview user interface screen 902 was reached. In other examples, the glasses may return to another user interface screen, such as user interface screen 602, at which point additional operations may be performed as described herein.
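The following TypeScript sketch is offered only as an illustration of flowchart 1200, stringing together the preview options, recipient selection, confirmation, and send steps; the SendService interface and all other names are hypothetical.

```typescript
// Hypothetical sketch of the content-send flow of flowchart 1200:
// choose a preview option, pick recipients, confirm, then send.
type PreviewOption = "zoom" | "send" | "delete";

interface SendService {
  send(contentId: string, recipientIds: string[]): Promise<void>;
}

async function previewFlow(
  contentId: string,
  option: PreviewOption,
  pickRecipients: () => Promise<string[]>, // recipient carousel interaction (ops 1216-1226)
  confirmSend: () => Promise<boolean>,     // confirmation screen (ops 1228-1230)
  service: SendService,
): Promise<"deleted" | "viewed" | "sent" | "cancelled"> {
  if (option === "delete") return "deleted";       // op 1206
  if (option === "zoom") return "viewed";          // op 1210, content viewer
  const recipients = await pickRecipients();       // ops 1216-1226
  if (recipients.length === 0) return "cancelled";
  if (!(await confirmSend())) return "cancelled";  // op 1230
  await service.send(contentId, recipients);       // op 1232
  return "sent";
}
```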
Fig. 13 is a block diagram 1300 illustrating a software architecture 1304 that may be installed on any one or more of the devices described herein. The software architecture 1304 is supported by hardware, such as the machine 1302, which includes a processor 1320, memory 1326 and I/O components 1338. In this example, the software architecture 1304 may be conceptualized as a stack of layers in which each layer provides a particular function. The software architecture 1304 includes layers such as an operating system 1312, libraries 1308, frameworks 1310, and applications 1306. In operation, the application 1306 calls the API call 1350 through the software stack and receives the message 1352 in response to the API call 1350.
Operating system 1312 manages hardware resources and provides common services. Operating system 1312 includes, for example: kernel 1314, service 1316, and driver 1322. The kernel 1314 serves as an abstraction layer between hardware and other software layers. For example, kernel 1314 provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functions. Service 1316 may provide other common services for other software layers. The driver 1322 is responsible for controlling or interfacing with the underlying hardware. For example, the driver 1322 may include a display driver, an imaging device driver, a Bluetooth or Bluetooth Low Energy driver, a flash memory driver, a serial communication driver (e.g., a Universal Serial Bus (USB) driver), a Wi-Fi driver, an audio driver, a power management driver, and so forth.
Library 1308 provides a low-level public infrastructure used by applications 1306. The library 1308 may include a system library 1318 (e.g., a C standard library), which system library 1318 provides functions such as memory allocation functions, string manipulation functions, mathematical functions, and the like. In addition, libraries 1308 may include API libraries 1324, such as media libraries (e.g., libraries supporting presentation and manipulation of various media formats, such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render graphical content in two dimensions (2D) and three dimensions (3D) on a display), database libraries (e.g., SQLite, providing various relational database functions), web libraries (e.g., WebKit, providing web browsing functions), and the like. The library 1308 may also include various other libraries 1328 to provide many other APIs to the application 1306.
Framework 1310 provides a high-level public infrastructure used by applications 1306. For example, framework 1310 provides various Graphical User Interface (GUI) functions, advanced resource management, and advanced location services. Framework 1310 may provide a wide variety of other APIs that may be used by applications 1306, some of which may be specific to a particular operating system or platform.
In an example, applications 1306 may include a home application 1336, a contacts application 1330, a browser application 1332, a book reader application 1334, a location application 1342, a media application 1344, a messaging application 1346, a gaming application 1348, and a variety of other applications such as a third party application 1340. The applications 1306 are programs that perform the functions defined in the programs. One or more of the applications 1306, structured in a variety of manners, may be created using a variety of programming languages, such as an object-oriented programming language (e.g., Objective-C, Java, or C++) or a procedural programming language (e.g., the C language or assembly language). In a particular example, the third party application 1340 (e.g., an application developed using the ANDROID or IOS Software Development Kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS, ANDROID, WINDOWS Phone, or another mobile operating system. In this example, the third party application 1340 may call the API calls 1350 provided by the operating system 1312 to facilitate the functions described herein.
Fig. 14 is a diagrammatic representation of machine 1400 within which instructions 1410 (e.g., software, programs, applications, applets, apps, or other executable code) may be executed that cause machine 1400 to perform any one or more of the methods discussed herein. For example, instructions 1410 may cause machine 1400 to perform any one or more of the methods described herein. The instructions 1410 transform a generic, un-programmed machine 1400 into a specific machine 1400 that is programmed to perform the functions described and illustrated in the manner described. The machine 1400 may operate as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine 1400 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. Machine 1400 may include, but is not limited to: a server computer, a client computer, a Personal Computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a PDA, an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web device, a network router, a network switch, a network bridge, or any machine capable of executing instructions 1410 that specify actions to be taken by machine 1400, sequentially or otherwise. Furthermore, while only a single machine 1400 is illustrated, the term "machine" shall also be taken to include a collection of machines that individually or jointly execute instructions 1410 to perform any one or more of the methodologies discussed herein.
Machine 1400 may include a processor 1402, a memory 1404, and an I/O component 1406, which processor 1402, memory 1404, and I/O component 1406 may be configured to communicate with each other via a bus 1444. In an example, the processor 1402 (e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an ASIC, a Radio Frequency Integrated Circuit (RFIC), other processors, or any suitable combination thereof) may include, for example, the processor 1408 and the processor 1412 that execute the instructions 1410. The term "processor" is intended to include a multi-core processor, which may include two or more separate processors (sometimes referred to as "cores") that may concurrently execute instructions. Although fig. 14 shows multiple processors 1402, machine 1400 may include a single processor having a single core, a single processor having multiple cores (e.g., a multi-core processor), multiple processors having a single core, multiple processors having multiple cores, or any combination thereof.
The memory 1404 includes a main memory 1414, a static memory 1416, and a storage unit 1418, each of the main memory 1414, static memory 1416, and storage unit 1418 being accessible to the processor 1402 via the bus 1444. The main memory 1414, static memory 1416, and storage unit 1418 store instructions 1410 that implement any one or more of the methods or functions described herein. During execution of the instructions 1410 by the networked system 300, the instructions 1410 may also reside, completely or partially, within the main memory 1414, within the static memory 1416, within the machine-readable medium 1420 within the storage unit 1418, within at least one of the processors 1402 (e.g., within the cache memory of the processor), or within any suitable combination thereof.
I/O components 1406 may include various components for receiving input, providing output, generating output, sending information, exchanging information, capturing measurement results, and the like. The particular I/O components 1406 included in a particular machine will depend on the type of machine. For example, a portable machine such as a mobile phone may include a touch input device or other such input mechanism, while a headless server machine would likely not include such a touch input device. It is to be appreciated that I/O component 1406 can include many other components not shown in FIG. 14. In various examples, I/O components 1406 may include an output component 1428 and an input component 1432. The output component 1428 can include visual components (e.g., a display such as a Plasma Display Panel (PDP), a Light Emitting Diode (LED) display, a Liquid Crystal Display (LCD), a projector, or a Cathode Ray Tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., vibration motors, resistance mechanisms), other signal generators, and so forth. Input components 1432 may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, an optoelectronic keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, touchpad, trackball, joystick, motion sensor, or other pointing instrument), tactile input components (e.g., physical buttons, a touch screen providing location and/or force of a touch or touch gesture, or other tactile input components), audio input components (e.g., a microphone), and the like.
In further examples, I/O component 1406 may include: biometric component 1434, motion component 1436, environmental component 1438, or positioning component 1440, among various other components. For example, the biometric component 1434 includes components for detecting expressions (e.g., hand expressions, facial expressions, voice expressions, body gestures, or eye tracking), measuring biological signals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identifying a person (e.g., voice recognition, retinal recognition, facial recognition, fingerprint recognition, or electroencephalogram-based recognition), and the like. The motion component 1436 includes an acceleration sensor component (e.g., accelerometer), a gravity sensor component, a rotation sensor component (e.g., gyroscope), and the like. The environmental component 1438 includes, for example, an illumination sensor component (e.g., a photometer), a temperature sensor component (e.g., one or more thermometers that detect ambient temperature), a humidity sensor component, a pressure sensor component (e.g., a barometer), an acoustic sensor component (e.g., one or more microphones that detect background noise), a proximity sensor component (e.g., an infrared sensor that detects nearby objects), a gas sensor (e.g., a gas detection sensor that detects the concentration of hazardous gases or measures contaminants in the atmosphere for safety), or other component that may provide an indication, measurement, or signal corresponding to the surrounding physical environment. The positioning component 1440 includes a position sensor component (e.g., a GPS receiver component), an altitude sensor component (e.g., an altimeter or barometer that detects barometric pressure at which altitude is available), an orientation sensor component (e.g., a magnetometer), and so forth.
Communication may be accomplished using a variety of techniques. The I/O component 1406 also includes a communication component 1442, the communication component 1442 being operable to couple the networked system 300 to a network 1422 or device 1424 via a coupling 1430 and a coupling 1426, respectively. For example, the communication component 1442 may include a network interface component or another suitable device to interface with network 1422. In further examples, the communication component 1442 may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth components (e.g., Bluetooth Low Energy), Wi-Fi components, and other communication components that provide communication via other modalities. Device 1424 may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via USB).
Further, communication component 1442 may detect an identifier or include a component operable to detect an identifier. For example, communication component 1442 may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor for detecting one-dimensional bar codes, such as Universal Product Code (UPC) bar codes, and multi-dimensional bar codes, such as Quick Response (QR) codes, Aztec codes, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar codes, and other optical codes), or acoustic detection components (e.g., a microphone for identifying a tagged audio signal). In addition, various information may be derived via communication component 1442, e.g., location via Internet Protocol (IP) geolocation, location via Wi-Fi signal triangulation, location via detection of an NFC beacon signal that may indicate a particular location, and so forth.
Various memories (e.g., memory 1404, main memory 1414, static memory 1416, and/or memory of processor 1402) and/or storage unit 1418 may store one or more sets of instructions and data structures (e.g., software) implemented or used by any one or more of the methods or functions described herein. These instructions (e.g., instructions 1410), when executed by processor 1402, cause various operations to implement the disclosed examples.
The instructions 1410 may be transmitted or received over the network 1422 via a network interface device (e.g., a network interface component included in the communication component 1442) using a transmission medium and using any of a number of well-known transmission protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, instructions 1410 may be transmitted or received to device 1424 via coupling 1426 (e.g., a peer-to-peer coupling) using a transmission medium.
"carrier wave signal" refers to any intangible medium capable of storing, encoding or carrying instructions for execution by a machine, and includes digital or analog communications signals or other intangible medium to facilitate communication of such instructions. The instructions may be transmitted or received over a network using a transmission medium via a network interface device.
"client device" refers to any machine that interfaces with a communication network to obtain resources from one or more server systems or other client devices. The client device may be, but is not limited to, a mobile phone, desktop computer, laptop computer, portable Digital Assistant (PDA), smart phone, tablet computer, super book, netbook, laptop computer, multiprocessor system, microprocessor-based or programmable consumer electronics, game console, set top box, or any other communication device that a user can use to access a network.
"communication network" refers to one or more portions of a network, the network may be an ad hoc network, an intranet, an extranet, a Virtual Private Network (VPN), a Local Area Network (LAN), a Wireless LAN (WLAN), a Wide Area Network (WAN), a Wireless WAN (WWAN), a Virtual Private Network (VPN) Metropolitan Area Networks (MANs), the Internet, portions of the Public Switched Telephone Network (PSTN), plain Old Telephone Service (POTS) networks, cellular telephone networks, wireless networks,A network, another type of network, or a combination of two or more such networks. For example, the network or portion of the network may comprise a wireless network or cellular network, and the coupling may be a Code Division Multiple Access (CDMA) connection, a global system for mobile communications (GSM) connection, or other type of cellular or wireless coupling. In this example, the coupling may implement any of various types of data transmission technologies, such as single carrier radio transmission technology (1 xRTT), evolution data optimized (EVDO) technology, general Packet Radio Service (GPRS) technology, enhanced data rates for GSM evolution (EDGE) technology, third generation partnership project (3 GPP) including 3G, fourth generation wireless (4G) networks, universal Mobile Telecommunications System (UMTS), high Speed Packet Access (HSPA), worldwide Interoperability for Microwave Access (WiMAX), long Term Evolution (LTE) standards, other data transmission technologies defined by various standards setting organizations, other long distance protocols, or other data transmission technologies.
"component" refers to a device, physical entity, or logic having boundaries defined by function or subroutine calls, branch points, APIs, or other techniques provided for partitioning or modularizing specific processing or control functions. The components may be combined with other components via their interfaces to perform machine processes. A component may be a packaged functional hardware unit designed for use with other components and typically part of a program that performs the specified function of the relevant function. The components may constitute software components (e.g., code implemented on a machine-readable medium) or hardware components. A "hardware component" is a tangible unit capable of performing certain operations and may be configured or arranged in some physical manner. In various examples, one or more computer systems (e.g., stand-alone computer systems, client computer systems, or server computer systems) or one or more hardware components of a computer system (e.g., processors or groups of processors) may be configured by software (e.g., an application or application part) as hardware components that operate to perform certain operations as described herein. The hardware components may also be implemented mechanically, electronically, or in any suitable combination thereof. For example, a hardware component may include specialized circuitry or logic permanently configured to perform certain operations. The hardware component may be a special purpose processor such as a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC). The hardware components may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, the hardware components may include software that is executed by a general purpose processor or other programmable processor. Once configured by such software, the hardware components become the specific machine (or specific component of a machine) that is uniquely customized to perform the configured functions, and are no longer general purpose processors. It will be appreciated that decisions on implementing hardware components mechanically in dedicated and permanently configured circuitry or in temporarily configured (e.g., through software configuration) circuitry may be driven due to cost and time considerations. Accordingly, the phrase "hardware component" (or "hardware-implemented component") should be understood to include a tangible entity, i.e., an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Consider the example in which hardware components are temporarily configured (e.g., programmed), each of the hardware components need not be configured or instantiated at any one time. For example, where the hardware components include a general-purpose processor that is configured by software to be a special-purpose processor, the general-purpose processor may be configured at different times as respective different special-purpose processors (e.g., including different hardware components). The software configures one or more particular processors accordingly, for example, to constitute particular hardware components at one time and different hardware components at different times. A hardware component may provide information to and receive information from other hardware components. 
Accordingly, the hardware components described may be considered to be communicatively coupled. Where multiple hardware components are present at the same time, communication may be achieved by signal transmission (e.g., through appropriate circuitry and buses) between or among the two or more hardware components. In examples where multiple hardware components are configured or instantiated at different times, communication between such hardware components may be achieved, for example, by storing information in a memory structure accessible to the multiple hardware components and retrieving the information in the memory structure. For example, one hardware component may perform an operation and store an output of the operation in a memory device communicatively coupled thereto. Additional hardware components may then access the memory device at a later time to retrieve and process the stored output. The hardware component may also initiate communication with an input device or an output device, and may operate on a resource (e.g., collection of information). Various operations of the example methods described herein may be performed, at least in part, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily configured or permanently configured, such a processor may constitute a processor-implemented component that operates to perform one or more operations or functions described herein. As used herein, "processor-implemented components" refers to hardware components implemented using one or more processors. Similarly, the methods described herein may be implemented, at least in part, by processors, with particular one or more processors being examples of hardware. For example, at least some of the operations of the method may be performed by one or more processors or processor-implemented components. In addition, one or more processors may also operate to support execution of related operations in a "cloud computing" environment or as a "software as a service" (SaaS). For example, at least some of the operations may be performed by a set of computers (as examples of machines including processors), where the operations may be accessed via a network (e.g., the internet) and via one or more suitable interfaces (e.g., APIs). The performance of certain of the operations may be distributed among processors, not only residing within a single machine, but also deployed across multiple machines. In some examples, the processor or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other examples, the processor or processor-implemented components may be distributed across multiple geolocations.
"computer-readable medium" refers to both machine storage media and transmission media. Accordingly, these terms include both storage devices/media and carrier wave/modulated data signals. The terms "machine-readable medium," "computer-readable medium," and "device-readable medium" mean the same thing and may be used interchangeably in this disclosure.
"ephemeral message" refers to a message that can be accessed for a limited duration of time. The transient message may be text, images, video, etc. The access time for the ephemeral message may be set by the message sender. Alternatively, the access time may be a default setting or a setting specified by the recipient. The message is temporary regardless of the setup technique.
"machine storage medium" refers to a single or multiple storage devices and/or media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the executable instructions, routines, and/or data. Accordingly, the term should be taken to include, but is not limited to, solid-state memory, as well as optical and magnetic media, including memory internal or external to the processor. Specific examples of machine, computer, and/or device storage media include: nonvolatile memory including, for example, semiconductor memory devices such as erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disk; CD-ROM and DVD-ROM discs. The terms "machine storage medium," "device storage medium," "computer storage medium" mean the same and may be used interchangeably in this disclosure. The terms "machine storage medium," computer storage medium, "and" device storage medium "expressly exclude carrier waves, modulated data signals, and other such medium, at least some of which are contained within the term" signal medium.
"processor" refers to any circuit or virtual circuit (physical circuit simulated by logic executing on an actual processor) that manipulates data values according to control signals (e.g., "commands," "operation code," "machine code," etc.) and generates corresponding output signals that are applied to an operating machine. For example, the processor may be a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio Frequency Integrated Circuit (RFIC), or any combination thereof. A processor may also be a multi-core processor having two or more independent processors (sometimes referred to as "cores") that may execute instructions simultaneously.
"signal medium" refers to any intangible medium capable of storing, encoding, or carrying instructions for execution by a machine, and includes digital or analog communications signals or other intangible medium to facilitate communication of software or data. The term "signal medium" shall be taken to include any form of modulated data signal, carrier wave, and the like. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The terms "transmission medium" and "signal medium" mean the same thing and may be used interchangeably in this disclosure.
Changes and modifications may be made to the disclosed examples without departing from the scope of the present disclosure. These and other changes or modifications are intended to be included within the scope of the present disclosure as expressed in the appended claims.

Claims (20)

1. A method performed by one or more processors in a head mounted device system for receiving and processing content transmission input, the head mounted device system comprising one or more display devices, one or more cameras, and a substantially vertically arranged touch pad, the method comprising:
displaying the content items on the one or more display devices;
receiving a content selection touch input on the touch pad;
displaying a carousel of potential recipients in response to the content selection touch input;
receiving a first horizontal touch input on the touch pad;
scrolling the carousel of potential recipients to the left or right on the one or more display devices in response to the first horizontal touch input;
receiving a recipient selection touch input on the touch pad, the touch input for selecting a particular recipient;
receiving a content delivery touch input on the touch pad; and
transmitting the content item to the particular recipient in response to the content delivery touch input.
2. The method of claim 1, wherein receiving the content selection touch input on the touch pad comprises:
receiving a tap touch input on the touch pad, the touch input for selecting the content item;
in response to receiving the tap touch input, displaying a plurality of user interface options;
receiving a second horizontal touch input on the touch pad;
causing a selection indicator to move relative to the plurality of user interface options based on the second horizontal touch input; and
receiving a user interface selection touch input on the touch pad for selecting a particular user interface option of the plurality of user interface options.
3. The method of claim 2, wherein the plurality of user interface options includes a delete option, a content viewer option, and a send option.
4. The method of claim 2, further comprising:
receiving a third horizontal touch input on the touch pad after receiving the recipient selection touch input;
scrolling the carousel of potential recipients to the left or right on the one or more display devices in response to the third horizontal touch input; and
receiving an additional recipient selection touch input on the touch pad, the touch input for selecting an additional recipient.
5. The method of claim 4, further comprising:
receiving a vertical touch input on the touch pad prior to receiving the content delivery touch input, the vertical touch input for confirming selection of the particular recipient and the additional recipient.
6. The method of claim 1, further comprising:
receiving a second horizontal touch input on the touch pad after receiving the recipient selection touch input;
scrolling the carousel of potential recipients to the left or right on the one or more display devices in response to the second horizontal touch input; and
receiving an additional recipient selection touch input on the touch pad, the touch input for selecting an additional recipient.
7. The method of claim 1, further comprising:
receiving a vertical touch input on the touch pad prior to receiving the content delivery touch input, the vertical touch input for confirming selection of the particular recipient.
8. The method of claim 1, further comprising:
Receiving a vertical touch input on the touch pad prior to receiving the content delivery touch input; and
in response to receiving the vertical touch input, eliminating display of a carousel of the potential recipients.
9. A head mounted device system comprising:
one or more camera devices;
one or more display devices;
a touch pad arranged substantially vertically;
one or more processors; and
a memory storing instructions that, when executed by the one or more processors, configure the system to perform operations comprising:
displaying the content items on the one or more display devices;
receiving a content selection touch input on the touch pad;
displaying a carousel of potential recipients in response to the content selection touch input;
receiving a first horizontal touch input on the touch pad;
scrolling the carousel of potential recipients to the left or right on the one or more display devices in response to the first horizontal touch input;
receiving a recipient selection touch input on the touch pad, the touch input for selecting a particular recipient;
receiving a content delivery touch input on the touch pad; and
transmitting the content item to the particular recipient in response to the content delivery touch input.
10. The headset system of claim 9, wherein receiving the content selection touch input on the touch pad comprises:
receiving a tap touch input on the touch pad, the touch input for selecting the content item;
in response to receiving the tap touch input, displaying a plurality of user interface options;
receiving a second horizontal touch input on the touch pad;
causing a selection indicator to move relative to the plurality of user interface options based on the second horizontal touch input; and
receiving a user interface selection touch input on the touch pad for selecting a particular user interface option of the plurality of user interface options.
11. The head-mounted device system of claim 10, wherein the plurality of user interface options comprises a delete option, a content viewer option, and a send option.
12. The headset system of claim 10, wherein the operations further comprise:
receiving a third horizontal touch input on the touch pad after receiving the recipient selection touch input;
Scrolling the carousel of potential recipients to the left or right on the one or more display devices in response to the third horizontal touch input; and
receiving an additional recipient selection touch input on the touch pad, the touch input for selecting an additional recipient.
13. The headset system of claim 9, wherein the operations further comprise:
receiving a second horizontal touch input on the touch pad after receiving the recipient selection touch input;
scrolling the carousel of potential recipients to the left or right on the one or more display devices in response to the second horizontal touch input; and
receiving an additional recipient selection touch input on the touch pad, the touch input for selecting an additional recipient.
14. The head-mounted device system of claim 9, wherein the operations further comprise:
receiving a vertical touch input on the touch pad prior to receiving the content delivery touch input, the vertical touch input for confirming selection of the particular recipient.
15. The head-mounted device system of claim 9, wherein the operations further comprise:
receiving a vertical touch input on the touch pad prior to receiving the content delivery touch input; and
in response to receiving the vertical touch input, removing the display of the carousel of potential recipients.
16. A non-transitory computer-readable storage medium comprising instructions that, when executed by a head-mounted device system comprising one or more display devices, one or more cameras, and a substantially vertically arranged touch pad, cause the head-mounted device system to perform operations comprising:
displaying a content item on the one or more display devices;
receiving a content selection touch input on the touch pad;
displaying a carousel of potential recipients in response to the content selection touch input;
receiving a first horizontal touch input on the touch pad;
scrolling the carousel of potential recipients to the left or right on the one or more display devices in response to the first horizontal touch input;
receiving a recipient selection touch input on the touch pad, the touch input for selecting a particular recipient;
receiving a content delivery touch input on the touch pad; and
transmitting the content item to the particular recipient in response to the content delivery touch input.
17. The computer-readable storage medium of claim 16, wherein the operation of receiving the content selection touch input on the touch pad comprises:
receiving a tap touch input on the touch pad, the touch input for selecting the content item;
in response to receiving the tap touch input, displaying a plurality of user interface options;
receiving a second horizontal touch input on the touch pad;
causing a selection indicator to move relative to the plurality of user interface options based on the second horizontal touch input; and
receiving a user interface selection touch input on the touch pad, the touch input for selecting a particular user interface option of the plurality of user interface options.
18. The computer-readable storage medium of claim 17, wherein the plurality of user interface options includes a delete option, a content viewer option, and a send option.
19. The computer-readable storage medium of claim 17, wherein the instructions further cause the head-mounted device system to perform operations comprising:
receiving a third horizontal touch input on the touch pad after receiving the recipient selection touch input;
scrolling the carousel of potential recipients to the left or right on the one or more display devices in response to the third horizontal touch input; and
receiving an additional recipient selection touch input on the touch pad, the touch input for selecting an additional recipient.
20. The computer-readable storage medium of claim 16, wherein the instructions further cause the head-mounted device system to perform operations comprising:
receiving a second horizontal touch input on the touch pad after receiving the recipient selection touch input;
scrolling the carousel of potential recipients to the left or right on the one or more display devices in response to the second horizontal touch input; and
receiving an additional recipient selection touch input on the touch pad, the touch input for selecting an additional recipient.
CN202280035340.0A 2021-05-19 2022-05-17 Touch pad input for augmented reality display device Pending CN117561497A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/190,662 2021-05-19
US17/479,153 2021-09-20
US17/479,153 US11880542B2 (en) 2021-05-19 2021-09-20 Touchpad input for augmented reality display device
PCT/US2022/072367 WO2022246399A1 (en) 2021-05-19 2022-05-17 Touchpad input for augmented reality display device

Publications (1)

Publication Number Publication Date
CN117561497A (en) 2024-02-13

Family

ID=89823797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280035340.0A Pending CN117561497A (en) 2021-05-19 2022-05-17 Touch pad input for augmented reality display device

Country Status (1)

Country Link
CN (1) CN117561497A (en)

Similar Documents

Publication Publication Date Title
US11698822B2 (en) Software development kit for image processing
US11604562B2 (en) Interface carousel for use with image processing software development kit
CN112639892A (en) Augmented reality personification system
US11809633B2 (en) Mirroring device with pointing based navigation
US20240094865A1 (en) Selecting items displayed by a head-worn display device
US20220197027A1 (en) Conversation interface on an eyewear device
WO2022246418A1 (en) Touchpad navigation for augmented reality display device
KR20230118687A (en) Re-center AR/VR content on eyewear devices
KR20230119004A (en) Conversational interface on eyewear devices
KR20230011349A (en) Trackpad on the back part of the device
KR20230116938A (en) Media content player on eyewear devices
CN117321534A (en) Touch pad navigation for augmented reality display device
US11880542B2 (en) Touchpad input for augmented reality display device
US20230154445A1 (en) Spatial music creation interface
CN117561497A (en) Touch pad input for augmented reality display device
US11863596B2 (en) Shared augmented reality session creation
US11972090B2 (en) Interface carousel for use with image processing software development kit
US20230377223A1 (en) Hand-tracked text selection and modification
US20230384928A1 (en) Ar-based virtual keyboard
WO2022212175A1 (en) Interface with haptic and audio feedback response
EP4315005A1 (en) Interface with haptic and audio feedback response
WO2022212174A1 (en) Interface with haptic and audio feedback response
CN116685941A (en) Media content item with haptic feedback enhancement
CN116670635A (en) Real-time video communication interface with haptic feedback

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination