US20190295056A1 - Augmented Reality and Messaging - Google Patents
Augmented Reality and Messaging
- Publication number
- US20190295056A1 (application number US16/359,895)
- Authority
- US
- United States
- Prior art keywords
- user
- avatar
- receiving
- implementations
- virtual action
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/08—Payment architectures
- G06Q20/12—Payment architectures specially adapted for electronic shopping systems
- G06Q20/123—Shopping for digital content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q20/00—Payment architectures, schemes or protocols
- G06Q20/30—Payment architectures, schemes or protocols characterised by the use of specific devices or networks
- G06Q20/32—Payment architectures, schemes or protocols characterised by the use of specific devices or networks using wireless devices
- G06Q20/327—Short range or proximity payments by means of M-devices
- G06Q20/3274—Short range or proximity payments by means of M-devices using a pictured code, e.g. barcode or QR-code, being displayed on the M-device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0641—Shopping interfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0641—Shopping interfaces
- G06Q30/0643—Graphical representation of items or shoppers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/04—Real-time or near real-time messaging, e.g. instant messaging [IM]
- H04L51/046—Interoperability with other network applications or services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/07—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
- H04L51/10—Multimedia information
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/21—Monitoring or handling of messages
- H04L51/222—Monitoring or handling of messages using geographical location information, e.g. messages transmitted or received in proximity of a certain spot or area
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/52—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/04—Real-time or near real-time messaging, e.g. instant messaging [IM]
Definitions
- the disclosed implementations relate generally to augmented reality, and more particularly, to methods and systems for messaging using augmented reality.
- Augmented Reality provides a view of the real world that is enhanced by digital information and media, such as video, graphics, and GPS overlays.
- AR applications continue to integrate into daily life, improving productivity and efficiency.
- As AR gains popularity, messaging platforms continue to add social networking features.
- It is therefore desirable to provide social messaging platforms that incorporate AR concepts. It is also desirable to provide a more realistic and interactive AR user experience through the messaging platforms.
- the device and/or the server system may be configured to allow users to generate and place personalized avatars in the real world, to interact with the avatars, to message other users using the avatars, and to have the avatars interact amongst themselves.
- the system allows a user to send AR video content to a recipient who can view the content in augmented reality.
- augmented reality content can not only be viewed in augmented reality by its creator, but that content can be sent to a recipient who may then experience that same content in augmented reality from their own device.
- a recipient can interact with received AR content in their own augmented reality world rather than watch a recorded video of the content in a sender's augmented reality or environment.
- the system allows a creator to place AR content tagged to specific locations. When a recipient is in proximity of the locations, the recipient can not only watch the AR content but also interact with that content in their own Augmented Reality rather than just watching a recorded video.
- Some implementations of the system allow a user to produce a virtual sales person, a virtual companion, virtual pets, and other virtual beings.
- the system allows a creator to create a three-dimensional animated avatar of themselves.
- the system allows a user to select a surface to place an avatar upon.
- the system allows a creator to click a “View BLAVATARTM” icon to view his avatar in his own real world (sometimes called physical world) using augmented reality.
- the system allows a creator to select a microphone button and record an audio message.
- the system allows a creator to select a friend and send or share the content.
- the system allows a recipient (e.g., a friend) to receive a notification that they have content (to open/view) from a sender.
- an application opens (e.g., in response to a recipient selecting a message) and the recipient can view an avatar in a received message.
- a recipient selects a “REAL WORLD” icon.
- an application launches the recipient's camera so that the recipient views the room they are in, and the sender's avatar (situated in the room) speaks to the recipient.
- a recipient can interact with the sender's avatar in the room.
- the system includes an application that allows a recipient to receive and engage with AR content.
- the content is sent to a cloud based server.
- a notification is delivered to a recipient.
- an application is launched.
- the recipient will then see the avatar sent by the creator.
- the recipient selects a “REAL WORLD” icon, an application launches the recipient's phone camera, and the recipient gets to see their current location (e.g., a room) through their camera.
- the recipient will see a “View BLAVATARTM” icon.
- When the recipient selects the icon, the sender's avatar will show up in the camera view, and thus in the recipient's location. In some implementations, the recipient will be able to hear the message accompanying the content and even interact with the content as if the content were physically there in the same room as they are.
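- The recipient-side flow described above (notification, message view, "REAL WORLD" icon, camera launch, "View BLAVATARTM" icon) can be sketched as follows. This is an illustrative outline only; the class and method names (e.g., ARMessage, RecipientApp) are hypothetical and not part of the disclosed implementation.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ARMessage:
    sender: str
    avatar_id: str                                  # avatar stored on the cloud-based server
    audio_clip: Optional[bytes] = None              # optional recorded voice message
    geo_tag: Optional[Tuple[float, float]] = None   # optional (lat, lon) placement

class RecipientApp:
    """Stub client mirroring the recipient-side steps described above."""

    def on_notification(self, msg: ARMessage) -> None:
        print(f"Notification: AR content from {msg.sender}")
        print(f"Displaying avatar {msg.avatar_id} inside the message view")

    def on_real_world_selected(self, msg: ARMessage) -> None:
        print("Launching the phone camera: recipient sees their current location")
        self.on_view_blavatar_selected(msg)

    def on_view_blavatar_selected(self, msg: ARMessage) -> None:
        print(f"Overlaying avatar {msg.avatar_id} onto the live camera view")
        if msg.audio_clip:
            print("Playing the message accompanying the content")

app = RecipientApp()
message = ARMessage(sender="creator", avatar_id="blavatar-001", audio_clip=b"...")
app.on_notification(message)
app.on_real_world_selected(message)
```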
- the system allows a user to create an avatar using an application that would allow creators to create video content in Augmented Reality. In some implementations, the system allows a recipient to view that content in Augmented Reality from their own mobile device. In some implementations, the system includes modifications to mobile device hardware and/or software applications. In some implementations, a user can e-mail the AR content or a link to the content to another user. In some implementations, a user can view the AR content by selecting a link in an e-mail or view the content in the e-mail.
- the system allows a user to send Augmented Reality content to kids with autism, so the recipient kids have “someone” with them in the same room to communicate with.
- the system also can be used for medical reasons.
- the system allows a user to create a virtual sales person and let customers see and engage with the virtual sales person in their own real world.
- a method for placing an avatar in a real world location.
- the method includes receiving a request from a first user to place an avatar of the first user in a first user-specified location in a physical environment.
- the method includes obtaining an avatar of the first user, and associating the avatar with a first location in the physical environment (e.g., by using a geo-tagging technique).
- the first location is a position on or around an object in the physical environment.
- the method also includes receiving a request from a second user to view the avatar.
- the method includes determining a second location of the second user in the physical environment, and determining if the second location is proximate to the first location. In accordance with a determination that the second location is proximate to the first location, the method includes obtaining a first image of the physical environment at the first location, creating an overlay image by overlaying a second image of the avatar onto the first image (e.g., in a manner consistent with an orientation of the second user), and causing the overlay image to be displayed on an electronic device used by the second user.
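- A minimal sketch of the proximity check and overlay step described in this method, assuming geographic (lat, lon) coordinates, an illustrative proximity threshold, and the Pillow imaging library; none of these specifics are mandated by the disclosure.

```python
import math
from PIL import Image  # pip install Pillow

def is_proximate(loc_a, loc_b, threshold_m=25.0):
    """Haversine distance between two (lat, lon) pairs compared against an
    illustrative proximity threshold in meters."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*loc_a, *loc_b))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    distance_m = 6371000 * 2 * math.asin(math.sqrt(a))
    return distance_m <= threshold_m

def create_overlay(environment_img, avatar_img, anchor_xy=(0, 0)):
    """Composite a (transparent) avatar image onto a first image of the
    physical environment at the placement point."""
    overlay = environment_img.convert("RGBA")
    overlay.alpha_composite(avatar_img.convert("RGBA"), dest=anchor_xy)
    return overlay

first_location = (40.7128, -74.0060)     # where the avatar was placed
second_location = (40.71285, -74.00605)  # where the second user is standing
if is_proximate(first_location, second_location):
    room = Image.new("RGB", (640, 480), "white")              # stand-in camera frame
    avatar = Image.new("RGBA", (128, 200), (0, 0, 255, 255))  # stand-in avatar image
    composite = create_overlay(room, avatar, anchor_xy=(256, 200))  # displayed on the second user's device
```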
- the method includes retrieving an image from an image database, generating a generic image of the physical environment at the first location, receiving an image from the first user, and/or receiving an image from the second user.
- the method includes capturing an image of the first user using a camera application executed on a first electronic device, and generating an avatar of the first user based on the image of the first user.
- generating the avatar of the first user includes applying an algorithm that minimizes perception of an uncanny valley in the avatar.
- the avatar is one or more of an animated, emotional, and/or interactive three-dimensional (3D) representation of the first user.
- the method includes receiving a user input from the first user corresponding to one or more animated, emotional, and/or interactive 3D representation of the first user, and generating the avatar of the first user based on the user input.
- the method further includes uploading the avatar to a computer distinct from the first electronic device.
- the method includes determining if a third user is in the vicinity of the first location, and, in accordance with a determination that the third user is in the vicinity of the first location, notifying the third user about the avatar at the first location.
- the method includes associating an audio file with the first avatar, and, in response to receiving the request from the second user, and in accordance with the determination that the second location is proximate to the first location, causing the electronic device used by the second user to play the audio file in addition to displaying the overlay image.
- the method includes receiving the audio file from the first user via a microphone on an electronic device used by the first user.
- a non-transitory computer readable storage medium stores one or more programs.
- the one or more programs include instructions, which, when executed by a computing system, cause the computing system to perform a method that supports user interaction with one or more avatars and/or interaction amongst the one or more avatars.
- the method includes receiving a request from a first user to place a first avatar of the first user at a first specific location in a physical environment.
- the first avatar is configured to perform a first virtual action.
- the method includes associating the first avatar with the first virtual action and the first specific location.
- the first virtual action comprises one of performing a gift exchange operation between the first user and the second user, displaying marketing information associated with the first user, and exchanging payments between the first user and the second user.
- upon detecting that a second user is proximate to the first specific location, the method includes sending operational information for the first avatar to a second device, wherein the operational information is configured when executed on the second device to enable first interactions of the second user with the first avatar, including causing the first avatar to perform the first virtual action.
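- The following sketch illustrates, under stated assumptions, how a server might associate a placed avatar with a virtual action and push operational information to a second device once proximity is detected. The VirtualAction values mirror the examples above, but the data-structure and function names are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto

class VirtualAction(Enum):
    GIFT_EXCHANGE = auto()        # gift exchange between the first and second user
    SHOW_MARKETING_INFO = auto()  # display marketing information for the first user
    EXCHANGE_PAYMENT = auto()     # exchange payments between the users

@dataclass
class PlacedAvatar:
    avatar_id: str
    owner_id: str
    location: tuple               # the first specific location (lat, lon)
    action: VirtualAction

@dataclass
class OperationalInfo:
    """Payload sent to the second device so it can render the avatar and let
    the second user trigger the avatar's virtual action."""
    avatar_id: str
    action: VirtualAction
    assets_url: str

def on_proximity_detected(second_user_id, avatar: PlacedAvatar, send_to_device) -> None:
    # Called once the server has determined the second user is near the avatar's location.
    info = OperationalInfo(avatar.avatar_id, avatar.action,
                           assets_url=f"https://example.invalid/avatars/{avatar.avatar_id}")
    send_to_device(second_user_id, info)

on_proximity_detected(
    "user-2",
    PlacedAvatar("blavatar-001", "user-1", (40.7128, -74.0060), VirtualAction.GIFT_EXCHANGE),
    send_to_device=lambda uid, payload: print(uid, payload))
```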
- the second user's first interactions with the first avatar cause the first avatar to perform the first virtual action subject to constraints associated with the physical environment.
- the method also includes receiving, from the second device, first session information regarding the second user's first interactions with the first avatar, and updating online status of the first user, first avatar and/or the second user to reflect the first session information.
- the method includes updating databases that store information corresponding to the first user, first avatar, and/or second user to reflect the first session information.
- the method includes associating the first avatar with a resource, and performing the first virtual action comprises presenting the resource to the second user and enabling second interactions of the second user with the first avatar, including accepting or rejecting the resource.
- the method includes receiving, from the second device, second session information regarding the second user's second interactions with the first avatar. The method also includes, in response to receiving the second session information, determining, based on the second session information, if the second user's second interactions with the first avatar corresponds to the second user accepting the resource, and, in accordance with a determination that the second user accepted the resource, updating a resource table with the information that the resource has been accepted by the second user.
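- A hedged sketch of the session-reporting and resource-acceptance bookkeeping described above; the SessionInfo fields and the in-memory "resource table" are illustrative assumptions standing in for the databases mentioned elsewhere.

```python
from dataclasses import dataclass

@dataclass
class SessionInfo:
    """Hypothetical record a second device reports about an interaction session."""
    second_user_id: str
    avatar_id: str
    resource_id: str
    accepted: bool   # whether the interactions amount to accepting the resource (e.g., a gift)

# Illustrative in-memory resource table; a real deployment would use a database.
resource_table = {"gift-42": {"offered_by": "user-1", "accepted_by": None}}

def handle_session_info(session: SessionInfo) -> None:
    # Determine from the session information whether the resource was accepted,
    # and if so record the acceptance in the resource table.
    if session.accepted and session.resource_id in resource_table:
        resource_table[session.resource_id]["accepted_by"] = session.second_user_id

handle_session_info(SessionInfo("user-2", "blavatar-001", "gift-42", accepted=True))
print(resource_table)  # {'gift-42': {'offered_by': 'user-1', 'accepted_by': 'user-2'}}
```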
- the method includes receiving a request from a third user to place a second avatar of the third user at a second specific location in a physical environment.
- the second avatar is configured to perform a second virtual action.
- the method includes associating the second avatar with the second virtual action and the second specific location.
- the first virtual action is an action that relates to the second avatar and the second virtual action is an action that relates to the first avatar.
- the method includes sending operational information for the first avatar and the second avatar to a third device used by the third user.
- the operational information is configured when executed on the third device to cause the third device to display the first avatar and the second avatar, cause the first avatar to perform the first virtual action and the second avatar to perform the second virtual action, and display a result of the first virtual action and a result of the second virtual action, according to some implementations.
- the operational information causes the first avatar to perform the first virtual action and the second avatar to perform the second virtual action subject to constraints associated with the physical environment.
- the method includes determining whether the first virtual action and/or the second virtual action can be executed by the third device (e.g., by probing the third device). In accordance with a determination that the third device is unable to execute the first virtual action and/or the second virtual action, the method includes executing a part or whole of the first virtual action and/or the second virtual action on a fourth device distinct from the third device.
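- The capability check and fall-back to a distinct fourth device might look like the sketch below; the capability strings and the remote-offload callback are assumptions for illustration only.

```python
def can_execute(device_capabilities: set, action_requirements: set) -> bool:
    # The probed device can run the action only if it reports every required capability.
    return action_requirements <= device_capabilities

def run_virtual_action(action: str, requirements: set, third_device_caps: set, offload):
    if can_execute(third_device_caps, requirements):
        return f"{action}: executed on the third device"
    # Otherwise execute part or all of the action on a distinct fourth device,
    # e.g., render remotely and stream the result back.
    return offload(action)

result = run_virtual_action(
    action="3d-gift-animation",
    requirements={"arcore", "gpu-skinning"},
    third_device_caps={"arcore"},  # the probe shows a missing capability
    offload=lambda a: f"{a}: executed on a fourth device (remote render)")
print(result)
```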
- an electronic device includes one or more processors, memory, a display, and one or more programs stored in the memory.
- the programs are configured for execution by the one or more processors and are configured to perform a method for generating and/or interacting with tags that embed avatars on matrix barcodes, according to some implementations.
- the method includes receiving a request from a first user to place a custom image on a matrix barcode associated with an online resource.
- the method includes obtaining the custom image from the first user, obtaining an image of the matrix barcode, creating a tag by overlaying the custom image onto the image of the matrix barcode, associating the tag with an avatar corresponding to the custom image, and creating a digital downloadable format of the tag for the first user, wherein the digital downloadable format is associated with the online resource.
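- A sketch of the tag-creation step, assuming the widely available qrcode and Pillow Python libraries; the URL, sizes, and file name are placeholders. High error correction is used so the matrix barcode remains scannable even with the custom image covering its center, which is one plausible way to realize the overlay described above.

```python
import qrcode                 # pip install "qrcode[pil]"
from PIL import Image

def create_tag(online_resource_url: str, custom_image: Image.Image) -> Image.Image:
    """Render a matrix barcode for the online resource and overlay the first
    user's custom image in its center."""
    qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_H,
                       box_size=10, border=4)
    qr.add_data(online_resource_url)
    qr.make(fit=True)
    tag = qr.make_image(fill_color="black", back_color="white").convert("RGB")

    # Scale the custom image to roughly a quarter of the tag and paste it centered.
    side = tag.size[0] // 4
    logo = custom_image.convert("RGB").resize((side, side))
    offset = ((tag.size[0] - side) // 2, (tag.size[1] - side) // 2)
    tag.paste(logo, offset)
    return tag

custom = Image.new("RGB", (200, 200), "orange")  # stand-in for the creator's custom image
tag_image = create_tag("https://example.invalid/avatar/blavatar-001", custom)
tag_image.save("tag.png")                        # a digital downloadable format of the tag
```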
- the method includes receiving a request from a second user to scan the tag.
- the method includes receiving scanner information corresponding to the tag from a first electronic device used by the second user, retrieving the avatar associated with the tag using the scanner information, and causing a second electronic device used by the second user to display the avatar, according to some implementations.
- the first action comprises one of playing an audio or a video file, displaying an animation sequence, and displaying product or company information corresponding to the online resource.
- the method includes causing the second electronic device used by the second user to scan the tag (e.g., using a barcode scanner).
- the method includes deciphering the avatar associated with the tag using a disambiguation algorithm (e.g., an algorithm that uses error codes to distinguish between barcodes).
- the method includes obtaining the custom image from the first user by receiving a request from the first user to create a custom image, and, in response to receiving the request, retrieving a first image from an image database, receiving an input from the first user to select one or more customizations to apply to the first image, and, in response to receiving the input, applying the one or more customizations to the first image thereby producing the custom image.
- FIG. 1 is an example operating environment in accordance with some implementations.
- FIG. 2 is a block diagram illustrating an example electronic device in an operating environment in accordance with some implementations.
- FIG. 3 is a block diagram illustrating an example server in the server system of an operating environment in accordance with some implementations.
- FIGS. 4A-4E illustrate examples of avatar creation and placement, avatar-user interaction, avatar-avatar interaction, avatar-based tag creation, and interaction with avatar-based tags in accordance with some implementations.
- FIGS. 5A-5Z illustrate example user interfaces for avatar creation and placement, avatar-user interaction, avatar-avatar interaction, avatar-based tag creation, and interaction with avatar-based tags, in accordance with some implementations.
- FIGS. 6A-6E show a flow diagram illustrating a method for avatar creation and placement, in accordance with some implementations.
- FIGS. 7A-7C show a flow diagram illustrating a method for avatar-user interactions and avatar-avatar interactions, in accordance with some implementations.
- FIGS. 8A-8C show a flow diagram illustrating a method for avatar-based tag creation and user interaction with avatar-based tags, in accordance with some implementations.
- FIGS. 9A and 9B illustrate snapshots of an application for creating avatars and/or AR content, according to some implementations.
- FIG. 1 is an example operating environment 100 in accordance with some implementations.
- the operating environment 100 includes one or more electronic devices 190 (also called “devices”, “client devices”, or “computing devices”; e.g., electronic devices 190 - 1 through 190 -N) that are communicatively connected to an augmented reality or messaging server 140 (also called “messaging server”, “messaging system”, “server”, or “server system”) of a messaging service via one or more communication networks 110 .
- the electronic devices 190 are communicatively connected to one or more storage servers 160 that are configured to store and/or serve content 162 to users of the devices 190 (e.g., media content, programs, data, and/or augmented reality programs, or augmented reality content).
- the client devices 190 are computing devices such as laptop or desktop computers, smart phones, personal digital assistants, portable media players, tablet computers, or other appropriate computing devices that can be used to communicate with a social network or a messaging service.
- the messaging system 140 is a single computing device such as a computer server.
- the server system 140 includes multiple computing devices working together to perform the actions of a messaging server system (e.g., cloud computing).
- the network(s) 110 include a public communication network (e.g., the Internet, cellular data network, dialup modems over a telephone network), or a private communications network (e.g., private LAN, leased lines) or a combination of such communication networks.
- Users 102 - 1 through 102 -M of the client devices 190 - 1 through 190 -M access the messaging system 140 to subscribe to and participate in a messaging service (also called a “messaging network”) provided by the messaging server system 140 .
- the client devices 190 execute mobile or browser applications (e.g., “apps” running on smart phones) that can be used to access the messaging network.
- Users 102 interacting with the client devices 190 can participate in the messaging network provided by the server 140 by posting information, such as text comments, digital photos, videos, links to other content, or other appropriate electronic information. Users of the messaging server 140 can also annotate information posted by other users. In some implementations, information can be posted on a user's behalf by systems and/or services external to the server system 140 . For example, when a user posts a review of a movie to a movie review website, with proper permissions that website can cross-post the review to a social network managed by the server 140 on the user's behalf.
- a software application executing on a mobile device uses global positioning system capabilities to determine the user's location and automatically update the social network with the user location.
- the electronic devices 190 are also configured to communicate with each other through the communication network 110 .
- the electronic devices 190 can connect to the communication networks 110 and transmit and receive information thereby via cellular connection, a wireless network (e.g., a WiFi, Bluetooth, or other wireless Internet connection), or a wired network (e.g., a cable, fiber optic, or DSL network).
- the electronic devices 190 are registered in a device registry of the messaging service and thus are known to the messaging server 140 .
- the environment 100 also includes one or more storage server(s) 160 .
- a storage server 160 stores information corresponding to the messaging service, according to some implementations.
- an electronic device 190 may be associated with multiple users having respective user accounts in the user domain. Any of these users, as well as users not associated with the device, may use the electronic device 190 . In such implementations, the electronic device 190 receives input from these users 102 - 1 through 102 -M (including associated and non-associated users), and the electronic device 190 and/or the messaging server 140 proceeds to identify, for an input, the user making the input. With the user identification, a response to that input may be personalized to the identified user.
- the environment 100 includes multiple electronic devices 190 (e.g., devices 190 - 1 through 190 -N).
- the devices 190 are located throughout the environment 100 (e.g., all within a room or space in a structure, or spread throughout multiple cities or towns).
- When a user 102 makes an input or sends a message or other communication via a device 190 , one or more of the devices 190 receives the input, message, or other communication, typically via the communication networks 110 .
- one or more storage server(s) 160 are disposed in the operating environment 100 to provide, to one or more users 102 , messaging, AR-related content, and/or other information.
- storage servers 160 store avatars and location information for the avatars associated with the one or more users 102 .
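- One plausible (but entirely illustrative) way a storage server could keep avatars and their placement locations is a pair of tables such as the SQLite sketch below; the actual schema is not part of the disclosure.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE avatars (
    avatar_id  TEXT PRIMARY KEY,
    owner_id   TEXT NOT NULL,   -- user 102 who created the avatar
    model_uri  TEXT NOT NULL,   -- where the avatar's AR assets are stored
    audio_uri  TEXT             -- optional recorded message
);
CREATE TABLE placements (
    avatar_id  TEXT REFERENCES avatars(avatar_id),
    latitude   REAL NOT NULL,   -- geo-tagged placement location
    longitude  REAL NOT NULL,
    placed_at  TEXT DEFAULT CURRENT_TIMESTAMP
);
""")
conn.execute("INSERT INTO avatars VALUES (?, ?, ?, ?)",
             ("blavatar-001", "user-1", "store://avatars/blavatar-001.glb", None))
conn.execute("INSERT INTO placements (avatar_id, latitude, longitude) VALUES (?, ?, ?)",
             ("blavatar-001", 40.7128, -74.0060))
print(conn.execute("SELECT * FROM placements").fetchall())
```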
- FIG. 2 is a block diagram illustrating an example electronic device 190 in an operating environment (e.g., operating environment 100 ) in accordance with some implementations.
- the electronic device 190 includes one or more processing units (CPUs) 202 , one or more network interfaces 204 , memory 206 , and one or more communication buses 208 for interconnecting these components (sometimes called a chipset).
- the electronic device 190 includes one or more input devices 210 that facilitate user input, such as a button 212 , a touch sense array 214 , one or more microphones 216 , and one or more cameras 213 .
- the electronic device 190 also includes one or more output devices 218 , including one or more speakers 220 , and a display 224 .
- the electronic device 190 also includes a location detection device 226 (e.g., a GPS module) and one or more sensors 228 (e.g., accelerometer, gyroscope, light sensor, etc.).
- the memory 206 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and, optionally, includes non-volatile memory, such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, or one or more other non-volatile solid state storage devices.
- Memory 206 optionally includes one or more storage devices remotely located from the one or more processing units 202 .
- Memory 206 or alternatively the non-volatile memory within memory 206 , includes a non-transitory computer readable storage medium.
- memory 206 or the non-transitory computer readable storage medium of memory 206 , stores the following programs, modules, and data structures, or a subset or superset thereof:
- the device 190 or an application running on the electronic device 190 creates avatars independently (e.g., without communication with the server 140 ).
- the device 190 takes photos of environments for avatar placement (e.g., with or without user intervention, with or without server 140 ).
- the electronic device 190 executes one or more operations to interact with avatar(s).
- the electronic device 190 implements operations to simulate user interactions with avatars, and avatar-avatar interactions.
- the device 190 reports session information to the server 140 .
- the device 190 receives and displays notifications (e.g., regarding an avatar) from the server 140 .
- the device 190 creates avatar-enabled tags, and/or reads or scans avatar-enabled tags.
- Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above.
- the memory 206 optionally stores a subset of the modules and data structures identified above.
- the memory 206 optionally stores additional modules and data structures not described above.
- a subset of the programs, modules, and/or data stored in the memory 206 can be stored on and/or executed by the server system 140 .
- FIG. 3 is a block diagram illustrating an example augmented reality or messaging server 140 of an operating environment (e.g., operating environment 100 ) in accordance with some implementations.
- the server 140 includes one or more processing units (CPUs) 302 , one or more network interfaces 304 , memory 306 , and one or more communication buses 308 for interconnecting these components (sometimes called a chipset).
- the server 140 could include one or more input devices 310 that facilitate user input, such as a keyboard, a mouse, a voice-command input unit or microphone, a touch screen display, a touch-sensitive input pad, a gesture capturing camera, or other input buttons or controls.
- the server 140 could use a microphone and voice recognition or a camera and gesture recognition to supplement or replace the keyboard.
- the server 140 includes one or more cameras, scanners, or photo sensor units for capturing images, for example, of graphic series codes printed on the electronic devices.
- the server 140 could also include one or more output devices 312 that enable presentation of user interfaces and display content, including one or more speakers and/or one or more visual displays.
- the memory 306 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and, optionally, includes non-volatile memory, such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, or one or more other non-volatile solid state storage devices.
- the memory 306 optionally includes one or more storage devices remotely located from the one or more processing units 302 .
- Memory 306 or alternatively the non-volatile memory within memory 306 , includes a non-transitory computer readable storage medium.
- memory 306 or the non-transitory computer readable storage medium of memory 306 , stores the following programs, modules, and data structures, or a subset or superset thereof:
- the avatars have a one-to-one correspondence with a user.
- an avatar can be addressed to a specific individual who is a participant in an electronic communication (e.g., a messaging conversation with a sender/creator of the avatar, which can be an individual or a commercial user, such as an advertiser or a business).
- the avatars are one-to-many, meaning that they can be seen by any user within a group (e.g., users at a particular location, of a particular demographic category, or members in a particular organization, to name a few possibilities).
- the server 140 maintains a geographical database of all avatars and operational information.
- the server maintains a database of all users.
- the server 140 maintains a database of all users/avatars and avatar/avatar interactions.
- the server 140 generates avatars that avoid the uncanny valley problem associated with virtual representations.
- FIGS. 4A-4E illustrate examples of avatar creation and placement, avatar-user interaction, avatar-avatar interaction, avatar-based tag creation, and interaction with avatar-based tags in accordance with some implementations.
- FIG. 4A is an illustration of avatar creation and placement 400 , according to some implementations.
- a user 102 -A uses an electronic device 190 -A (with display 224 -A, and a camera 213 (not shown)) to create an avatar representation of herself (steps 402 , 404 , and 406 ), and places the avatar in a selected location (steps 408 and 410 ), according to some implementations (e.g., in this example, a carpet in front of a piece of furniture).
- FIG. 4A also illustrates that user 102 -B could interact with the avatar created in step 406 , via the messaging server 140 , in accordance with some implementations.
- a user creating an avatar can interact with the avatar via a displayed set of affordances.
- a user can interact via touch, or the user can employ a voice interface provided by the electronic device 190 to verbally designate a selected avatar interaction to be performed.
- the displayed affordances include: a “View BLAVATARTM” icon, an “Edit BLAVATARTM” icon, a microphone icon, which a user can select to record, or playback, audio for use by the avatar (e.g., to enter a message that can be spoken by the avatar when displayed by a message recipient), a pin icon (e.g., to designate an avatar location in an environment), and a rotate icon (e.g., to cause the avatar to be shown from a user-specified angle when displayed on a device 190 ).
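- On the client side, the affordances listed above could be dispatched to handlers roughly as sketched below; the handler names and print statements are stand-ins for the real avatar operations.

```python
# Illustrative mapping from the displayed affordances to client-side handlers.
affordances = {
    "view_blavatar": lambda avatar: print(f"Showing {avatar} in the AR view"),
    "edit_blavatar": lambda avatar: print(f"Opening the editor for {avatar}"),
    "record_audio":  lambda avatar: print(f"Recording a message for {avatar}"),
    "pin_location":  lambda avatar: print(f"Pinning {avatar} to the designated location"),
    "rotate":        lambda avatar: print(f"Rotating {avatar} to the specified angle"),
}

def on_affordance_selected(name: str, avatar: str) -> None:
    """Dispatch a touch or voice selection to the matching handler."""
    affordances[name](avatar)

on_affordance_selected("pin_location", "blavatar-001")
```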
- a user viewing an avatar can lock down or select an appearance of the avatar as the avatar's default appearance, select a particular appearance or an alternative appearance for the avatar, or edit another avatar (e.g., an avatar created by a different user or an avatar created for a different user).
- The term “avatar” is used here to refer to virtual representations that embody augmented reality characteristics (e.g., dynamic avatars with AR characteristics), not just conventional avatars without augmented reality capabilities.
- FIGS. 4B-4D illustrate avatar-user and avatar-avatar interactions according to some implementations.
- a user 102 -C uses ( 420 ) an electronic device 190 -C that controls ( 424 ) display 224 -C to display a user interface 440 , and user 102 -D uses ( 430 ) an electronic device 190 -D that controls ( 434 ) display 224 -D to display a user interface 460 .
- the interfaces 440 and 460 each show two avatars 444 and 464 . As described above in reference to FIG. 4A , avatars 444 and 464 are created by the respective users (user 102 -C and user 102 -D in the current example) and placed in respective locations 442 and 462 (e.g., the location specified by the users 102 -C and 102 -D in step 408 ).
- Electronic devices 190 -C and 190 -D are connected to the messaging server 140 .
- the server 140 determines that locations 442 and 462 match or are nearby locations (e.g., within a proximity) and invokes actions corresponding to the respective avatars 444 and 464 .
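- The matching step, in which the server notices that two placed avatars are at the same or nearby locations and invokes their actions, could be sketched as follows; the proximity predicate and identifiers are illustrative.

```python
from itertools import combinations

def match_nearby_avatars(placements: dict, proximity) -> list:
    """Return pairs of avatars whose placement locations match or are nearby,
    so their respective actions can be invoked."""
    pairs = []
    for (id_a, loc_a), (id_b, loc_b) in combinations(placements.items(), 2):
        if proximity(loc_a, loc_b):
            pairs.append((id_a, id_b))
    return pairs

placements = {
    "avatar-444": (40.7128, -74.0060),    # location 442
    "avatar-464": (40.71281, -74.00601),  # location 462, a few meters away
}
near = lambda a, b: abs(a[0] - b[0]) < 1e-3 and abs(a[1] - b[1]) < 1e-3
for id_a, id_b in match_nearby_avatars(placements, proximity=near):
    print(f"Invoking the interaction between {id_a} and {id_b}")
```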
- the respective users 102 -C and 102 -D can view the resulting actions and/or changes to the respective avatars on their respective displays.
- the user interfaces 440 and 460 show different views depending on how the individual users approached the particular scene or location.
- the user interface 440 displays a collection of available avatar interactions, which can be selected via user 102 -C interaction with affordances 446 displayed on the user interface 440 .
- the user interface 460 displays a collection of available avatar interactions, which can be selected via user 102 -D interaction with affordances 466 displayed on the user interface 460 .
- Although the illustrated example shows similar choices in affordances 446 and 466 , some implementations display different affordances or available avatar interactions for the different users.
- the displayed affordances are chosen automatically by the server 140 depending on consideration of one or more factors, such as locations of users, locations of avatars, past actions of respective avatars and/or users, and a preferred set of actions or outcomes as selected by respective users.
- the server 140 can automatically (e.g., without user input) select one or more actions from a set of actions stored for each avatar based on location, or user information.
- a user can interact with the avatars via touch inputs on a display screen or other touch input device, or can employ a voice interface provided by the electronic device 190 to verbally designate a selected avatar interaction to be performed.
- the displayed affordances are similar to those presented on the user interface 440 or 460 , but also include: a music note icon that can be used by the user 102 -D to add or listen to a digital music snippet to accompany the avatar when displayed (e.g., to accompany an avatar that has been programmed to dance when displayed).
- FIGS. 4B-4D illustrate a gift exchange as a way of explaining avatar-avatar interaction, according to some implementations.
- a user 102 -D (or the avatar 464 as controlled by the server 140 ) chooses to gift a user 102 -C (or the avatar 444 as controlled by the server 140 ) via the gift affordance 468 in the affordances 466 .
- the avatar 444 receives the gift (e.g., by selecting affordance 448 in the affordances 446 ) sent by the user 102 -D (or the avatar 464 as controlled by the server 140 ).
- the avatar 444 (via the user 102 -C) also has the option of rejecting the gift sent by the user 102 -D (or the avatar 464 as controlled by server 140 ). This can be done via the affordance 449 .
- the avatar 444 interacts with the received gift (e.g., opens the gift, smells the gift).
- the respective user interfaces 440 and 460 are updated to show the gift exchange interaction.
- the affordances 446 and 466 provide the respective users with options to steer or control the interactions (e.g., even as the server 140 makes some automatic decisions to progress the interactions).
- in some implementations, a user can select an affordance (e.g., affordance 449 or 469 , resembling a shoe) to reject an action or an overture (e.g., reject a gift, or reject an offer to go out on a date).
- avatar-user interactions and avatar-avatar interactions include the ability of a first avatar created by a first user to interact with a second user or a second avatar created by the second user. These interactions include performing actions, such as exchanging gifts, offering a kiss, or one avatar slapping another.
- the interactions include decision interactions where one avatar asks another avatar out to perform an action (e.g., to go out on a date, to a picnic or a movie, or to go out to eat).
- users can view one or more results of the actions in the real world. For example, a gift accepted by a user's avatar results in a real gift (e.g., a toy or a TV) being delivered or scheduled to be delivered to the user.
- the server 140 can add additional actions or interactions between users, users and avatars, or between avatars in a dynamic fashion (e.g., based on environment, time of day, or preferences of users at the time of creation of avatars). Some implementations also include merchandising attributes (e.g., with gifts) and associate commerce engines to process and generate revenue (e.g., for gift exchange) based on interactions. Some implementations process and generate actual revenue from users purchasing specific merchandise via the interactions. Outcomes of interactions are logged by the server 140 , which can initiate charges or new promotional messages based on the outcomes and which can also incorporate the outcomes as factors in future avatar-avatar and/or avatar-user interactions options and recommendations.
- FIG. 4E illustrates tag creation ( 470 ) and tag viewing ( 480 ) operations in accordance with some implementations.
- a user 102 -E uses an electronic device 190 -E that controls ( 424 ) the display 224 -E to display an avatar 472 in a tag-creation user interface (e.g., an interface provided by a messaging application).
- the user 102 -E is prompted by the messaging application to create a tag ( 474 ) and is given the option of taking a picture for the tag using the camera of the electronic device 190 -E (e.g., as shown in FIG. 4E , the image could be a picture of a remote control device).
- the picture taken by the user 102 -E is then transmitted to the server 140 , which creates a tag 476 that is optionally a combination of a displayable machine readable code object (e.g., a matrix or two-dimensional barcode, such as a QR code, or a one dimensional bar code) and the photo transmitted by the user 102 -E.
- This tag is uniquely associated with information entered by the tag creator 102 -E that is stored by the messaging server 140 and/or the storage server(s) 160 for presentation in association with user interaction with the tag.
- the stored information can include, without limitation, audio and/or video content, a promotional offer, advertising, and/or a message from the tag creator or associated organization (e.g., non-profit group, school, or business).
- the stored information can also include information regarding the creator of the tag and/or the tag creator's associated organization and their associated avatars, and can be configured to implement and exhibit the technical capabilities and behaviors of avatars described herein.
- information associated with a tag can be presented by an avatar selected by the tag's creator, and that avatar can support user-avatar and avatar-avatar interactions, can be pinned to a specific geographic location, can be edited and/or manipulated in accordance with the tag creator's specifications, and can be configured to respond dynamically to questions from the user regarding the information associated with the tag.
- an avatar associated with a tag ( 474 ) includes one or more characteristics of a BLAVATARTM (e.g., action and/or interaction capabilities).
- another user 102 -D uses an electronic device 190 -F that controls the display 224 -F to display and interact with the tag 476 .
- a tag-enabled messaging application executing on the electronic device 190 -F causes the device 190 -F to display the tag shown in display 224 -F and to enable interaction by the user 102 -D with the tag 476 in accordance with avatar capabilities as described herein.
- the server 140 responds to a request from device 190 -F (which is controlled by the user 102 -D) to show one or more avatar-based tags that are associated with messages transmitted to the user, organizations of which the user is a registered member, or tags that are associated with a current or popular location associated with the user.
- FIGS. 5A-5Z illustrate example user interfaces for avatar creation and placement, avatar-user interaction, avatar-avatar interaction, avatar-based tag creation, and interaction with avatar-based tags, in accordance with some implementations.
- some implementations provide a navigation button 502 (e.g., to reverse steps) in a user interface 504 corresponding to a messaging application in a display 224 .
- the displayed user interface 504 instructs the user to select a type of body 506 , and offers the option 508 to use the user's face or to start with a stored set of avatars 510 .
- a button/affordance 512 is provided to enable a user to proceed to the next step in avatar creation once the user has made the selections, according to some implementations.
- FIG. 5B continues the example.
- Display 224 shows an interface with options 514 , 516 , and 518 to select the type of body for the avatar.
- In FIG. 5C , an initial image 522 of the user is displayed with appropriate prompts 524 (e.g., to adjust lighting), and a camera icon 526 , according to some implementations.
- In FIG. 5D , further controls 528 (for color or complexion adjustments), forward 532 and backward 530 navigation buttons to select between avatars, and an avatar 534 -2 (of one or more avatar choices) are shown, according to some implementations.
- the user is also prompted 506 to select a hair style.
- FIG. 5E shows another avatar 534 - 4 based on the user selection in FIG. 5D .
- FIG. 5F shows yet another avatar 536 based on the selection in FIG. 5E .
- the interface shown in FIG. 5G allows the user to select emotions 506 (between choices 538 - 2 , 538 - 4 , 538 - 6 , and 538 - 8 ), according to some implementations.
- FIG. 5I shows a change in selection of emotion from 538 - 2 to 538 - 4 and an updated avatar 540 - 4 in response to the selection, according to some implementations.
- FIG. 5J similarly, shows an updated avatar 540 - 6 based on user selection of emotion 538 - 6 , according to some implementations.
- As FIG. 5K illustrates, some implementations allow a user to select a body type 506 , with an initial avatar (e.g., a long shot of an avatar) 542 .
- As shown in FIG. 5L , the application 504 shows one or more animation choices 546 -2, . . . , 546 -7 (e.g., rapping, talking, singing, a snake animation), and, once the user has made a selection, allows the user to see the avatar animation in real world 512 (e.g., a virtual world supported by the application).
- a user can select body tone 506 , by sliding an affordance 548 - 4 (between options 548 - 2 and 548 - 6 ), and see the updated avatar 550 .
- a user can additionally select a photo from storage on the electronic device, choose an input type 552 , take a photo from the camera 554 , or cancel the process of creating an avatar by selecting an image 556 , according to some implementations.
- a user can select a surface 560 after controlling the device to detect a specific surface 558 .
- a user can view avatar 568 , edit the avatar 570 , add audio via a microphone 572 , associate an audio or music file 566 with the avatar, or reverse commands 562 , according to some implementations.
- the application 504 allows the user to leave the avatar (called a GeoBlab in the Figure) and alerts 576 - 2 the user that recipients of the avatar will get turn by turn directions (if needed) to the avatar once the user places or leaves the avatar at the selected location.
- the user can agree 576 - 4 , in which case the avatar will be left at the chosen location, or decline 576 - 8 in which case the device or server will cancel the request.
- the user can also turn off this alert 576 - 6 .
- Some implementations enable the user to either leave the avatar 578 - 2 , or send the avatar 578 - 4 to another user, as shown in FIG. 5R .
- Some implementations provide an interface, as shown in FIG. 5S , for searching avatar locations; the electronic device 190 sends the search locations to a messaging server 140 , which returns any search results to the electronic device 190 .
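- A minimal sketch of the search exchange just described, with an assumed JSON request/response shape that is not part of the disclosed interface.

```python
import json

def search_avatars_near(search_location, placements, within):
    """Server-side handler: return avatars placed near the searched location."""
    results = [avatar_id for avatar_id, loc in placements.items()
               if within(search_location, loc)]
    return json.dumps({"query": list(search_location), "avatars": results})

print(search_avatars_near(
    (40.7128, -74.0060),
    {"blavatar-001": (40.7129, -74.0061), "blavatar-002": (41.0, -73.0)},
    within=lambda a, b: abs(a[0] - b[0]) < 0.001 and abs(a[1] - b[1]) < 0.001))
```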
- the device 190 displays an avatar 564 corresponding to the search results, as shown in FIG. 5T , according to some implementations. Again, some implementations provide a user with the option to view the location 528 , view the avatar 568 , edit the avatar 570 , or play an audio file associated with the avatar 584 .
- application 504 allows a user to create a tag 590 (sometimes called a Blabeey tag), create an avatar 588 , or create a video 586 , as shown in FIG. 5U .
- FIG. 5V shows another interface with avatar 564 , location 574 , an option 568 to view the avatar, an option 570 to edit the avatar, an option 572 to record audio, and finally a create tag button 592 to create the tag based on the selected features, according to some implementations.
- Some implementations allow a user to take a picture to super-impose on a tag 594 -4, and provide options 594 -2 and 594 -6 to either agree or decline.
- a user can click a share button 596 to share the avatar, according to some implementations, as shown in FIG. 5W .
- Some implementations show the tag 598 - 2 (with a photo 598 - 4 selected by the user) to the user before he selects to share the tag 596 , as shown in FIG. 5X .
- Some implementations allow a user to select a color for the tag 598 -1, give a preview of a selected color 598 -3, show a palette of color choices 598 -5, and allow the user to click a select button 599 to select a color, as shown in FIG. 5Y .
- FIG. 5Z shows an interface displaying the tag 598 - 2 created by another user.
- a second user viewing the tag can use a tag viewer application to view the content 598 - 4 related to the tag 598 - 2 , according to some implementations.
- FIGS. 6A-6E show a flow diagram illustrating a method for avatar creation and placement, in accordance with some implementations.
- User input processing is discussed above in reference to FIG. 2 .
- one or more modules in memory 206 of an electronic device 190 interface with one or more modules in memory 306 of the messaging server 140 to receive and process avatar creation and placement requests, as discussed above in reference to FIG. 4A , in accordance with some implementations.
- a method 600 for placing an avatar in a real world location. As shown in FIG. 6A , the method includes receiving ( 602 ) a request from a first user to place an avatar of the first user in a first user-specified location in a physical environment. In response ( 604 ) to receiving the request from the first user, the method includes obtaining ( 606 ) an avatar of the first user, and associating ( 608 ) the avatar with a first location in the physical environment (e.g., by using a geo-tagging technique). In some implementations, the first location is a position on or around an object in the physical environment ( 610 ). The method also includes receiving ( 612 ) a request from a second user to view the avatar.
- the method in response ( 614 ) to receiving the request from the second user, includes determining ( 616 ) a second location of the second user in the physical environment, and determining ( 618 ) if the second location is proximate to the first location.
- the method includes obtaining ( 622 ) a first image of the physical environment at the first location, creating ( 626 ) an overlay image by overlaying a second image of the avatar onto the first image (e.g., in a manner consistent with an orientation of the second user), and causing ( 628 ) the overlay image to be displayed on an electronic device used by the second user.
- the method includes ( 624 ) retrieving an image from an image database, generating a generic image of the physical environment at the first location, receiving an image from the first user, and/or receiving an image from the second user.
- the method includes capturing ( 640 ) an image of the first user using a camera application executed on a first electronic device, and generating ( 642 ) an avatar of the first user based on the image of the first user.
- generating the avatar of the first user includes applying ( 644 ) an algorithm that minimizes perception of an uncanny valley in the avatar.
- the avatar is ( 646 ) one or more of an animated, emotional, and/or interactive three-dimensional (3D) representation of the first user.
- the method includes receiving ( 648 ) a user input from the first user corresponding to one or more animated, emotional, and/or interactive 3D representation of the first user, and generating ( 650 ) the avatar of the first user based on the user input. In some implementations, the method further includes uploading ( 652 ) the avatar to a computer distinct from the first electronic device.
- the method includes determining ( 630 ) if a third user is in the vicinity of the first location, and, in accordance with a determination that the third user is in the vicinity of the first location, notifying ( 632 ) the third user about the avatar at the first location.
- the method includes associating ( 634 ) an audio file with the first avatar, and, in response to receiving the request from the second user, and in accordance with the determination that the second location is proximate to the first location, causing ( 638 ) the electronic device used by the second user to play the audio file in addition to displaying the overlay image.
- the method includes receiving the audio file from the first user via a microphone on an electronic device used by the first user.
- FIGS. 7A-7C show a flow diagram illustrating a method 700 for supporting avatar-user interactions and avatar-avatar interactions, in accordance with some implementations.
- User input processing is discussed above in reference to FIG. 2 .
- one or more modules in memory 206 of an electronic device 190 interface with one or more modules in memory 306 of the messaging server 140 to process avatar-avatar interaction and avatar-user interaction, as discussed above in reference to FIGS. 4B-4D , in accordance with some implementations.
- a non-transitory computer readable storage medium stores one or more programs.
- the one or more programs include instructions, which, when executed by a computing system cause the computing system to perform a method that supports user interaction with one or more avatars and/or interaction amongst the one or more avatars.
- the method includes receiving ( 702 ) a request from a first user to place a first avatar of the first user at a first specific location in a physical environment.
- the first avatar is configured to perform a first virtual action.
- the method includes associating ( 704 ) (e.g., geo-tagging) the first avatar with the first virtual action and the first specific location.
- the first virtual action comprises ( 706 ) one of performing a gift exchange operation between the first user and the second user, displaying marketing information associated with the first user, and exchanging payments between the first user and the second user.
- the first virtual action includes a second user giving a thumbs up or a thumbs down to how the avatar looks.
- upon detecting ( 708 ) that a second user is proximate to the first specific location, the method includes sending ( 710 ) operational information for the first avatar to a second device, wherein the operational information is configured when executed on the second device to enable first interactions of the second user with the first avatar, including causing the first avatar to perform the first virtual action.
- the second user's first interactions with the first avatar cause ( 712 ) the first avatar to perform the first virtual action subject to constraints associated with the physical environment. For example, the avatar is blocked by a wall and cannot walk through it, but can lean on the wall.
- the method also includes receiving ( 714 ), from the second device, first session information regarding the second user's first interactions with the first avatar, and updating ( 716 ) online status of the first user, first avatar and/or the second user to reflect the first session information.
- the method includes updating ( 716 ) databases that store information corresponding to the first user, first avatar, and/or second user to reflect the first session information.
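- Taken together, steps 702-716 describe a server-side loop: geo-tag an avatar with its virtual action, detect a proximate second user, ship operational information to that user's device, and fold the returned session information back into status records. A compact sketch of that loop with in-memory stand-ins for the server's databases (the record shapes, and the reuse of the is_proximate helper sketched earlier, are assumptions):

```python
placed_avatars = []   # geo-tagged avatars and their virtual actions (step 704)
online_status = {}    # status records updated from session information (step 716)

def place_avatar(owner_id, avatar_id, action, location):
    # Steps 702-704: associate the avatar with the virtual action and the specific location.
    placed_avatars.append({"owner": owner_id, "avatar": avatar_id,
                           "action": action, "location": location})

def on_user_location(user_id, user_loc, send_to_device):
    # Steps 708-710: when a second user comes near a placed avatar, send the
    # operational information that lets their device render it and run the action.
    for entry in placed_avatars:
        if is_proximate(user_loc, entry["location"]):
            send_to_device(user_id, {"avatar": entry["avatar"], "action": entry["action"]})

def on_session_info(user_id, avatar_id, session_info):
    # Steps 714-716: reflect the reported interactions in the status records.
    online_status[user_id] = session_info
    online_status[avatar_id] = session_info
```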
- the method includes associating ( 718 ) the first avatar with a resource (e.g., a gift), and performing the first virtual action comprises presenting the resource to the second user and enabling second interactions of the second user with the first avatar, including accepting or rejecting the resource.
- the method includes receiving ( 722 ), from the second device, second session information regarding the second user's second interactions with the first avatar.
- the method also includes, in response ( 726 ) to receiving the second session information, determining ( 728 ), based on the second session information, if the second user's second interactions with the first avatar corresponds to the second user accepting the resource, and, in accordance with a determination that the second user accepted the resource, updating ( 730 ) a resource table with the information that the resource has been accepted by the second user.
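- For the resource (e.g., gift) variant in steps 718-730, the server mainly needs to record which resources are attached to which avatars and, when the second session information indicates acceptance, mark them accepted. A minimal sketch with an assumed resource-table shape:

```python
resource_table = {}   # resource_id -> {"avatar": avatar_id, "accepted_by": user_id or None}

def attach_resource(resource_id, avatar_id):
    # Step 718: associate the first avatar with a resource such as a gift.
    resource_table[resource_id] = {"avatar": avatar_id, "accepted_by": None}

def on_second_session_info(resource_id, user_id, session_info):
    # Steps 722-730: if the session shows the second user accepted the resource,
    # update the resource table accordingly.
    if session_info.get("resource_accepted"):
        resource_table[resource_id]["accepted_by"] = user_id
```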
- the method includes receiving ( 732 ) a request from a third user to place a second avatar of the third user at a second specific location in a physical environment.
- the second avatar is configured to perform a second virtual action.
- the method includes associating the second avatar with the second virtual action and the second specific location.
- the first virtual action is ( 736 ) an action that relates to the second avatar and the second virtual action is an action that relates to the first avatar.
- the method includes sending ( 740 ) operational information for the first avatar and the second avatar to a third device used by the third user.
- the operational information is configured ( 742 ) when executed on the third device to cause the third device to display the first avatar and the second avatar, cause the first avatar to perform the first virtual action and the second avatar to perform the second virtual action, and display a result of the first virtual action and a result of the second virtual action, according to some implementations.
- the operational information causes the first avatar to perform the first virtual action and the second avatar to perform the second virtual action subject to constraints associated with the physical environment.
- the method includes determining whether the first virtual action and/or the second virtual action can be executed by the third device (e.g., by probing the third device). In accordance with a determination that the third device is unable to execute the first virtual action and/or the second virtual action, the method includes executing a part or whole of the first virtual action and/or the second virtual action on a fourth device distinct from the third device.
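- The capability check described above can be as simple as matching a virtual action's requirements against the capabilities the third device reports when probed, and routing unsupported work to another device. A sketch under those assumptions (the capability names are purely illustrative):

```python
def run_virtual_action(action, third_device, fourth_device):
    # Probe the third device's advertised capabilities; execute on the fourth
    # device when the third device cannot run part or all of the action.
    required = set(action.get("requires", []))
    if required <= set(third_device.get("capabilities", [])):
        return {"executed_on": third_device["id"], "action": action["name"]}
    return {"executed_on": fourth_device["id"], "action": action["name"], "offloaded": True}

# Example: an action that needs GPU skinning falls back to the fourth device.
action = {"name": "dance", "requires": ["gpu_skinning"]}
print(run_virtual_action(action,
                         {"id": "third-device", "capabilities": ["audio"]},
                         {"id": "fourth-device", "capabilities": ["audio", "gpu_skinning"]}))
```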
- FIGS. 8A-8C show a flow diagram illustrating a method 800 for avatar-based tag creation and user interaction with avatar-based tags, in accordance with some implementations.
- User input processing is discussed above in reference to FIG. 2 .
- one or more modules in memory 206 of an electronic device 190 interfaces with one or more modules in memory 306 of the messaging server 140 to process avatar-based tag creation and viewing, as discussed above in reference to FIG. 4E , in accordance with some implementations.
- an electronic device includes one or more processors, memory, a display, and one or more programs stored in the memory.
- the programs are configured for execution by the one or more processors and are configured to perform a method for generating and/or interacting with tags that embed avatars on matrix barcodes, according to some implementations.
- the method includes receiving ( 802 ) a request from a first user to place a custom image on a matrix barcode associated with an online resource.
- the method includes obtaining ( 806 ) the custom image from the first user, obtaining ( 808 ) an image of the matrix barcode, creating ( 810 ) a tag by overlaying the custom image onto the image of the matrix barcode, associating ( 812 ) the tag with an avatar corresponding to the custom image, and creating ( 814 ) a digital downloadable format of the tag for the first user, wherein the digital downloadable format is associated with the online resource.
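- One plausible way to implement the tag-creation steps 806-814 is to generate the matrix barcode as a QR code with high error correction and paste the custom image over its center, so the hidden modules can still be recovered by a scanner. A sketch assuming the third-party qrcode and Pillow packages:

```python
import qrcode           # third-party QR generator; assumed available
from PIL import Image   # Pillow; assumed available

def create_tag(resource_url, custom_image_path, out_path="tag.png"):
    # High error correction (~30%) tolerates the modules covered by the custom image.
    qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_H,
                       box_size=10, border=4)
    qr.add_data(resource_url)            # the online resource the tag points to
    qr.make(fit=True)
    tag = qr.make_image(fill_color="black", back_color="white").convert("RGB")

    overlay = Image.open(custom_image_path).convert("RGB")
    side = tag.size[0] // 4              # keep the overlay small enough to stay decodable
    overlay = overlay.resize((side, side))
    offset = ((tag.size[0] - side) // 2, (tag.size[1] - side) // 2)
    tag.paste(overlay, offset)
    tag.save(out_path)                   # the downloadable format of the tag (step 814)
    return out_path
```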
- the method includes receiving ( 816 ) a request from a second user to scan the tag.
- the method includes receiving ( 820 ) scanner information corresponding to the tag from a first electronic device used by the second user, retrieving ( 824 ) the avatar associated with the tag using the scanner information, and causing ( 826 ) a second electronic device used by the second user to display the avatar, according to some implementations.
- the first action comprises one of playing an audio or a video file, displaying an animation sequence, and displaying product or company information corresponding to the online resource.
- the method includes causing ( 822 ) the second electronic device used by the second user to scan the tag (e.g., using a barcode scanner).
- the method includes deciphering the avatar associated with the tag using a disambiguation algorithm (e.g., an algorithm that uses error codes to distinguish between barcodes).
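- On the server side, steps 816-826 largely amount to a lookup: the scanner reports the decoded payload of the tag, and the server resolves that payload back to the associated avatar. A minimal sketch with an assumed in-memory mapping:

```python
tag_to_avatar = {}   # decoded tag payload (e.g., the online resource URL) -> avatar id

def register_tag(payload, avatar_id):
    # Step 812: associate the created tag with the avatar behind the custom image.
    tag_to_avatar[payload] = avatar_id

def resolve_scanned_tag(scanner_info):
    # Steps 820-824: use the reported scanner information to retrieve the avatar
    # that the second user's device should display (step 826).
    return tag_to_avatar.get(scanner_info.get("payload"))
```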
- the method further includes associating ( 830 ) the first avatar with a first action.
- the first action comprises ( 832 ) any one of playing an audio or a video file, displaying an animation sequence, and displaying product or company information corresponding to the online resource.
- the method includes receiving ( 834 ) a request from the second user to interact with the avatar in the tag, and in response to receiving the request from the second user, sending ( 836 ) operational information for the first avatar to the second electronic device, wherein the operational information is configured when executed on the second electronic device to enable interactions of the second user with the avatar, including causing the avatar to perform the first action.
- the method includes obtaining the custom image from the first user by receiving ( 838 ) a request from the first user to create a custom image, and, in response ( 840 ) to receiving the request, retrieving ( 842 ) a first image from an image database, receiving ( 844 ) an input from the first user to select one or more customizations to apply to the first image, and, in response to receiving the input, applying ( 846 ) the one or more customizations to the first image thereby producing the custom image.
- FIGS. 9A and 9B illustrate snapshots of an application for creating avatars and/or AR content, according to some implementations.
- a user can record a video (e.g., using a camera application on their mobile device), associate the content with a location 904, and share the AR content with other users by selecting (e.g., clicking) a SHARE icon 902. Another user may then view or interact with the content when in the vicinity of the location 904.
- Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above.
- The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, modules, or data structures, and thus various subsets of these modules may be combined or otherwise rearranged in various implementations.
- memory 306, optionally, stores a subset of the modules and data structures identified above.
- memory 306, optionally, stores additional modules and data structures not described above.
- Clause A1 A method of placing an avatar in a real world location comprising: receiving a request from a first user to place an avatar of the first user in a first user-specified location in a physical environment; in response to receiving the request from the first user: obtaining an avatar of the first user; and associating the avatar with a first location in the physical environment; receiving a request from a second user to view the avatar; and in response to receiving the request from the second user: determining a second location of the second user in the physical environment; determining if the second location is proximate to the first location; and in accordance with a determination that the second location is proximate to the first location: obtaining a first image of the physical environment at the first location; creating an overlay image by overlaying a second image of the avatar onto the first image; and causing the overlay image to be displayed on an electronic device used by the second user.
- Clause A2 The method as recited in clause A1, further comprising: determining if a third user is in the vicinity of the first location; and in accordance with a determination that the third user is in the vicinity of the first location, notifying the third user about the avatar at the first location.
- obtaining the avatar of the first user comprises: capturing an image of the first user using a camera application executed on a first electronic device; and generating an avatar of the first user based on the image of the first user.
- Clause A4 The method as recited in clause A3, further comprising uploading the avatar to a computer distinct from the first electronic device.
- Clause A7 The method as recited in any of the preceding clauses, further comprising: receiving a user input from the first user corresponding to one or more animated, emotional, and/or interactive 3D representation of the first user; and generating the avatar of the first user based on the user input.
- Clause A8 The method as recited in any of the preceding clauses, further comprising: associating an audio file with the first avatar; and in response to receiving the request from the second user, and in accordance with the determination that the second location is proximate to the first location, causing the electronic device used by the second user to play the audio file in addition to displaying the overlay image.
- Clause A9 The method as recited in clause A8, further comprising receiving the audio file from the first user via a microphone on an electronic device used by the first user.
- Clause A10 The method as recited in any of preceding the clauses, wherein the first location is a position on or around an object in the physical environment.
- obtaining the first image of the physical environment at the first location comprises retrieving an image from an image database, generating a generic image of the physical environment at the first location, receiving an image from the first user, and/or receiving an image from the second user.
- a method of placing an avatar in a real world location comprising: receiving a request from a first user to place an avatar of the first user in a first user-specified location in a physical environment; in response to receiving the request from the first user: capturing an image of the first user using a camera application; generating an avatar of the first user based on the image of the first user, wherein the avatar is one or more of an animated, emotional, and/or interactive 3D representation of the first user; and associating the avatar with a first location in the physical environment; receiving a request from a second user to view the avatar; and in response to receiving the request from the second user: determining a second location of the second user in the physical environment; determining if the second location is proximate to the first location; and in accordance with a determination that the second location is proximate to the first location: obtaining a first image of the physical environment at the first location by either retrieving an image from an image database, generating a generic image of the physical environment at
- Clause A13 A method comprising: receiving a request from a first user to place a first avatar of the first user at a first specific location in a physical environment, wherein the first avatar is configured to perform a first virtual action; in response to receiving the request from the first user, associating the first avatar with the first virtual action and the first specific location; and upon detecting that a second user is proximate to the first specific location: sending operational information for the first avatar to a second device, wherein the operational information is configured when executed on the second device to enable first interactions of the second user with the first avatar, including causing the first avatar to perform the first virtual action; receiving, from the second device, first session information regarding the second user's first interactions with the first avatar; and updating online status of the first user, first avatar and/or the second user to reflect the first session information.
- Clause A14 The method as recited in clause A13, wherein the first virtual action comprises one of performing a gift exchange operation between the first user and the second user, displaying marketing information associated with the first user, and exchanging payments between the first user and the second user.
- Clause A15 The method as recited in clause A13, wherein the second user's first interactions with the first avatar causes the first avatar to perform the first virtual action subject to constraints associated with the physical environment.
- Clause A15 The method as recited in clause A13, wherein the one or more programs further comprise instructions for associating the first avatar with a resource, and wherein performing the first virtual action comprises presenting the resource to the second user and enabling second interactions of the second user with the first avatar, including accepting or rejecting the resource.
- Clause A16 The method as recited in clause A15, wherein the one or more programs further comprise instructions for: receiving, from the second device, second session information regarding the second user's second interactions with the first avatar; in response to receiving the second session information: determining, based on the second session information, if the second user's second interactions with the first avatar corresponds to the second user accepting the resource; in accordance with a determination that the second user accepted the resource, updating a resource table with the information that the resource has been accepted by the second user.
- Clause A17 The method as recited in any of preceding clauses A13-A16, wherein the one or more programs further comprise instructions for: receiving a request from a third user to place a second avatar of the third user at a second specific location in a physical environment, wherein the second avatar is configured to perform a second virtual action; in response to receiving the request from the third user, associating the second avatar with the second virtual action and the second specific location; and upon detecting that the second specific location is proximate to the first specific location: sending operational information for the first avatar and the second avatar to a third device used by the third user, wherein the operational information is configured when executed on the third device to cause the third device to display the first avatar and the second avatar, cause the first avatar to perform the first virtual action and the second avatar to perform the second virtual action, and display a result of the first virtual action and a result of the second virtual action.
- Clause A18 The method as recited in clause A17, wherein the operational information causes the first avatar to perform the first virtual action and the second avatar to perform the second virtual action subject to constraints associated with the physical environment.
- Clause A19 The method as recited in clause A17, wherein the first virtual action is an action that relates to the second avatar and the second virtual action is an action that relates to the first avatar.
- Clause A20 A method comprising: receiving a request from a first user to place a custom image on a matrix barcode associated with an online resource; in response to receiving the request from the first user: obtaining the custom image from the first user; obtaining an image of the matrix barcode; creating a tag by overlaying the custom image onto the image of the matrix barcode; associating the tag with an avatar corresponding to the custom image; and creating a digital downloadable format of the tag for the first user, wherein the digital downloadable format is associated with the online resource.
- Clause A21 The method as recited in clause A20, wherein the one or more programs further comprise instructions for: receiving a request from a second user to scan the tag; in response to receiving the request: receiving scanner information corresponding to the tag from a first electronic device used by the second user; retrieving the avatar associated with the tag using the scanner information; and causing a second electronic device used by the second user to display the avatar.
- Clause A22 The method as recited in clause A21, wherein the one or more programs further comprise instructions for: associating the avatar with a first action; receiving a request from the second user to interact with the avatar in the tag; and in response to receiving the request from the second user, sending operational information for the first avatar to the first electronic device, wherein the operational information is configured when executed on the first electronic device to enable interactions of the second user with the avatar, including causing the avatar to perform the first action.
- Clause A23 The method as recited in clause A22, wherein the first action comprises one of playing an audio or a video file, displaying an animation sequence, and displaying product or company information corresponding to the online resource.
- Clause A24 The method as recited in clause A21, further comprising causing the second electronic device used by the second user to scan the tag.
- obtaining the custom image from the first user comprises: receiving a request from the first user to create a custom image; in response to receiving the request: retrieving a first image from an image database; receiving an input from the first user to select one or more customizations to apply to the first image; and in response to receiving the input, applying the one or more customizations to the first image thereby producing the custom image.
- An electronic device comprising one or more processors, memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for carrying out the method recited in any of clauses A1-A25.
- a non-transitory computer readable storage medium storing one or more programs configured for execution by one or more processors, the one or more programs including instructions, which when executed by the one or more processors, cause the one or more processors to perform the method recited in any of clauses A1-A25.
- Although the terms first, second, etc. are, in some instances, used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
- a first device could be termed a second device, and, similarly, a second device could be termed a first device, without departing from the scope of the various described implementations.
- the first device and the second device are both types of devices, but they are not the same device.
- the term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting” or “in accordance with a determination that,” depending on the context.
- the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event]” or “in accordance with a determination that [a stated condition or event] is detected,” depending on the context.
- the users may be provided with an opportunity to opt in/out of programs or features that may collect personal information (e.g., information about a user's preferences or usage of a smart device).
- certain data may be anonymized in one or more ways before it is stored or used, so that personally identifiable information is removed.
- a user's identity may be anonymized so that the personally identifiable information cannot be determined for or associated with the user, and so that user preferences or user interactions are generalized (for example, generalized based on user demographics) rather than associated with a particular user.
- stages that are not order dependent may be reordered and other stages may be combined or broken out. While some reordering or other groupings are specifically mentioned, others will be obvious to those of ordinary skill in the art, so the ordering and groupings presented herein are not an exhaustive list of alternatives. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, software or any combination thereof.
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Accounting & Taxation (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Business, Economics & Management (AREA)
- Finance (AREA)
- Strategic Management (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- General Engineering & Computer Science (AREA)
- Economics (AREA)
- Development Economics (AREA)
- Marketing (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Computing Systems (AREA)
- Human Computer Interaction (AREA)
- User Interface Of Digital Computer (AREA)
- Information Transfer Between Computers (AREA)
- Processing Or Creating Images (AREA)
Abstract
The various implementations described herein include methods, systems, and devices for Augmented Reality (AR) based messaging. In one aspect, a method includes processing a user request to create and place an avatar (e.g., a virtual representation of a user) at a user-specified location. In another aspect, a system manages placement of avatars at various geo locations, allows users to interact with avatars, and manages avatar-avatar interactions with or without user controls. In yet another aspect, a device is provided allowing a user to create, manage, and view avatar-based tags. Additionally, various user interfaces are provided to support the user to create, manage, and view avatars.
Description
- This application claims priority to U.S. Provisional Application No. 62/645,537, filed Mar. 20, 2018, entitled “Ability to send Augmented Reality video content to a recipient who when viewing the content sees it also in Augmented Reality,” which is incorporated by reference herein in its entirety.
- The disclosed implementations relate generally to augmented reality, and more particularly, to methods and systems for messaging using augmented reality.
- Augmented Reality (AR) provides a view of the real world that is enhanced by digital information and media, such as video, graphics, and GPS overlays. AR applications continue to integrate into daily life, improving productivity and efficiency. As AR gains popularity, messaging platforms continue to add social networking features. There is a need for social messaging platforms that incorporate AR concepts. It is also desirable to provide a more realistic and interactive AR user experience through the messaging platforms.
- Accordingly, there is a need for an electronic device with a messaging system and/or a messaging server system that incorporates methods and systems for AR-based messaging. The device and/or the server system may be configured to allow users to generate and place personalized avatars in the real world, to interact with the avatars, to message other users using the avatars, and to have the avatars interact amongst themselves.
- The system allows a user to send AR video content to a recipient who can view the content in augmented reality. Thus, augmented reality content can not only be viewed in augmented reality by its creator, but that content can be sent to a recipient who may then experience that same content in augmented reality from their own device. A recipient can interact with received AR content in their own augmented reality world rather than watch a recorded video of the content in a sender's augmented reality or environment. Rather than just creating AR content, the system allows a creator to place AR content tagged to specific locations. When a recipient is in proximity of the locations, the recipient can not only watch the AR content but also interact with that content in their own Augmented Reality versus just watching a recorded video.
- Some implementations of the system allow a user to produce a virtual sales person, a virtual companion, virtual pets, and other virtual beings. In some implementations, the system allows a creator to create a three-dimensional animated avatar of themselves. In some implementations, the system allows a user to select a surface to place an avatar upon. In some implementations, the system allows a creator to click a “View BLAVATAR™” icon to view his avatar in his own real world (sometimes called physical world) using augmented reality. In some implementations, the system allows a creator to select a microphone button and record audio. In some implementations, the system allows a creator to select a friend and send or share the content. In some implementations, the system allows a recipient (e.g., a friend) to receive a notification that they have content (to open/view) from a sender. In some implementations, an application opens (e.g., in response to a recipient selecting a message) and the recipient can view an avatar in a received message. In some implementations, a recipient selects a “REAL WORLD” icon. In some implementations, an application launches a recipient's camera where he/she views a room the recipient is in and a sender's avatar (situated in the room) speaks to the recipient. In some implementations, a recipient can interact with the sender's avatar in the room. In some implementations, the system includes an application that allows a recipient to receive and engage with AR content.
- In some implementations, when a creator of AR content shares his/her content, the content is sent to a cloud-based server. In some implementations, a notification is delivered to a recipient. In some implementations, when a receiver accepts the notification, an application is launched. In some implementations, the recipient will then see the avatar sent by the creator. In some implementations, when the recipient selects a “REAL WORLD” icon, an application launches the recipient's phone camera, and the recipient gets to see their current location (e.g., a room) through their camera. In some implementations, in that same view, the recipient will see a “View BLAVATAR™” icon. When the recipient selects the icon, the sender's avatar will show up in the camera view, and thus in the recipient's location. In some implementations, the recipient will be able to hear the message accompanying the content and even interact with the content as if the content were physically there in the same room as they are.
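- The share flow sketched in the preceding paragraphs (the creator shares content, the content lands on a cloud-based server, the recipient is notified and then pulls the avatar down when the application launches) can be reduced to two server operations; the storage and notification mechanisms below are placeholders for whatever infrastructure is actually used:

```python
import json
import queue

content_store = {}             # cloud-side store of shared AR content
notifications = queue.Queue()  # stand-in for a push-notification channel

def share_content(sender_id, recipient_id, avatar_payload):
    # Persist the shared content, then notify the recipient that content is waiting.
    content_id = f"{sender_id}:{len(content_store)}"
    content_store[content_id] = {"sender": sender_id, "payload": avatar_payload}
    notifications.put(json.dumps({"to": recipient_id, "content_id": content_id}))
    return content_id

def fetch_content(content_id):
    # Called after the recipient accepts the notification and the application launches.
    return content_store.get(content_id)
```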
- In some implementations, the system allows a user to create an avatar using an application that would allow creators to create video content in Augmented Reality. In some implementations, the system allows a recipient to view that content in Augmented Reality from their own mobile device. In some implementations, the system includes modifications to mobile device hardware and/or software applications. In some implementations, a user can e-mail the AR content or a link to the content to another user. In some implementations, a user can view the AR content by selecting a link in an e-mail or view the content in the e-mail.
- In some implementations, the system allows a user to send Augmented Reality content to kids with autism, so the recipient kids have “someone” with them in the same room to communicate with. In some implementations, the system can also be used for medical reasons. In some implementations, the system allows a user to create a virtual sales person and let customers see and engage with the virtual sales person in their own real world.
- In accordance with some implementations, a method is provided for placing an avatar in a real world location. The method includes receiving a request from a first user to place an avatar of the first user in a first user-specified location in a physical environment. In response to receiving the request from the first user, the method includes obtaining an avatar of the first user, and associating the avatar with a first location in the physical environment (e.g., by using a geo-tagging technique). In some implementations, the first location is a position on or around an object in the physical environment. The method also includes receiving a request from a second user to view the avatar. In response to receiving the request from the second user, the method includes determining a second location of the second user in the physical environment, and determining if the second location is proximate to the first location. In accordance with a determination that the second location is proximate to the first location, the method includes obtaining a first image of the physical environment at the first location, creating an overlay image by overlaying a second image of the avatar onto the first image (e.g., in a manner consistent with an orientation of the second user), and causing the overlay image to be displayed on an electronic device used by the second user. In some implementations, to obtain the first image of the physical environment at the first location, the method includes retrieving an image from an image database, generating a generic image of the physical environment at the first location, receiving an image from the first user, and/or receiving an image from the second user
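- The overlay step of the method (creating an overlay image by overlaying a second image of the avatar onto the first image of the physical environment) could be done with an ordinary image-compositing library. The sketch below uses Pillow and treats the anchor point and scale as assumptions; a production renderer would instead place a 3D avatar consistent with the second user's orientation:

```python
from PIL import Image  # Pillow; assumed available

def create_overlay(environment_path, avatar_path, anchor=(0.5, 0.85), scale=0.3):
    # Composites a transparent-background avatar image onto a photo of the
    # physical environment at the first location.
    env = Image.open(environment_path).convert("RGBA")
    avatar = Image.open(avatar_path).convert("RGBA")
    w = max(1, int(env.width * scale))
    h = max(1, int(avatar.height * w / avatar.width))
    avatar = avatar.resize((w, h))
    x = int(env.width * anchor[0] - w / 2)
    y = int(env.height * anchor[1] - h)
    env.alpha_composite(avatar, dest=(max(x, 0), max(y, 0)))
    return env.convert("RGB")  # the overlay image displayed on the second user's device
```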
- In some implementations, to obtain an avatar of the first user, the method includes capturing an image of the first user using a camera application executed on a first electronic device, and generating an avatar of the first user based on the image of the first user. In some implementations, generating the avatar of the first user includes applying an algorithm that minimizes perception of an uncanny valley in the avatar. In some implementations, the avatar is one or more of an animated, emotional, and/or interactive three-dimensional (3D) representation of the first user. In some implementations, the method includes receiving a user input from the first user corresponding to one or more animated, emotional, and/or interactive 3D representations of the first user, and generating the avatar of the first user based on the user input. In some implementations, the method further includes uploading the avatar to a computer distinct from the first electronic device.
- In some implementations, the method includes determining if a third user is in the vicinity of the first location, and, in accordance with a determination that the third user is in the vicinity of the first location, notifying the third user about the avatar at the first location.
- In some implementations, the method includes associating an audio file with the first avatar, and, in response to receiving the request from the second user, and in accordance with the determination that the second location is proximate to the first location, causing the electronic device used by the second user to play the audio file in addition to displaying the overlay image. In some implementations, the method includes receiving the audio file from the first user via a microphone on an electronic device used by the first user.
- In accordance with some implementations, a non-transitory computer readable storage medium stores one or more programs. The one or more programs include instructions, which, when executed by a computing system causes the computing system to perform a method that supports user interaction with one or more avatars and/or interaction amongst the one or more avatars. The method includes receiving a request from a first user to place a first avatar of the first user at a first specific location in a physical environment. The first avatar is configured to perform a first virtual action. In response to receiving the request from the first user, the method includes associating the first avatar with the first virtual action and the first specific location. In some implementations, the first virtual action comprises one of performing a gift exchange operation between the first user and the second user, displaying marketing information associated with the first user, and exchanging payments between the first user and the second user.
- In accordance with some implementations, upon detecting that a second user is proximate to the first specific location, the method includes sending operational information for the first avatar to a second device, wherein the operational information is configured when executed on the second device to enable first interactions of the second user with the first avatar, including causing the first avatar to perform the first virtual action. In some implementations, the second user's first interactions with the first avatar causes the first avatar to perform the first virtual action subject to constraints associated with the physical environment. The method also includes receiving, from the second device, first session information regarding the second user's first interactions with the first avatar, and updating online status of the first user, first avatar and/or the second user to reflect the first session information. In some implementations, the method includes updating databases that store information corresponding to the first user, first avatar, and/or second user to reflect the first session information.
- In some implementations, the method includes associating the first avatar with a resource, and performing the first virtual action comprises presenting the resource to the second user and enabling second interactions of the second user with the first avatar, including accepting or rejecting the resource. In some implementations, the method includes receiving, from the second device, second session information regarding the second user's second interactions with the first avatar. The method also includes, in response to receiving the second session information, determining, based on the second session information, if the second user's second interactions with the first avatar corresponds to the second user accepting the resource, and, in accordance with a determination that the second user accepted the resource, updating a resource table with the information that the resource has been accepted by the second user.
- In some implementations, the method includes receiving a request from a third user to place a second avatar of the third user at a second specific location in a physical environment. The second avatar is configured to perform a second virtual action. In response to receiving the request from the third user, the method includes associating the second avatar with the second virtual action and the second specific location. In some implementations, the first virtual action is an action that relates to the second avatar and the second virtual action is an action that relates to the first avatar. Upon detecting that the second specific location is proximate to the first specific location, the method includes sending operational information for the first avatar and the second avatar to a third device used by the third user. The operational information is configured when executed on the third device to cause the third device to display the first avatar and the second avatar, cause the first avatar to perform the first virtual action and the second avatar to perform the second virtual action, and display a result of the first virtual action and a result of the second virtual action, according to some implementations. In some implementations, the operational information causes the first avatar to perform the first virtual action and the second avatar to perform the second virtual action subject to constraints associated with the physical environment. In some implementations, the method includes determining whether the first virtual action and/or the second virtual action can be executed by the third device (e.g., by probing the third device). In accordance with a determination that the third device is unable to execute the first virtual action and/or the second virtual action, the method includes executing a part or whole of the first virtual action and/or the second virtual action on a fourth device distinct from the third device.
- According to some implementations, an electronic device includes one or more processors, memory, a display, and one or more programs stored in the memory. The programs are configured for execution by the one or more processors and are configured to perform a method for generating and/or interacting with tags that embed avatars on matrix barcodes, according to some implementations. The method includes receiving a request from a first user to place a custom image on a matrix barcode associated with an online resource. In response to receiving the request from the first user, the method includes obtaining the custom image from the first user, obtaining an image of the matrix barcode, creating a tag by overlaying the custom image onto the image of the matrix barcode, associating the tag with an avatar corresponding to the custom image, and creating a digital downloadable format of the tag for the first user, wherein the digital downloadable format is associated with the online resource.
- In some implementations, the method includes receiving a request from a second user to scan the tag. In response to receiving the request, the method includes receiving scanner information corresponding to the tag from a first electronic device used by the second user, retrieving the avatar associated with the tag using the scanner information, and causing a second electronic device used by the second user to display the avatar, according to some implementations. In some implementations, the first action comprises one of playing an audio or a video file, displaying an animation sequence, and displaying product or company information corresponding to the online resource. In some implementations, the method includes causing the second electronic device used by the second user to scan the tag (e.g., using a barcode scanner). In some implementations, the method includes deciphering the avatar associated with the tag using a disambiguation algorithm (e.g., an algorithm that uses error codes to distinguish between barcodes).
- In some implementations, the method includes obtaining the custom image from the first user by receiving a request from the first user to create a custom image, and, in response to receiving the request, retrieving a first image from an image database, receiving an input from the first user to select one or more customizations to apply to the first image, and, in response to receiving the input, applying the one or more customizations to the first image thereby producing the custom image.
- Thus methods, systems, and graphical user interfaces are disclosed that allow users to create and interact using AR-based content and/or messages.
- Both the foregoing general description and the following detailed description are exemplary and explanatory, and are intended to provide further explanation of the invention as claimed.
- The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
- For a better understanding of the various described implementations, reference should be made to the Description of Implementations below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
-
FIG. 1 is an example operating environment in accordance with some implementations. -
FIG. 2 is a block diagram illustrating an example electronic device in an operating environment in accordance with some implementations. -
FIG. 3 is a block diagram illustrating an example server in the server system of an operating environment in accordance with some implementations. -
FIGS. 4A-4E illustrate examples of avatar creation and placement, avatar-user interaction, avatar-avatar interaction, avatar-based tag creation, and interaction with avatar-based tags in accordance with some implementations. -
FIG. 5A-5Z illustrate example user interfaces for avatar creation and placement, avatar-user interaction, avatar-avatar interaction, avatar-based tag creation, and interaction with avatar-based tags, in accordance with some implementations. -
FIGS. 6A-6E show a flow diagram illustrating a method for avatar creation and placement, in accordance with some implementations. -
FIGS. 7A-7C show a flow diagram illustrating a method for avatar-user interactions and avatar-avatar interactions, in accordance with some implementations. -
FIGS. 8A-8C show a flow diagram illustrating a method for avatar-based tag creation and user interaction with avatar-based tags, in accordance with some implementations. -
FIGS. 9A and 9B illustrate snapshots of an application for creating avatars and/or AR content, according to some implementations. - Like reference numerals refer to corresponding parts throughout the several views of the drawings.
-
FIG. 1 is an example operating environment 100 in accordance with some implementations. The operating environment 100 includes one or more electronic devices 190 (also called “devices”, “client devices”, or “computing devices”; e.g., electronic devices 190-1 through 190-N) that are communicatively connected to an augmented reality or messaging server 140 (also called “messaging server”, “messaging system”, “server”, or “server system”) of a messaging service via one or more communication networks 110. In some implementations, the electronic devices 190 are communicatively connected to one or more storage servers 160 that are configured to store and/or serve content 162 to users of the devices 190 (e.g., media content, programs, data, and/or augmented reality programs, or augmented reality content). - In some implementations, the
client devices 190 are computing devices such as laptop or desktop computers, smart phones, personal digital assistants, portable media players, tablet computers, or other appropriate computing devices that can be used to communicate with a social network or a messaging service. In some implementations, themessaging system 140 is a single computing device such as a computer server. In some implementations, theserver system 140 includes multiple computing devices working together to perform the actions of a messaging server system (e.g., cloud computing). In some implementations, the network(s) 110 include a public communication network (e.g., the Internet, cellular data network, dialup modems over a telephone network), or a private communications network (e.g., private LAN, leased lines) or a combination of such communication networks. - Users 102-1 through 102-M of the client devices 190-1 through 190-M access the
messaging system 140 to subscribe to and participate in a messaging service (also called a “messaging network”) provided by themessaging server system 140. For example, one or more of theclient devices 190 execute mobile or browser applications (e.g., “apps” running on smart phones) that can be used to access the messaging network. -
Users 102 interacting with theclient devices 190 can participate in the messaging network provided by theserver 140 by posting information, such as text comments, digital photos, videos, links to other content, or other appropriate electronic information. Users of themessaging server 140 can also annotate information posted by other users. In some implementations, information can be posted on a user's behalf by systems and/or services external to theserver system 140. For example, when a user posts a review of a movie to a movie review website, with proper permissions that website can cross-post the review to a social network managed by theserver 140 on the user's behalf. In another example, a software application executing on a mobile device, with proper permissions, uses global positioning system capabilities to determine the user's location and automatically update the social network with the user location. - The
electronic devices 190 are also configured to communicate with each other through thecommunication network 110. For example, theelectronic devices 190 can connect to thecommunication networks 110 and transmit and receive information thereby via cellular connection, a wireless network (e.g., a WiFi, Bluetooth, or other wireless Internet connection), or a wired network (e.g., a cable, fiber optic, or DSL network). In some implementations, theelectronic devices 190 are registered in a device registry of the messaging service and thus are known to themessaging server 140. In some implementations, theenvironment 100 also includes one or more storage server(s) 160. Astorage server 160 stores information corresponding to the messaging service, according to some implementations. - In some implementations, an
electronic device 190 may be associated with multiple users having respective user accounts in the user domain. Any of these users, as well as users not associated with the device, may use theelectronic device 190. In such implementations, theelectronic device 190 receives input from these users 102-1 through 102-M (including associated and non-associated users), and theelectronic device 190 and/or themessaging server 140 proceeds to identify, for an input, the user making the input. With the user identification, a response to that input may be personalized to the identified user. - In some implementations, the
environment 100 includes multiple electronic devices 190 (e.g., devices 190-1 through 190-N). Thedevices 190 are located throughout the environment 100 (e.g., all within a room or space in a structure, or spread throughout multiple cities or towns). When auser 102 makes an input or sends a message or other communication via adevice 190, one or more of thedevices 190 receives the input, message or other communication, typically via the communication networks 110. - In some implementations, one or more storage server(s) 160 are disposed in the operating
environment 100 to provide, to one ormore users 102, messaging, AR-related content, and/or other information. For example, in some implementations,storage servers 160 store avatars and location information for the avatars associated with the one ormore users 102. -
FIG. 2 is a block diagram illustrating an exampleelectronic device 190 in an operating environment (e.g., operating environment 100) in accordance with some implementations. Theelectronic device 190 includes one or more processing units (CPUs) 202, one ormore network interfaces 204,memory 206, and one ormore communication buses 208 for interconnecting these components (sometimes called a chipset). Theelectronic device 190 includes one ormore input devices 210 that facilitate user input, such as a button 212, atouch sense array 214, one ormore microphones 216, and one ormore cameras 213. Theelectronic device 190 also includes one ormore output devices 218, including one ormore speakers 220, and adisplay 224. In some implementations, theelectronic device 190 also includes a location detection device 226 (e.g., a GPS module) and one or more sensors 228 (e.g., accelerometer, gyroscope, light sensor, etc.). - The
memory 206 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and, optionally, includes non-volatile memory, such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, or one or more other non-volatile solid state storage devices.Memory 206, optionally, includes one or more storage devices remotely located from one ormore processing units 202.Memory 206, or alternatively the non-volatile memory withinmemory 206, includes a non-transitory computer readable storage medium. In some implementations,memory 206, or the non-transitory computer readable storage medium ofmemory 206, stores the following programs, modules, and data structures, or a subset or superset thereof: -
-
Operating system 232 including procedures for handling various basic system services and for performing hardware dependent tasks; -
Network communication module 234 for connecting theelectronic device 190 to other devices (e.g., theserver system 140, one or more cast devices, one or more client devices, one or more smart home devices, and other electronic device(s) 190) via one or more network interfaces 204 (wired or wireless) and one ormore networks 110, such as the Internet, other wide area networks, local area networks (e.g., local network 104), metropolitan area networks, and so on; - Input/
output control module 236 for receiving inputs via one or more input devices and enabling presentation of information at theelectronic device 190 via one ormore output devices 218; and - One or more client or mobile application module(s), including:
-
Camera application 240 to allow the user to capture photos or video using the camera 213; -
Display module 242 to display content using the display 224; -
tag creation module 244 including amatrix barcode reader 246 to create tags; -
Audio processing module 248 to process audio; - Optionally, an Avatar-User
Interaction Processing module 250 to process interaction between one or more users and one or more avatars; - Optionally, an Avatar-Avatar
Interaction Processing module 252 to process interaction between a plurality of avatars; and -
Messaging module 254 that processes input (e.g., messages) and invokes one or more of the above mentioned client or mobile application modules.
-
-
- In some implementations, the
device 190 or an application running on theelectronic device 190 creates avatars independently (e.g., without communication with the server 140). In some implementations, thedevice 190 takes photos of environments for avatar placement (e.g., with or without user intervention, with or without server 140). In some implementations, theelectronic device 190 executes one or more operations to interact with avatar(s). In some implementations, theelectronic device 190 implements operations to simulate user interactions with avatars, and avatar-avatar interactions. In some implementations, thedevice 190 reports session information to theserver 140. In some implementations, thedevice 190 receives and displays notifications (e.g., regarding an avatar) from theserver 140. In some implementations, thedevice 190 creates avatar-enabled tags, and/or reads or scans avatar-enabled tags. - Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, modules or data structures, and thus various subsets of these modules may be combined or otherwise re-arranged in various implementations. In some implementations, the
memory 206 optionally stores a subset of the modules and data structures identified above. Furthermore, thememory 206, optionally, stores additional modules and data structures not described above. In some implementations, a subset of the programs, modules, and/or data stored in thememory 206 can be stored on and/or executed by theserver system 140. -
FIG. 3 is a block diagram illustrating an example augmented reality ormessaging server 140 of an operating environment (e.g., operating environment 100) in accordance with some implementations. Theserver 140 includes one or more processing units (CPUs) 302, one ormore network interfaces 304,memory 306, and one ormore communication buses 308 for interconnecting these components (sometimes called a chipset). Theserver 140 could include one ormore input devices 310 that facilitate user input, such as a keyboard, a mouse, a voice-command input unit or microphone, a touch screen display, a touch-sensitive input pad, a gesture capturing camera, or other input buttons or controls. Furthermore, theserver 140 could use a microphone and voice recognition or a camera and gesture recognition to supplement or replace the keyboard. In some implementations, theserver 140 includes one or more cameras, scanners, or photo sensor units for capturing images, for example, of graphic series codes printed on the electronic devices. Theserver 140 could also include one ormore output devices 312 that enable presentation of user interfaces and display content, including one or more speakers and/or one or more visual displays. - The
memory 306 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and, optionally, includes non-volatile memory, such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, or one or more other non-volatile solid state storage devices. Thememory 306, optionally, includes one or more storage devices remotely located from one ormore processing units 302.Memory 306, or alternatively the non-volatile memory withinmemory 306, includes a non-transitory computer readable storage medium. In some implementations,memory 306, or the non-transitory computer readable storage medium ofmemory 306, stores the following programs, modules, and data structures, or a subset or superset thereof: -
-
Operating system 316 including procedures for handling various basic system services and for performing hardware dependent tasks; -
Network communication module 318 for connecting theserver system 140 to other devices (e.g., various servers in theserver system 140, client devices, cast devices,electronic devices 190, and smart home devices) via one or more network interfaces 304 (wired or wireless) and one ormore networks 110, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on; -
User interface module 320 for enabling presentation of information (e.g., a graphical user interface for presenting application(s), widgets, websites and web pages thereof, and/or games, audio and/or video content, text, etc.) at a client device; and - Augmented Reality or messaging module(s) 322, including:
-
Avatar creation module 324 to create one or more avatar(s) 326, and optionally one or more uncanny valley minimization algorithm(s) 328 that minimize the uncanny valley effect associated with each avatar 326 and the respective user that the avatar represents; -
Avatar placement module 330 to place avatars at avatar location(s) 332; -
Client notification module 334 to notify users (e.g., via an alert) about avatars (e.g., in proximity); -
Audio processing module 336, including audio file(s) 338; - Avatar-Avatar
Interaction processing module 340 to process interaction between avatars; - Avatar-User interaction processing module(s) 342 to process interaction between one or more users and one or more avatars;
- Gift or resource or
payment processing module 344 to process gift, resource, or payments; -
Marketing information module 346 to store, supply, or generate marketing information associated with avatars; - Tag processing module(s) 348 to process tag(s) 350, action(s) 352 associated with the tags, and customization module(s) 354 to customize tags; and
-
Messaging module 356 that processes input (e.g., messages) and invokes one or more of the above-mentioned augmented reality or messaging modules.
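- The module breakdown above can be pictured as a simple registry keyed by request type. The following Python sketch is illustrative only; the module names, dispatch keys, and dictionary-based routing are assumptions made for this example and are not the implementation described in this application.

```python
# Illustrative sketch only: a registry that maps request types to handlers,
# loosely mirroring the AR/messaging modules 322-356 described above.
from typing import Any, Callable, Dict

class ModuleRegistry:
    def __init__(self) -> None:
        self.handlers: Dict[str, Callable[[Dict[str, Any]], Any]] = {}

    def register(self, request_type: str, handler: Callable[[Dict[str, Any]], Any]) -> None:
        self.handlers[request_type] = handler

    def dispatch(self, request: Dict[str, Any]) -> Any:
        # The messaging module (356) would inspect the incoming message and
        # invoke one of the registered modules; unknown types are rejected.
        handler = self.handlers.get(request.get("type", ""))
        if handler is None:
            raise ValueError(f"no module registered for {request.get('type')!r}")
        return handler(request)

registry = ModuleRegistry()
registry.register("create_avatar", lambda req: {"avatar_id": "a1"})            # cf. avatar creation module 324
registry.register("place_avatar", lambda req: {"placed_at": req["location"]})  # cf. avatar placement module 330
print(registry.dispatch({"type": "place_avatar", "location": (40.0, -74.0)}))
```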
- In some implementations, the avatars have a one-to-one correspondence with a user. For example, an avatar can be addressed to a specific individual who is a participant in an electronic communication (e.g., a messaging conversation with a sender/creator of the avatar, which can be an individual or a commercial user, such as an advertiser or a business). In some implementations, the avatars are one-to-many, meaning that they can be seen by any user within a group (e.g., users at a particular location, of a particular demographic category, or members in a particular organization, to name a few possibilities). In some implementations, the
server 140 maintains a geographical database of all avatars and operational information. In some implementations, the server maintains a database of all users. In some implementations, the server 140 maintains a database of all users/avatars and avatar/avatar interactions. In some implementations, the server 140 generates avatars that avoid the uncanny valley problem associated with virtual representations.
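- As a rough sketch of the geographical avatar database described above, the following Python snippet stores avatar records keyed by identifier and answers proximity queries with a great-circle (haversine) distance test. The record fields and the 50-meter radius are assumptions made for illustration.

```python
# Sketch of a geographical avatar store with a proximity query (assumed schema).
import math
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class AvatarRecord:
    avatar_id: str
    owner_id: str
    location: Tuple[float, float]          # (latitude, longitude) in degrees
    operational_info: Dict[str, str] = field(default_factory=dict)

def haversine_m(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6_371_000.0
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(h))

class AvatarGeoIndex:
    def __init__(self) -> None:
        self.records: Dict[str, AvatarRecord] = {}

    def place(self, record: AvatarRecord) -> None:
        self.records[record.avatar_id] = record    # geo-tag the avatar at its location

    def nearby(self, point: Tuple[float, float], radius_m: float = 50.0) -> List[AvatarRecord]:
        return [r for r in self.records.values() if haversine_m(point, r.location) <= radius_m]

index = AvatarGeoIndex()
index.place(AvatarRecord("a1", "user-102A", (40.7484, -73.9857)))
print(index.nearby((40.7485, -73.9855)))   # within ~20 m, so the avatar is returned
```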
- FIGS. 4A-4E illustrate examples of avatar creation and placement, avatar-user interaction, avatar-avatar interaction, avatar-based tag creation, and interaction with avatar-based tags in accordance with some implementations. FIG. 4A is an illustration of avatar creation and placement 400, according to some implementations. In the example shown, a user 102-A uses an electronic device 190-A (with display 224-A, and a camera 213 (not shown)) to create an avatar representation of herself and to place the avatar at a user-selected location (steps 408 and 410), according to some implementations (e.g., in this example, on a carpet in front of a piece of furniture). FIG. 4A also illustrates that user 102-B could interact with the avatar created in step 406, via the messaging server 140, in accordance with some implementations. - In some implementations, a user creating an avatar can interact with the avatar via a displayed set of affordances. For example, a user can interact via touch, or the user can employ a voice interface provided by the
electronic device 190 to verbally designate a selected avatar interaction to be performed. In some implementations, the displayed affordances include: a “View BLAVATAR™” icon, an “Edit BLAVATAR™” icon, a microphone icon, which a user can select to record, or playback, audio for use by the avatar (e.g., to enter a message that can be spoken by the avatar when displayed by a message recipient), a pin icon (e.g., to designate an avatar location in an environment), and a rotate icon (e.g., to cause the avatar to be shown from a user-specified angle when displayed on a device 190). In some implementations, a user viewing an avatar can lock down or select an appearance of the avatar as the avatar's default appearance, select a particular appearance or an alternative appearance for the avatar, or edit another avatar (e.g., an avatar created by a different user or an avatar created for a different user). It is noted that the term avatar is used here to refer to virtual representations that embody augmented reality characteristics (e.g., dynamic avatars with AR characteristics), not just conventional avatars without the augmented reality capabilities. -
FIGS. 4B-4D illustrate avatar-user and avatar-avatar interactions according to some implementations. In the example shown, a user 102-C uses (420) an electronic device 190-C that controls (424) display 224-C to display a user interface 440, and user 102-D uses (430) an electronic device 190-D that controls (434) display 224-D to display a user interface 460. In this example, the interfaces 440 and 460 display avatars created as described above in reference to FIG. 4A; the avatars are associated with respective locations 442 and 462 (e.g., the locations specified by the users 102-C and 102-D in step 408 as described above in reference to FIG. 4A). Electronic devices 190-C and 190-D are connected to the messaging server 140. The server 140 determines that locations 442 and 462 are proximate to the respective avatars and causes the avatars to be displayed on the respective user interfaces.
- In the example shown, the user interface 440 displays a collection of available avatar interactions, which can be selected via user 102-C interaction with affordances 446 displayed on the user interface 440. Similarly, the user interface 460 displays a collection of available avatar interactions, which can be selected via user 102-D interaction with affordances 466 displayed on the user interface 460. Although the illustrated example shows similar choices in affordances 446 and 466, in some implementations different choices are provided by the server 140 depending on consideration of one or more factors, such as locations of users, locations of avatars, past actions of respective avatars and/or users, and a preferred set of actions or outcomes as selected by respective users. In some implementations, the server 140 can automatically (e.g., without user input) select one or more actions from a set of actions stored for each avatar based on location or user information. A user can interact with the avatars via touch inputs on a display screen or other touch input device, or can employ a voice interface provided by the electronic device 190 to verbally designate a selected avatar interaction to be performed. In some implementations, the displayed affordances are similar to those presented on the user interface described above in reference to FIG. 4A.
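- The factor-based selection of affordances described above might be sketched as a simple scoring filter over the actions stored for an avatar. The factor names, weights, and action labels below are assumptions introduced only for illustration, not values taken from this application.

```python
# Sketch: pick which interaction affordances to show, scored by assumed factors
# (user preferences, past actions, and distance between user and avatar).
from typing import List

def select_affordances(avatar_actions: List[str],
                       past_actions: List[str],
                       preferred_actions: List[str],
                       distance_m: float,
                       max_items: int = 4) -> List[str]:
    def score(action: str) -> float:
        s = 0.0
        if action in preferred_actions:
            s += 2.0        # outcomes/actions the user said they prefer
        if action in past_actions:
            s += 0.5        # lightly favor actions used before
        if action == "gift" and distance_m < 10.0:
            s += 1.0        # example location-based rule (assumed)
        return s

    return sorted(avatar_actions, key=score, reverse=True)[:max_items]

print(select_affordances(["gift", "kiss", "slap", "invite"],
                         past_actions=["gift"], preferred_actions=["invite"],
                         distance_m=5.0))
```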
- FIGS. 4B-4D illustrate a gift exchange as a way of explaining avatar-avatar interaction, according to some implementations. In FIG. 4B, a user 102-D (or the avatar 464 as controlled by the server 140) chooses to gift a user 102-C (or the avatar 444 as controlled by the server 140) via the gift affordance 468 in the affordances 466. In FIG. 4C, the avatar 444 receives the gift (e.g., by selecting affordance 448 in the affordances 446) sent by the user 102-D (or the avatar 464 as controlled by the server 140). The avatar 444 (via the user 102-C) also has the option of rejecting the gift sent by the user 102-D (or the avatar 464 as controlled by server 140). This can be done via the affordance 449. In FIG. 4D, the avatar 444 interacts with the received gift (e.g., opens the gift, smells the gift). The respective user interfaces 440 and 460 are updated with the relevant affordances at each stage of the interaction (unless the server 140 makes some automatic decisions to progress the interactions). In some implementations, an affordance (e.g., affordance 448 or 468) is updated as the interaction progresses. - In some implementations, avatar-user interactions and avatar-avatar interactions include the ability of a first avatar created by a first user to interact with a second user or a second avatar created by the second user. These interactions include performing actions, such as exchanging gifts, offering a kiss, or slapping one avatar by another. In some implementations, the interactions include decision interactions where one avatar asks another avatar out to perform an action (e.g., to go out on a date to a picnic, a movie, or a meal). In some implementations, users can view one or more results of the actions in the real world. For example, a gift accepted by a user's avatar results in a real gift (e.g., a toy or a TV) being delivered or scheduled to be delivered to the user. Further, the
server 140 can add additional actions or interactions between users, between users and avatars, or between avatars in a dynamic fashion (e.g., based on environment, time of day, or preferences of users at the time of creation of avatars). Some implementations also include merchandising attributes (e.g., with gifts) and associate commerce engines to process and generate revenue (e.g., for gift exchange) based on interactions. Some implementations process and generate actual revenue from users purchasing specific merchandise via the interactions. Outcomes of interactions are logged by the server 140, which can initiate charges or new promotional messages based on the outcomes and which can also incorporate the outcomes as factors in future avatar-avatar and/or avatar-user interaction options and recommendations.
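- Logging interaction outcomes and feeding them into charges, promotions, or future recommendations, as described above, could be as simple as an append-only event log with a commerce hook. The field names and the print-based charge stub below are placeholders, not the actual commerce engine.

```python
# Sketch: append-only log of interaction outcomes with a placeholder commerce hook.
import time
from typing import Any, Dict, List

interaction_log: List[Dict[str, Any]] = []

def log_outcome(user_id: str, avatar_id: str, action: str, outcome: str,
                amount_cents: int = 0) -> None:
    event = {
        "ts": time.time(),
        "user_id": user_id,
        "avatar_id": avatar_id,
        "action": action,        # e.g., "gift", "purchase"
        "outcome": outcome,      # e.g., "accepted", "rejected"
        "amount_cents": amount_cents,
    }
    interaction_log.append(event)
    # A deployed system could hand accepted purchases to a commerce engine here
    # and use the accumulated log when recommending future interactions.
    if action == "purchase" and outcome == "accepted":
        print(f"charging {amount_cents} cents to {user_id}")

log_outcome("user-102C", "avatar-444", "gift", "accepted")
```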
- FIG. 4E illustrates tag creation (470) and tag viewing (480) operations in accordance with some implementations. In the example shown, a user 102-E uses an electronic device 190-E that controls (424) the display 224-E to display an avatar 472 in a tag-creation user interface (e.g., an interface provided by a messaging application). As part of the tag creation process, the user 102-E is prompted by the messaging application to create a tag (474) and is given the option of taking a picture for the tag using the camera of the electronic device 190-E (e.g., as shown in FIG. 4E, the image could be a picture of a remote control device). The picture taken by the user 102-E is then transmitted to the server 140, which creates a tag 476 that is optionally a combination of a displayable machine-readable code object (e.g., a matrix or two-dimensional barcode, such as a QR code, or a one-dimensional barcode) and the photo transmitted by the user 102-E. This tag is uniquely associated with information entered by the tag creator 102-E that is stored by the messaging server 140 and/or the storage server(s) 160 for presentation in association with user interaction with the tag. The stored information can include, without limitation, audio and/or video content, a promotional offer, advertising, and/or a message from the tag creator or associated organization (e.g., non-profit group, school, or business). The stored information can also include information regarding the creator of the tag and/or the tag creator's associated organization and their associated avatars, and can be configured to implement and exhibit the technical capabilities and behaviors of avatars described herein. For example, information associated with a tag can be presented by an avatar selected by the tag's creator, and that avatar can support user-avatar and avatar-avatar interactions, can be pinned to a specific geographic location, can be edited and/or manipulated in accordance with the tag creator's specifications, and can be configured to respond dynamically to questions from the user regarding the information associated with the tag. In some implementations, an avatar associated with a tag (474) includes one or more characteristics of a BLAVATAR™ (e.g., action and/or interaction capabilities).
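- A tag of the kind described above, a matrix barcode with a user photo superimposed, could be generated with the third-party qrcode and Pillow packages, relying on a high error-correction level so the code remains readable despite the overlay. This sketch, including the URL and file names, is an assumption for illustration and not the server's actual tag generator.

```python
# Sketch: overlay a user photo on a high-error-correction QR code
# (assumes the third-party "qrcode" and "Pillow" packages).
import qrcode
from PIL import Image

def create_tag(resource_url: str, photo_path: str, out_path: str = "tag.png") -> None:
    qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_H,
                       box_size=10, border=4)
    qr.add_data(resource_url)                      # link the tag to the online resource
    qr.make(fit=True)
    tag = qr.make_image(fill_color="black", back_color="white").convert("RGB")

    photo = Image.open(photo_path).convert("RGB")
    side = tag.size[0] // 4                        # keep the overlay small so the code stays decodable
    photo = photo.resize((side, side))
    offset = ((tag.size[0] - side) // 2, (tag.size[1] - side) // 2)
    tag.paste(photo, offset)                       # superimpose the custom image
    tag.save(out_path)

create_tag("https://example.com/resource", "remote_control.jpg")
```

Because ERROR_CORRECT_H tolerates roughly 30% damage to the symbol, a centered photo of modest size generally leaves the encoded resource recoverable by a standard scanner.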
- Referring again to FIG. 4E, after receiving a message associated with the tag 476, another user 102-D uses an electronic device 190-F that controls the display 224-F to display and interact with the tag 476. In some implementations, when a message with a tag 476 is transmitted by a user 102-E to another user 102-D via the messaging server 140, a tag-enabled messaging application executing on the electronic device 190-F causes the device 190-F to display the tag shown in display 224-F and to enable interaction by the user 102-D with the tag 476 in accordance with avatar capabilities as described herein. In some implementations, the server 140 responds to a request from device 190-F (which is controlled by the user 102-D) to show one or more avatar-based tags that are associated with messages transmitted to the user, organizations of which the user is a registered member, or tags that are associated with a current or popular location associated with the user.
- FIGS. 5A-5Z illustrate example user interfaces for avatar creation and placement, avatar-user interaction, avatar-avatar interaction, avatar-based tag creation, and interaction with avatar-based tags, in accordance with some implementations. As shown in FIG. 5A, some implementations provide a navigation button 502 (e.g., to reverse steps) in a user interface 504 corresponding to a messaging application in a display 224. In the illustrated example, the displayed user interface 504 instructs the user to select a type of body 506, and offers the option 508 to use the user's face or to start with a stored set of avatars 510. A button/affordance 512 is provided to enable a user to proceed to the next step in avatar creation once the user has made the selections, according to some implementations. FIG. 5B continues the example: display 224 shows an interface with further options. In FIG. 5C, an initial image 522 of the user is displayed with appropriate prompts 524 (e.g., to adjust lighting), and a camera icon 526, according to some implementations. In FIG. 5D, further controls 528 (for color or complexion adjustments), forward 532 and backward 530 navigation buttons to select between avatars, and an avatar 534-2 (of one or more avatar choices) are shown, according to some implementations. The user is also prompted (506) to select a hair style. FIG. 5E shows another avatar 534-4 based on the user selection in FIG. 5D. FIG. 5F shows yet another avatar 536 based on the selection in FIG. 5E. The interface shown in FIG. 5G allows the user to select an emotion 506 (from among choices 538-2, 538-4, 538-6, and 538-8), according to some implementations. Once the user selects an emotion (say 538-2), the interface updates to show the selection (as indicated by the green ring around emotion 538-2 and the avatar 540-2 in FIG. 5H), according to some implementations. Some implementations also show additional emotions as choices (e.g., emoticon 538-1). FIG. 5I shows a change in selection of emotion from 538-2 to 538-4 and an updated avatar 540-4 in response to the selection, according to some implementations. FIG. 5J, similarly, shows an updated avatar 540-6 based on user selection of emotion 538-6, according to some implementations.
- As FIG. 5K illustrates, some implementations allow a user to select a body type 506, with an initial avatar (e.g., a long shot of an avatar) 542. In some implementations, as shown in FIG. 5L, the application 504 shows one or more animation choices 546-2, . . . , 546-7 (e.g., rapping, talking, singing, a snake animation), and once the user has made a selection allows the user to see the avatar animation in the real world 512 (e.g., a virtual world supported by the application). In some implementations, as shown in FIG. 5M, the user can select a body tone 506 by sliding an affordance 548-4 (between options 548-2 and 548-6), and see the updated avatar 550. As shown in FIG. 5N, a user can additionally select a photo from storage on the electronic device, choose an input type 552, take a photo from the camera 554, or cancel the process of creating an avatar by selecting an image 556, according to some implementations. In some implementations, as shown in FIG. 5O, a user can select a surface 560 after controlling the device to detect a specific surface 558. After creating and placing an avatar (e.g., 564, FIG. 5P) at location 574, a user can view the avatar 568, edit the avatar 570, add audio via a microphone 572, associate an audio or music file 566 with the avatar, or reverse commands 562, according to some implementations.
- Further, as shown in FIG. 5Q, the application 504 allows the user to leave the avatar (called a GeoBlab in the Figure) and alerts (576-2) the user that recipients of the avatar will get turn-by-turn directions (if needed) to the avatar once the user places or leaves the avatar at the selected location. The user can agree 576-4, in which case the avatar will be left at the chosen location, or decline 576-8, in which case the device or server will cancel the request. The user can also turn off this alert 576-6. Some implementations enable the user to either leave the avatar 578-2 or send the avatar 578-4 to another user, as shown in FIG. 5R. Some implementations provide an interface, as shown in FIG. 5S, to search 582 for avatars 580-2 in user-selected locations, and return or display the search results in a window 580-4. The electronic device 190 sends the search locations to a messaging server 140, which returns any search results to the electronic device 190. The device 190 then displays an avatar 564 corresponding to the search results, as shown in FIG. 5T, according to some implementations. Again, some implementations provide a user with the option to view the location 528, view the avatar 568, edit the avatar 570, or play an audio file associated with the avatar 584.
- In some implementations, application 504 allows a user to create a tag 590 (sometimes called a Blabeey tag), create an avatar 588, or create a video 586, as shown in FIG. 5U. FIG. 5V shows another interface with avatar 564, location 574, an option 568 to view the avatar, edit the avatar 570, or record audio 572, and finally click a create tag affordance 592 to create the tag based on the selected features, according to some implementations. Some implementations allow a user to take a picture to superimpose on a tag 594-4, and provide options 594-2 and 594-6 to either agree or decline. A user can click a share button 596 to share the avatar, according to some implementations, as shown in FIG. 5W. Some implementations show the tag 598-2 (with a photo 598-4 selected by the user) to the user before the user selects to share the tag 596, as shown in FIG. 5X. Some implementations allow a user to select a color for the tag 598-1, give a preview of a selected color 598-3, show a palette of color choices 598-5, and allow the user to click a select button 599 to select a color, as shown in FIG. 5Y. FIG. 5Z shows an interface displaying the tag 598-2 created by another user. A second user viewing the tag can use a tag viewer application to view the content 598-4 related to the tag 598-2, according to some implementations.
- FIGS. 6A-6E show a flow diagram illustrating a method for avatar creation and placement, in accordance with some implementations. User input processing is discussed above in reference to FIG. 2. In some implementations, one or more modules in memory 206 of an electronic device 190 interface with one or more modules in memory 306 of the messaging server 140 to receive and process avatar creation and placement, as discussed above in reference to FIG. 4A, in accordance with some implementations. - In accordance with some implementations, a
method 600 is provided for placing an avatar in a real world location. As shown inFIG. 6A , the method includes receiving (602) a request from a first user to place an avatar of the first user in a first user-specified location in a physical environment. In response (604) to receiving the request from the first user, the method includes obtaining (606) an avatar of the first user, and associating (608) the avatar with a first location in the physical environment (e.g., by using a geo-tagging technique). In some implementations, the first location is a position on or around an object in the physical environment (610). The method also includes receiving (612) a request from a second user to view the avatar. - Referring next to
FIG. 6B, in response (614) to receiving the request from the second user, the method includes determining (616) a second location of the second user in the physical environment, and determining (618) if the second location is proximate to the first location. In accordance with a determination (620) that the second location is proximate to the first location, the method includes obtaining (622) a first image of the physical environment at the first location, creating (626) an overlay image by overlaying a second image of the avatar onto the first image (e.g., in a manner consistent with an orientation of the second user), and causing (628) the overlay image to be displayed on an electronic device used by the second user. In some implementations, to obtain the first image of the physical environment at the first location, the method includes (624) retrieving an image from an image database, generating a generic image of the physical environment at the first location, receiving an image from the first user, and/or receiving an image from the second user.
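- The overlay step of method 600 (obtaining a first image of the environment and compositing a second image of the avatar onto it) can be illustrated with Pillow's alpha compositing. The file names and placement coordinates below are assumptions for the sketch.

```python
# Sketch: composite an avatar image (with alpha) onto a photo of the environment
# (assumes the Pillow package; file names and coordinates are placeholders).
from typing import Tuple
from PIL import Image

def create_overlay(environment_path: str, avatar_path: str,
                   position: Tuple[int, int], out_path: str = "overlay.png") -> None:
    scene = Image.open(environment_path).convert("RGBA")   # first image (step 622)
    avatar = Image.open(avatar_path).convert("RGBA")       # second image, the avatar
    layer = Image.new("RGBA", scene.size, (0, 0, 0, 0))
    layer.paste(avatar, position, mask=avatar)             # preserve avatar transparency
    Image.alpha_composite(scene, layer).save(out_path)     # overlay image (step 626)

create_overlay("room.jpg", "avatar.png", (120, 80))
```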
- Referring next to FIG. 6E, in some implementations, to obtain an avatar of the first user, the method includes capturing (640) an image of the first user using a camera application executed on a first electronic device, and generating (642) an avatar of the first user based on the image of the first user. In some implementations, generating the avatar of the first user includes applying (644) an algorithm that minimizes perception of an uncanny valley in the avatar. In some implementations, the avatar is (646) one or more of an animated, emotional, and/or interactive three-dimensional (3D) representation of the first user. In some implementations, the method includes receiving (648) a user input from the first user corresponding to one or more animated, emotional, and/or interactive 3D representation of the first user, and generating (650) the avatar of the first user based on the user input. In some implementations, the method further includes uploading (652) the avatar to a computer distinct from the first electronic device. - Referring back to
FIG. 6C , in some implementations, the method includes determining (630) if a third user is in the vicinity of the first location, and, in accordance with a determination that the third user is in the vicinity of the first location, notifying (632) the third user about the avatar at the first location. - Referring next to
FIG. 6D , in some implementations, the method includes associating (634) an audio file with the first avatar, and, in response to receiving the request from the second user, and in accordance with the determination that the second location is proximate to the first location, causing (638) the electronic device used by the second user to play the audio file in addition to displaying the overlay image. In some implementations, the method includes receiving the audio file from the first user via a microphone on an electronic device used by the first user. -
FIGS. 7A-7C show a flow diagram illustrating amethod 700 for supporting avatar-user interactions and avatar-avatar interactions, in accordance with some implementations. User input processing is discussed above in reference toFIG. 2 . In some implementations, one or more modules inmemory 206 of anelectronic device 190 interfaces with one or more modules inmemory 306 of themessaging server 140 to process avatar-avatar interaction and avatar-user interaction, as discussed above in reference toFIGS. 4B-4D , in accordance with some implementations. - In accordance with some implementations, a non-transitory computer readable storage medium stores one or more programs. The one or more programs include instructions, which, when executed by a computing system cause the computing system to perform a method that supports user interaction with one or more avatars and/or interaction amongst the one or more avatars. The method includes receiving (702) a request from a first user to place a first avatar of the first user at a first specific location in a physical environment. The first avatar is configured to perform a first virtual action. In response to receiving the request from the first user, the method includes associating (704) (e.g., geo-tagging) the first avatar with the first virtual action and the first specific location. In some implementations, the first virtual action comprises (706) one of performing a gift exchange operation between the first user and the second user, displaying marketing information associated with the first user, and exchanging payments between the first user and the second user. In some implementations, the first virtual action includes a second user giving a thumbs up or a thumbs down to how the avatar looks.
- In accordance with some implementations, upon detecting (708) that a second user is proximate to the first specific location, the method includes sending (710) operational information for the first avatar to a second device, wherein the operational information is configured when executed on the second device to enable first interactions of the second user with the first avatar, including causing the first avatar to perform the first virtual action. In some implementations, the second user's first interactions with the first avatar causes (712) the first avatar to perform the first virtual action subject to constraints associated with the physical environment. For example, the avatar is blocked by a wall and cannot walk through, but can lean on the wall. The method also includes receiving (714), from the second device, first session information regarding the second user's first interactions with the first avatar, and updating (716) online status of the first user, first avatar and/or the second user to reflect the first session information. In some implementations, the method includes updating (716) databases that store information corresponding to the first user, first avatar, and/or second user to reflect the first session information.
- Referring next to
FIG. 7B , in some implementations, the method includes associating (718) the first avatar with a resource (e.g., a gift), and performing the first virtual action comprises presenting the resource to the second user and enabling second interactions of the second user with the first avatar, including accepting or rejecting the resource. In some implementations, the method includes receiving (722), from the second device, second session information regarding the second user's second interactions with the first avatar. The method also includes, in response (726) to receiving the second session information, determining (728), based on the second session information, if the second user's second interactions with the first avatar corresponds to the second user accepting the resource, and, in accordance with a determination that the second user accepted the resource, updating (730) a resource table with the information that the resource has been accepted by the second user. - Referring next to
FIG. 7C , in some implementations, the method includes receiving (732) a request from a third user to place a second avatar of the third user at a second specific location in a physical environment. The second avatar is configured to perform a second virtual action. In response (734) to receiving the request from the third user, the method includes associating the second avatar with the second virtual action and the second specific location. In some implementations, the first virtual action is (736) an action that relates to the second avatar and the second virtual action is an action that relates to the first avatar. Upon detecting (738) that the second specific location is proximate to the first specific location, the method includes sending (740) operational information for the first avatar and the second avatar to a third device used by the third user. The operational information is configured (742) when executed on the third device to cause the third device to display the first avatar and the second avatar, cause the first avatar to perform the first virtual action and the second avatar to perform the second virtual action, and display a result of the first virtual action and a result of the second virtual action, according to some implementations. In some implementations, the operational information causes the first avatar to perform the first virtual action and the second avatar to perform the second virtual action subject to constraints associated with the physical environment. In some implementations, the method includes determining whether the first virtual action and/or the second virtual action can be executed by the third device (e.g., by probing the third device). In accordance with a determination that the third device is unable to execute the first virtual action and/or the second virtual action, the method includes executing a part or whole of the first virtual action and/or the second virtual action on a fourth device distinct from the third device. -
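- The operational information and fallback behavior described for method 700 (FIGS. 7A-7C) might be sketched as follows; the Device class, its capability probe, and the payload fields are assumptions introduced only for illustration of the idea of probing a third device and executing part or all of a virtual action elsewhere.

```python
# Sketch: send operational information for two nearby avatars, with a capability
# probe and fallback to another device; the Device API and fields are assumed.
from typing import Dict, List

class Device:
    def __init__(self, name: str, supported_actions: List[str]):
        self.name = name
        self.supported = set(supported_actions)

    def can_execute(self, actions: List[str]) -> bool:
        return all(a in self.supported for a in actions)

    def send(self, payload: Dict) -> None:
        print(f"{self.name} <- {payload}")

def dispatch(avatars: List[Dict], third_device: Device, fourth_device: Device) -> None:
    info = {
        "avatars": [a["id"] for a in avatars],
        "actions": {a["id"]: a["action"] for a in avatars},
        "display_results": True,
    }
    if third_device.can_execute(list(info["actions"].values())):
        third_device.send(info)                      # display avatars, perform actions, show results
    else:
        fourth_device.send(info)                     # execute part or whole of the actions elsewhere
        third_device.send({"avatars": info["avatars"], "display_results": True})

dispatch([{"id": "a1", "action": "gift"}, {"id": "a2", "action": "kiss"}],
         Device("third", ["gift"]), Device("fourth", ["gift", "kiss"]))
```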
FIGS. 8A-8C show a flow diagram illustrating amethod 800 for avatar-based tag creation and user interaction with avatar-based tags, in accordance with some implementations. User input processing is discussed above in reference toFIG. 2 . In some implementations, one or more modules inmemory 206 of anelectronic device 190 interfaces with one or more modules inmemory 306 of themessaging server 140 to process avatar-based tag creation and viewing, as discussed above in reference toFIG. 4E , in accordance with some implementations. - According to some implementations, an electronic device includes one or more processors, memory, a display, and one or more programs stored in the memory. The programs are configured for execution by the one or more processors and are configured to perform a method for generating and/or interacting with tags that embed avatars on matrix barcodes, according to some implementations. The method includes receiving (802) a request from a first user to place a custom image on a matrix barcode associated with an online resource. In response to receiving the request from the first user, the method includes obtaining (806) the custom image from the first user, obtaining (808) an image of the matrix barcode, creating (810) a tag by overlaying the custom image onto the image of the matrix barcode, associating (812) the tag with an avatar corresponding to the custom image, and creating (814) a digital downloadable format of the tag for the first user, wherein the digital downloadable format is associated with the online resource.
- Referring next to
FIG. 8B , in some implementations, the method includes receiving (816) a request from a second user to scan the tag. In response (818) to receiving the request, the method includes receiving (820) scanner information corresponding to the tag from a first electronic device used by the second user, retrieving (824) the avatar associated with the tag using the scanner information, and causing (826) a second electronic device used by the second user to display the avatar, according to some implementations. In some implementations, the first action comprises one of playing an audio or a video file, displaying an animation sequence, and displaying product or company information corresponding to the online resource. In some implementations, the method includes causing (822) the second electronic device used by the second user to scan the tag (e.g., using a barcode scanner). In some implementations, the method includes deciphering the avatar associated with the tag using a disambiguation algorithm (e.g., an algorithm that uses error codes to distinguish between barcodes). - In some implementations, as shown in
block 828, the method further includes associating (830) the first avatar with a first action. In some implementations, the first action comprises (832) any one of playing an audio or a video file, displaying an animation sequence, and displaying product or company information corresponding to the online resource. In some implementations, the method includes receiving (834) a request from the second user to interact with the avatar in the tag, and in response to receiving the request from the second user, sending (836) operational information for the first avatar to the second electronic device, wherein the operational information is configured when executed on the second electronic device to enable interactions of the second user with the avatar, including causing the avatar to perform the first action - Referring next to
FIG. 8C, in some implementations, the method includes obtaining the custom image from the first user by receiving (838) a request from the first user to create a custom image, and, in response (840) to receiving the request, retrieving (842) a first image from an image database, receiving (844) an input from the first user to select one or more customizations to apply to the first image, and, in response to receiving the input, applying (846) the one or more customizations to the first image, thereby producing the custom image.
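- Scanning a tag and resolving its avatar, as described for FIG. 8B above, could be sketched with the third-party pyzbar and Pillow packages; the in-memory lookup table and field names are assumptions, and a deployed server would instead query its tag database using the scanner information it receives.

```python
# Sketch: decode a tag image and look up its avatar (assumes pyzbar + Pillow;
# the lookup table below stands in for a real tag database).
from PIL import Image
from pyzbar.pyzbar import decode

TAG_TO_AVATAR = {
    "https://example.com/resource": {"avatar_id": "a1", "first_action": "play_audio"},
}

def resolve_tag(image_path: str) -> dict:
    results = decode(Image.open(image_path))       # error correction absorbs the overlaid photo
    if not results:
        raise ValueError("no barcode found in image")
    payload = results[0].data.decode("utf-8")      # the encoded online-resource URL
    return TAG_TO_AVATAR.get(payload, {})

print(resolve_tag("tag.png"))
```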
- FIGS. 9A and 9B illustrate snapshots of an application for creating avatars and/or AR content, according to some implementations. As the Figures show, a user can record a video (e.g., using a camera application on their mobile device), associate the content with a location 904, and share the AR content with other users by selecting (e.g., clicking) a SHARE icon 902. Another user may then view or interact with the content when visiting the vicinity of the location 904. - Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, modules, or data structures, and thus various subsets of these modules may be combined or otherwise re-arranged in various implementations. In some implementations,
memory 306, optionally, stores a subset of the modules and data structures identified above. Furthermore,memory 306, optionally, stores additional modules and data structures not described above. - The present application discloses subject-matter in correspondence with the following numbered clauses:
- Clause A1. A method of placing an avatar in a real world location, the method comprising: receiving a request from a first user to place an avatar of the first user in a first user-specified location in a physical environment; in response to receiving the request from the first user: obtaining an avatar of the first user; and associating the avatar with a first location in the physical environment; receiving a request from a second user to view the avatar; and in response to receiving the request from the second user: determining a second location of the second user in the physical environment; determining if the second location is proximate to the first location; and in accordance with a determination that the second location is proximate to the first location: obtaining a first image of the physical environment at the first location; creating an overlay image by overlaying a second image of the avatar onto the first image; and causing the overlay image to be displayed on an electronic device used by the second user.
- Clause A2. The method as recited in clause A1, further comprising: determining if a third user is in the vicinity of the first location; and in accordance with a determination that the third user is in the vicinity of the first location, notifying the third user about the avatar at the first location.
- Clause A3. The method as recited in any of the preceding clauses, wherein obtaining the avatar of the first user comprises: capturing an image of the first user using a camera application executed on a first electronic device; and generating an avatar of the first user based on the image of the first user.
- Clause A4. The method as recited in clause A3, further comprising uploading the avatar to a computer distinct from the first electronic device.
- Clause A5. The method as recited in clause A3, wherein generating the avatar of the first user includes applying an algorithm that minimizes perception of an uncanny valley in the avatar.
- Clause A6. The method as recited in any of the preceding clauses, wherein the avatar is one or more of an animated, emotional, and/or interactive 3D representation of the first user.
- Clause A7. The method as recited in any of the preceding clauses, further comprising: receiving a user input from the first user corresponding to one or more animated, emotional, and/or interactive 3D representation of the first user; and generating the avatar of the first user based on the user input.
- Clause A8. The method as recited in any of the preceding clauses, further comprising: associating an audio file with the first avatar; and in response to receiving the request from the second user, and in accordance with the determination that the second location is proximate to the first location, causing the electronic device used by the second user to play the audio file in addition to displaying the overlay image.
- Clause A9. The method as recited in clause A8, further comprising receiving the audio file from the first user via a microphone on an electronic device used by the first user.
- Clause A10. The method as recited in any of the preceding clauses, wherein the first location is a position on or around an object in the physical environment.
- Clause A11. The method as recited in any of the preceding clauses, wherein obtaining the first image of the physical environment at the first location comprises retrieving an image from an image database, generating a generic image of the physical environment at the first location, receiving an image from the first user, and/or receiving an image from the second user.
- Clause A12. A method of placing an avatar in a real world location, the method comprising: receiving a request from a first user to place an avatar of the first user in a first user-specified location in a physical environment; in response to receiving the request from the first user: capturing an image of the first user using a camera application; generating an avatar of the first user based on the image of the first user, wherein the avatar is one or more of an animated, emotional, and/or interactive 3D representation of the first user; and associating the avatar with a first location in the physical environment; receiving a request from a second user to view the avatar; and in response to receiving the request from the second user: determining a second location of the second user in the physical environment; determining if the second location is proximate to the first location; and in accordance with a determination that the second location is proximate to the first location: obtaining a first image of the physical environment at the first location by either retrieving an image from an image database, generating a generic image of the physical environment at the first location, receiving an image from the first user, or receiving an image from the second user; creating an overlay image by overlaying a second image of the avatar onto the first image; and displaying the overlay image to the second user.
- Clause A13. A method comprising: receiving a request from a first user to place a first avatar of the first user at a first specific location in a physical environment, wherein the first avatar is configured to perform a first virtual action; in response to receiving the request from the first user, associating the first avatar with the first virtual action and the first specific location; and upon detecting that a second user is proximate to the first specific location: sending operational information for the first avatar to a second device, wherein the operational information is configured when executed on the second device to enable first interactions of the second user with the first avatar, including causing the first avatar to perform the first virtual action; receiving, from the second device, first session information regarding the second user's first interactions with the first avatar; and updating online status of the first user, first avatar and/or the second user to reflect the first session information.
- Clause A14. The method as recited in clause A13, wherein the first virtual action comprises one of performing a gift exchange operation between the first user and the second user, displaying marketing information associated with the first user, and exchanging payments between the first user and the second user.
- Clause A15. The method as recited in clause A13, wherein the second user's first interactions with the first avatar causes the first avatar to perform the first virtual action subject to constraints associated with the physical environment.
- Clause A15. The method as recited in clause A13, further comprising associating the first avatar with a resource, wherein performing the first virtual action comprises presenting the resource to the second user and enabling second interactions of the second user with the first avatar, including accepting or rejecting the resource.
- Clause A16. The method as recited in clause A15, further comprising: receiving, from the second device, second session information regarding the second user's second interactions with the first avatar; in response to receiving the second session information: determining, based on the second session information, if the second user's second interactions with the first avatar corresponds to the second user accepting the resource; in accordance with a determination that the second user accepted the resource, updating a resource table with the information that the resource has been accepted by the second user.
- Clause A17. The method as recited in any of the preceding clauses A13-A16, further comprising: receiving a request from a third user to place a second avatar of the third user at a second specific location in a physical environment, wherein the second avatar is configured to perform a second virtual action; in response to receiving the request from the third user, associating the second avatar with the second virtual action and the second specific location; and upon detecting that the second specific location is proximate to the first specific location: sending operational information for the first avatar and the second avatar to a third device used by the third user, wherein the operational information is configured when executed on the third device to cause the third device to display the first avatar and the second avatar, cause the first avatar to perform the first virtual action and the second avatar to perform the second virtual action, and display a result of the first virtual action and a result of the second virtual action.
- Clause A18. The method as recited in clause A17, wherein the operational information causes the first avatar to perform the first virtual action and the second avatar to perform the second virtual action subject to constraints associated with the physical environment.
- Clause A19. The method as recited in clause A17, wherein the first virtual action is an action that relates to the second avatar and the second virtual action is an action that relates to the first avatar.
- Clause A20. A method comprising: receiving a request from a first user to place a custom image on a matrix barcode associated with an online resource; in response to receiving the request from the first user: obtaining the custom image from the first user; obtaining an image of the matrix barcode; creating a tag by overlaying the custom image onto the image of the matrix barcode; associating the tag with an avatar corresponding to the custom image; and creating a digital downloadable format of the tag for the first user, wherein the digital downloadable format is associated with the online resource.
- Clause A21. The method as recited in clause A20, further comprising: receiving a request from a second user to scan the tag; in response to receiving the request: receiving scanner information corresponding to the tag from a first electronic device used by the second user; retrieving the avatar associated with the tag using the scanner information; and causing a second electronic device used by the second user to display the avatar.
- Clause A22. The method as recited in clause A21, further comprising: associating the avatar with a first action; receiving a request from the second user to interact with the avatar in the tag; and in response to receiving the request from the second user, sending operational information for the first avatar to the first electronic device, wherein the operational information is configured when executed on the first electronic device to enable interactions of the second user with the avatar, including causing the avatar to perform the first action.
- Clause A23. The method as recited in clause A22, wherein the first action comprises one of playing an audio or a video file, displaying an animation sequence, and displaying product or company information corresponding to the online resource.
- Clause A24. The method as recited in clause A21, further comprising causing the second electronic device used by the second user to scan the tag.
- Clause A25. The method as recited in clause A21, wherein obtaining the custom image from the first user comprises: receiving a request from the first user to create a custom image; in response to receiving the request: retrieving a first image from an image database; receiving an input from the first user to select one or more customizations to apply to the first image; and in response to receiving the input, applying the one or more customizations to the first image thereby producing the custom image.
- Clause A26. An electronic device, comprising one or more processors, memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for carrying out the method recited in any of clauses A1-A25.
- Clause A27. A non-transitory computer readable storage medium, storing one or more programs configured for execution by one or more processors, the one or more programs including instructions, which when executed by the one or more processors, cause the one or more processors to perform the method recited in any of clauses A1-A25.
- Reference has been made in detail to implementations, examples of which are illustrated in the accompanying drawings. In the detailed description above, numerous specific details have been set forth in order to provide a thorough understanding of the various described implementations. However, it will be apparent to one of ordinary skill in the art that the various described implementations may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the implementations.
- It will also be understood that, although the terms first, second, etc. are, in some instances, used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first device could be termed a second device, and, similarly, a second device could be termed a first device, without departing from the scope of the various described implementations. The first device and the second device are both types of devices, but they are not the same device.
- The terminology used in the description of the various described implementations herein is for the purpose of describing particular implementations only and is not intended to be limiting. As used in the description of the various described implementations and the appended claims, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
- As used herein, the term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting” or “in accordance with a determination that,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event]” or “in accordance with a determination that [a stated condition or event] is detected,” depending on the context.
- For situations in which the systems discussed above collect information about users, the users may be provided with an opportunity to opt in/out of programs or features that may collect personal information (e.g., information about a user's preferences or usage of a smart device). In addition, in some implementations, certain data may be anonymized in one or more ways before it is stored or used, so that personally identifiable information is removed. For example, a user's identity may be anonymized so that the personally identifiable information cannot be determined for or associated with the user, and so that user preferences or user interactions are generalized (for example, generalized based on user demographics) rather than associated with a particular user.
- Although some of the various drawings illustrate a number of logical stages in a particular order, stages that are not order dependent may be reordered and other stages may be combined or broken out. While some reordering or other groupings are specifically mentioned, others will be obvious to those of ordinary skill in the art, so the ordering and groupings presented herein are not an exhaustive list of alternatives. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, software, or any combination thereof.
- The foregoing description, for purpose of explanation, has been described with reference to specific implementations. However, the illustrative discussions above are not intended to be exhaustive or to limit the scope of the claims to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The implementations were chosen in order to best explain the principles underlying the claims and their practical applications, to thereby enable others skilled in the art to best use the implementations with various modifications as are suited to the particular uses contemplated.
Claims (20)
1. A non-transitory computer readable storage medium storing one or more programs configured for execution by an electronic device, the one or more programs comprising instructions for:
receiving a request from a first user to place a first avatar of the first user at a first specific location in a physical environment, wherein the first avatar is configured to perform a first virtual action;
in response to receiving the request from the first user, associating the first avatar with the first virtual action and the first specific location; and
upon detecting that a second user is proximate to the first specific location:
sending operational information for the first avatar to a second device, wherein the operational information is configured when executed on the second device to enable first interactions of the second user with the first avatar, including causing the first avatar to perform the first virtual action;
receiving, from the second device, first session information regarding the second user's first interactions with the first avatar; and
updating online status of the first user, first avatar and/or the second user to reflect the first session information.
2. The non-transitory computer readable storage medium of claim 1 , wherein the first virtual action comprises one of performing a gift exchange operation between the first user and the second user, displaying marketing information associated with the first user, and exchanging payments between the first user and the second user.
3. The non-transitory computer readable storage medium of claim 1 , wherein the second user's first interactions with the first avatar causes the first avatar to perform the first virtual action subject to constraints associated with the physical environment.
4. The non-transitory computer readable storage medium of claim 1 , wherein the one or more programs further comprise instructions for associating the first avatar with a resource, and wherein performing the first virtual action comprises presenting the resource to the second user and enabling second interactions of the second user with the first avatar, including accepting or rejecting the resource.
5. The non-transitory computer readable storage medium of claim 4 , wherein the one or more programs further comprise instructions for:
receiving, from the second device, second session information regarding the second user's second interactions with the first avatar;
in response to receiving the second session information:
determining, based on the second session information, if the second user's second interactions with the first avatar corresponds to the second user accepting the resource;
in accordance with a determination that the second user accepted the resource, updating a resource table with the information that the resource has been accepted by the second user.
6. The non-transitory computer readable storage medium of claim 1 , wherein the one or more programs further comprise instructions for:
receiving a request from a third user to place a second avatar of the third user at a second specific location in a physical environment, wherein the second avatar is configured to perform a second virtual action;
in response to receiving the request from the third user, associating the second avatar with the second virtual action and the second specific location; and
upon detecting that the second specific location is proximate to the first specific location:
sending operational information for the first avatar and the second avatar to a third device used by the third user, wherein the operational information is configured when executed on the third device to cause the third device to display the first avatar and the second avatar, cause the first avatar to perform the first virtual action and the second avatar to perform the second virtual action, and display a result of the first virtual action and a result of the second virtual action.
7. The non-transitory computer readable storage medium of claim 6 , wherein the operational information causes the first avatar to perform the first virtual action and the second avatar to perform the second virtual action subject to constraints associated with the physical environment.
8. The non-transitory computer readable storage medium of claim 6 , wherein the first virtual action is an action that relates to the second avatar and the second virtual action is an action that relates to the first avatar.
9. An electronic device, comprising:
one or more processors;
memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for:
receiving a request from a first user to place a custom image on a matrix barcode associated with an online resource;
in response to receiving the request from the first user:
obtaining the custom image from the first user;
obtaining an image of the matrix barcode;
creating a tag by overlaying the custom image onto the image of the matrix barcode;
associating the tag with an avatar corresponding to the custom image; and
creating a digital downloadable format of the tag for the first user, wherein the digital downloadable format is associated with the online resource.
10. The electronic device of claim 9 , wherein the one or more programs further comprise instructions for:
receiving a request from a second user to scan the tag;
in response to receiving the request:
receiving scanner information corresponding to the tag from a first electronic device used by the second user;
retrieving the avatar associated with the tag using the scanner information; and
causing a second electronic device used by the second user to display the avatar.
11. The electronic device of claim 10 , wherein the one or more programs further comprise instructions for:
associating the avatar with a first action;
receiving a request from the second user to interact with the avatar in the tag; and
in response to receiving the request from the second user, sending operational information for the first avatar to the first electronic device, wherein the operational information is configured when executed on the first electronic device to enable interactions of the second user with the avatar, including causing the avatar to perform the first action.
12. The electronic device of claim 11 , wherein the first action comprises one of playing an audio or a video file, displaying an animation sequence, and displaying product or company information corresponding to the online resource.
13. The electronic device of claim 10 , wherein the one or more programs further comprise instructions for causing the second electronic device used by the second user to scan the tag.
14. The electronic device of claim 10 , wherein obtaining the custom image from the first user comprises:
receiving a request from the first user to create a custom image;
in response to receiving the request:
retrieving a first image from an image database;
receiving an input from the first user to select one or more customizations to apply to the first image; and
in response to receiving the input, applying the one or more customizations to the first image, thereby producing the custom image.
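Claim 14 builds the custom image by starting from a base image retrieved from an image database and applying whatever customizations the first user selects. A minimal Pillow sketch; the specific customizations (grayscale, mirror, warm tint) and their names are assumptions chosen only to make the flow concrete.

```python
from typing import List
from PIL import Image, ImageOps

def apply_customizations(base_image_path: str, customizations: List[str],
                         out_path: str = "custom.png") -> str:
    """Apply the user-selected customizations to the first image, producing the custom image."""
    img = Image.open(base_image_path).convert("RGB")
    for choice in customizations:
        if choice == "grayscale":
            img = ImageOps.grayscale(img).convert("RGB")
        elif choice == "mirror":
            img = ImageOps.mirror(img)
        elif choice == "warm_tint":
            r, g, b = img.split()
            r = r.point(lambda v: min(255, v + 30))   # nudge the red channel up
            img = Image.merge("RGB", (r, g, b))
    img.save(out_path)
    return out_path
```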
15. A method comprising:
receiving a request from a first user to place a first avatar of the first user at a first specific location in a physical environment, wherein the first avatar is configured to perform a first virtual action;
in response to receiving the request from the first user, associating the first avatar with the first virtual action and the first specific location; and
upon detecting that a second user is proximate to the first specific location:
sending operational information for the first avatar to a second device, wherein the operational information is configured when executed on the second device to enable first interactions of the second user with the first avatar, including causing the first avatar to perform the first virtual action;
receiving, from the second device, first session information regarding the second user's first interactions with the first avatar; and
updating an online status of the first user, the first avatar, and/or the second user to reflect the first session information.
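Claim 15 only sends the operational information once the second user is detected to be proximate to the first specific location. A minimal geofence sketch using the haversine formula over latitude/longitude coordinates; the 50-meter trigger radius and the function names are assumptions.

```python
import math

PROXIMITY_RADIUS_M = 50.0  # assumed trigger radius around the placed avatar

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def user_is_proximate(user_lat: float, user_lon: float,
                      avatar_lat: float, avatar_lon: float) -> bool:
    """True when the second user's reported position is inside the trigger radius,
    i.e. when the server would send the operational information for the first avatar."""
    return haversine_m(user_lat, user_lon, avatar_lat, avatar_lon) <= PROXIMITY_RADIUS_M
```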
16. The method of claim 15 , wherein the first virtual action comprises one of performing a gift exchange operation between the first user and the second user, displaying marketing information associated with the first user, and exchanging payments between the first user and the second user.
17. The method of claim 15 , wherein the second user's first interactions with the first avatar cause the first avatar to perform the first virtual action subject to constraints associated with the physical environment.
18. The method of claim 15 , further comprising associating the first avatar with a resource, wherein performing the first virtual action comprises presenting the resource to the second user and enabling second interactions of the second user with the first avatar, including accepting or rejecting the resource.
19. The method of claim 18 , further comprising:
receiving, from the second device, second session information regarding the second user's second interactions with the first avatar;
in response to receiving the second session information:
determining, based on the second session information, whether the second user's second interactions with the first avatar correspond to the second user accepting the resource; and
in accordance with a determination that the second user accepted the resource, updating a resource table with the information that the resource has been accepted by the second user.
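Claims 18 and 19 have the server inspect the returned session information, decide whether the second interactions amount to the second user accepting the resource, and record that in a resource table. A minimal sketch with SQLite standing in for the resource table; the schema and the `accepted` flag inside the session information are assumptions.

```python
import sqlite3

def init_db(conn: sqlite3.Connection) -> None:
    conn.execute("""CREATE TABLE IF NOT EXISTS resources (
                        resource_id TEXT PRIMARY KEY,
                        accepted_by TEXT)""")

def record_if_accepted(conn: sqlite3.Connection, resource_id: str,
                       second_user: str, session_info: dict) -> bool:
    """Update the resource table only when the session information indicates acceptance."""
    if not session_info.get("accepted", False):
        return False
    conn.execute("UPDATE resources SET accepted_by = ? WHERE resource_id = ?",
                 (second_user, resource_id))
    conn.commit()
    return True

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    init_db(db)
    db.execute("INSERT INTO resources VALUES (?, ?)", ("gift-42", None))
    record_if_accepted(db, "gift-42", "user-b", {"accepted": True})
    print(db.execute("SELECT * FROM resources").fetchall())  # [('gift-42', 'user-b')]
```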
20. The method of claim 15 , further comprising:
receiving a request from a third user to place a second avatar of the third user at a second specific location in a physical environment, wherein the second avatar is configured to perform a second virtual action;
in response to receiving the request from the third user, associating the second avatar with the second virtual action and the second specific location; and
upon detecting that the second specific location is proximate to the first specific location:
sending operational information for the first avatar and the second avatar to a third device used by the third user, wherein the operational information is configured when executed on the third device to cause the third device to display the first avatar and the second avatar, cause the first avatar to perform the first virtual action and the second avatar to perform the second virtual action, and display a result of the first virtual action and a result of the second virtual action.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/634,398 US20200380486A1 (en) | 2018-03-20 | 2019-03-20 | Augmented reality and messaging |
US16/359,895 US20190295056A1 (en) | 2018-03-20 | 2019-03-20 | Augmented Reality and Messaging |
US17/691,027 US11989709B2 (en) | 2018-03-20 | 2022-03-09 | Augmented reality and messaging |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201862645537P | 2018-03-20 | 2018-03-20 | |
US16/359,895 US20190295056A1 (en) | 2018-03-20 | 2019-03-20 | Augmented Reality and Messaging |
Related Child Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/634,398 Continuation US20200380486A1 (en) | 2018-03-20 | 2019-03-20 | Augmented reality and messaging |
US17/691,027 Continuation US11989709B2 (en) | 2018-03-20 | 2022-03-09 | Augmented reality and messaging |
Publications (1)
Publication Number | Publication Date |
---|---|
US20190295056A1 true US20190295056A1 (en) | 2019-09-26 |
Family
ID=67983605
Family Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/359,895 Abandoned US20190295056A1 (en) | 2018-03-20 | 2019-03-20 | Augmented Reality and Messaging |
US16/634,398 Abandoned US20200380486A1 (en) | 2018-03-20 | 2019-03-20 | Augmented reality and messaging |
US17/691,027 Active US11989709B2 (en) | 2018-03-20 | 2022-03-09 | Augmented reality and messaging |
Family Applications After (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/634,398 Abandoned US20200380486A1 (en) | 2018-03-20 | 2019-03-20 | Augmented reality and messaging |
US17/691,027 Active US11989709B2 (en) | 2018-03-20 | 2022-03-09 | Augmented reality and messaging |
Country Status (2)
Country | Link |
---|---|
US (3) | US20190295056A1 (en) |
WO (1) | WO2019183276A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11496807B2 (en) | 2019-06-28 | 2022-11-08 | Gree, Inc. | Video distribution system, video distribution method, information processing device, and video viewing program |
US11140515B1 (en) * | 2019-12-30 | 2021-10-05 | Snap Inc. | Interfaces for relative device positioning |
USD1003319S1 (en) * | 2021-02-07 | 2023-10-31 | Huawei Technologies Co., Ltd. | Display screen or portion thereof with graphical user interface |
US11935198B2 (en) * | 2021-06-29 | 2024-03-19 | Snap Inc. | Marker-based virtual mailbox for augmented reality experiences |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030156135A1 (en) * | 2002-02-15 | 2003-08-21 | Lucarelli Designs & Displays, Inc. | Virtual reality system for tradeshows and associated methods |
US20110225498A1 (en) * | 2010-03-10 | 2011-09-15 | Oddmobb, Inc. | Personalized avatars in a virtual social venue |
US9349118B2 (en) * | 2011-08-29 | 2016-05-24 | Avaya Inc. | Input, display and monitoring of contact center operation in a virtual reality environment |
JP5980432B2 (en) | 2012-08-27 | 2016-08-31 | Empire Technology Development LLC | Augmented reality sample generation
KR20230173231A (en) * | 2013-03-11 | 2023-12-26 | Magic Leap, Incorporated | System and method for augmented and virtual reality
US11024065B1 (en) * | 2013-03-15 | 2021-06-01 | William S. Baron | Process for creating an augmented image |
US9952042B2 (en) * | 2013-07-12 | 2018-04-24 | Magic Leap, Inc. | Method and system for identifying a user location |
US10852838B2 (en) * | 2014-06-14 | 2020-12-01 | Magic Leap, Inc. | Methods and systems for creating virtual and augmented reality |
US20170243403A1 (en) | 2014-11-11 | 2017-08-24 | Bent Image Lab, Llc | Real-time shared augmented reality experience |
US10585470B2 (en) | 2017-04-07 | 2020-03-10 | International Business Machines Corporation | Avatar-based augmented reality engagement |
US11249714B2 (en) * | 2017-09-13 | 2022-02-15 | Magical Technologies, Llc | Systems and methods of shareable virtual objects and virtual objects as message objects to facilitate communications sessions in an augmented reality environment |
EP3724855A4 (en) | 2017-12-14 | 2022-01-12 | Magic Leap, Inc. | Contextual-based rendering of virtual avatars |
US10846902B2 (en) * | 2018-03-02 | 2020-11-24 | Imvu, Inc. | Preserving the state of an avatar associated with a physical location in an augmented reality environment |
US20190295056A1 (en) | 2018-03-20 | 2019-09-26 | Rocky Jerome Wright | Augmented Reality and Messaging |
US11087553B2 (en) * | 2019-01-04 | 2021-08-10 | University Of Maryland, College Park | Interactive mixed reality platform utilizing geotagged social media |
2019
- 2019-03-20: US application 16/359,895 filed (published as US20190295056A1); status: Abandoned
- 2019-03-20: US application 16/634,398 filed (published as US20200380486A1); status: Abandoned
- 2019-03-20: PCT application PCT/US2019/023252 filed (published as WO2019183276A1); status: Application Filing
2022
- 2022-03-09: US application 17/691,027 filed (published as US11989709B2); status: Active
Cited By (70)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11307763B2 (en) | 2008-11-19 | 2022-04-19 | Apple Inc. | Portable touch screen device, method, and graphical user interface for using emoji characters |
US11734708B2 (en) | 2015-06-05 | 2023-08-22 | Apple Inc. | User interface for loyalty accounts and private label accounts |
US11321731B2 (en) | 2015-06-05 | 2022-05-03 | Apple Inc. | User interface for loyalty accounts and private label accounts |
US11048873B2 (en) | 2015-09-15 | 2021-06-29 | Apple Inc. | Emoji and canned responses |
US11580608B2 (en) | 2016-06-12 | 2023-02-14 | Apple Inc. | Managing contact information for communication applications |
US11922518B2 (en) | 2016-06-12 | 2024-03-05 | Apple Inc. | Managing contact information for communication applications |
US12079458B2 (en) | 2016-09-23 | 2024-09-03 | Apple Inc. | Image data for enhanced user interactions |
US12045923B2 (en) | 2017-05-16 | 2024-07-23 | Apple Inc. | Emoji recording and sending |
US10846905B2 (en) | 2017-05-16 | 2020-11-24 | Apple Inc. | Emoji recording and sending |
US10845968B2 (en) | 2017-05-16 | 2020-11-24 | Apple Inc. | Emoji recording and sending |
US10997768B2 (en) | 2017-05-16 | 2021-05-04 | Apple Inc. | Emoji recording and sending |
US11532112B2 (en) | 2017-05-16 | 2022-12-20 | Apple Inc. | Emoji recording and sending |
US11282064B2 (en) | 2018-02-12 | 2022-03-22 | Advanced New Technologies Co., Ltd. | Method and apparatus for displaying identification code of application |
US11790344B2 (en) | 2018-02-12 | 2023-10-17 | Advanced New Technologies Co., Ltd. | Method and apparatus for displaying identification code of application |
US11989709B2 (en) | 2018-03-20 | 2024-05-21 | Rocky Jerome Wright | Augmented reality and messaging |
US12033296B2 (en) | 2018-05-07 | 2024-07-09 | Apple Inc. | Avatar creation user interface |
US10861248B2 (en) | 2018-05-07 | 2020-12-08 | Apple Inc. | Avatar creation user interface |
US11682182B2 (en) | 2018-05-07 | 2023-06-20 | Apple Inc. | Avatar creation user interface |
US11380077B2 (en) | 2018-05-07 | 2022-07-05 | Apple Inc. | Avatar creation user interface |
USD919659S1 (en) * | 2018-10-12 | 2021-05-18 | Huawei Technologies Co., Ltd. | Mobile phone with a graphical user interface |
USD919658S1 (en) * | 2018-10-12 | 2021-05-18 | Huawei Technologies Co., Ltd. | Mobile phone with a graphical user interface |
USD919660S1 (en) * | 2018-10-12 | 2021-05-18 | Huawei Technologies Co., Ltd. | Mobile phone with a graphical user interface |
USD919661S1 (en) * | 2018-10-12 | 2021-05-18 | Huawei Technologies Co., Ltd. | Mobile phone with a graphical user interface |
USD924249S1 (en) * | 2018-10-13 | 2021-07-06 | Huawei Technologies Co., Ltd. | Display screen or portion thereof with transitional graphical user interface |
US11216627B2 (en) * | 2018-11-28 | 2022-01-04 | Advanced New Technologies Co., Ltd. | Method and device for providing and verifying two-dimensional code |
USD937897S1 (en) * | 2018-11-29 | 2021-12-07 | Huawei Technologies Co., Ltd. | Mobile device with graphical user interface |
USD937896S1 (en) * | 2018-11-29 | 2021-12-07 | Huawei Technologies Co., Ltd. | Mobile device with graphical user interface |
US11107261B2 (en) | 2019-01-18 | 2021-08-31 | Apple Inc. | Virtual avatar animation based on facial feature movement |
USD900831S1 (en) * | 2019-03-12 | 2020-11-03 | AIRCAP Inc. | Display screen or portion thereof with graphical user interface |
US11704135B2 (en) * | 2019-04-17 | 2023-07-18 | Snap Inc. | Automated scaling of application features based on rules |
US20220300297A1 (en) * | 2019-04-17 | 2022-09-22 | Snap Inc. | Automated scaling of application features based on rules |
US10659405B1 (en) | 2019-05-06 | 2020-05-19 | Apple Inc. | Avatar integration with multiple applications |
US11188190B2 (en) * | 2019-06-28 | 2021-11-30 | Snap Inc. | Generating animation overlays in a communication session |
US11533280B1 (en) * | 2019-09-30 | 2022-12-20 | Snap Inc. | Scan to share |
US20230156295A1 (en) * | 2019-11-29 | 2023-05-18 | Gree, Inc. | Video distribution system, information processing method, and computer program |
US12022165B2 (en) * | 2019-11-29 | 2024-06-25 | Gree, Inc. | Video distribution system, information processing method, and computer program |
US11423620B2 (en) * | 2020-03-05 | 2022-08-23 | Wormhole Labs, Inc. | Use of secondary sources for location and behavior tracking |
US11410359B2 (en) * | 2020-03-05 | 2022-08-09 | Wormhole Labs, Inc. | Content and context morphing avatars |
US11921998B2 (en) | 2020-05-11 | 2024-03-05 | Apple Inc. | Editing features of an avatar |
US20230393709A1 (en) * | 2020-06-08 | 2023-12-07 | Snap Inc. | Encoded image based messaging system |
USD1035714S1 (en) * | 2020-06-18 | 2024-07-16 | Apple Inc. | Display screen or portion thereof with animated graphical user interface |
CN111970190A (en) * | 2020-07-27 | 2020-11-20 | Shanghai Lianshang Network Technology Co., Ltd. | Method and equipment for providing energy information
USD942473S1 (en) * | 2020-09-14 | 2022-02-01 | Apple Inc. | Display or portion thereof with animated graphical user interface |
USD1036471S1 (en) | 2020-09-14 | 2024-07-23 | Apple Inc. | Display screen or portion thereof with animated graphical user interface |
USD996455S1 (en) * | 2020-09-14 | 2023-08-22 | Apple Inc. | Display screen or portion thereof with animated graphical user interface |
GB2598913A (en) * | 2020-09-17 | 2022-03-23 | 1616 Media Ltd | Augmented reality messaging |
EP4222685A4 (en) * | 2020-09-30 | 2024-04-17 | Snap Inc. | Utilizing lifetime values of users to select content for presentation in a messaging system |
US11816805B2 (en) | 2020-09-30 | 2023-11-14 | Snap Inc. | Augmented reality content generator for suggesting activities at a destination geolocation |
US11809507B2 (en) | 2020-09-30 | 2023-11-07 | Snap Inc. | Interfaces to organize and share locations at a destination geolocation in a messaging system |
WO2022072983A1 (en) * | 2020-09-30 | 2022-04-07 | Snap Inc. | Augmented reality content generators for identifying geolocations |
US11836826B2 (en) | 2020-09-30 | 2023-12-05 | Snap Inc. | Augmented reality content generators for spatially browsing travel destinations |
US12039499B2 (en) | 2020-09-30 | 2024-07-16 | Snap Inc. | Augmented reality content generators for identifying destination geolocations and planning travel |
US11538225B2 (en) | 2020-09-30 | 2022-12-27 | Snap Inc. | Augmented reality content generator for suggesting activities at a destination geolocation |
USD1026004S1 (en) * | 2020-12-30 | 2024-05-07 | Samsung Electronics Co., Ltd. | Display screen or portion thereof with graphical user interface |
USD1004609S1 (en) * | 2020-12-31 | 2023-11-14 | Snap Inc. | Display screen or portion thereof with a graphical user interface |
USD1003924S1 (en) * | 2020-12-31 | 2023-11-07 | Snap Inc. | Display screen or portion thereof with a graphical user interface |
USD1003925S1 (en) | 2020-12-31 | 2023-11-07 | Snap Inc. | Display screen or portion thereof with a graphical user interface |
USD1029031S1 (en) * | 2021-03-16 | 2024-05-28 | Snap Inc. | Display screen or portion thereof with a graphical user interface |
US20220374137A1 (en) * | 2021-05-21 | 2022-11-24 | Apple Inc. | Avatar sticker editor user interfaces |
US11714536B2 (en) * | 2021-05-21 | 2023-08-01 | Apple Inc. | Avatar sticker editor user interfaces |
USD996454S1 (en) * | 2021-11-22 | 2023-08-22 | Snap Inc. | Display screen or portion thereof with a graphical user interface |
USD996453S1 (en) * | 2021-11-22 | 2023-08-22 | Snap Inc. | Display screen or portion thereof with a graphical user interface |
USD1028113S1 (en) * | 2021-11-24 | 2024-05-21 | Nike, Inc. | Display screen with icon |
USD1034691S1 (en) * | 2021-11-24 | 2024-07-09 | Nike, Inc. | Display screen with icon |
USD1033481S1 (en) * | 2021-11-24 | 2024-07-02 | Nike, Inc. | Display screen with icon |
USD1028013S1 (en) * | 2021-11-24 | 2024-05-21 | Nike, Inc. | Display screen with icon |
USD1038964S1 (en) * | 2022-05-26 | 2024-08-13 | Samsung Electronics Co., Ltd. | Display screen or portion thereof with transitional graphical user interface |
US11983390B2 (en) * | 2022-08-18 | 2024-05-14 | Snap Inc. | Interacting with visual codes within messaging system |
USD1036492S1 (en) * | 2022-12-29 | 2024-07-23 | Lg Electronics Inc. | Display panel with animated graphical user interface |
WO2024193430A1 (en) * | 2023-03-21 | 2024-09-26 | Huawei Technologies Co., Ltd. | Virtual avatar display method and apparatus, and electronic device
Also Published As
Publication number | Publication date |
---|---|
US20230026498A1 (en) | 2023-01-26 |
US20200380486A1 (en) | 2020-12-03 |
US11989709B2 (en) | 2024-05-21 |
WO2019183276A1 (en) | 2019-09-26 |
Similar Documents
Publication | Title |
---|---|
US11989709B2 (en) | Augmented reality and messaging |
US10547575B2 (en) | Apparatus and method for control of access to communication channels |
US20180189998A1 (en) | Method for sharing emotions through the creation of three-dimensional avatars and their interaction |
US8244830B2 (en) | Linking users into live social networking interactions based on the users' actions relative to similar content |
US20140316894A1 (en) | System and method for interfacing interactive systems with social networks and media playback devices |
US10515371B2 (en) | Interactive networking systems with user classes |
US10810526B2 (en) | Server for selecting a sequential task-oriented event and methods for use therewith |
WO2011112941A1 (en) | Purchase and delivery of goods and services, and payment gateway in an augmented reality-enabled distribution network |
US20160321762A1 (en) | Location-based group media social networks, program products, and associated methods of use |
CN110300951A | Media item attachment system |
US20160132216A1 (en) | Business-to-business solution for picture-, animation- and video-based customer experience rating, voting and providing feedback or opinion based on mobile application or web browser |
US10853869B2 (en) | Electronic wish list system |
US11276111B2 (en) | Online social and collaborative commerce system and method thereof |
US20220139041A1 (en) | Representations in artificial realty |
CN113709022A | Message interaction method, device, equipment and storage medium |
JP2021522632A | Systems and methods for generating and presenting detailed content about a product or service using a communication interface on demand |
US20180350127A1 (en) | Methods and apparatus for dynamic, expressive animation based upon specific environments |
TWI589337B | Method for providing messenger service, messenger system and computer readable recording medium |
US11743215B1 (en) | Artificial reality messaging with destination selection |
CN117099365A | Presenting participant reactions within a virtual conference system |
Legal Events
Code | Title | Description |
---|---|---|
STPP | Information on status: patent application and granting procedure in general | Free format text: PRE-INTERVIEW COMMUNICATION MAILED |
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |