CN115699703A - Dynamic augmented reality component - Google Patents

Dynamic augmented reality component

Info

Publication number
CN115699703A
Authority
CN
China
Prior art keywords
augmented reality
camera
reality component
attributes
launch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202180041688.6A
Other languages
Chinese (zh)
Inventor
Ebony James Charlton
Patrick Mandia
Celia Nicole Mourkogiannis
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Snap Inc
Original Assignee
Snap Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Snap Inc filed Critical Snap Inc
Publication of CN115699703A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/04Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H04L51/046Interoperability with other network applications or services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/07User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
    • H04L51/10Multimedia information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/07User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
    • H04L51/18Commands or executable codes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/131Protocols for games, networked simulations or virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/024Multi-user, collaborative environment

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The technical problem of adding content originating from a third-party app to an augmented reality component is solved by configuring an augmented reality component, maintained by the message server system, to modify content captured by the camera according to values of one or more attributes, and by allowing delivery of those values from the third-party app to the message server system in the payload of a deep link that references an identifier of the augmented reality component. Content originating from the third-party app and provided to the augmented reality component maintained by the message server system may be dynamic in that content, such as sports scores provided by a sports news channel app, may change over time.

Description

Dynamic augmented reality component
Priority declaration
This application claims priority to U.S. provisional application serial No. 63/037,559, filed on June 10, 2020, and U.S. patent application serial No. 16/948,268, filed on September 10, 2020, each of which is incorporated herein by reference in its entirety.
Technical Field
The present disclosure relates generally to facilitating interaction between a messaging client and a third party resource.
Background
The popularity of computer-implemented programs that allow users to access content online and interact with other users continues to grow. A variety of computer-implemented applications exist that allow users to share content with other users through messaging clients. Some of these computer-implemented applications (referred to as apps) may be designed to run on mobile devices such as phones, tablets, or watches, while backend services are provided on a server computer system to perform operations that require more resources than it is reasonable to perform at the client device (e.g., storing large amounts of data or performing computationally expensive processing). For example, a messaging client and the associated messaging server system may be configured to allow online users to share content.
Drawings
In the drawings, which are not necessarily drawn to scale, like reference numerals may describe similar parts throughout the different views. To readily identify the discussion of any particular element or act, one or more of the most significant digits in a reference number refer to the figure number in which that element is first introduced. Some embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings:
FIG. 1 is a diagrammatic representation of a networked environment in which a developer tool system may be deployed, according to some examples.
Fig. 2 is a block diagram of an architecture of a system for providing dynamic augmented reality components, according to some examples.
Fig. 3 is a flow diagram of a method for providing a dynamic augmented reality component, according to some examples.
Fig. 4 is a diagrammatic representation of example content captured from a camera view screen of a messaging client.
Fig. 5 is a diagrammatic representation of example content modified using a dynamic augmented reality component captured from a camera view screen of a messaging client, according to some examples.
Fig. 6 is a diagrammatic representation of a machine in the form of a computer system within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed, according to some examples.
Detailed Description
A message server system (which hosts the backend service for an associated messaging client) is configured to allow users to capture images and videos with cameras provided with client devices (which host the messaging clients) and to share the captured content with other users via network communication. The message server system is also configured to provide augmented reality (AR) components, accessible via the messaging client, that can be used to modify content captured by a camera, for example by overlaying pictures or animation on top of a captured image or video frame, and by adding three-dimensional (3D) effects, objects, characters, and transformations.
An AR component may be implemented using a programming language suitable for app development (e.g., JavaScript or Java). The AR component can execute within the messaging client while the camera is recording video, so that each captured frame is modified in the manner determined by the configuration of the particular AR component. For example, an AR component may be configured to detect, in each frame of the video, the head position of the person being captured, and to superimpose an image of a party hat over the detected head position in each frame, so that a viewer of the video captured with the AR component sees the person in the video wearing the party hat. When the camera view is on in the messaging client, the effects of an AR component can be applied to the captured content, and the messaging client may present the user with one or more visual controls that permit applying and removing the visual effect produced by the AR component, as well as selecting a desired AR component from a plurality of AR components.
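As a minimal sketch of this per-frame flow, the following TypeScript fragment superimposes a party-hat image above a detected head position. Every name in it (the FrameContext type, detectHead, drawOverlay, the vertical offset) is an illustrative assumption; the disclosure does not specify an AR engine API.

```typescript
// Hypothetical per-frame pipeline of an AR component. All names are
// illustrative assumptions; the patent does not define this API.
interface Point { x: number; y: number; }

interface FrameContext {
  detectHead(): Point | null;                  // assumed head-tracking helper
  drawOverlay(asset: string, at: Point): void; // assumed compositing helper
}

// Called once per captured frame while the AR component is launched.
function onFrame(ctx: FrameContext): void {
  const head = ctx.detectHead();
  if (head !== null) {
    // Anchor the hat slightly above the detected head position so a viewer
    // sees the captured person "wearing" it in every modified frame.
    ctx.drawOverlay("party_hat.png", { x: head.x, y: head.y - 40 });
  }
}
```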
A user may launch an AR component by opening the messaging client (in some embodiments, to a camera view screen that is ready to capture, or has captured, an image or a video) and selecting the icon representing the desired AR component. AR components are identified in the message server system by respective AR component identifiers. In some implementations, a particular AR component maintained by the message server system can be accessed directly from a third-party resource, such as a third-party app, via a deep link that references the AR component identifier, without first launching the messaging client (whether a standalone app or a web-based version of the app). In the context of apps, a deep link can be described as a Uniform Resource Identifier (URI) that links to a specific location within the app, rather than simply launching the app. When such a deep link is activated from a third-party app executing at a camera-enabled client device, the deep link is passed to the message server system, and the user is brought to the live camera of the client device, where the AR component referenced by the AR component identifier in the deep link is activated. For the purposes of this description, activating an AR component in a messaging client makes the AR component available for user selection, while launching the AR component causes the effects of the AR component to be applied to the captured content. Launching the live camera together with a given AR component causes the effects of that AR component to be applied to the content being captured by the camera. In addition to providing access to AR components directly from third-party apps, the message server system may be configured to allow third-party developers to share content originating from their third-party apps to AR components maintained by the message server system.
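For illustration only, such a deep link might take the following shape; the scheme, host, and parameter names are assumptions, since the disclosure only requires that the link reference an AR component identifier and carry a payload:

```
messagingapp://ar-component?id=<AR_COMPONENT_ID>&payload=<ENCRYPTED_PAYLOAD>
```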
The technical problem of adding content originating from a third-party app to an AR component maintained by the message server system is solved by configuring the AR component to modify content captured by the camera according to values of one or more launch attributes, and by allowing delivery of those values from the third-party app to the message server system in the payload of a deep link that references the AR component identifier of the AR component. Content originating from a third-party app and provided to an AR component maintained by the message server system (e.g., sports scores provided by a sports news channel app, or current weather information provided by a weather app) may be dynamic in that the content may change over time.
For the purposes of this description, an AR component configured to modify content captured by a camera according to values of one or more launch attributes is referred to as a dynamic augmented reality component, or a dynamic AR component. The type of modification specified by a launch attribute may indicate that some type of content (e.g., text, an image, or an animation) is to be superimposed over a portion of the captured image (e.g., centered over the lower-right quadrant of the frame area). The value of such a launch attribute may be, for example, the text, the image, or the animation itself. For instance, the value of a launch attribute provided from a sports news channel app may be an image or a string of characters indicating the running score of a sporting event in progress. For the purposes of this description, the value of a launch attribute that can be delivered to the message server system via a deep link may be referred to simply as an attribute. Another example of a launch attribute is one that references a binary asset (e.g., a three-dimensional machine learning model that takes content captured by the camera as input), which can be executed with respect to the captured content, resulting in the replacement of an entire object in the captured frame (e.g., a human body depicted in the frame) with a three-dimensional (3D) mesh or texture.
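A minimal sketch of how such launch attributes might be modeled follows; the type and field names are illustrative assumptions, not the schema actually used by the system:

```typescript
// Illustrative launch-attribute shapes covering the kinds described above:
// a value to superimpose, and a reference to a binary asset.
type LaunchAttribute =
  | { kind: "text"; name: string; value: string }               // e.g. a score string
  | { kind: "image"; name: string; url: string }                // e.g. a team logo
  | { kind: "binaryAsset"; name: string; storagePath: string }; // e.g. a 3D ML model

// Example: the running score of a game in progress, to be superimposed
// centered over the lower-right quadrant of the frame.
const scoreAttribute: LaunchAttribute = {
  kind: "text",
  name: "current_score",
  value: "5 : 3",
};
```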
The message server system is configured to provide a set of developer tools that include features allowing third-party app developers to include, in their third-party apps, deep links that reference AR component identifiers and that may include values of one or more launch attributes in the payload. The developer tools provided by the message server system can be invoked from a third-party computer system via a developer portal accessible via a web browser. The developer portal may also be downloaded to the third-party computer system, in which case a web browser may not be required. The developer portal can be used to obtain a deep link referencing a given AR component identifier, and to access an AR component service Application Programming Interface (API) configured to allow third-party developers to create new dynamic AR components. In some embodiments, the message server system pushes, to the developer portal via the AR component service API, an AR component template together with the attributes and assets that determine the functionality of the new AR component. A user interface (UI) provided with the developer portal allows a third-party developer to select any attributes for the new AR component and to designate at least one attribute whose value is to be received from the third-party app via a deep link.
In operation, the third-party app uses the developer tools provided by the message server system to construct a request for a link to an AR component having a particular AR component identifier. In some examples, the third-party app is a registered client with respect to the developer tools provided by the message server system. The developer tool system builds the deep link and encrypts its payload, which stores the AR component identifier and the launch attributes.
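The sketch below illustrates this link-building step under stated assumptions: AES-256-GCM from Node's built-in crypto module stands in for whatever encryption the developer tool system actually uses, and the URI format repeats the hypothetical shape shown earlier:

```typescript
import { createCipheriv, randomBytes } from "crypto";

// Build a deep link whose encrypted payload stores the AR component
// identifier and the launch attributes. The cipher choice and link
// format are assumptions for illustration only.
function buildDeepLink(
  key: Buffer,                                     // 32-byte shared key (assumed)
  arComponentId: string,
  attributes: Record<string, string | number>
): string {
  const payload = JSON.stringify({ arComponentId, attributes });
  const iv = randomBytes(12);                      // fresh nonce per link
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const encrypted = Buffer.concat([cipher.update(payload, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();                 // integrity tag for GCM
  const blob = Buffer.concat([iv, tag, encrypted]).toString("base64url");
  return `messagingapp://ar-component?payload=${blob}`;
}

// Example usage with the attributes from Example 1 further below:
// buildDeepLink(key, "ar-component-1234",
//   { team_one_score: 5, team_two_score: 3 });
```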
When the message server system detects activation of the deep link from a client device (e.g., when a user activates the deep link using an associated visual control provided by the UI of the third-party app), the message server system decrypts the deep link payload to obtain the AR component identifier, verifies that the AR component referenced by the AR component identifier in the deep link is valid, and verifies that the AR component is unlockable (meaning that it can be activated and used instantly in the messaging client). The attributes are then extracted from the payload of the deep link. The extracted attributes are stored in a persistent storage system, in a designated data object associated with the AR component identifier. The data object is configured (for example, by a developer via the developer portal 132 using a scripting language) to be populated with the attributes from the payload of the deep link when the associated AR component is initialized. When the AR component is launched, the stored attributes are exposed to the AR component via the data object. The persistent storage system may be associated with the message server system, or it may reside at the client computer system hosting the messaging client.
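A server-side handler for this step might look like the following sketch, which mirrors the builder shown earlier; the ComponentRegistry interface and the in-memory attribute store are assumed stand-ins for the validation logic and persistent storage system described here:

```typescript
import { createDecipheriv } from "crypto";

interface ComponentRegistry {
  isValid(id: string): boolean;      // assumed: the identifier is known
  isUnlockable(id: string): boolean; // assumed: usable instantly in the client
}

// Data object keyed by AR component identifier; activating the same deep
// link again later overwrites the attributes stored under that key.
const attributeStore = new Map<string, Record<string, string | number>>();

function onDeepLinkActivated(
  key: Buffer,
  blob: string,
  registry: ComponentRegistry
): string {
  const raw = Buffer.from(blob, "base64url");
  const iv = raw.subarray(0, 12);
  const tag = raw.subarray(12, 28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  const payload = Buffer
    .concat([decipher.update(raw.subarray(28)), decipher.final()])
    .toString("utf8");

  const { arComponentId, attributes } = JSON.parse(payload);
  if (!registry.isValid(arComponentId) || !registry.isUnlockable(arComponentId)) {
    throw new Error("AR component is not valid or not unlockable");
  }
  attributeStore.set(arComponentId, attributes); // overwrite any prior values
  return arComponentId; // caller launches the camera view with this component
}
```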
Once the AR component is activated, the content captured from the camera view screen is augmented with the modification provided by the AR component. In some implementations, sending or posting the captured content augmented with the modification provided by the AR component causes the messaging client to keep the AR component available to the user for a predetermined period of time (e.g., 48 hours). The attributes extracted from the payload of the deep link and stored in the persistent storage system remain there until the next time the deep link is activated from the third-party app, at which point the stored attributes are overwritten with the new attributes extracted from the link at that time. Where an attribute extracted from the deep link is a reference to a binary asset, the attribute indicates the storage location of the referenced binary asset, which may be at the client computer device hosting the messaging client.
The methods described herein may be used beneficially with various third-party resources. Third-party resources include installed apps, as well as HTML-based mini-applications ("minis") and games. HTML is a markup language used to structure and present content on the World Wide Web. An installed app is a software file installed on the device using executable code that can be launched independently of the messaging client. Mini-applications and games, which can be thought of as small-scale apps downloaded to a client device and run within a larger app, are HTML-based apps that require the messaging client to download a markup language document and render it (in a web view) within a browser run by the messaging client. When launched by the user, the HTML file is retrieved from a server and used to render the app.
FIG. 1 is a block diagram illustrating an example messaging system 100 for exchanging data (e.g., messages and associated content) over a network. The messaging system 100 includes multiple instances of a client device 102, each of which hosts several applications, including a messaging client 104 and a third-party app 103. The third-party app 103 is configured to allow users to access functionality provided by the third-party system 130. Each messaging client 104 is communicatively coupled to the message server system 108 and to other instances of the messaging client 104 via a network 106 (e.g., the Internet).
A messaging client 104 is able to communicate and exchange data with other messaging clients 104 and with the message server system 108 via the network 106. The data exchanged between messaging clients 104, and between a messaging client 104 and the message server system 108, includes functions (e.g., commands to invoke functions) as well as payload data (e.g., text, audio, video, or other multimedia data).
The message server system 108 provides server-side functionality to a particular messaging client 104 via the network 106. While certain functions of the messaging system 100 are described herein as being performed either by the messaging client 104 or by the message server system 108, the location of certain functionality within the messaging client 104 or within the message server system 108 may be a design choice. For example, it may be technically preferable to initially deploy certain technology and functionality within the message server system 108, and later migrate this technology and functionality to the messaging client 104 where the client device 102 has sufficient processing capacity.
The message server system 108 supports various services and operations provided to the messaging client 104. Such operations include sending data to the messaging client 104, receiving data from the messaging client 104, and processing data generated by the messaging client 104. This data may include, by way of example, message content, client device information, geolocation information, media augmentation and overlays, message content persistence conditions, social network information, and live event information. Data exchanges within the messaging system 100 are activated and controlled through functions available via user interfaces (UIs) of the messaging client 104.
Turning now specifically to the message server system 108, an Application Program Interface (API) server 110 is coupled to the application server 112 and provides a programming interface to the application server 112. Application server 112 is communicatively coupled to a database server 118 that facilitates access to a database 120. The web server 124 is coupled to the application server 112 and provides a web-based interface to the application server 112. To this end, the web server 124 processes incoming network requests through the hypertext transfer protocol (HTTP) and several other related protocols. The database 120 stores data associated with messages processed by the application server 112, such as profile data about a particular entity. Where the entity is a person, the profile data includes, for example, the user's name, notification and privacy settings, as well as the user's self-reported age and records relating to changes made by the user to their profile data.
The Application Program Interface (API) server 110 receives and transmits message data (e.g., commands and message payloads) between the client device 102 and the application server 112. Specifically, the Application Program Interface (API) server 110 provides a set of interfaces (e.g., routines and protocols) that can be called or queried by the messaging client 104 in order to invoke functionality of the application server 112. The API server 110 exposes various functions supported by the application server 112, including: account registration; login functionality; the sending of messages from a particular messaging client 104 to another messaging client 104 via the application server 112; the sending of media files (e.g., images or video) from a messaging client 104 to the message server 114, for possible access by another messaging client 104; the opening of application events (e.g., relating to the messaging client 104); as well as various functions supported by developer tools provided by the message server system 108 for the use of third-party computer systems.
The application server 112 hosts a number of server applications and subsystems, including, for example, a message server 114, an image processing server 116, and a social network server 122. The message server 114 implements a number of message-processing technologies and functions, particularly related to the aggregation and other processing of content (e.g., textual and multimedia content) included in messages received from multiple instances of the messaging client 104. The image processing server 116 is dedicated to performing various image processing operations, typically with respect to images or video within the payload of a message sent from or received at the message server 114. The social network server 122 supports various social networking functions and services and makes these functions and services available to the message server 114.
Also shown in FIG. 1 is the developer tool server 117. The developer tool server 117 maintains one or more software developer kits (SDKs) that permit users to integrate some of the features provided with the message server system across their apps (also referred to as third-party apps). These features include deep linking to a dynamic AR component configured to modify content captured by the camera according to values of one or more launch attributes delivered from the third-party app to the message server system in the payload of a deep link referencing the AR component identifier of the AR component. The functionality provided by the developer tool server 117 can be accessed from third-party computer systems via a developer portal accessible via a web browser. In some examples, the developer portal that provides third-party computer systems (e.g., the third-party system 130) with access to the functionality provided by the developer tool server 117 may be downloaded to the third-party computer system, in which case a web browser may not be required. The third-party system 130 is shown as including a developer portal 132. As explained above, the developer portal 132 may be accessed via a web browser executing at the third-party system 130, or downloaded to the third-party system 130.
FIG. 2 is a block diagram illustrating an example system 200 for providing dynamic AR components. The system 200 includes a deep link generator 210, a deep link activation detector 220, an attribute extractor 230, and an augmented reality component activator 240. As mentioned above, for the purposes of this description, an augmented reality component may be referred to as an AR component. The deep link generator 210 is configured to construct a deep link that includes a payload for storing an identifier representing an AR component in the message server system 108 of FIG. 1 (an augmented reality component identifier) and for storing values of one or more launch attributes. As explained above, the AR component is configured to modify content captured by the camera according to the values of the one or more launch attributes. The deep link activation detector 220 is configured to detect activation of the deep link from a third-party resource executing at a client device (e.g., from the third-party app 103 executing at the client device 102 of FIG. 1). The attribute extractor 230 is configured to extract respective values of the one or more launch attributes from the payload of the deep link and to store the extracted respective values in a persistent storage system.
The augmented reality component activator 240 is configured to cause the messaging client 104 to launch into a camera view screen. The camera view screen indicates that the camera is activated. The augmented reality component activator 240 uses the respective values to dynamically configure the AR component as the respective values become available or are updated in the persistent storage system. Once dynamically configured, the AR component is ready to modify the captured content according to the stored respective values. The augmented reality component activator 240 then loads the dynamically configured AR component in the messaging client. The various components of the system 200 may each be provided at the client device 102 and/or at the message server system 108 of FIG. 1. Further details regarding the operation of the system 200 are provided below.
FIG. 3 is a flow diagram of a method 300 for providing a dynamic AR component. The method 300 may be performed by processing logic that may comprise hardware (e.g., dedicated logic, programmable logic, microcode, etc.), software, or a combination of both. In one example embodiment, some or all of the processing logic resides at the client device 102 of FIG. 1 and/or at the message server system 108 of FIG. 1. At operation 310, the message server system 108 detects activation of a deep link from a third-party resource executing at a client device (e.g., from the third-party app 103 executing at the client device 102 of FIG. 1). The deep link is constructed to include, in its payload, an AR component identifier representing an AR component in the message server system 108 and values of one or more launch attributes. At operation 320, the values of the one or more launch attributes are extracted and stored in a persistent storage system. As mentioned above, the persistent storage system may reside at the client device 102 or at the message server system 108. At operation 330, the messaging client 104 is caused to launch into a camera view screen. At operation 340, the respective values are retrieved from the persistent storage system, and the AR component is configured with the respective values so that it can be used to modify content captured by the camera in accordance with the stored respective values. At operation 350, the configured AR component is loaded in the messaging client.
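Expressed as a compact control flow, operations 310 through 350 might be orchestrated as in the sketch below; every declared helper is an assumed stand-in for the systems described above, not an actual API of the disclosure:

```typescript
// Illustrative orchestration of operations 310-350. All helpers are
// assumptions standing in for the systems described in the text.
declare function decodePayload(link: string): {
  arComponentId: string;
  attributes: Record<string, string>;
};
declare const persistentStore: {
  save(id: string, attrs: Record<string, string>): Promise<void>;
  load(id: string): Promise<Record<string, string>>;
};
declare const messagingClient: {
  launchCameraView(): void;
  loadArComponent(component: unknown): void;
};
declare function configureArComponent(
  id: string,
  values: Record<string, string>
): unknown;

async function handleDynamicArDeepLink(link: string): Promise<void> {
  const { arComponentId, attributes } = decodePayload(link);     // operation 310
  await persistentStore.save(arComponentId, attributes);         // operation 320
  messagingClient.launchCameraView();                            // operation 330
  const values = await persistentStore.load(arComponentId);      // operation 340
  const component = configureArComponent(arComponentId, values); // operation 340
  messagingClient.loadArComponent(component);                    // operation 350
}
```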
Although the depicted flow chart may show the operations as a sequential process, many of the operations can be performed in parallel or concurrently. Further, the order of the operations may be rearranged. The process terminates when its operations are completed. A process may correspond to a method, a program, an algorithm, etc. The operations of a method may be performed in whole or in part, may be performed in combination with some or all of the operations of other methods, and may be performed by any number of different systems (e.g., the systems described herein) or any portion thereof (e.g., a processor included in any system).
FIG. 4 is a diagrammatic representation 400 of example content captured from a camera view screen of the messaging client 104. Shown in area 410 is a person captured in a frame of a still image or video. A visual control 420 can be activated to apply the modification of the associated AR component to the captured content. In this example, the visual control 420 is actionable to augment the captured content with the current sports game score obtained from the third-party app 103, from which the messaging client 104 was launched via the deep link. FIG. 5 is a diagrammatic representation 500 of content captured from a camera view screen of the messaging client using the AR component referenced in the deep link and modified according to values extracted from the payload of the deep link (in this example, the current sports game score). Shown in area 510 is a person captured in a frame of a still image or video, together with the current sports game score, shown in area 530, superimposed over the captured content. The visual control 520 is shown with a thicker black line to indicate that the associated AR component has been activated. The deep link is constructed within the developer portal 132 executing at the third-party system 130 of FIG. 1. The process of constructing the deep link includes adding variables to the payload of the deep link. Example attributes included in the payload of the deep link, which result in the current sports game score shown in area 530 of FIG. 5, are shown below as Example 1: the variables team_one_score, team_two_score, team_one_name, and team_two_name, and their associated values.
Example 1
team_one_score: 5, team_two_score: 3, team_one_name: "Beagles", team_two_name: "…"
The attributes shown in Example 1 are stored in the data object associated with the AR component identifier referenced in the deep link. When the AR component is loaded in the messaging client 104, these attributes are passed from the data object directly into the AR component. In one embodiment, using the example shown in FIG. 5, the AR component may be configured to modify binary assets within the AR component. For example, the AR component may be configured to highlight the team names and scores using the team colors, and also to show the associated team logos. The team colors and logo images may be predefined in the AR component and selected based on the attributes provided in the deep link payload.
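A sketch of that selection step follows, assuming the AR component bundles a small lookup table of team colors and logos; the table contents, names, and fallback are illustrative assumptions:

```typescript
// Predefined team colors and logo images bundled with the AR component;
// the deep-link attributes only select among these illustrative entries.
const teamAssets: Record<string, { color: string; logo: string }> = {
  Beagles: { color: "#8b4513", logo: "beagles_logo.png" },
  // ... one entry per supported team (contents assumed)
};

const fallbackAssets = { color: "#000000", logo: "generic_logo.png" };

// Pick the overlay styling for a team named in the deep-link payload,
// falling back to a neutral style for unrecognized team names.
function assetsForTeam(teamName: string): { color: string; logo: string } {
  return teamAssets[teamName] ?? fallbackAssets;
}

// e.g. assetsForTeam("Beagles") -> the Beagles color and logo, which the
// AR component uses to highlight "Beagles 5 : 3" in the overlay.
```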
Machine architecture
FIG. 6 is a diagrammatic representation of a machine 600 within which instructions 608 (e.g., software, a program, an application, an applet, an app, or other executable code) may be executed for causing the machine 600 to perform any one or more of the methodologies discussed herein. For example, the instructions 608 may cause the machine 600 to execute any one or more of the methods described herein. The instructions 608 transform the general, non-programmed machine 600 into a particular machine 600 programmed to carry out the described and illustrated functions in the manner described. The machine 600 may operate as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine 600 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine 600 may comprise, but not be limited to: a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smartphone, a mobile device, a wearable device (e.g., a smartwatch), a smart home device (e.g., a smart appliance), another smart device, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions 608, sequentially or otherwise, that specify actions to be taken by the machine 600. Further, while only a single machine 600 is illustrated, the term "machine" shall also be taken to include a collection of machines that individually or jointly execute the instructions 608 to perform any one or more of the methodologies discussed herein. For example, the machine 600 may comprise the client device 102 or any one of a number of server devices forming part of the message server system 108. In some examples, the machine 600 may also comprise both client and server systems, with certain operations of a particular method or algorithm being performed on the server side and certain operations of the particular method or algorithm being performed on the client side.
The machine 600 may include processors 602, memory 604, and input/output (I/O) components 638, which may be configured to communicate with one another via a bus 640. In an example, the processors 602 (e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor 606 and a processor 610 that execute the instructions 608. The term "processor" is intended to include multi-core processors, which may comprise two or more independent processors (sometimes referred to as "cores") that can execute instructions contemporaneously. Although FIG. 6 shows multiple processors 602, the machine 600 may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof.
The memory 604 includes a main memory 612, a static memory 614, and a storage unit 616, all accessible to the processors 602 via the bus 640. The main memory 612, the static memory 614, and the storage unit 616 store the instructions 608 embodying any one or more of the methodologies or functions described herein. The instructions 608 may also reside, completely or partially, within the main memory 612, within the static memory 614, within a machine-readable medium 618 within the storage unit 616, within at least one of the processors 602 (e.g., within a processor's cache memory), or any suitable combination thereof, during execution thereof by the machine 600.
The I/O components 638 may include various components for receiving input, providing output, generating output, transmitting information, exchanging information, capturing measurement results, and so forth. The particular I/O components 638 included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones may include touch input devices or other such input mechanisms, while headless server machines will be unlikely to include such touch input devices. It will be understood that I/O component 638 may include many other components not shown in fig. 6. In various examples, I/O components 638 may include user output components 624 and user input components 626. User output components 624 may include visual components (e.g., a display such as a Plasma Display Panel (PDP), a Light Emitting Diode (LED) display, a Liquid Crystal Display (LCD), a projector, or a Cathode Ray Tube (CRT)), acoustic components (e.g., speakers), tactile components (e.g., vibration motors, resistance mechanisms), other signal generators, and so forth. User input components 626 may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, an electro-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instrument), tactile input components (e.g., physical buttons, a touch screen that provides the location and force of a touch or touch gesture, or other tactile input components), audio input components (e.g., a microphone), and so forth.
In other examples, the I/O components 638 may include biometric components 628, motion components 630, environmental components 632, or positioning components 634, as well as various other components. For example, biometric component 628 includes components for detecting expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measuring bio-signals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identifying a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and so forth. The motion components 630 include acceleration sensor components (e.g., accelerometers), gravity sensor components, rotation sensor components (e.g., gyroscopes).
Environmental components 632 include, for example: one or more camera devices (with still image/photo and video capabilities), lighting sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors that detect concentrations of hazardous gases or measure pollutants in the atmosphere for safety), or other components that may provide indications, measurements, or signals corresponding to the surrounding physical environment.
As for cameras, the client device 102 may have a camera system comprising, for example, front cameras on a front surface of the client device 102 and rear cameras on a rear surface of the client device 102. The front cameras may, for example, be used to capture still images and video of a user of the client device 102 (e.g., "selfies"), which may then be augmented with the augmentation data (e.g., filters) described above. The rear cameras may, for example, be used to capture still images and video in a more traditional camera mode, with these images similarly being augmented with augmentation data. In addition to front and rear cameras, the client device 102 may also include a 360° camera for capturing 360° photographs and videos.
Further, the camera system of the client device 102 may include dual rear cameras (e.g., a primary camera and a depth-sensitive camera), or even a triple, quadruple, or quintuple rear camera configuration on the front and back sides of the client device 102. For example, these multiple camera systems may include wide-angle cameras, ultra-wide-angle cameras, tele-cameras, macro-cameras, and depth sensors.
The location component 634 includes a positioning sensor component (e.g., a GPS receiver component), an altitude sensor component (e.g., an altimeter or barometer that detects barometric pressure at which altitude may be derived), an orientation sensor component (e.g., a magnetometer), and so forth.
Communication may be implemented using a wide variety of technologies. The I/O components 638 further include communication components 636 operable to couple the machine 600 to the network 620 or to the devices 622 via respective couplings or connections. For example, the communication components 636 may include a network interface component or another suitable device to interface with the network 620. In further examples, the communication components 636 may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components that provide communication via other modalities. The devices 622 may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via USB).
Moreover, the communication components 636 may detect identifiers or include components operable to detect identifiers. For example, the communication components 636 may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar codes, multi-dimensional bar codes such as Quick Response (QR) codes, Aztec codes, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, and UCC RSS-2D bar codes, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components 636, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth.
Various memories (e.g., main memory 612, static memory 614, and memory of processor 602) and storage unit 616 may store one or more sets of instructions and data structures (e.g., software) embodying or used by any one or more of the methodologies or functions described herein. The instructions (e.g., instructions 608), when executed by the processor 602, cause various operations to implement the disclosed examples.
The instructions 608 may be sent or received over the network 620 via a network interface device (e.g., a network interface component included in the communications component 636), using a transmission medium and using any one of several well-known transmission protocols (e.g., the hypertext transfer protocol (HTTP)). Similarly, the instructions 608 may be sent or received via a coupling (e.g., a peer-to-peer coupling) to a device 622 using a transmission medium.
Glossary
"carrier wave signal" refers to any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine and includes digital or analog communications signals or other intangible medium to facilitate communication of such instructions. The instructions may be transmitted or received over a network using a transmission medium via the network interface device.
"client device" refers to any machine that interfaces with a communication network to obtain resources from one or more server systems or other client devices. A client device may be, but is not limited to, a mobile phone, desktop computer, laptop computer, portable Digital Assistant (PDA), smartphone, tablet, ultrabook, netbook, laptop computer, multiprocessor system, microprocessor-based or programmable consumer electronics, game console, set-top box, or any other communication device that a user may use to access a network.
"Communication network" refers to one or more portions of a network, which may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, the network or a portion of the network may include a wireless network or a cellular network, and the coupling may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1xRTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) technology including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), the Long Term Evolution (LTE) standard, other data transfer technologies defined by various standard-setting organizations, other long-range protocols, or other data transfer technology.
"component" refers to a device, physical entity, or logic having boundaries defined by function or subroutine calls, branch points, APIs, or other techniques that provide partitioning or modularity to particular processing or control functions. The components may be combined with other components via their interfaces to perform machine processing. A component may be a packaged functional hardware unit designed for use with other components and may be part of a program that typically performs specific ones of the associated functions. The components may constitute software components (e.g., code embodied on a machine-readable medium) or hardware components. A "hardware component" is a tangible unit that is capable of performing certain operations and may be configured or arranged in some physical manner. In various example embodiments, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware components of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware component for performing certain operations described herein. The hardware components may also be implemented mechanically, electronically, or in any suitable combination thereof. For example, a hardware component may comprise dedicated circuitry or logic that is permanently configured to perform certain operations. The hardware component may be a special purpose processor such as a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC). The hardware components may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, the hardware components may include software executed by a general purpose processor or other programmable processor. Once configured by such software, the hardware components become a particular machine (or particular components of a machine) uniquely customized to perform the configured functions, and no longer a general purpose processor. It will be appreciated that it may be decided to implement a hardware component mechanically, in a dedicated and permanently configured circuit, or in a temporarily configured (e.g., configured by software) circuit for cost and time considerations. Accordingly, the phrase "hardware component" (or "hardware-implemented component") should be understood to encompass a tangible entity, i.e., an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Consider an embodiment in which the hardware components are temporarily configured (e.g., programmed) without configuring or instantiating each of the hardware components at any one time. For example, where the hardware components include a general-purpose processor that is configured by software to become a special-purpose processor, the general-purpose processor may be configured at different times to be different special-purpose processors, respectively (e.g., including different hardware components). Software configures one or more specific processors accordingly, for example to constitute specific hardware components at one time and to constitute different hardware components at different times. 
A hardware component may provide information to and receive information from other hardware components. Accordingly, the described hardware components may be considered to be communicatively coupled. Where multiple hardware components exist contemporaneously, communication may be achieved through signal transmission between or among two or more of the hardware components (e.g., over appropriate circuits and buses). In embodiments in which multiple hardware components are configured or instantiated at different times, communication between such hardware components may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware components have access. For example, one hardware component may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware component may then, at a later time, access the memory device to retrieve and process the stored output. Hardware components may also initiate communication with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of the example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented components that operate to perform one or more operations or functions described herein. As used herein, "processor-implemented component" refers to a hardware component implemented using one or more processors. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors 602 or processor-implemented components. Moreover, the one or more processors may also operate to support performance of the relevant operations in a "cloud computing" environment or as "software as a service" (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an API). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processors or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the processors or processor-implemented components may be distributed across a number of geographic locations.
"computer-readable storage medium" refers to both machine storage media and transmission media. Thus, these terms include both storage devices/media and carrier wave/modulated data signals. The terms "machine-readable medium," "computer-readable medium," and "device-readable medium" mean the same thing, and may be used interchangeably in this disclosure.
"machine-storage medium" refers to a single or multiple storage devices and media (e.g., a centralized or distributed database, and associated caches and servers) that store the executable instructions, routines, and data. Thus, the term should be taken to include, but not be limited to, solid-state memories and optical and magnetic media, including memories internal or external to the processor. Specific examples of machine storage media, computer storage media, and device storage media include: non-volatile memory including, for example, semiconductor memory devices such as erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms "machine storage medium," "device storage medium," "computer storage medium" mean the same thing and may be used interchangeably in this disclosure. The terms "machine storage medium," computer storage medium, "and" device storage medium "expressly exclude a carrier wave, a modulated data signal, and other such media, at least some of which are encompassed by the term" signal medium.
A "non-transitory computer-readable storage medium" refers to a tangible medium capable of storing, encoding, or carrying instructions for execution by a machine.
"signal medium" refers to any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible medium to facilitate communication of software or data. The term "signal medium" shall be taken to include any form of modulated data signal, carrier wave, or the like. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The terms "transmission medium" and "signal medium" refer to the same thing and may be used interchangeably in this disclosure.

Claims (20)

1. A method, comprising:
maintaining, at a back-end server providing a messaging client implemented by one or more processors of a client device, an augmented reality component identified by an augmented reality component identifier, the augmented reality component configured to modify content captured by a camera provided at the client device according to values of one or more launch attributes;
at the back-end server, providing a deep link comprising a payload for storing the augmented reality component identifier and values of the one or more launch attributes; and
at the back-end server, in response to detecting activation of the deep link from a third-party resource executing at the client device:
extracting respective values of the one or more launch attributes from the payload of the deep link and storing the extracted respective values,
causing the messaging client to launch to a camera view screen indicating that the camera is activated,
using the respective values, dynamically configuring the augmented reality component to modify content captured by the camera by applying a modification according to the respective values of the one or more launch attributes, and
loading a dynamically configured augmented reality component in the messaging client.
2. The method of claim 1, wherein storing the extracted respective values comprises: persisting the extracted respective values at a persistent storage system until an additional instance of the deep link is activated from the third-party resource executing at the client device.
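One possible reading of claim 2 is sketched below, with an in-memory Map standing in for the persistent storage system; the stored values survive until the next activation of the deep link replaces them. All names are hypothetical.

```typescript
// Hypothetical persistence sketch for claim 2: extracted values remain
// stored until another instance of the deep link is activated, at which
// point they are replaced (see also claim 9). A Map stands in for the
// persistent storage system an implementation would actually use.

const attributeStore = new Map<string, Record<string, string>>();

function persistLaunchAttributes(
  componentId: string,
  values: Record<string, string>
): void {
  attributeStore.set(componentId, values); // replaces any earlier values
}

persistLaunchAttributes("lens-123", { caption: "Hello" });
```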
3. The method of claim 1, wherein:
the value of an attribute from the one or more launch attributes is an alphanumeric string; and
the modification is to superimpose the alphanumeric string on a designated area of the content captured by the camera.
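Claim 3's string-valued attribute could be rendered as in the following sketch, which superimposes a caption on a camera frame via a 2D canvas; the coordinates of the designated area and the canvas approach are assumptions, not part of the claim.

```typescript
// Hypothetical sketch for claim 3: superimpose an alphanumeric string
// (a launch-attribute value) on a designated area of a camera frame.
// The 2D-canvas rendering path and the fixed coordinates are assumptions.

function overlayCaption(
  ctx: CanvasRenderingContext2D,
  frame: CanvasImageSource,
  caption: string
): void {
  ctx.drawImage(frame, 0, 0); // draw the captured camera content
  ctx.font = "32px sans-serif";
  ctx.fillStyle = "white";
  ctx.fillText(caption, 24, 48); // designated area: top-left corner
}
```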
4. The method of claim 1, wherein the value of an attribute from the one or more launch attributes is a reference to a binary asset.
5. The method of claim 4, wherein the reference to the binary asset indicates a persistent storage location, the method further comprising: storing, at the client device, the binary asset at the persistent storage location.
6. The method of claim 4, wherein the binary asset is a three-dimensional machine learning model.
7. The method of claim 6, wherein applying the modification comprises: executing the three-dimensional machine learning model using the content captured by the camera as an input to the three-dimensional machine learning model.
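Claims 4 through 7 cover launch attributes whose values reference binary assets, including a three-dimensional machine learning model run on the captured content. The sketch below is one possible shape for that path; the fetch step and the runModel placeholder are assumptions, and no specific ML runtime is implied by the claims.

```typescript
// Hypothetical sketch for claims 4-7: a launch-attribute value is a
// reference (URL) to a binary asset, here a 3D ML model. The asset is
// fetched and could be cached at the persistent storage location the
// reference indicates (claim 5); captured frames are then fed to the
// model as input (claim 7). runModel is a placeholder, not a real API.

async function loadBinaryAsset(assetUrl: string): Promise<ArrayBuffer> {
  const response = await fetch(assetUrl); // resolve the reference
  return response.arrayBuffer();
}

declare function runModel(model: ArrayBuffer, frame: ImageData): ImageData;

async function applyModelToFrame(
  assetUrl: string,
  frame: ImageData
): Promise<ImageData> {
  const model = await loadBinaryAsset(assetUrl);
  return runModel(model, frame); // camera content as model input
}
```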
8. The method of claim 1, further comprising: activating an augmented reality component represented by the augmented reality component identifier, the activation causing a modification identified by a respective value of the one or more launch attributes to be applied to content captured by the camera.
9. The method of claim 1, further comprising:
in response to detecting activation of a further instance of the deep link from the third-party resource executing at the client device, a payload of the further instance of the deep link including a new value:
overwriting the stored respective values with the new value, and
dynamically configuring the augmented reality component using the new value.
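Claim 9's overwrite-and-reconfigure behavior might look like the sketch below, reusing a Map-backed store as in the earlier persistence sketch; the configure stub and all names are hypothetical.

```typescript
// Hypothetical sketch for claim 9: a further activation of the deep
// link carries new values, which overwrite the stored ones before the
// AR component is dynamically reconfigured.

const store = new Map<string, Record<string, string>>();

function configure(id: string, values: Record<string, string>): void {
  console.log(`reconfiguring ${id} with`, values); // stand-in only
}

function onRepeatActivation(
  componentId: string,
  newValues: Record<string, string>
): void {
  store.set(componentId, newValues); // overwrite the stored values
  configure(componentId, newValues); // reconfigure with the new values
}

onRepeatActivation("lens-123", { caption: "Goodbye" });
```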
10. The method of claim 1, further comprising:
detecting a request from the messaging client to transmit content captured by the camera and modified by the dynamically configured augmented reality component to a further computing device; and
transmitting the content captured by the camera and modified by the dynamically configured augmented reality component to the further computing device.
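One plausible realization of claim 10 is sketched below: the modified content is posted to a further computing device over HTTP. The endpoint, encoding, and Blob payload are assumptions; an implementation might instead stream frames or send a recorded clip.

```typescript
// Hypothetical sketch for claim 10: transmit camera content that has
// been modified by the dynamically configured AR component to another
// device. The HTTP POST transport and endpoint are assumptions.

async function sendModifiedContent(
  recipientUrl: string,
  modifiedContent: Blob
): Promise<void> {
  await fetch(recipientUrl, {
    method: "POST",
    body: modifiedContent, // e.g. an encoded image or video clip
  });
}
```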
11. A system, comprising:
one or more processors; and
a non-transitory computer-readable storage medium comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
maintaining, at a back-end server that provides a messaging client implemented by one or more processors of a client device, an augmented reality component identified by an augmented reality component identifier, the augmented reality component configured to modify content captured by a camera provided at the client device that hosts the messaging client according to values of one or more launch attributes;
at the back-end server, providing a deep link comprising a payload for storing the augmented reality component identifier and values of the one or more launch attributes; and
at the back-end server, in response to detecting activation of the deep link from a third-party resource executing at the client device, performing the following:
extracting respective values of the one or more launch attributes from the payload of the deep link and storing the extracted respective values,
causing the messaging client to launch to a camera view screen indicating that the camera is activated,
using the respective values, dynamically configuring the augmented reality component to modify content captured by the camera by applying modifications according to the respective values of the one or more launch attributes, and
loading a dynamically configured augmented reality component in the messaging client.
12. The system of claim 11, wherein storing the extracted respective values comprises: persisting the extracted respective values at a persistent storage system until an additional instance of the deep link is activated from the third-party resource executing at the client device.
13. The system of claim 11, wherein:
the value of an attribute from the one or more launch attributes is an alphanumeric string; and
the modification is to superimpose the alphanumeric string on a designated area of the content captured by the camera.
14. The system of claim 11, wherein the value of an attribute from the one or more launch attributes is a reference to a binary asset.
15. The system of claim 14, wherein the reference to the binary asset indicates a persistent storage location, the operations further comprising: storing, at the client device, the binary asset at the persistent storage location.
16. The system of claim 14, wherein the binary asset is a three-dimensional machine learning model.
17. The system of claim 16, wherein applying the modification comprises: executing the three-dimensional machine learning model using the content captured by the camera as an input to the three-dimensional machine learning model.
18. The system of claim 11, the operations further comprising: activating an augmented reality component represented by the augmented reality component identifier, the activation causing a modification identified by a respective value of the one or more launch attributes to be applied to content captured by the camera.
19. The system of claim 11, the operations further comprising:
in response to detecting activation of a further instance of the deep link from the third-party resource executing at the client device, a payload of the further instance of the deep link including a new value:
overwriting the stored respective values with the new value, and
dynamically configuring the augmented reality component using the new value.
20. A machine-readable non-transitory storage medium having instruction data executable by a machine to cause the machine to perform operations comprising:
maintaining, at a back-end server that provides a messaging client implemented by one or more processors of a client device, an augmented reality component identified by an augmented reality component identifier, the augmented reality component configured to modify content captured by a camera provided at the client device according to values of one or more launch attributes;
at the back-end server, providing a deep link comprising a payload for storing the augmented reality component identifier and values of the one or more launch attributes; and
at the back-end server, in response to detecting activation of the deep link from a third-party resource executing at the client device, performing the following:
extracting respective values of the one or more launch attributes from the payload of the deep link and storing the extracted respective values,
causing the messaging client to launch to a camera view screen indicating that the camera is activated,
using the respective values, dynamically configuring the augmented reality component to modify content captured by the camera by applying modifications according to the respective values of the one or more launch attributes, and
loading a dynamically configured augmented reality component in the messaging client.
CN202180041688.6A 2020-06-10 2021-06-03 Dynamic augmented reality assembly Pending CN115699703A (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US202063037559P 2020-06-10 2020-06-10
US63/037,559 2020-06-10
US16/948,268 2020-09-10
US16/948,268 US11238660B2 (en) 2020-06-10 2020-09-10 Dynamic augmented reality components
PCT/US2021/035630 WO2021252256A1 (en) 2020-06-10 2021-06-03 Dynamic augmented reality components

Publications (1)

Publication Number Publication Date
CN115699703A 2023-02-03

Family

ID=78825838

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180041688.6A Pending CN115699703A (en) 2020-06-10 2021-06-03 Dynamic augmented reality assembly

Country Status (5)

Country Link
US (3) US11238660B2 (en)
EP (1) EP4165608A1 (en)
KR (1) KR20230020525A (en)
CN (1) CN115699703A (en)
WO (1) WO2021252256A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11238660B2 (en) 2020-06-10 2022-02-01 Snap Inc. Dynamic augmented reality components
US11360733B2 (en) 2020-09-10 2022-06-14 Snap Inc. Colocated shared augmented reality without shared backend
US20230123518A1 (en) * 2021-10-14 2023-04-20 Meta Platforms, Inc. Motion-based generation of applications in virtual reality and augmented reality systems
US20230214913A1 (en) * 2021-12-30 2023-07-06 Snap Inc. Product cards provided by augmented reality content generators
WO2023129999A1 (en) * 2021-12-30 2023-07-06 Snap Inc. Api to provide product cards
US20230214912A1 (en) * 2021-12-30 2023-07-06 Snap Inc. Dynamically presenting augmented reality content generators based on domains
US20230334787A1 (en) * 2022-04-19 2023-10-19 Snap Inc. Augmented reality experiences with dynamically loadable assets
US20230386144A1 (en) * 2022-05-27 2023-11-30 Snap Inc. Automated augmented reality experience creation system
US20230410440A1 (en) * 2022-06-21 2023-12-21 Snap Inc. Integrating augmented reality experiences with other components
US11943265B2 (en) * 2022-08-03 2024-03-26 Tmrw Foundation Ip S. À R.L. Videoconferencing meeting slots via specific secure deep links
US20240048599A1 (en) * 2022-08-03 2024-02-08 Tmrw Foundation Ip S. À R.L. Videoconferencing meeting slots via specific secure deep links

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10572215B1 (en) 2014-03-31 2020-02-25 Amazon Technologies, Inc. Extendable architecture for augmented reality system
US20170289338A1 (en) 2016-03-31 2017-10-05 Microsoft Technology Licensing, Llc Enabling stateful dynamic links in mobile applications
US10372784B2 (en) * 2017-02-08 2019-08-06 Documensions, Inc. Generating rich digital documents from limited instructional data
US10841249B2 (en) * 2017-10-02 2020-11-17 Samsung Electronics Co., Ltd. System and method for bot platform
US10712811B2 (en) 2017-12-12 2020-07-14 Facebook, Inc. Providing a digital model of a corresponding product in a camera feed
WO2019212908A1 (en) 2018-04-30 2019-11-07 Krikey, Inc. Networking in mobile augmented reality environments
US10855978B2 (en) 2018-09-14 2020-12-01 The Toronto-Dominion Bank System and method for receiving user input in virtual/augmented reality
US11770579B2 (en) * 2018-12-21 2023-09-26 Streamlayer, Inc. Method and system for providing interactive content delivery and audience engagement
US11238660B2 (en) 2020-06-10 2022-02-01 Snap Inc. Dynamic augmented reality components

Also Published As

Publication number Publication date
US11704880B2 (en) 2023-07-18
US11238660B2 (en) 2022-02-01
US20220130119A1 (en) 2022-04-28
WO2021252256A1 (en) 2021-12-16
KR20230020525A (en) 2023-02-10
EP4165608A1 (en) 2023-04-19
US20230306696A1 (en) 2023-09-28
US20210390781A1 (en) 2021-12-16

Similar Documents

Publication Publication Date Title
US11238660B2 (en) Dynamic augmented reality components
US11558325B2 (en) Generating interactive messages with asynchronous media content
US11887237B2 (en) Dynamic composite user identifier
US11716301B2 (en) Generating interactive messages with asynchronous media content
KR20210143289A (en) 3D avatar plugin for 3rd party games
US11805084B2 (en) Bidirectional bridge for web view
US11743340B2 (en) Deep linking to augmented reality components
US11579847B2 (en) Software development kit engagement monitor
US20210392141A1 (en) Stated age filter
US11870745B1 (en) Media gallery sharing and management
CN116710895A (en) Body UI for augmented reality components

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination