US20210183163A1 - Augmented reality remote authoring and social media platform and system - Google Patents
Augmented reality remote authoring and social media platform and system
- Publication number
- US20210183163A1 (application Ser. No. 17/126,611)
- Authority
- US
- United States
- Prior art keywords
- content
- user
- environment
- coordinates
- point clouds
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/213—Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/216—Input arrangements for video game devices characterised by their sensors, purposes or types using geographical information, e.g. location of the game device or player using GPS
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
- A63F13/63—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
- A63F13/65—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/06—Ray-tracing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/06—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for block-wise or stream coding, e.g. DES systems or RC4; Hash functions; Pseudorandom sequence generators
- H04L9/0618—Block ciphers, i.e. encrypting groups of characters of a plain text message using fixed encryption transformation
- H04L9/0637—Modes of operation, e.g. cipher block chaining [CBC], electronic codebook [ECB] or Galois/counter mode [GCM]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/08—Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
- H04L9/0894—Escrow, recovery or storing of secret information, e.g. secret key escrow or cryptographic key storage
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/32—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
- H04L9/3236—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials using cryptographic hash functions
- H04L9/3239—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials using cryptographic hash functions involving non-keyed hash functions, e.g. modification detection codes [MDCs], MD5, SHA or RIPEMD
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/32—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
- H04L9/3297—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials involving time stamps, e.g. generation of time stamps
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/56—Particle system, point based geometry or rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2215/00—Indexing scheme for image rendering
- G06T2215/12—Shadow map, environment map
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2004—Aligning objects, relative positioning of parts
-
- H04L2209/38—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/10—Network architectures or network communication protocols for network security for controlling access to devices or network resources
- H04L63/102—Entity profiles
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/50—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols using hash chains, e.g. blockchains or hash trees
Definitions
- augmented reality (AR)
- For markerless AR, 3D content largely has to be authored locally; otherwise, content is authored via what is known as marker-based AR.
- Spatial anchors shared between users via a mutual server provide a more integrated and persistent experience, but that method lacks consistent accuracy in the placement of content.
- spatial 3D mapping of environments brings inherent user, data, and property privacy and security issues that have yet to be solved at a broad scale.
- FIG. 1 is an overview of an embodiment of a system illustrating a game engine, a cloud server, a real-world environment, an augmented reality environment, many points within the augmented reality environment, and a point cloud.
- FIG. 2 is a process map for an embodiment of the invention starting with an AR camera activation to generating and saving point clouds to the server.
- FIG. 3 is a process map for an embodiment of the invention that determines an exact location for a client device and whether there are any point clouds on the server for that location.
- FIG. 4 demonstrates how multiple point clouds merge to create a larger point cloud mesh in an embodiment of the invention.
- FIG. 5 illustrates one method of using image databases to calculate a client device's location and the difference between local point clouds and global point clouds, according to an embodiment.
- FIG. 6 demonstrates how an embodiment of the system makes virtual objects appear real.
- FIG. 7 describes a method that an author follows to upload objects into the system, according to an embodiment.
- FIG. 8 is a not-to-scale rendering of one state map and a general breakdown of tiles.
- FIG. 9 demonstrates a method of determining the optimum location within a tile for users to automatically post content, according to an embodiment.
- FIG. 10 demonstrates a breakdown of areas from public to private and the security provided to private property owners.
- FIG. 11 demonstrates an embodiment of blockchain and a secure contract.
- FIG. 12 is a general topographic map which would eventually have point clouds overlay as users digitally map the environment in an embodiment of the invention.
- This invention describes methods to author content remotely and locally, as well as the methods and systems needed to facilitate such an endeavor: a shared 3D map of the environment; standardization of maps via conversion of local Euclidean coordinates to global geodesic coordinates; and a system for managing spatial data ownership with regard to viewing and authoring content within the bounds of public and/or private digital-spatial property.
- a conventional augmented reality (AR) system stores objects 105 in a local library to display for a user 106 on the user's phone or through specially designed glasses 102 .
- the conventional system triggers displaying the objects 105 based on a target.
- the present invention improves on the conventional system.
- Users 106 of the present invention can collaborate to generate a digital globally-mapped, persistent 3D environment 101 .
- users remotely author content 701 to manually or automatically display locally or globally.
- users 106 protect their content 105 through blockchain-encrypted 1101 smart contracts 1102 .
- the system 103 also referenced herein as the game engine
- Another embodiment allows users 106 to filter content 105 displayed to them within the AR environment 101 .
- An embodiment of this invention includes a game engine 103 in conjunction with an integrated cloud-server system 104 .
- the gaming engine 103 communicates with the cloud-server system 104 through a software development kit (SDK) 107 or a Graphics Development Kit (GDK) 107 .
- the gaming engine SDK 103 supports the construction and management of 3D environments 111 .
- a gaming engine SDK 103 or application programming interface (API) 103 provides a rendering engine, a physics engine, collision detection, sound, scripting, animation, AI, networking, streaming, memory management, threading, localization support, scene graph, and AR camera tracking.
- the present invention is based on a 3D point cloud environment 111 (AR Cloud) that is generated by a user 106 by utilizing the cameras 108 and sensors on the user's 106 client device 102 .
- sensors may include, but are not limited to, gyroscopes, accelerometers, Global Positioning System (GPS), Bluetooth Low Energy (BLE), and WiFi.
- an augmented reality system 103 merges a user's 106 locally created 3D cloud environment 111 with a base digital topological map 1201 of the world constructed out of polygon-meshes and planes.
- the base map includes 3D buildings, such as skyscrapers, but the buildings have the option of being edited with point clouds 111 and meshes.
- the merging 403 of locally generated content 105 with the base map creates a 1:1 scale digital representation of the real, physical world while users explore and map the world.
- the system 103 stores the topological map 1201 on a series of linked servers 104 or a cloud based server 104 .
- the invention divides the base map into 10 meter × 10 meter sections, referred to herein as tiles 801 .
- the user-generated 3D point cloud environments 111 are linked to the base map based on the tile 801 where the user 106 is located.
- the system 103 tracks the user's 106 position in the 3D environment.
- the system 103 downloads digital content to the user's device based on the tile 801 in which the user is located instead of downloading the entire world map.
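The tile-based download described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the grid scheme, function names, and the flat-earth degree-to-meter conversion are all assumptions.

```python
import math

# Hypothetical tile indexing: the base map is split into 10 m x 10 m
# tiles, and a tile index is derived by projecting latitude/longitude
# onto a local metric grid.
METERS_PER_DEG_LAT = 111_320.0  # approximate; varies slightly with latitude
TILE_SIZE_M = 10.0

def tile_id(lat, lon):
    """Return the (row, col) index of the 10 m tile containing a point."""
    meters_per_deg_lon = METERS_PER_DEG_LAT * math.cos(math.radians(lat))
    row = int(lat * METERS_PER_DEG_LAT // TILE_SIZE_M)
    col = int(lon * meters_per_deg_lon // TILE_SIZE_M)
    return row, col

def tiles_in_radius(lat, lon, radius_m):
    """All tile indices within radius_m of the user's tile -- the set a
    client would download instead of the whole world map."""
    r = int(math.ceil(radius_m / TILE_SIZE_M))
    row, col = tile_id(lat, lon)
    return [(row + dr, col + dc)
            for dr in range(-r, r + 1) for dc in range(-r, r + 1)]
```

A client at the origin requesting a 10 m radius would fetch a 3 × 3 block of tiles rather than the global map.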
- the system 103 makes a GPS API call 210 to facilitate localization of the user.
- the user's device 102 generates a geo-reference point 211 corresponding to its location, through a geo-reference system including, but not limited to, GPS, WiFi localization, LiFi localization, cellular phone localization, BLE beacons, or a combination thereof.
- the system 103 attempts to merge the user's 3D environment point cloud 111 to any overlapping previously saved point clouds 111 in tiles 801 near the user's location because the geo-reference point might not be accurate.
- Features are matched 307 between the user point cloud and the system point clouds 111 .
- the system merges the user's point cloud 111 with the system point cloud 111 that matches the most features.
- the merged point cloud 402 consists of two or more overlapping point clouds 111 .
- the system 103 saves the merged point cloud 402 the server as a subordinate point cloud 402 under the identification number of the old point cloud 111 .
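The feature-matching merge of overlapping point clouds can be sketched as below. This is a deliberately simplified stand-in for real feature matching: the distance threshold and names are illustrative assumptions, not the patent's method.

```python
# An incoming point that lies within a small threshold of a stored point
# is treated as the same feature and dropped; unmatched points are
# appended to form the merged cloud.
MATCH_THRESHOLD_M = 0.05  # treat points within 5 cm as the same feature

def merge_point_clouds(stored, incoming, threshold=MATCH_THRESHOLD_M):
    """Return the stored cloud plus incoming points with no nearby match."""
    merged = list(stored)
    for p in incoming:
        duplicate = any(
            sum((a - b) ** 2 for a, b in zip(p, q)) < threshold ** 2
            for q in stored)
        if not duplicate:
            merged.append(p)
    return merged
```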
- the point clouds are broken into tile 801 groups where the system 103 assigns a unique tile identification number to each voxel within the geo-coordinates of the tile 801 .
- the tile 801 contains the voxels and geo-coordinates from many different point clouds 111 that exist within the tile 801 , optimizing the download process of point clouds 111 to a client device 102 .
- One embodiment of the invention utilizes an AR camera 108 on the client device 102 .
- the AR camera 108 provides a viewport into the digital 3D environment 101 through raycasting provided by the system 103 .
- the system 103 listens for and registers which 3D objects 105 are being virtually touched and interacted with through the AR camera's 108 field of vision (FOV) 109 .
- the system 103 makes a GPS call 210 to determine the device's location 211 .
- the system 103 searches for any pre-existing point clouds 111 geo-tagged with IDs within a predetermined geo-coordinated radius.
- the system 103 downloads 305 any existing point clouds 111 and their corresponding tiles 801 within the radius.
- the system 103 uses third-party Simultaneous Localization and Mapping (SLAM) and relocalization APIs to relocalize the client device within the pre-existing point cloud environment 111 .
- the system 103 relocalizes the client device 102 by matching monochrome or colored features in the image frames between the preexisting point clouds 111 and the current image frame 101 of the camera 102 feed.
- the system also utilizes image processing APIs and computer vision APIs to relocalize the client device within the pre-existing point cloud environment 111 .
- a convolutional neural network (CNN) 901 compares the provided, retrieved, and calculated depths of the feature points 110 for image frames captured under different lighting.
- the CNN 901 compares the 3D shapes of the point clouds 111 generated in the image frames to enhance the localization process and provide better accuracy in low and contrasting light situations between the current and the saved camera frame.
- One method of achieving the CNN comparison is through an object recognition API.
- the system 103 may capture additional point clouds 111 that did not already exist in the pre-existing point cloud environment 111 if relocalization is successful.
- the system 103 differentiates by feature-matching points 110 already in the pre-existing point cloud 111 with newly generated points 110 and eliminates the newly-generated points 110 that matched with points 110 already in the pre-existing point cloud 111 .
- SLAM APIs localize the camera 108 in the spatial environment, while simultaneously generating a 3D point cloud map 111 of the spatial environment.
- a Mono-SLAM API or a Stereo-SLAM API calculates depth through triangulation or through client device-equipped hardware that calculates depth data per pixel.
- Semantic Segmentation algorithms 205 run concurrently on the point clouds 111 while SLAM algorithms generate the point clouds. Semantic Segmentation 205 identifies point clouds 111 that compose a specific object and tracks those point clouds 111 within the scene.
- One method the system 103 uses is an Iterative Closest Point (ICP) algorithm to match up nearby point clouds 111 , identify the point cloud object using image or object recognition APIs trained to recognize such objects, construct a bounding box around the identified object, and track the object in the 3D environment across camera image frames.
- Semantic segmentation omits identified point clouds 111 that should not be saved in the 3D environment, such as dynamic objects. Point clouds 111 that should be saved include stationary and static objects identified through machine learning algorithms, since they are likely to remain in the scene.
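The filtering of dynamic objects before saving can be sketched as below. The class names and data shape are illustrative assumptions; a real pipeline would take these labels from the segmentation model.

```python
# Each point carries a semantic label; points belonging to dynamic
# classes are dropped before the cloud is saved to the 3D environment.
DYNAMIC_CLASSES = {"person", "vehicle", "animal"}

def filter_dynamic(labeled_points):
    """Keep only points whose semantic label names a static object."""
    return [point for point, label in labeled_points
            if label not in DYNAMIC_CLASSES]
```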
- the system 103 maps and localizes to the 3D point cloud scene 111 at an appropriate frequency as the user traverses the environment.
- the client sensor data determines the update frequency.
- One such sensor data point is how fast the user is moving within the environment. Faster movement may be sensed and this information may be used to increase the update frequency.
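A toy policy for the speed-dependent update rate is sketched below; the constants and the inverse-speed relation are illustrative assumptions, not values from the patent.

```python
# Faster movement shortens the interval between relocalization and
# mapping passes, down to a fixed floor.
def relocalization_interval(speed_m_s, base_s=2.0, floor_s=0.25):
    """Seconds to wait before the next map/localize pass."""
    if speed_m_s <= 0:
        return base_s
    return max(floor_s, base_s / (1.0 + speed_m_s))
```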
- This method localizes the client device and the local environment within the global environment.
- the system 103 performs relocalization to reduce drift so that geo-coordinates remain synced.
- spatial coordinates 110 of the environment are oriented based on where the client device was positioned when the session began.
- the system 103 automatically sets the starting position as (0,0,0) 203 .
- the local coordinates may be converted into global coordinates in order to be consistent with other point clouds 111 that are generated by other users.
- the global environment coordinates are based on real-life GPS coordinates. The exact geo-coordinates of the device may be determined in order to convert the local coordinates to global coordinates.
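The conversion of session-local coordinates (origin at (0,0,0)) into global geodetic coordinates can be sketched as below. The axis conventions, names, and flat-earth approximation are assumptions for illustration; a production system would use a proper geodetic library.

```python
import math

# The session origin is pinned to the device's geo-reference point, and
# each local offset in meters is converted to latitude/longitude/altitude.
METERS_PER_DEG_LAT = 111_320.0  # approximate

def local_to_geodetic(origin_lat, origin_lon, origin_alt,
                      x_east, y_up, z_north):
    """Convert a session-local offset in meters to geodetic coordinates."""
    lat = origin_lat + z_north / METERS_PER_DEG_LAT
    lon = origin_lon + x_east / (
        METERS_PER_DEG_LAT * math.cos(math.radians(origin_lat)))
    alt = origin_alt + y_up
    return lat, lon, alt
```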
- the system 103 compares the image frames of the camera feed to those in an already-tagged geospatial database to globally localize the local point cloud 111 of a client device 102 .
- the geospatial database includes images of the same scene captured during different times of day, different weather conditions, and different seasons.
- the system 103 calculates the transform 212 (difference in distance and offset of orientation) by comparing image frames that are geo-tagged to those that are not. This calculates the geo-coordinates of the input images in real-time.
- One method of obtaining a geo-tagged database utilizes a pre-existing Streetview API 501 .
- the system 103 searches for images in a Streetview dataset 502 that have a geo-coordinate 503 within the radius of the user.
- the system 103 identifies an image 504 , using feature matching, that has the most features in common with the current device frame 505 .
- the system 103 calculates the difference in orientation of the two images' feature points 110 by using a homography matrix (perspective transform), extended Kalman filters, depth data, and epipolar geometry through triangulation 506 .
- the system 103 then calculates, with 6 degrees of freedom, the transformation 507 in position between the two images 504 , 505 using the orientation and depth data.
- the system 103 finds the latitude, longitude, and altitude of the current image frame by multiplying the transformation value 507 by the geo-coordinates. Altitude is used if provided as a data point within the geotagged image 504 . Once the current geo-coordinates are found, the system 103 adds the transformation 507 to the rest of the coordinates in the local environment to become global coordinates 213 .
- Meshing 403 occurs after global localization.
- Point clouds 111 are converted into polygon meshes 402 by connecting nearby point clouds 111 into planes 601 .
- Points 110 that reside on the same plane, with a set threshold of variance, are connected at the edges with other nearby points 110 .
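The coplanarity test used when connecting points into planes can be sketched as a point-to-plane distance check. The plane representation and tolerance value are illustrative assumptions.

```python
# A point joins a plane when its perpendicular distance to that plane
# stays within a variance threshold.
def on_plane(point, plane_point, plane_normal, tolerance=0.02):
    """True if `point` lies within `tolerance` meters of the plane
    through `plane_point` with unit normal `plane_normal`."""
    distance = sum(n * (p - q)
                   for n, p, q in zip(plane_normal, point, plane_point))
    return abs(distance) <= tolerance
```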
- the system 103 converts meshes 402 into game-objects 105 and adds game engine colliders 602 to the objects 105 .
- Colliders are scripts associated with a game-object that recognize when a second game-object 105 contacts the first game-object 105 . Collider scripts may trigger an event such as preventing the first game-object from moving past the second game-object 105 .
- Game-objects 105 allow a 3D object 105 to be interacted with and programmed within the game engine environment 103 .
- the system 103 adds colliders to objects to make other 3D objects/game-objects interact with that game-object realistically. For example, objects are not able to pass through other objects that have colliders.
- the system 103 recognizes when two otherwise identical point clouds 111 at the same location differ because current weather conditions, such as sunshine, rain, or snow, differ from those when the environment was scanned.
- the system 103 collects this data from video feeds, images, or from synthetic data.
- the system 103 uses the images' geo-location data from when they were captured, determined by a basic localization method such as GPS, or determined via triangulation and transformation calculations between other localized video feeds and images.
- This synthetic data may include images and videos (consecutive image frames) taken as input into a pre-trained GAN (Generative Adversarial Network). Deep learning algorithms (a neural network) manipulate the RGB and alpha values of the pixels of the input image frames, while adversarial deep learning algorithms, such as a second neural network, concurrently evaluate the effectiveness of each manipulated image. The GAN can thereby reproduce accurate copies of the original image frame, making each copy realistically appear to be of the same scene as the original, only during a different time of day, lighting level, or season of the year.
- a game engine 103 uses raycasting to calculate and display what is in the AR camera's FOV 109 .
- the game engine 103 uses vector calculations to track a vector or “beam of light” from each pixel in the frame to the 3D environment.
- the environment is programmed so that a vector does not pass through a game-object if the object is programmed not to allow a vector to pass through its surface 603 .
- the engine 103 will not render 604 any object behind the game-object 105 that the vector intersects, as that object is occluded from the AR camera's sight 101 .
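The per-ray occlusion rule can be sketched as below. Hits are modeled as (object, distance-along-ray) pairs; the data shape and names are illustrative assumptions rather than the engine's actual raycast API.

```python
# Of all objects a pixel's ray intersects, only the nearest is rendered;
# everything behind it is culled.
def visible_object(hits):
    """Return the nearest hit object for one ray, or None on a miss."""
    if not hits:
        return None
    return min(hits, key=lambda hit: hit[1])[0]
```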
- Meshing may be performed in one of at least two locations.
- the first location is on the client device utilizing a game engine API or SDK.
- Another location is in the cloud utilizing a game engine API or SDK.
- environments that are about to be downloaded to the client are meshed first.
- the system 103 places an AR camera object at the digital 3D environment geo-location and orientation when the client device is localized in the global environment. This method also occurs when the client device is located in a local point cloud environment 111 . This method enables users 106 to view the 3D content that is placed at the user's location within the 3D environment.
- the system 103 handles the occlusion, scaling, rendering, and perceived movement of the 3D content in the environment, based on how the content is scripted or programmed to behave.
- Point clouds 111 and meshes 402 are not rendered within the user's AR camera 108 FOV 109 .
- the meshes 402 are present in the scene but their alpha value or visibility value is set to zero. This makes the meshes invisible within the scene. Users only see content rendered on the screen and not the background 3D reconstruction of the scene. This seamlessly integrates the user's AR experience with their real-world environment.
- the system 103 saves the point cloud 111 to the 3D environment at the end of the AR camera 108 session.
- 3D point clouds 111 stored in the spatial database on the cloud server system represent the 3D environment.
- the system 103 stores the point clouds 111 with their monochrome values as well as their color coordinate values (e.g., RGB). Semantic segmentation identifies undesirable point clouds.
- the system 103 omits undesirable point clouds from the upload to the 3D environment.
- the system 103 attaches relevant metadata to the point clouds 111 then uploads the point clouds 111 to the server 104 .
- Relevant metadata may include, but is not limited to, the client device ID, the transformed global geo-coordinates, and a timestamp.
- the point clouds 111 are organized in a data structure by geolocation ID.
- the system 103 parses the geo-location coordinates from the server and populates the point clouds 111 in the 3D environment 111 based on the metadata whenever the point clouds 111 are rendered or otherwise used in the system 103 .
- the previously described 3D environment system 103 also serves as a framework for user content generation 701 .
- an author 704 uploads an asset 705 to the system 103 or creates an asset 705 within the system 103 .
- the asset 705 (e.g., file or object) includes, but is not limited to, a 3D object or asset file, a 2D or 3D image file, a 2D GIF, a text file, an audio file, an animated asset, a compatible native program file, an image or video captured in the system, a video converted to GIF, or text typed into a dialog.
- the system 103 converts the asset 705 into a 3D object 105 compatible with display in the system 103 .
- the system 103 calculates the ambient lighting level and direction of the client device 102 using native APIs 703 .
- the sun's 707 movement in the sky is modeled and represented as an artificial light source 707 within the system 103 so that shadows 711 are cast over the meshes 402 in the 3D environment 101 .
- This provides semi-realistic shadows 711 on user-generated content 105 .
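A deliberately crude model of the sun as a directional light source is sketched below: solar elevation is estimated from the local solar hour alone, ignoring latitude and season. This toy formula is purely an illustrative assumption, not the patent's sun model.

```python
import math

# Elevation is 0 degrees at 6:00 and 18:00 and peaks at 90 degrees at
# solar noon; negative (night-time) values are clamped to zero.
def sun_elevation_deg(local_solar_hour):
    """Approximate solar elevation for an artificial light source."""
    return max(0.0, 90.0 * math.sin(
        math.pi * (local_solar_hour - 6.0) / 12.0))
```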
- the system 103 displays the 3D object 105 on the ground of the environment in the user's FOV 109 .
- the user may manipulate the object 105 in different ways in different embodiments, e.g., by using one or two fingers on the device's screen to rotate, elevate, scale, or move the object laterally or vertically in the scene.
- the system 103 may generate a shadow 711 below the object 105 to indicate the object's position.
- the system 103 adds colliders on the object's meshes 402 so that the object 105 cannot be placed inside of objects in the environment 112 .
- the system 103 saves metadata 709 for the object 105 including, but not limited to, the object's geo-coordinates within the scene, orientation, scale, elevation, the user's creator ID, post settings, visibility settings, permissions, options, post ID, tags, description, timestamp, and expiration time stamp.
- the object 105 and its relevant content 708 and spatial data 709 may be sent to the system 103 to populate a corresponding polygon-mesh environment 101.
- Post content data 709 may be defined as user-generated content 105 encapsulated in a data object.
- the post content data and metadata are saved to the server database.
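The post data object described above might be sketched as follows. This is a minimal illustration, not the patent's actual schema: every field name and default here is an assumption drawn from the metadata list (geo-coordinates, orientation, scale, creator ID, visibility, tags, timestamps).

```python
import time
from dataclasses import dataclass, field, asdict

@dataclass
class PostMetadata:
    """Hypothetical sketch of the post content data and metadata record."""
    post_id: str
    creator_id: str
    geo_coordinates: tuple                 # (latitude, longitude, altitude)
    orientation: tuple = (0.0, 0.0, 0.0)   # yaw, pitch, roll in degrees
    scale: float = 1.0
    elevation: float = 0.0
    visibility: str = "public"
    tags: list = field(default_factory=list)
    description: str = ""
    timestamp: float = field(default_factory=time.time)
    expiration_timestamp: float = None     # None means a non-ephemeral post

    def to_record(self) -> dict:
        """Serialize the post for storage in the server database."""
        return asdict(self)
```

A caller would construct the record and upload it, e.g. `PostMetadata("p1", "u1", (40.0, -74.0, 10.0)).to_record()`.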
- a check is made to verify if the post 710 is set as ephemeral once the post data 709 is uploaded to the server's database 104 .
- the expiration date is verified if the post 710 is set as ephemeral.
- the system 103 notifies the distributed clients 712 of a blockchain if the author 704 chooses to notify clients 712 .
- the system 103 compiles a list of users 106 and corresponding client devices 102 to which to send the notification by checking the author's 704 visibility setting for the post 710 .
- This list of users 106 sets who has access to view the post.
- the list may be compiled by checking the author's 704 visibility graph database.
- the system 103 sends out a notification to associated client devices 102 based on the notification settings of the individual users 106 .
- the system 103 then adds the post ID 709 and post geolocation 709 to the users' 106 “permission-to-see” database, i.e., a database of all the content the user has permission to see.
- the user's client 102 sends out a request to their "permission-to-see" database at a user-set frequency.
- the system 103 retrieves any and all posts 710 that the user can view within their visibility radius.
- the visibility radius is an adjustable radius of the rendered tiles 801 of the digital environment around the user 106 .
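Selecting which tiles to render for a given visibility radius can be sketched as below. This assumes a locally projected coordinate frame in meters and the 10 m × 10 m tile size described later in this disclosure; the function names are illustrative.

```python
import math

TILE_SIZE_M = 10.0  # 10 m x 10 m base-map tiles

def tile_id(x_m: float, y_m: float) -> tuple:
    """Map a projected position (meters) to its tile's integer index."""
    return (math.floor(x_m / TILE_SIZE_M), math.floor(y_m / TILE_SIZE_M))

def tiles_in_radius(x_m: float, y_m: float, radius_m: float) -> set:
    """All tile indices whose nearest edge lies within the visibility radius."""
    tiles = set()
    cx, cy = tile_id(x_m, y_m)
    span = int(math.ceil(radius_m / TILE_SIZE_M))
    for i in range(cx - span, cx + span + 1):
        for j in range(cy - span, cy + span + 1):
            # Clamp the user position to tile (i, j) to find its nearest point.
            nx = min(max(x_m, i * TILE_SIZE_M), (i + 1) * TILE_SIZE_M)
            ny = min(max(y_m, j * TILE_SIZE_M), (j + 1) * TILE_SIZE_M)
            if math.hypot(nx - x_m, ny - y_m) <= radius_m:
                tiles.add((i, j))
    return tiles
```

Only the tiles returned by `tiles_in_radius` would need their point clouds and content downloaded, rather than the entire world map.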
- the system 103 retrieves and renders any content 105 at the content's 105 appropriate geolocation if the post 710 is viewable in the “permission-to-see” database.
- the expiration timestamps of all posts 710 are checked against the current timestamp with every "permission-to-see" database request.
- the system 103 removes any posts 710 from the database 104 and animatedly fades the post 710 to invisible if the current timestamp is equal to or greater than the expiration timestamp.
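The ephemeral-post expiration check can be sketched as a pure function over the stored records. This is an illustrative sketch only; the dictionary keys are assumptions, and the animated fade-out would be handled by the rendering layer for each returned expired post.

```python
import time

def purge_expired(posts: list, now: float = None) -> tuple:
    """Split posts into (live, expired) by comparing each expiration
    timestamp against the current time, as done on every
    "permission-to-see" database request."""
    now = time.time() if now is None else now
    live, expired = [], []
    for post in posts:
        exp = post.get("expiration_timestamp")
        # Posts with no expiration are non-ephemeral and always kept.
        if exp is not None and now >= exp:
            expired.append(post)   # removed from the database and faded out
        else:
            live.append(post)
    return live, expired
```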
- the system 103 opens a 2D user interface (UI) window fragment if the user interacts with rendered content 105 by tapping or touching the content.
- the system 103 displays or posts content 105 and other relevant data on the screen in a templated form for the user to interact with.
- the user 106 has the ability to leave a comment on the rendered content 105 .
- the original author 704 is credited as the owner if other users 106 collaborate on the content creation 701 .
- the other contributors are added as co-creators in the content's metadata 708 .
- the co-creators' 106 content visibility settings are overridden by the owner's 704 visibility settings if a conflict arises in the settings.
- Each individual piece displays who contributed to the piece of content 105 when other users 106 interact with collaborated content 105 .
- An embodiment of the invention permits users 106 to remotely generate content.
- the previously described transforming local geo-coordinates to global geo-coordinates enables the system 103 to handle remote content generation.
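The local-to-global coordinate transform that underpins remote content generation can be illustrated with a planar toy example. This is a simplified sketch under stated assumptions: local session coordinates in meters with the origin at the device's starting pose, a single yaw heading, and a global frame expressed as meters east/north of a reference point. Real systems use full 3D poses and geodesic coordinates.

```python
import math

def local_to_global(local_xy, origin_xy_m, heading_deg):
    """Rotate a local session coordinate by the device's initial heading,
    then translate by the session origin's global position (meters
    east/north). A planar sketch of the local-to-global conversion."""
    x, y = local_xy
    theta = math.radians(heading_deg)
    gx = x * math.cos(theta) - y * math.sin(theta)
    gy = x * math.sin(theta) + y * math.cos(theta)
    ox, oy = origin_xy_m
    return (gx + ox, gy + oy)
```

For example, a point one meter ahead of a device that started at global (100, 200) facing 90 degrees maps to (100, 201) in the global frame.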
- One embodiment of remote user content generation involves saving the content's geolocation and orientation where the user 106 is currently located. This posts the content to the 3D environment at the global geolocation and facing the direction in which the user had faced his or her client device 102 when the user saved the content's location.
- Another embodiment of remote user content generation involves manually posting the user's content in the 3D environment.
- the system 103 opens a camera into a game engine environment which shows a digital 3D topological map of the world with the underlying polygon meshes rendered.
- the user 106 may move the camera through the environment by touching the screen with gestures such as pinching, tapping, swiping, or tracing.
- the user moves or zooms the camera's FOV to traverse the environment and pass through meshes.
- the user 106 only has the ability to view public areas or any private area that the user has permission to enter.
- the author 704 creates content 105 in a manner such as that previously discussed 701 , and may then place the content 105 in a scene where the user 106 moves the camera.
- the system 103 may not permit the user to place content in a private area 1001 where the user 106 has been given permission to enter, but has not been given permission to place content 105 .
- remote user content generation 701 involves automatically placing the user-generated content 105 in the 3D environment 101 using external data APIs and data input.
- One type of data input is point of sale data from a business.
- the author 704 has an option to designate the content 105 for automatic display.
- the system 103 places a post or content 105 in an area that will target a certain type of user 106 and generate the most impressions by a target user type within a certain area.
- the system 103 trains a Convolutional Neural Network (CNN) 901 to locate a tile 801 in an optimal location for a post 710 to generate the most amount of impressions using data gathered by APIs, user behavior, and geospatial data.
- a spatial map API utilizing a building map, may determine the closest route from the tile 801 to a hallway 903 then orient a post 710 in the direction of the vector of “least distance to more impressions.”
- the system 103 may segment the tile 801 into smaller tiles 904 in which all points 110 in the point cloud 111 lie in a horizontal plane.
- the smaller tile 904 only has floor or ground plane points 110 .
- the system 103 finds the optimal smaller floor tile 904 by balancing which of the tiles 904 is the farthest from a wall or vertical object tile 801 with the closest area of most impressions.
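The balancing described above can be sketched as a simple scoring function over candidate floor tiles. The tile fields and the equal weights are illustrative assumptions, not the patent's actual criteria: the score rewards distance from the nearest wall tile and penalizes distance to the high-impression area.

```python
def optimal_floor_tile(tiles):
    """Pick the floor sub-tile that best balances distance from the nearest
    wall (larger is better) against distance to the area of most
    impressions (smaller is better). Weights are illustrative."""
    WALL_WEIGHT, IMPRESSION_WEIGHT = 1.0, 1.0
    def score(t):
        return (WALL_WEIGHT * t["dist_to_wall_m"]
                - IMPRESSION_WEIGHT * t["dist_to_impressions_m"])
    return max(tiles, key=score)
```

The post would then be anchored at the winning tile and its metadata saved to the database.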
- the post 710 is anchored to the optimal location and the metadata is saved to the database 104 .
- Another embodiment of the invention allows an author 704 to automatically place content 105 at a location 905 by reserving the location 905 prior to placing the content 105.
- when a trigger 906 is activated, the post 710 and all relevant content 105 are loaded from the database 104 based on the commands of the trigger 906.
- the system 103 displays the post 710 to all appropriate users 106 .
- the system 103 utilizes an API to access a database that provides contextual information about a person, place or thing to parse the information and gather the geolocation associated with the information.
- the system 103 uses GIS APIs 902 to calculate where to place the contextual information for the most impressions based on the relative radius of the geolocation. This makes the contextual information useful and effective.
- if the information on the underlying site is updated, the information in the post 710 is updated as well through calls to the API's database listener.
- One embodiment of the invention allows users 106 to filter the content 105 displayed within their AR display 101 . This allows users 106 to choose what content 105 they want to see. Limiting displayed content 105 may help prevent or mitigate overwhelming the user 106 with visual stimuli.
- the system 103 allows a user 106 to filter content 105 without causing the content 105 to never be viewable again through content visibilities and filters.
- a user 106 has the option to categorize their connection with another user 106 .
- the user 106 may set the other user 106 as a close friend, friend, acquaintance, colleague, or employer. Setting different connection types allows the user 106 to set a separate visibility for content 105 from the connection type.
- Other possible methods of visibility control include filtering content 105 from a group the user 106 subscribes to, topics that users 106 tag their content 105 with, or types of media (e.g., images or videos). Content 105 might have multiple visibility settings.
- An author 704 selects the granted visibility for their content 105 when the author 704 creates a post or when editing the post. Other users 106 access the post 710 based on the visibility settings granted by the post's author 704 .
- One method the system 103 uses to determine the visibility of the post 710 is through social graph traversal in the user's 106 connections graph database.
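The social graph traversal might be sketched as a breadth-first search over the author's connections. The graph shape, connection-type labels, and depth limit here are assumptions for illustration: a viewer may see the post if reachable from the author via edges whose connection type the author's visibility setting allows.

```python
from collections import deque

def can_view(graph, author, viewer, allowed_types, max_depth=2):
    """Breadth-first traversal of the author's connections graph.
    graph shape (assumed): {user: [(neighbor, connection_type), ...]}.
    Returns True if `viewer` is reachable via allowed connection types
    within `max_depth` hops."""
    if viewer == author:
        return True
    seen, queue = {author}, deque([(author, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue
        for neighbor, conn_type in graph.get(node, []):
            if conn_type not in allowed_types or neighbor in seen:
                continue
            if neighbor == viewer:
                return True
            seen.add(neighbor)
            queue.append((neighbor, depth + 1))
    return False
```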
- An embodiment of the invention allows users 106 to filter content 105 by searching for specific visibilities, tags, and characteristics of posts 710 when viewing content 105 in the AR camera view 101 .
- a user 106 has access to view and interact with content 105, but also has the ability to filter the content 105 out of sight. For example, when a user 106 approaches an area that is cluttered or overlapping, the user 106 may, to filter out unwanted content 105, touch the content 105 on the device's 102 screen through the AR camera view 101 and swipe the content 105 away. The user 106 may also have the option of moving the content 105 within the scene. This content 105 is only moved for the individual user 106 for the duration of the AR camera session.
- a small orb 907 may appear underneath where the content 105 was originally located when the user 106 swipes the content away. Tapping on the orb 907 returns the content 105 to its original position.
- Another embodiment of the invention also allows a user 106 to filter posts 710 by the post's metadata.
- Options for filtering include the post's 710 characteristics, its topic, its type, or its creator.
- a user 106 searches for this metadata and chooses to only display posts 710 containing the search inquiry or to hide posts 710 based on the inquiry.
- Posts 710 that do not meet the previous criteria are not rendered in the system 103 and are not viewable in the AR display 101.
- Content 105 appears as if it exists in different layers due to the results of what content 105 is rendered and displayed.
- Game-objects 105 may be deactivated so that the user 106 does not accidentally interact with the content 105 when the content 105 is not displayed.
- Another embodiment of the invention displays content 105 to a user 106 based on post 710 priority and the user 106 behavior and settings.
- the system 103 assigns a post 710 priority value based on the post's 710 characteristics. These characteristics may include, but are not limited to, the post author 704 , the type of post, or the post topic.
- Post 710 priority allows the system 103 to display higher priority content 105 to a user 106 in a content-crowded 3D environment 101 .
- the system 103 displays higher priority content 105 in front of or on top of lower priority content 105 .
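Priority-based display order can be sketched as a weighted score over post characteristics followed by a sort. The characteristic names and weights below are illustrative assumptions; the only fixed idea is that higher-priority posts draw last, and therefore on top, in a crowded scene.

```python
def render_order(posts, weights=None):
    """Sort posts so higher-priority content is rendered last (on top).
    Priority is a weighted sum of assumed post characteristics."""
    weights = weights or {"author_score": 2.0, "topic_score": 1.0}
    def priority(post):
        return sum(w * post.get(k, 0) for k, w in weights.items())
    return sorted(posts, key=priority)  # ascending: highest priority drawn last
```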
- An embodiment of the invention uses blockchain 1101 and smart contracts 1102 to protect user 106 security.
- one embodiment of the invention allows users 106 to manage the content 105 visibility permissions through blockchain 1101 using smart contracts 1102 .
- Another example is where an embodiment of the invention allows users 106 to store their point clouds 111 to a blockchain 1101 instead of to a single server or a cloud 104 .
- While there are many types of point clouds 111, two basic types are discussed here: public and private point clouds 111.
- Public point clouds 111 are point clouds 111 that exist in public areas such as parks 1004 , roads 1009 , or the facades of buildings 1010 .
- the facades of buildings 1010 may be considered public even if the underlying building exists on private property 1001 .
- a client device 102 must have access to the public point clouds 111 in order to localize the client device 102 .
- This invention may use an AR cloud generation API that encrypts point clouds 111 and converts them into sparse point clouds 111 . Users 106 will not be able to reverse engineer the public point clouds 111 into an identifiable 3D scene 101 because the encryption prevents access to the underlying public point clouds 111 .
- Private point clouds 111 may be determined through geofencing API 1008 and GIS API 902 calls using publicly-available data.
- Private property owners 1003 may provide proof of ownership of the property 1001 to a system administrator 1005 . Once approved, the administrator 1005 assigns a certificate of ownership 1006 to the property owner 1003 .
- the system 103 stores the certificate of ownership 1006 to a blockchain ledger in the property owner's name 1003 .
- the system 103 will also store the geofence 1002 coordinates on a blockchain ledger to indicate the boundaries 1002 of the property owner's property 1001 .
- the system 103 allows a property owner 1003 to grant, deny, or revoke access to other users 106 .
- the categories of other users 106 may include, but are not limited to, certain other specific users 106 , groups of users 712 , specific companies, or types of companies.
- the property owner 1003 may control the length of access, the type of access, or the cost of accessing the property 1001 .
- the type of access includes, but is not limited to, whether the user 106 only has permission to view content 105 , interact with content 105 , or post content 105 on or within the boundary of the property 1001 .
- the system 103 forms a smart contract 1102 between the property owner 1003 and the other user 106 when the property owner 1003 grants, denies, or revokes access to the property owner's property 1001 .
- the system 103 saves the smart contract to a blockchain ledger 1101 .
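The append-only property of the ledger can be illustrated with a toy hash chain: each block commits to the previous block's hash, so tampering with an earlier contract invalidates every later block. This is a minimal sketch, not a real blockchain or smart-contract implementation; the record fields are assumptions.

```python
import hashlib
import json

def add_contract(ledger, contract: dict) -> list:
    """Append a smart-contract record to a toy hash-chained ledger."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = json.dumps({"contract": contract, "prev": prev_hash}, sort_keys=True)
    block = {"contract": contract, "prev": prev_hash,
             "hash": hashlib.sha256(body.encode()).hexdigest()}
    return ledger + [block]

def verify(ledger) -> bool:
    """Recompute every block hash and check the chain links."""
    prev_hash = "0" * 64
    for block in ledger:
        body = json.dumps({"contract": block["contract"], "prev": prev_hash},
                          sort_keys=True)
        if (block["prev"] != prev_hash or
                block["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
        prev_hash = block["hash"]
    return True
```

Amending a contract would append a new block (or fork the chain, as described above) rather than mutating an existing one.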
- the system 103 forks the blockchain 1101 to form a new smart contract 1102 when a smart contract 1102 needs to be amended.
- the new smart contract 1102 is forked by the rest of the nodes on the blockchain 1101 as long as the smart contract 1102 displays no sign of having been tampered with and the property owner 1003 shows the desire to continue to interact with that user 106.
- the user 106 interacting with the blockchain 1101 thread automatically accepts the fork that was made by that user 106 .
- One embodiment allows the property owner 1003 to grant permission to a property manager 1007 to administer the property owner's property 1001 .
- the system 103 applies the same rules to 3D polygon meshes 402 that it applies to point clouds 111 with one exception.
- Public meshes 402 do not include the facades of buildings 1010 within the bounds of a private property 1002 .
- Only sparse point clouds 111 include those facades 1010 .
- the system 103 makes a call to a geofencing API 1008 to determine the zoning location of a point cloud 111 when the point cloud 111 is constructed and globally localized.
- If the point cloud 111 is located within the bounds of a private property 1002, the system 103 indicates this in the point cloud's 111 metadata. The system 103 then queries the blockchain ledger 1101 to determine the property owner 1003 where the point cloud 111 resides. The system 103 saves the user's 106 encrypted ID in the metadata of the point cloud 111, which is decrypted when accessed.
- If point cloud 111 data indicates that the point cloud 111 resides on private property 1001 but was generated from a device 102 positioned on public property 1004 or other private property 1001, then the system 103 may save within that point cloud's 111 metadata that the point cloud 111 is public access on private property. Point clouds 111 with this metadata distinction enable client device 102 localization, but prohibit mesh construction from the point clouds 111.
- If the point cloud 111 is located on private property 1001, then the system 103 saves to the point cloud's 111 metadata that only the point cloud 111 is public.
- the point clouds 111 are not meshed 403 if the private property owner 1003 or manager 1007 does not grant private permissions to the user 106 .
- Point clouds 111 with this metadata distinction enable client device 102 localization, but prohibit mesh construction from the point clouds 111 .
- the system 103 verifies point cloud 111 metadata when localizing a client device 102 or when performing feature matching.
- the system 103 checks whether the point cloud 111 is public, public access on private property, or private.
- the system 103 checks the owner metadata and the blockchain 1101 smart contracts 1102 for user permissions if the point cloud 111 is private or public access on private property.
- the system 103 permits access to the extent granted by the owner 1003 through the smart contract 1102 . Only the client device 102 is localized if the point cloud 111 is public or public access on private property.
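The metadata-driven access decision above can be sketched as a single function. The classification strings and permission names are illustrative assumptions; `granted` stands in for the permissions the owner issued through a smart contract.

```python
def point_cloud_access(metadata: dict, granted: set) -> dict:
    """Decide what a client may do with a point cloud from its assumed
    metadata classification: 'public', 'public_access_on_private', or
    'private'. `granted` holds owner-issued permissions, e.g.
    {'localize', 'mesh'}."""
    kind = metadata["classification"]
    if kind == "public":
        return {"localize": True, "mesh": True}
    if kind == "public_access_on_private":
        # Localization is allowed; meshing needs explicit owner permission.
        return {"localize": True, "mesh": "mesh" in granted}
    if kind == "private":
        return {"localize": "localize" in granted, "mesh": "mesh" in granted}
    raise ValueError("unknown classification: " + kind)
```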
Abstract
Frontend and backend systems and processes providing the technical foundations for an Augmented Reality (AR) platform, such as an AR social media platform. Systems and methods are used to construct and manage an AR Cloud backend and frontend environment facilitating: persistent 3-dimensional and 2-dimensional geo-located content that can be created, viewed, changed, and interacted with by users in the same or different sessions; ephemeral content; local creation and posting of content; remote creation and posting of content; remote visualization, altering, and placing of content on a 3D map; filtering and management of content in the camera view based on a visibility layer/similar theme and content priority based on preferences, categorization, and ownership; automated creation and posting; lighting of content and digital environments; linking of point clouds with real-world geo-coordinates for accurate map construction; and the security of property and content rights and ownership via smart contracts on a blockchain.
Description
- This application is a divisional continuation of prior U.S. patent application Ser. No. 16/714,084, filed Dec. 13, 2019, which claims priority to U.S. Provisional Application Ser. No. 62/779,177, filed Dec. 13, 2018. The disclosures of both are incorporated herein by reference in their entireties.
- Conventionally, authoring for augmented reality (AR) experiences and applications is limited by ephemeral sessions, where the 3D content largely has to be authored locally for markerless AR, or is authored via what is known as marker-based AR. Spatial anchors shared between users via a mutual server provide a more integrated and persistent experience, but that method lacks consistent accuracy in the placement of content. Additionally, spatial 3D mapping of environments brings inherent user, data, and property privacy and security issues that have yet to be solved at a broad scale.
-
FIG. 1 is an overview of an embodiment of a system illustrating a game engine, a cloud server, a real-world environment, an augmented reality environment, many points within the augmented reality environment, and a point cloud. -
FIG. 2 is a process map for an embodiment of the invention starting with an AR camera activation to generating and saving point clouds to the server. -
FIG. 3 is a process map for an embodiment of the invention that determines an exact location for a client device and whether there are any point clouds on the server for that location. -
FIG. 4 demonstrates how multiple point clouds merge to create a larger point cloud mesh in an embodiment of the invention. -
FIG. 5 is one method of using image databases to calculate a client device location and calculate the difference between local point clouds and the global point clouds, according to an embodiment. -
FIG. 6 demonstrates how an embodiment of the system makes objects real. -
FIG. 7 describes a method that an author follows to upload objects into the system, according to an embodiment. -
FIG. 8 is a not-to-scale rendering of one state map and a general breakdown of tiles. -
FIG. 9 demonstrates a method of determining the optimum location within a tile for users to automatically post content, according to an embodiment. -
FIG. 10 demonstrates a breakdown of areas from public to private and the security provided to private property owners. -
FIG. 11 demonstrates an embodiment of blockchain and a secure contract. -
FIG. 12 is a general topographic map over which point clouds would eventually be overlaid as users digitally map the environment in an embodiment of the invention. - The present invention will be described in the preferred embodiments. The invention may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the invention to those skilled in the art.
- This invention describes methods to author content remotely and locally, as well as the methods and systems that need to be in place to facilitate such an endeavor, such as a shared 3D map of the environment, standardization of maps via conversion of local Euclidean coordinates to global geodesic coordinates, and a system for managing spatial data ownership with regard to viewing and authoring content within the bounds of public and/or private digital-spatial property.
- A conventional augmented reality (AR) system stores
objects 105 in a local library to display for a user 106 on the user's phone or through specially designed glasses 102. The conventional system triggers displaying the objects 105 based on a target. The present invention improves on the conventional system. Users 106 of the present invention can collaborate to generate a digital, globally-mapped, persistent 3D environment 101. In one embodiment of the invention, users remotely author content 701 to manually or automatically display locally or globally. In another embodiment of the invention, users 106 protect their content 105 through blockchain-encrypted 1101 smart contracts 1102. In another embodiment, the system 103 (also referenced herein as the game engine) provides security for property owners 1003 through blockchain-encrypted 1101 smart contracts 1102. Another embodiment allows users 106 to filter content 105 displayed to them within the AR environment 101. - An embodiment of this invention includes a
game engine 103 in conjunction with an integrated cloud-server system 104. The gaming engine 103 communicates with the cloud-server system 104 through a software development kit (SDK) 107 or a Graphics Development Kit (GDK) 107. The gaming engine SDK 103 services the construction and management of 3D environments 111. For example, a gaming engine SDK 103 or application programming interface (API) 103 provides a rendering engine, a physics engine, collision detection, sound, scripting, animation, AI, networking, streaming, memory management, threading, localization support, scene graph, and AR camera tracking. - The present invention is based on a 3D point cloud environment 111 (AR Cloud) that is generated by a
user 106 by utilizing the cameras 108 and sensors on the user's 106 client device 102. These sensors may include, but are not limited to, gyroscopes, accelerometers, Global Positioning System (GPS), Bluetooth Low Energy (BLE), and WiFi. - In an embodiment of the invention, an augmented
reality system 103 merges a user's 106 locally created 3D cloud environment 111 with a base digital topological map 1201 of the world constructed out of polygon-meshes and planes. The base map includes 3D buildings, such as skyscrapers, but the buildings have the option of being edited with point clouds 111 and meshes. The merging 403 of locally generated content 105 with the base map creates a 1:1 scale digital representation of the real, physical world while users explore and map the world. The system 103 stores the topological map 1201 on a series of linked servers 104 or a cloud-based server 104. - The invention divides the base map into 10 meter×10 meter sections, referred to herein as
tiles 801. The user-generated 3D point cloud environments 111 are linked to the base map based on the tile 801 where the user 106 is located. The system 103 tracks the user's 106 position in the 3D environment. The system 103 downloads digital content to the user's device based on the tile 801 in which the user is located instead of downloading the entire world map. - The
system 103 makes a GPS API call 210 to facilitate localization of the user. The user's device 102 generates a geo-reference point 211 corresponding to its location, through a geo-reference system including, but not limited to, GPS, WiFi localization, LiFi localization, cellular phone localization, BLE beacons, or a combination thereof. The system 103 attempts to merge the user's 3D environment point cloud 111 to any overlapping previously saved point clouds 111 in tiles 801 near the user's location because the geo-reference point might not be accurate. Features are matched 307 between the user point cloud and the system point clouds 111. The system merges the user's point cloud 111 with the system point cloud 111 that matches the most features. The merged point cloud 402 consists of two or more overlapping point clouds 111. The system 103 saves the merged point cloud 402 to the server as a subordinate point cloud 402 under the identification number of the old point cloud 111. - As
new point clouds 111 are merged 403, the merges compound to create one large point cloud 402 with individual subsets of point clouds 401 for optimized searching. After merging, the point clouds are broken into tile 801 groups where the system 103 assigns a unique tile identification number to each voxel within the geo-coordinates of the tile 801. The tile 801 contains the voxels and geo-coordinates from many different point clouds 111 that exist within the tile 801, optimizing the download process of point clouds 111 to a client device 102. - One embodiment of the invention utilizes an
AR camera 108 on the client device 102. The AR camera 108 provides a viewport into the digital 3D environment 101 through raycasting provided by the system 103. The system 103 listens for and registers which 3D objects 105 are being virtually touched and interacted with through the AR camera's 108 field of vision (FOV) 109. - Several processes may happen concurrently while the
AR camera 108 is open or active. The system 103 makes a GPS call 210 to determine the device's location 211. The system 103 searches for any pre-existing point clouds 111 geo-tagged with IDs within a predetermined geo-coordinated radius. The system 103 downloads 305 any existing point clouds 111 and their corresponding tiles 801 within the radius. The system 103 uses third-party Simultaneous Localization and Mapping (SLAM) and relocalization APIs to relocalize the client device within the pre-existing point cloud environment 111. The system 103 relocalizes the client device 102 by matching monochrome or colored features in the image frames between the preexisting point clouds 111 and the current image frame 101 of the camera 102 feed. The system also utilizes image processing APIs and computer vision APIs to relocalize the client device within the pre-existing point cloud environment 111. - A convolutional neural network (CNN) 901 compares the provided, retrieved, and calculated depths of the feature points 110 for image frames that share different lighting. The
CNN 901 compares the 3D shapes of the point clouds 111 generated in the image frames to enhance the localization process and provide better accuracy in low and contrasting light situations between the current and the saved camera frame. One method of achieving the CNN comparison is through an object recognition API. - The
system 103 may capture additional point clouds 111 that did not already exist in the pre-existing point cloud environment 111 if relocalization is successful. Through SLAM APIs, the system 103 differentiates by feature-matching points 110 already in the pre-existing point cloud 111 with newly generated points 110 and eliminates the newly-generated points 110 that matched with points 110 already in the pre-existing point cloud 111. SLAM APIs localize the camera 108 in the spatial environment, while simultaneously generating a 3D point cloud map 111 of the spatial environment. Depending on the client device, a Mono-SLAM API or a Stereo-SLAM API calculates depth through triangulation or through client device-equipped hardware that calculates depth data per pixel. - If no
point clouds 111 already exist within the geo-coordinated radius, or if relocalization failed, the system 103 continues to SLAM operations. Semantic Segmentation algorithms 205 run concurrently on the point clouds 111 while SLAM algorithms generate the point clouds. Semantic Segmentation 205 identifies point clouds 111 that compose a specific object and tracks those point clouds 111 within the scene. One method the system 103 uses is an Iterative Closest Point (ICP) algorithm to match up nearby point clouds 111, identify the point cloud object using image or object recognition APIs that are trained to recognize such objects, construct a bounding box around the identified object, and track the object in the 3D environment across camera image frames. Semantic segmentation omits identified point clouds 111 that should not be saved in the 3D environment, such as dynamic objects. Point clouds 111 that should be saved include stationary and static objects identified through machine learning algorithms, since they are likely to remain in the scene. - The
system 103 maps and localizes to the 3D point cloud scene 111 at an appropriate frequency as the user traverses the environment. The client sensor data determines the update frequency. One such sensor data point is how fast the user is moving within the environment. Faster movement may be sensed and this information may be used to increase the update frequency. This method localizes the client device and the local environment within the global environment. The system 103 performs relocalization to reduce drift so that geo-coordinates remain synced. - When a local
point cloud environment 111 is generated, spatial coordinates 110 of the environment are oriented based on where the client device was positioned when the session began. The system 103 automatically sets the starting position as (0,0,0) 203. The local coordinates may be converted into global coordinates in order to be consistent with other point clouds 111 that are generated by other users. The global environment coordinates are based on real-life GPS coordinates. The exact geo-coordinates of the device may be determined in order to convert the local coordinates to global coordinates. - The
system 103 compares the image frames of the camera feed to those in an already-tagged geospatial database to globally localize the local point cloud 111 of a client device 102. The geospatial database includes images of the same scene captured during different times of day, different weather conditions, and different seasons. The system 103 calculates the transform 212 (difference in distance and offset of orientation) by comparing image frames that are geo-tagged to those that are not. This calculates the geo-coordinates of the input images in real-time. - One method of obtaining a geo-tagged database utilizes a
pre-existing Streetview API 501. In this embodiment, the system 103 searches for images in a Streetview dataset 502 that have a geo-coordinate 503 within the radius of the user. The system 103 identifies an image 504, using feature matching, that has the most features in common with the current device frame 505. The system 103 calculates the difference in orientation of the two images' feature points 110 by using a homography matrix (perspective transform), extended Kalman filters, depth data, and epipolar geometry through triangulation 506. The system 103 then calculates, with 6 degrees of freedom, the transformation 507 in position between the two images. The system 103 finds the latitude, longitude, and altitude of the current image frame by multiplying the transformation value 507 by the geo-coordinates. Altitude is used if provided as a data point within the geotagged image 504. Once the current geo-coordinates are found, the system 103 adds the transformation 507 to the rest of the coordinates in the local environment to become global coordinates 213. - Meshing 403 occurs after global localization. Point clouds 111 are converted into polygon meshes 402 by connecting
nearby point clouds 111 into planes 601. Points 110 that reside on the same plane, within a set threshold of variance, are connected at the edges with other nearby points 110. The system 103 converts meshes 402 into game-objects 105 and adds game engine colliders 602 to the objects 105. Colliders are scripts associated with a game-object that recognize when a second game-object 105 contacts the first game-object 105. Collider scripts may trigger an event such as preventing the first game-object from moving past the second game-object 105. Game-objects 105 allow a 3D object 105 to be interacted with and programmed within the game engine environment 103. The system 103 adds colliders to objects so that other 3D objects/game-objects interact with that game-object realistically. For example, objects are not able to pass through other objects that have colliders. - Additionally, the
system 103 recognizes when two identical point clouds 111 at the same location differ because the current weather conditions, such as sunshine, rain, or snow, differ from the conditions when the environment was scanned. The system 103 collects this data from video feeds, images, or synthetic data. The system 103 uses the images' geo-location data at capture, determined by a basic localization method such as GPS, or determined via triangulation and transformation calculations between other localized video feeds and images. This synthetic data may include images and videos (consecutive image frames) that are taken as input into a pre-trained GAN (Generative Adversarial Network). Deep learning algorithms (a neural network) manipulate the RGB and alpha values of the pixels of the input image frames, while adversarial deep learning algorithms, such as a second neural network, concurrently evaluate the success/effectiveness of each manipulated image. In this way, the GAN can reproduce accurate copies of the original image frame, making each copy realistically appear as though it is of the same scene as the original image, only during a different time of day, lighting environment/level, season of the year, etc. - Meshing is important for occlusion of
3D content 105 in AR. A game engine 103 uses raycasting to calculate and display what is in the AR camera's FOV 109. The game engine 103 uses vector calculations to trace a vector, or “beam of light,” from each pixel in the frame into the 3D environment. The environment is programmed so that a vector does not pass through a game-object if the object is programmed not to allow a vector to pass through the object's surface 603. The engine 103 will not render 604 any object behind the game-object 105 that the vector intersects; such an object is occluded from the AR camera's sight 101. - Meshing may be performed in one of at least two locations. The first is on the client device, utilizing a game engine API or SDK. The second is in the cloud, also utilizing a game engine API or SDK. In the cloud embodiment, environments that are about to be downloaded to the client are meshed first.
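The per-pixel raycast occlusion described above can be sketched as follows. This is a minimal illustrative sketch, not the engine's actual API: the sphere-shaped `Collider` class and the `first_hit` helper are hypothetical stand-ins for real mesh colliders, but the nearest-hit logic that produces occlusion is the same.

```python
import math
from dataclasses import dataclass
from typing import List, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Collider:
    """Hypothetical sphere collider attached to a game-object."""
    name: str
    center: Vec3
    radius: float

def first_hit(origin: Vec3, direction: Vec3,
              colliders: List[Collider]) -> Optional[Collider]:
    """Trace one ray and return the nearest collider it intersects.

    Anything farther along the ray than the returned collider is occluded
    and would not be rendered for this pixel.
    """
    mag = math.sqrt(sum(c * c for c in direction))
    d = tuple(c / mag for c in direction)           # unit ray direction
    best, best_t = None, float("inf")
    for col in colliders:
        oc = tuple(col.center[i] - origin[i] for i in range(3))
        t_ca = sum(oc[i] * d[i] for i in range(3))  # distance along ray to closest approach
        if t_ca < 0:
            continue                                # collider is behind the camera
        d2 = sum(oc[i] * oc[i] for i in range(3)) - t_ca * t_ca
        if d2 > col.radius ** 2:
            continue                                # ray misses this collider
        if t_ca < best_t:
            best, best_t = col, t_ca
    return best

# A wall 5 m in front of the camera occludes a poster 10 m away on the same ray.
scene = [Collider("wall", (0.0, 0.0, 5.0), 1.0),
         Collider("poster", (0.0, 0.0, 10.0), 1.0)]
visible = first_hit((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), scene)
```

A real engine performs a test like this once per pixel per frame; the invisible environment meshes described elsewhere in this specification still participate as colliders, which is what occludes virtual content behind real-world geometry.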
- During every frame of the camera feed, the
system 103 places an AR camera object at the digital 3D environment geo-location and orientation when the client device is localized in the global environment. This method also occurs when the client device is located in a local point cloud environment 111. This method enables users 106 to view the 3D content that is placed at the user's location within the 3D environment. The system 103 handles the occlusion, scaling, rendering, and perceived movement of the 3D content in the environment, based on how the content is scripted or programmed to behave. - Point clouds 111 and meshes 402 are not rendered within the user's
AR camera 108 FOV 109. The meshes 402 are present in the scene, but their alpha value or visibility value is set to zero. This makes the meshes invisible within the scene. Users only see content rendered on the screen and not the background 3D reconstruction of the scene. This seamlessly integrates the user's AR experience with their real-world environment. - The
system 103 saves the point cloud 111 to the 3D environment at the end of the AR camera 108 session. 3D point clouds 111 stored in the spatial database on the cloud server system represent the 3D environment. The system 103 stores the point clouds 111 with their monochrome values as well as their color coordinate values (e.g., RGB). Semantic segmentation identifies undesirable point clouds. The system 103 omits undesirable point clouds from the upload to the 3D environment. The system 103 attaches relevant metadata to the point clouds 111, then uploads the point clouds 111 to the server 104. Relevant metadata may include, but is not limited to, the client device ID, the transformed global geo-coordinates, and a timestamp. The point clouds 111 are organized in a data structure by geolocation ID. The system 103 parses the geo-location coordinates from the server and populates the point clouds 111 in the 3D environment 111 based on the metadata whenever the point clouds 111 are rendered or otherwise used in the system 103. - In one embodiment of the invention, the previously described
3D environment system 103 also serves as a framework for user content generation 701. - Within a user upload
interface 702, an author 704 uploads an asset 705 to the system 103 or creates an asset 705 within the system 103. The asset 705 (e.g., file or object) includes, but is not limited to, a 3D object or asset file, a 2D or 3D image file, a 2D GIF, a text file, an audio file, an animated asset, a compatible native program file, an image or video captured in the system, a video converted to GIF, or text typed into a dialog. The system 103 converts the asset 705 into a 3D object 105 compatible with display in the system 103. The system 103 calculates the ambient lighting level and direction of the client device 102 using native APIs 703. Otherwise, the sun's 707 movement in the sky is modeled and represented as an artificial light source 707 within the system 103 so that shadows 711 are cast over the meshes 402 in the 3D environment 101. This provides semi-realistic shadows 711 on user-generated content 105. - The
system 103 displays the 3D object 105 on the ground of the environment in the user's FOV 109. The user may manipulate the object 105 in different ways in different embodiments, e.g., by using one or two fingers on the device's screen to rotate, elevate, scale, or move the object laterally or vertically in the scene. The system 103 may generate a shadow 711 below the object 105 to indicate the object's position. The system 103 adds colliders on the object's meshes 402 so that the object 105 cannot be placed inside of objects in the environment 112. The system 103 saves metadata 709 for the object 105 including, but not limited to, the object's geo-coordinates within the scene, orientation, scale, elevation, the user's creator ID, post settings, visibility settings, permissions, options, post ID, tags, description, timestamp, and expiration timestamp. - The
object 105 and its relevant content 708 and spatial data 709 may be sent to the system 103 to populate a corresponding polygon-mesh environment 101. Post content data 709 may be defined as user-generated content 105 encapsulated in a data object. The post content data and metadata are saved to the server database. Once the post data 709 is uploaded to the server's database 104, a check is made to verify whether the post 710 is set as ephemeral. If the post 710 is set as ephemeral, its expiration date is verified. The system 103 notifies the distributed clients 712 of a blockchain if the author 704 chooses to notify clients 712. The system 103 compiles a list of users 106 and corresponding client devices 102 to which to send the notification by checking the author's 704 visibility setting for the post 710. This list of users 106 sets who has access to view the post. The list may be compiled by checking the author's 704 visibility graph database. The system 103 sends out a notification to associated client devices 102 based on the notification settings of the individual users 106. The system 103 then adds the post ID 709 and post geolocation 709 to the users' 106 “permission-to-see” database, i.e., a database of all the content the user has permission to see. - The user's
client 102 sends out a request to their “permission-to-see” database at a user-set frequency. The system 103 retrieves any and all posts 710 that the user can view within their visibility radius. The visibility radius is an adjustable radius of the rendered tiles 801 of the digital environment around the user 106. The system 103 retrieves and renders any content 105 at the content's 105 appropriate geolocation if the post 710 is viewable in the “permission-to-see” database. The expiration timestamps of all posts 710 are checked against the current timestamp with every “permission-to-see” database request. The system 103 removes any post 710 from the database 104 and animatedly fades the post 710 to invisible if the current timestamp is equal to or greater than the expiration timestamp. - The
system 103 opens a 2D user interface (UI) window fragment if the user interacts with rendered content 105 by tapping or touching the content. The system 103 displays or posts content 105 and other relevant data on the screen in a templated form for the user to interact with. In one embodiment, the user 106 has the ability to leave a comment on the rendered content 105. - The
original author 704 is credited as the owner if other users 106 collaborate on the content creation 701. The other contributors are added as co-creators in the content's metadata 708. The co-creators' 106 content visibility settings are overridden by the owner's 704 visibility settings if a conflict arises in the settings. Each individual piece displays who contributed to the piece of content 105 when other users 106 interact with collaborated content 105. - An embodiment of the invention permits
users 106 to remotely generate content. The previously described transformation of local geo-coordinates to global geo-coordinates enables the system 103 to handle remote content generation. - One embodiment of remote user content generation involves saving the content's geolocation and orientation where the
user 106 is currently located. This posts the content to the 3D environment at the global geolocation, facing the direction in which the user had pointed his or her client device 102 when the user saved the content's location. - Another embodiment of remote user content generation involves manually posting the user's content in the 3D environment. When the user elects to manually post content, the
system 103 opens a camera into a game engine environment which shows a digital 3D topological map of the world with the underlying polygon meshes rendered. The user 106 may move the camera through the environment by touching the screen with gestures such as pinching, tapping, swiping, or tracing. The user moves or zooms the camera's FOV to traverse the environment and pass through meshes. The user 106 only has the ability to view public areas or any private area that the user has permission to enter. - The
author 704 creates content 105 in a manner such as that previously discussed 701, and may then place the content 105 in a scene where the user 106 moves the camera. The system 103 may not permit the user to place content in a private area 1001 where the user 106 has permission to enter but not permission to place content 105. - Another embodiment of remote
user content generation 701 involves automatically placing the user-generated content 105 in the 3D environment 101 using external data APIs and data input. One type of data input is point-of-sale data from a business. - In this embodiment, the
author 704 has an option to designate the content 105 for automatic display. The system 103 places a post or content 105 in an area that will target a certain type of user 106 and generate the most impressions by that target user type within a certain area. The system 103 trains a Convolutional Neural Network (CNN) 901 to locate a tile 801 in an optimal location for a post 710 to generate the greatest number of impressions, using data gathered by APIs, user behavior, and geospatial data. Geographical Information System (GIS) APIs 902 may determine the optimal orientation for the post 710 within the optimal location. For example, a spatial map API, utilizing a building map, may determine the closest route from the tile 801 to a hallway 903, then orient a post 710 in the direction of the vector of “least distance to more impressions.” - In order to reduce the possibility of placing the post in an occluded position, the
system 103 may segment the tile 801 into smaller tiles 904 in which no points 110 in the point cloud 111 lie outside a horizontal plane. The smaller tile 904 only has floor or ground plane points 110. The system 103 finds the optimal smaller floor tile 904 by balancing which of the tiles 904 is the farthest from a wall or vertical object tile 801 with the closest area of most impressions. The post 710 is anchored to the optimal location and the metadata is saved to the database 104. - Another embodiment of the invention allows an
author 704 to automatically place content 105 at a location 905 by reserving the location 905 prior to placing the content 105. When a trigger 906 is activated, the post 710 and all relevant content 105 are loaded from the database 104 based on the commands of the trigger 906. The system 103 displays the post 710 to all appropriate users 106. - Another embodiment of the invention combines both automatic remote user content embodiments. For example, the
system 103 utilizes an API to access a database that provides contextual information about a person, place, or thing, parses the information, and gathers the geolocation associated with the information. The system 103 uses GIS APIs 902 to calculate where to place the contextual information for the most impressions based on the relative radius of the geolocation. This makes the contextual information useful and effective. When the information on the underlying site is updated, the information in the post 710 is updated as well by utilizing calls to the API's database listener. - One embodiment of the invention allows
users 106 to filter the content 105 displayed within their AR display 101. This allows users 106 to choose what content 105 they want to see. Limiting displayed content 105 may help prevent or mitigate overwhelming the user 106 with visual stimuli. - The
system 103 allows a user 106 to filter content 105, through content visibilities and filters, without causing the content 105 to become permanently unviewable. A user 106 has the option to categorize their connection with another user 106. For example, the user 106 may set the other user 106 as a close friend, friend, acquaintance, colleague, or employer. Setting different connection types allows the user 106 to set a separate visibility for content 105 from each connection type. Other possible methods of visibility control include filtering content 105 from a group the user 106 subscribes to, topics with which users 106 tag their content 105, or types of media (i.e., images or videos). Content 105 might have multiple visibility settings. - An
author 704 selects the granted visibility for their content 105 when the author 704 creates a post or when editing the post. Other users 106 access the post 710 based on the visibility settings granted by the post's author 704. One method the system 103 uses to determine the visibility of the post 710 is social graph traversal in the user's 106 connections graph database. - An embodiment of the invention allows
users 106 to filter content 105 by searching for specific visibilities, tags, and characteristics of posts 710 when viewing content 105 in the AR camera view 101. A user 106 has access to view and interact with content 105, but also has the ability to filter the content 105 out of sight. For example, when a user 106 approaches an area that is cluttered or overlapping, to filter out unwanted content 105, the user 106 may touch the content 105 on their device's 102 screen or, through their AR camera view 101, swipe the content 105 away. The user 106 may also have the option of moving the content 105 within the scene. This content 105 is only moved for the individual user 106 for the duration of the AR camera session. A small orb 907 may appear underneath where the content 105 was originally located when the user 106 swipes the content away. Tapping on the orb 907 returns the content 105 to its original position. - Another embodiment of the invention also allows a
user 106 to filter posts 710 by the post's metadata. Options for filtering include the post's 710 characteristics, its topic, its type, or its creator. A user 106 searches for this metadata and chooses to only display posts 710 containing the search inquiry or hide posts 710 based on the inquiry. Posts 710 that do not meet the previous criteria are not rendered in the system 103 and not viewable in the AR display 102. Content 105 appears as if it exists in different layers as a result of what content 105 is rendered and displayed. Game-objects 105 may be deactivated so that the user 106 does not accidentally interact with the content 105 when the content 105 is not displayed. - Another embodiment of the invention displays
content 105 to a user 106 based on post 710 priority and the user's 106 behavior and settings. The system 103 assigns a post 710 priority value based on the post's 710 characteristics. These characteristics may include, but are not limited to, the post author 704, the type of post, or the post topic. Post 710 priority allows the system 103 to display higher priority content 105 to a user 106 in a content-crowded 3D environment 101. The system 103 displays higher priority content 105 in front of or on top of lower priority content 105. - An embodiment of the invention uses
blockchain 1101 and smart contracts 1102 to protect user 106 security. For example, one embodiment of the invention allows users 106 to manage the content 105 visibility permissions through blockchain 1101 using smart contracts 1102. Another example is an embodiment of the invention that allows users 106 to store their point clouds 111 to a blockchain 1101 instead of to a single server or a cloud 104. - While there are many types of
point clouds 111, two basic types are discussed here: public and private point clouds 111. - Public point clouds 111 are
point clouds 111 that exist in public areas such as parks 1004, roads 1009, or the facades of buildings 1010. The facades of buildings 1010 may be considered public even if the underlying building exists on private property 1001. A client device 102 must have access to the public point clouds 111 in order to localize the client device 102. This invention may use an AR cloud generation API that encrypts point clouds 111 and converts them into sparse point clouds 111. Users 106 will not be able to reverse engineer the public point clouds 111 into an identifiable 3D scene 101 because the encryption prevents access to the underlying public point clouds 111. -
Private point clouds 111 may be determined through geofencing API 1008 and GIS API 902 calls using publicly available data. Private property owners 1003 may provide proof of ownership of the property 1001 to a system administrator 1005. Once approved, the administrator 1005 assigns a certificate of ownership 1006 to the property owner 1003. The system 103 stores the certificate of ownership 1006 to a blockchain ledger in the property owner's 1003 name. The system 103 will also store the geofence 1002 coordinates on a blockchain ledger to indicate the boundaries 1002 of the property owner's property 1001. The system 103 allows a property owner 1003 to grant, deny, or revoke access to other users 106. The categories of other users 106 may include, but are not limited to, certain other specific users 106, groups of users 712, specific companies, or types of companies. The property owner 1003 may control the length of access, the type of access, or the cost of accessing the property 1001. The type of access includes, but is not limited to, whether the user 106 only has permission to view content 105, interact with content 105, or post content 105 on or within the boundary of the property 1001. - The
system 103 forms a smart contract 1102 between the property owner 1003 and the other user 106 when the property owner 1003 grants, denies, or revokes access to the property owner's property 1001. The system 103 saves the smart contract to a blockchain ledger 1101. The system 103 forks the blockchain 1101 to form a new smart contract 1102 when a smart contract 1102 needs to be amended. The new smart contract 1102 is adopted by the rest of the nodes on the blockchain 1101 as long as the smart contract 1102 displays no sign of having been tampered with and the property owner 1003 shows the desire to continue to interact with that user 106. The user 106 interacting with the blockchain 1101 thread automatically accepts the fork that was made by that user 106. - One embodiment allows the
property owner 1003 to grant permission to a property manager 1007 to administer the property owner's property 1001. - The
system 103 applies the same rules to 3D polygon meshes 402 that it applies to point clouds 111, with one exception. Public meshes 402 do not include the facades of buildings 1010 within the bounds of a private property 1002. Only sparse point clouds 111 include those facades 1010. - The
system 103 makes a call to a geofencing API 1008 to determine the zoning location of a point cloud 111 when the point cloud 111 is constructed and globally localized. - If the
point cloud 111 is located within the bounds of a private property 1002, the system 103 indicates this in the point cloud's 111 metadata. The system 103 then queries the blockchain ledger 1101 to determine the property owner 1003 where the point cloud 111 resides. The system 103 saves the user's 106 encrypted ID in the metadata of the point cloud 111, which will be decrypted when accessed. - If the
point cloud 111 data indicates that it resides on private property 1001 but is generated from a device 102 positioned on public property 1004 or other private property 1001, then the system 103 may save within that point cloud's 111 metadata that the point cloud 111 is public access on private property. Point clouds 111 with this metadata distinction enable client device 102 localization, but prohibit mesh construction from the point clouds 111. - If the
point cloud 111 is located on private property 1001, then the system 103 saves to the point cloud's 111 metadata that only the point cloud 111 is public. The point clouds 111 are not meshed 403 if the private property owner 1003 or manager 1007 does not grant private permissions to the user 106. Point clouds 111 with this metadata distinction enable client device 102 localization, but prohibit mesh construction from the point clouds 111. - The
system 103 verifies point cloud 111 metadata when localizing a client device 102 or when performing feature matching. The system 103 checks whether the point cloud 111 is public, public access on private property, or private. The system 103 checks the owner metadata and the blockchain 1101 smart contracts 1102 for user permissions if the point cloud 111 is private or public access on private property. The system 103 permits access to the extent granted by the owner 1003 through the smart contract 1102. Only the client device 102 is localized if the point cloud 111 is public or public access on private property.
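The metadata checks in the preceding paragraphs can be summarized in a short sketch. This is a hedged illustration under an assumed, simplified schema: the `access` strings and the `granted_users` set (standing in for blockchain smart-contract grants) are hypothetical names, not the patent's actual data model.

```python
from dataclasses import dataclass, field
from typing import Set

@dataclass
class PointCloud:
    # "public", "public_on_private", or "private" (hypothetical labels)
    access: str
    owner_id: str = ""
    # Stand-in for permissions recorded in smart contracts on the blockchain.
    granted_users: Set[str] = field(default_factory=set)

def allowed_operations(cloud: PointCloud, user_id: str) -> Set[str]:
    """Decide what a client device may do with a point cloud from its metadata."""
    if cloud.access == "public":
        # Public point clouds support both localization and meshing.
        return {"localize", "mesh"}
    if cloud.access == "public_on_private":
        # Scanned from public ground onto private property:
        # localization is enabled, mesh construction is prohibited.
        return {"localize"}
    if cloud.access == "private":
        # Private clouds mesh only if the owner granted private permissions.
        if user_id == cloud.owner_id or user_id in cloud.granted_users:
            return {"localize", "mesh"}
        return {"localize"}
    return set()
```

The three-way branch mirrors the three metadata distinctions above; in the full system the `granted_users` lookup would instead query the owner's smart contracts 1102 on the blockchain ledger 1101.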
Claims (5)
1. A computer executable method of generating object persistence within an augmented reality system comprising:
scanning a local environment,
developing a mesh of local coordinates,
comparing local coordinates with global coordinates,
mapping the local environment if not previously mapped, and
saving the local coordinates to global coordinates as a voxel.
2. The method of claim 1, wherein the saving further comprises saving one or more of a user id of a user generating the map, an identification of a location owner, a tile the voxel is located in, and a session identification number.
3. The method of claim 2, wherein the system recognizes different weather conditions in the environment, the weather conditions comprising snow, rain, or flooding, or different lighting conditions, to build a database of different views at the local environment.
4. The method of claim 1, wherein recognition of a location of the local environment can be generated from simultaneous localization and mapping (SLAM), GPS location detection, Bluetooth Low Energy (BLE) beacons, or WIFI locating.
5. The method of claim 1, wherein the system recognizes a previously mapped location, downloads a corresponding mesh tile, and renders content based on permissions of the user.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/126,611 US20210183163A1 (en) | 2018-12-13 | 2020-12-18 | Augmented reality remote authoring and social media platform and system |
US18/468,824 US20240119688A1 (en) | 2018-12-13 | 2023-09-18 | Augmented reality remote authoring and social media platform and system |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201862779177P | 2018-12-13 | 2018-12-13 | |
US16/714,084 US10902685B2 (en) | 2018-12-13 | 2019-12-13 | Augmented reality remote authoring and social media platform and system |
US17/126,611 US20210183163A1 (en) | 2018-12-13 | 2020-12-18 | Augmented reality remote authoring and social media platform and system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/714,084 Division US10902685B2 (en) | 2018-12-13 | 2019-12-13 | Augmented reality remote authoring and social media platform and system |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/468,824 Continuation US20240119688A1 (en) | 2018-12-13 | 2023-09-18 | Augmented reality remote authoring and social media platform and system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210183163A1 true US20210183163A1 (en) | 2021-06-17 |
Family
ID=71072794
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/714,084 Active US10902685B2 (en) | 2018-12-13 | 2019-12-13 | Augmented reality remote authoring and social media platform and system |
US17/126,611 Abandoned US20210183163A1 (en) | 2018-12-13 | 2020-12-18 | Augmented reality remote authoring and social media platform and system |
US17/126,235 Active US11182979B2 (en) | 2018-12-13 | 2020-12-18 | Augmented reality remote authoring and social media platform and system |
US18/468,824 Pending US20240119688A1 (en) | 2018-12-13 | 2023-09-18 | Augmented reality remote authoring and social media platform and system |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/714,084 Active US10902685B2 (en) | 2018-12-13 | 2019-12-13 | Augmented reality remote authoring and social media platform and system |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/126,235 Active US11182979B2 (en) | 2018-12-13 | 2020-12-18 | Augmented reality remote authoring and social media platform and system |
US18/468,824 Pending US20240119688A1 (en) | 2018-12-13 | 2023-09-18 | Augmented reality remote authoring and social media platform and system |
Country Status (1)
Country | Link |
---|---|
US (4) | US10902685B2 (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109603155B (en) * | 2018-11-29 | 2019-12-27 | 网易(杭州)网络有限公司 | Method and device for acquiring merged map, storage medium, processor and terminal |
US11120526B1 (en) * | 2019-04-05 | 2021-09-14 | Snap Inc. | Deep feature generative adversarial neural networks |
US11188902B1 (en) * | 2020-05-20 | 2021-11-30 | Louise Dorothy Saulog Sano | Live time connection application method and devices |
CN112002019B (en) * | 2020-08-25 | 2023-04-11 | 成都威爱新经济技术研究院有限公司 | Method for simulating character shadow based on MR mixed reality |
US11544343B1 (en) * | 2020-10-16 | 2023-01-03 | Splunk Inc. | Codeless anchor generation for detectable features in an environment |
CN112270769B (en) * | 2020-11-11 | 2023-11-10 | 北京百度网讯科技有限公司 | Tour guide method and device, electronic equipment and storage medium |
EP4305508A1 (en) * | 2021-03-11 | 2024-01-17 | Telefonaktiebolaget LM Ericsson (publ) | Moving media in extended reality |
WO2023133542A1 (en) * | 2022-01-10 | 2023-07-13 | Bright Star Studios Aps | Methods and devices for supporting online video games utilizing a dedicated game server |
DE102022107027A1 (en) * | 2022-03-24 | 2023-09-28 | Eto Gruppe Technologies Gmbh | Location-based content management method for issuing digital content to a user and location-based content management system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140232750A1 (en) * | 2010-08-26 | 2014-08-21 | Amazon Technologies, Inc. | Visual overlay for augmenting reality |
US20160179830A1 (en) * | 2014-12-19 | 2016-06-23 | Qualcomm Incorporated | Scalable 3d mapping system |
US20180213359A1 (en) * | 2017-01-23 | 2018-07-26 | Magic Leap, Inc. | Localization determination for mixed reality systems |
US20200208994A1 (en) * | 2016-10-28 | 2020-07-02 | Zoox, Inc. | Verification and updating of map data |
Family Cites Families (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6426745B1 (en) * | 1997-04-28 | 2002-07-30 | Computer Associates Think, Inc. | Manipulating graphic objects in 3D scenes |
US7522186B2 (en) * | 2000-03-07 | 2009-04-21 | L-3 Communications Corporation | Method and apparatus for providing immersive surveillance |
US7246044B2 (en) * | 2000-09-13 | 2007-07-17 | Matsushita Electric Works, Ltd. | Method for aiding space design using network, system therefor, and server computer of the system |
BRPI0520196A2 (en) * | 2005-04-25 | 2009-04-22 | Yappa Corp | 3d image generation and display system |
US7844229B2 (en) * | 2007-09-21 | 2010-11-30 | Motorola Mobility, Inc | Mobile virtual and augmented reality system |
US9662583B2 (en) * | 2008-06-30 | 2017-05-30 | Sony Corporation | Portable type game device and method for controlling portable type game device |
KR101667033B1 (en) | 2010-01-04 | 2016-10-17 | 삼성전자 주식회사 | Augmented reality service apparatus using location based data and method the same |
KR101548834B1 (en) | 2010-09-20 | 2015-08-31 | 퀄컴 인코포레이티드 | An adaptable framework for cloud assisted augmented reality |
US20120233555A1 (en) | 2010-11-08 | 2012-09-13 | Eyelead Sa | Real-time multi-user collaborative editing in 3d authoring system |
US9122321B2 (en) | 2012-05-04 | 2015-09-01 | Microsoft Technology Licensing, Llc | Collaboration environment using see through displays |
US20130335405A1 (en) * | 2012-06-18 | 2013-12-19 | Michael J. Scavezze | Virtual object generation within a virtual environment |
US9495783B1 (en) | 2012-07-25 | 2016-11-15 | Sri International | Augmented reality vision system for tracking and geolocating objects of interest |
US8996551B2 (en) * | 2012-10-01 | 2015-03-31 | Longsand Limited | Managing geographic region information |
US10970934B2 (en) * | 2012-10-23 | 2021-04-06 | Roam Holdings, LLC | Integrated operating environment |
US10262462B2 (en) | 2014-04-18 | 2019-04-16 | Magic Leap, Inc. | Systems and methods for augmented and virtual reality |
US9501871B2 (en) | 2014-04-30 | 2016-11-22 | At&T Mobility Ii Llc | Explorable augmented reality displays |
DE102014210481A1 (en) * | 2014-06-03 | 2015-12-03 | Siemens Aktiengesellschaft | Information display on moving objects visible through windows |
US10852838B2 (en) * | 2014-06-14 | 2020-12-01 | Magic Leap, Inc. | Methods and systems for creating virtual and augmented reality |
WO2016077506A1 (en) | 2014-11-11 | 2016-05-19 | Bent Image Lab, Llc | Accurate positioning of augmented reality content |
US20170243403A1 (en) | 2014-11-11 | 2017-08-24 | Bent Image Lab, Llc | Real-time shared augmented reality experience |
US20160133230A1 (en) | 2014-11-11 | 2016-05-12 | Bent Image Lab, Llc | Real-time shared augmented reality experience |
US9865091B2 (en) | 2015-09-02 | 2018-01-09 | Microsoft Technology Licensing, Llc | Localizing devices in augmented reality environment |
US10627625B2 (en) * | 2016-08-11 | 2020-04-21 | Magic Leap, Inc. | Automatic placement of a virtual object in a three-dimensional space |
US10373342B1 (en) * | 2017-01-10 | 2019-08-06 | Lucasfilm Entertainment Company Ltd. | Content generation in an immersive environment |
US20180308379A1 (en) * | 2017-04-21 | 2018-10-25 | Accenture Global Solutions Limited | Digital double platform |
JP6368404B1 (en) * | 2017-07-04 | 2018-08-01 | Colopl, Inc. | Information processing method, program, and computer |
US20190088025A1 (en) * | 2017-09-15 | 2019-03-21 | DroneBase, Inc. | System and method for authoring and viewing augmented reality content with a drone |
US20190088086A1 (en) * | 2017-09-19 | 2019-03-21 | Bally Gaming, Inc. | Location-aware player loyalty system |
US20190130655A1 (en) | 2017-10-30 | 2019-05-02 | Rovi Guides, Inc. | Systems and methods for presentation of augmented reality supplemental content in combination with presentation of media content |
US20190188918A1 (en) * | 2017-12-14 | 2019-06-20 | Tsunami VR, Inc. | Systems and methods for user selection of virtual content for presentation to another user |
US10250948B1 (en) | 2018-01-05 | 2019-04-02 | Aron Surefire, Llc | Social media with optical narrowcasting |
DK201870347A1 (en) * | 2018-01-24 | 2019-10-08 | Apple Inc. | Devices, Methods, and Graphical User Interfaces for System-Wide Behavior for 3D Models |
US10740924B2 (en) * | 2018-04-16 | 2020-08-11 | Microsoft Technology Licensing, Llc | Tracking pose of handheld object |
US10535199B1 (en) * | 2018-06-18 | 2020-01-14 | Facebook Technologies, Llc | Systems and methods for determining a safety boundary for a mobile artificial reality user |
US11103773B2 (en) * | 2018-07-27 | 2021-08-31 | Yogesh Rathod | Displaying virtual objects based on recognition of real world object and identification of real world object associated location or geofence |
US10867061B2 (en) * | 2018-09-28 | 2020-12-15 | Todd R. Collart | System for authorizing rendering of objects in three-dimensional spaces |
KR20190104945A (en) * | 2019-08-23 | 2019-09-11 | LG Electronics Inc. | XR device and method for controlling the same |
- 2019
  - 2019-12-13 US US16/714,084 patent/US10902685B2/en active Active
- 2020
  - 2020-12-18 US US17/126,611 patent/US20210183163A1/en not_active Abandoned
  - 2020-12-18 US US17/126,235 patent/US11182979B2/en active Active
- 2023
  - 2023-09-18 US US18/468,824 patent/US20240119688A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140232750A1 (en) * | 2010-08-26 | 2014-08-21 | Amazon Technologies, Inc. | Visual overlay for augmenting reality |
US20160179830A1 (en) * | 2014-12-19 | 2016-06-23 | Qualcomm Incorporated | Scalable 3d mapping system |
US20200208994A1 (en) * | 2016-10-28 | 2020-07-02 | Zoox, Inc. | Verification and updating of map data |
US20180213359A1 (en) * | 2017-01-23 | 2018-07-26 | Magic Leap, Inc. | Localization determination for mixed reality systems |
Non-Patent Citations (1)
Title |
---|
Gabriel Takacs, Vijay Chandrasekhar, Natasha Gelfand, Yingen Xiong, Wei-Chao Chen, Thanos Bismpigiannis, Radek Grzeszczuk, Kari Pulli, Bernd Girod, "Outdoors Augmented Reality on Mobile Phone Using Loxel-Based Visual Feature Organization", October 31, 2008, ACM, MIR '08 Proceedings, pp. 427-434 * |
Also Published As
Publication number | Publication date |
---|---|
US20200193717A1 (en) | 2020-06-18 |
US10902685B2 (en) | 2021-01-26 |
US20210183162A1 (en) | 2021-06-17 |
US20240119688A1 (en) | 2024-04-11 |
US11182979B2 (en) | 2021-11-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11182979B2 (en) | Augmented reality remote authoring and social media platform and system | |
US11368557B2 (en) | Platform for constructing and consuming realm and object feature clouds | |
US11663785B2 (en) | Augmented and virtual reality | |
US10665028B2 (en) | Mobile persistent augmented-reality experiences | |
US9041734B2 (en) | Simulating three-dimensional features | |
AU2011331972B2 (en) | Rendering and navigating photographic panoramas with depth information in a geographic information system | |
AU2015332046B2 (en) | Street-level guidance via route path | |
CN109887003A (en) | Method and apparatus for initializing three-dimensional tracking |
US10878599B2 (en) | Soft-occlusion for computer graphics rendering | |
US11290705B2 (en) | Rendering augmented reality with occlusion | |
CN112105892A (en) | Identifying map features using motion data and bin data | |
Höllerer et al. | “Anywhere augmentation”: Towards mobile augmented reality in unprepared environments | |
CA3069813C (en) | Capturing, connecting and using building interior data from mobile devices | |
US9007374B1 (en) | Selection and thematic highlighting using terrain textures | |
KR102204721B1 (en) | Method and user terminal for providing AR(Augmented Reality) documentary service | |
US20230316659A1 (en) | Traveling in time and space continuum | |
US20230277943A1 (en) | Mapping traversable space in a scene using a three-dimensional mesh | |
KR20230166760A (en) | Method for generating metaverse space of hyper-personalized design and a metaverse system for performing the same | |
Scheiblauer et al. | Graph-based Guidance in Huge Point Clouds |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |