EP3959692A1 - System and method for creating persistent mappings in augmented reality - Google Patents

System and method for creating persistent mappings in augmented reality

Info

Publication number
EP3959692A1
Authority
EP
European Patent Office
Prior art keywords
digital representation
computing device
visual
map
environment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP19832782.7A
Other languages
German (de)
French (fr)
Inventor
Elena NATTINGER
Seth Raphael
Austin MCCASLAND
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US16/395,832 (US11055919B2)
Priority claimed from US16/396,145 (US11151792B2)
Application filed by Google LLC
Publication of EP3959692A1
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24133 Distances to prototypes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Definitions

  • This description generally relates to the creation of persistent mappings in augmented reality.
  • an AR server may receive digital information about a first user’s environment, and a three-dimensional (3D) mapping that represents an AR environment is created.
  • the 3D mapping may provide a coordinate space in which visual information and AR objects are positioned.
  • the 3D mapping may be compared against digital information about the second user’s environment.
  • one or more physical objects in the physical space may have moved at the time of the second user’s attempt to localize the AR environment. Therefore, despite the second user being in the same physical space, the comparison may fail because of the visual differences between the 3D mapping and the digital information received from the second user’s device.
  • a system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions.
  • One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
  • a method for creating a three-dimensional map for augmented reality (AR) localization includes obtaining a digital representation of a scene of an AR environment, where the digital representation has been captured by a computing device.
  • the method includes identifying, using a machine learning (ML) model, a region of the digital representation having visual data identified as likely to change (e.g., move from the scene, disappear from the scene, or otherwise cause a change to the scene over time), and removing a portion of the digital representation that corresponds to the region of the digital representation to obtain a reduced digital representation, where the reduced digital representation is used to generate a three-dimensional (3D) map for the AR environment.
  • a corresponding AR system and a non-transitory computer-readable medium storing corresponding instructions may be provided.
  • the method may include any of the following features (or any combination thereof).
  • the method may include generating the 3D map based on the reduced digital representation, where the 3D map does not include the portion of the digital representation that corresponds to the region with the visual data identified as likely to change.
  • the identifying may include detecting, using the ML model, a visual object in the digital representation that is likely to change, where the region of the digital representation is identified based on the detected visual object.
  • the identifying may include detecting, using the ML model, a visual object in the digital representation, classifying the visual object into a classification, and identifying the visual object as likely to change based on a tag associated with the classification, where the tag indicates that objects belonging to the classification are likely to change, and the region of the digital representation is identified as a three-dimensional space that includes the object identified as likely to change.
  • the identifying may include identifying, using the ML model, a pattern of visual points in the digital representation that are likely to change, where the pattern of visual points are excluded from the 3D map.
  • the digital representation includes a set of visual feature points derived from the computing device, and the method includes detecting a visual object that is likely to change based on the digital representation, identifying a region of space that includes the visual object, and removing one or more visual feature points from the set that are included within the region.
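  • By way of illustration only, the removal described above could be sketched as follows; the tuple-based point format and the axis-aligned box regions are assumptions made for the example rather than features taken from the description.

```python
from typing import List, Tuple

Point3D = Tuple[float, float, float]        # (x, y, z) visual feature point
Region3D = Tuple[Point3D, Point3D]          # axis-aligned box: (min corner, max corner)

def reduce_representation(points: List[Point3D],
                          movable_regions: List[Region3D]) -> List[Point3D]:
    """Drop every visual feature point that falls inside a region identified
    as likely to change, keeping the remaining points for the 3D map."""
    def inside(p: Point3D, region: Region3D) -> bool:
        (x0, y0, z0), (x1, y1, z1) = region
        return x0 <= p[0] <= x1 and y0 <= p[1] <= y1 and z0 <= p[2] <= z1

    return [p for p in points if not any(inside(p, r) for r in movable_regions)]
```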
  • the digital representation is a first digital representation
  • the computing device is a first computing device
  • the method includes obtaining a second digital representation of at least a portion of the scene of the AR environment, where the second digital representation has been captured by a second computing device, and comparing the second digital representation with the 3D map to determine whether the second digital representation is from the same AR environment as the 3D map.
  • the method may include obtaining a second digital representation of at least a portion of the scene of the AR environment, identifying, using the ML model, a secondary region of the second digital representation, where the secondary region has visual data identified as likely to change, removing a portion of the second digital representation that corresponds to the secondary region, and comparing the second digital representation with the 3D map to determine whether the second digital representation is from the same AR environment as the 3D map.
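  • A minimal sketch of the comparison step described above, assuming the digital representation and the 3D map are both point sets and that localization is decided by the fraction of query points lying near a map point; the distance tolerance and match threshold are illustrative values only.

```python
import math
from typing import List, Tuple

Point3D = Tuple[float, float, float]

def fraction_matched(query: List[Point3D], map_points: List[Point3D],
                     tol: float = 0.05) -> float:
    """Fraction of query points lying within `tol` meters of some map point."""
    def near(p: Point3D) -> bool:
        return any(math.dist(p, q) <= tol for q in map_points)
    return sum(near(p) for p in query) / max(len(query), 1)

def is_same_environment(query: List[Point3D], map_points: List[Point3D],
                        match_threshold: float = 0.6) -> bool:
    """Treat the second digital representation as coming from the same AR
    environment as the 3D map when enough of its points match."""
    return fraction_matched(query, map_points) >= match_threshold
```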
  • an augmented reality (AR) system configured to generate a three-dimensional (3D) map for an AR environment includes an AR collaborative service executable by at least one server, and a client AR application executable by a computing device, where the client AR application configured to communicate with the AR collaborative service via one or more application programming interfaces (APIs), and the AR collaborative service or the client AR application configured to obtain a digital representation of a scene of an AR environment, where the digital representation has been captured by the computing device, identify, using a machine learning (ML) model, a region of the digital representation having visual data that is identified as likely to change, and remove a portion of the digital representation that corresponds to the region to obtain a reduced digital representation of the scene, where the reduced digital representation is used for comparison with a three-dimensional (3D) map of the AR environment.
  • a corresponding method and a non-transitory computer-readable medium storing corresponding instructions may be provided.
  • the AR system may include any of the above/below features (or any combination thereof).
  • the AR collaborative service is configured to compare the reduced digital representation with the 3D map in response to an attempt to localize the AR environment on the computing device.
  • the client AR application or the AR collaborative service is configured to detect, using the ML model, an object in the digital representation that is likely to move, where the region of the digital representation is identified based on the detected object.
  • the AR collaborative service is configured to detect, using the ML model, an object in the digital representation, classify the object into a classification, and identify the object as likely to move based on a tag associated with the classification, where the tag indicates that objects belonging to the classification are likely to move.
  • the client AR application is configured to identify, using the ML model, a pattern of visual points in the digital representation that are likely to move.
  • the digital representation includes a set of visual feature points captured by the computing device, and the client AR application or the AR collaborative service is configured to detect an object that is likely to move based on the digital representation, identify a region of space that includes the object, and remove one or more visual feature points from the set that are included within the region.
  • a non-transitory computer-readable medium storing executable instructions that when executed by at least one processor are configured to generate a three-dimensional (3D) map for an augmented reality (AR) environment, where the executable instructions includes instructions that cause the at least one processor to obtain a first digital representation of a scene of an AR environment, where the first digital representation has been captured by a first computing device, identify, using a machine learning (ML) model, a region of the first digital representation having visual data that is identified as likely to change, remove a portion of the first digital representation that corresponds to the region to obtain a reduced digital representation, and generate a three-dimensional (3D) map for the AR environment for storage on an AR server, and, optionally, compare a second digital representation of at least a portion of the scene with the 3D map in response to an attempt to localize the AR environment on a second computing device, where the second digital representation has been captured by the second computing device.
  • a corresponding AR system and a corresponding method may be provided.
  • the non-transitory computer-readable medium may include any of the above/below features (or any combination thereof).
  • the operations may include detect, using the ML model, an object in the first digital representation that is likely to move.
  • the operations may include detect, using the ML model, an object in the first digital representation, classify the object into a classification, and identify the object as likely to move based on a tag associated with the classification, where the tag indicates that objects belonging to the classification are likely to move, and the region of the first digital representation is identified as a three-dimensional space that includes the object identified as likely to move.
  • the operations may include identify, using the ML model, a pattern of points in the first digital representation that are likely to move, where the pattern of points are excluded from the 3D map.
  • the digital representation includes a set of visual feature points captured by the first computing device, and the operations may include detect an object that is likely to move based on the first digital representation, identify a region of space that includes the object, and remove one or more visual feature points from the set that are included within the region.
  • the operations may include detect, using the ML model, an object in the second digital representation that is likely to move.
  • FIG. 1A illustrates an AR system for creating a 3D map according to an aspect.
  • FIG. 1B illustrates a movement analyzer of the AR system for detecting moving data according to an aspect.
  • FIG. 2 illustrates an AR system with the movement analyzer integrated on a client AR application according to an aspect.
  • FIG. 3 illustrates an AR system with the movement analyzer integrated at an AR server according to an aspect.
  • FIG. 4 illustrates an AR system for generating a 3D mapping without movable data according to an aspect.
  • FIG. 5 illustrates an example of a computing device of an AR system according to an aspect.
  • FIGS. 6A through 6C illustrate graphical depictions of visual feature points on a scene of an AR environment and the removal of one or more of the points for a region having moving data according to an aspect.
  • FIG. 7 illustrates a flowchart depicting example operations of an AR system according to an aspect.
  • FIG. 8 illustrates example computing devices of the AR system according to an aspect.
  • the embodiments provide an AR system configured to create a 3D map for an AR environment without one or more visual objects (or one or more sets of patterned visual points) that are identified as likely to change (e.g., move from the scene, disappear from the scene, or cause a change to the scene).
  • the 3D map includes the objects that are identified as likely to change, but the objects that are identified as likely to change are annotated in the AR system.
  • the annotation of movable objects may indicate that these objects are not used in AR localization comparison operations.
  • the AR system detects or identifies data that is likely to move from the digital information about a first user’s environment captured by the first user’s computing device, and then removes or annotates that data before updating or generating the 3D map.
  • the AR system may remove or ignore movable data from the digital information captured by the second user’s computing device when comparing the second user’s digital information to the 3D map.
  • the AR environment uses machine learning models to semantically understand the type of physical object in the scene of the AR environment, and detects whether that object is likely to move. If the object is determined as likely to move, that portion of the digital information is not used to create/update the 3D map or not used in the comparison with the 3D map for AR localization. As a result, the quality of persistent world-space mapping of AR systems may be increased. In addition, the accuracy of the comparison for AR localization may be improved since relatively stationary objects are used as opposed to objects that are likely to move.
  • FIGS. 1A and 1B illustrate an augmented reality (AR) system 100 configured to store and share digital content in an AR environment 101 according to an aspect.
  • the AR system 100 is configured to create a three-dimensional (3D) map 113 of the AR environment 101 without one or more visual objects (or one or more sets of patterned visual points) that are likely to change (e.g., move from the scene, disappear from the scene, or cause a change to the scene over time), thereby increasing the quality of persistent world-space mapping of the AR system 100.
  • the 3D map 113 includes the objects that are identified as likely to move, but the objects that are identified as likely to move are annotated in the AR system 100.
  • the 3D map 113 includes a coordinate space in which visual information from the physical space and AR content 130 are positioned. In some examples, the visual information and AR content 130 positions are updated in the 3D map 113 from image frame to image frame. In some examples, the 3D map 113 includes a sparse point map. The 3D map 113 is used to share the AR environment 101 with one or more users that join the AR environment 101 and to calculate where each user’s computing device is located in relation to the physical space of the AR environment 101 such that multiple users can view and interact with the AR environment 101.
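  • A minimal data-structure sketch of such a sparse point map, assuming feature points stored as coordinate tuples and AR content 130 anchored by an identifier; the field and method names are illustrative only.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class SparsePointMap:
    """Sparse 3D map: feature points in a shared coordinate space, plus AR
    content positioned in that same space."""
    points: List[Point3D] = field(default_factory=list)
    ar_content: Dict[str, Point3D] = field(default_factory=dict)   # content id -> position

    def add_points(self, new_points: List[Point3D]) -> None:
        # Update the map as new visual information arrives from frame to frame.
        self.points.extend(new_points)

    def attach_content(self, content_id: str, position: Point3D) -> None:
        # Position a piece of AR content within the map's coordinate space.
        self.ar_content[content_id] = position
```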
  • the AR system 100 includes an AR collaborative service 104, executable by one or more AR servers 102, configured to create a multi-user or collaborative AR experience that users can share.
  • the AR collaborative service 104 communicates, over a network 150, with a plurality of computing devices including a first computing device 106 and a second computing device 108, where a user of the first computing device 106 and a user of the second computing device 108 may share the same AR environment 101.
  • Each of the first computing device 106 and the second computing device 108 is configured to execute a client AR application 110.
  • the client AR application 110 is a software development kit (SDK) that operates in conjunction with one or more AR applications.
  • the client AR application 110 in combination with one or more sensors on the first computing device 106 or the second computing device 108, is configured to detect and track a device’s position relative to the physical space, detect the size and location of different types of surfaces (e.g., horizontal, vertical, angled), and estimate the environment’s current lighting conditions.
  • the client AR application 110 is configured to communicate with the AR collaborative service 104 via one or more application programming interfaces (APIs). Although two computing devices are illustrated in FIG. 1A, the AR collaborative service 104 may communicate and share the AR environment 101 with any number of computing devices.
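  • For illustration, the API boundary between the client AR application 110 and the AR collaborative service 104 might resemble the abstract interface below; the method names host_map and resolve are hypothetical and are not taken from the description.

```python
from abc import ABC, abstractmethod
from typing import List, Tuple

Point3D = Tuple[float, float, float]

class ARCollaborativeServiceAPI(ABC):
    """Hypothetical client-facing surface of the AR collaborative service."""

    @abstractmethod
    def host_map(self, digital_representation: List[Point3D]) -> str:
        """Upload a (reduced) digital representation; returns a map identifier."""

    @abstractmethod
    def resolve(self, map_id: str, digital_representation: List[Point3D]) -> bool:
        """Attempt to localize against a stored 3D map; True on a match."""
```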
  • the first computing device 106 may be, for example, a computing device such as a controller, or a mobile device (e.g., a smartphone, a tablet, a joystick, or other portable controller(s)).
  • the first computing device 106 includes a wearable device (e.g., a head mounted device) that is paired with, or communicates with a mobile device for interaction in the AR environment 101.
  • the AR environment 101 is a representation of an environment that may be generated by the first computing device 106 (and/or other virtual and/or augmented reality hardware and software). In this example, the user is viewing the AR environment 101 with the first computing device 106. Since the details and use of the second computing device 108 may be the same with respect to the first computing device 106, the details of the second computing device 108 are omitted for the sake of brevity.
  • the AR environment 101 may involve a physical space which is within the view of a user and a virtual space within which AR content 130 is positioned.
  • the AR content 130 is a text description (“My Chair”) along with an arrow that points to a chair 131, where the chair 131 is a physical object in the physical space.
  • Providing (or rendering) the AR environment 101 may then involve altering the user’s view of the physical space by displaying the AR content 130 such that it appears to the user to be present in, or overlayed onto or into, the physical space in the view of the user. This displaying of the AR content 130 is therefore according to a mapping (e.g. the 3D map 113) between the virtual space and the physical space.
  • Overlaying of the AR content 130 may be implemented, for example, by superimposing the AR content 130 into an optical field of view of a user of the physical space, by reproducing a view of the user of the physical space on one or more display screens, and/or in other ways, for example by using heads up displays, mobile device display screens and so forth.
  • the first computing device 106 may send a digital representation 114 of a scene 125 of the AR environment 101 to the AR collaboration service 104.
  • the AR collaboration service 104 may create the 3D map 113 based on the digital representation 114 from the first computing device 106, and the 3D map 113 is stored at the AR server 102.
  • a user of the second computing device 108 may wish to join the AR environment 101 (e.g., at a time where the user of the first computing device 106 is within the AR environment 101 or at a subsequent time when the user of the first computing device 106 has left the session).
  • the second computing device 108 may send a digital representation 114 of at least a portion of the scene 125 of the AR environment 101.
  • the AR collaboration service 104 may compare the digital representation 114 from the second computing device 108 to the 3D map 113. If the comparison results in a match (or substantially matches), the AR environment 101 is localized on the second computing device 108.
  • the accuracy of the matching may be dependent upon whether the saved area (e.g., the 3D map 113) includes objects or points that are likely to move (e.g., relative to other objects in the scene).
  • Due to certain environment conditions (e.g., changes in lighting, movement of objects such as furniture, etc.), the comparison may not result in a match, and the AR environment 101 may not be able to be localized on the second computing device 108.
  • the AR environment 101 includes a chair 131, and the 3D map 113 provides a 3D mapping of the AR environment 101 that includes the chair 131.
  • the chair 131 may be moved outside of the office depicted in the scene 125 of the AR environment 101.
  • the digital representation 114 sent to the AR collaboration service 104 from the second computing device 108 may not have visual features corresponding to the chair 131.
  • the comparison of visual features may fail on account of the differences in visual features between when the scene 125 was initially stored and the later attempt to localize the AR environment 101.
  • the AR system 100 includes a movement analyzer 112 configured to detect objects or a set of patterned points that are likely to move from image data captured by the first computing device 106 or the second computing device 108, and then to remove or annotate those objects or points when creating the 3D map 113, or to ignore those objects or points when attempting to match to the 3D map 113 for AR localization of the AR environment 101 on the first computing device 106 or the second computing device 108.
  • the operations of the movement analyzer 112 are performed by the client AR application 110.
  • the operations of the movement analyzer 112 are performed by the AR collaboration service 104.
  • one or more operations of the movement analyzer 112 are performed by the client AR application 110 and one or more operations of the movement analyzer 112 are performed by the AR collaboration service 104.
  • the movement analyzer 112 is configured to obtain a digital representation 114 of the scene 125 of the AR environment 101.
  • a user may use one or more sensors on the first computing device 106 to capture the scene 125 from the physical space of the AR environment 101.
  • the digital representation 114 includes a 3D representation of the scene 125 of the AR environment 101.
  • the digital representation 114 includes visual features with depth information.
  • the digital representation 114 includes image data of one or more frames captured by the first computing device 106.
  • the digital representation 114 includes a set of visual feature points with depth in space.
  • the movement analyzer 112 includes a movement detector 116 configured to identify, using one or more machine learning (ML) models 115, a region 118 having movable data 120 based on an analysis of the digital representation 114, which may be 2D image data or 3D image data with depth information.
  • the movable data 120 includes data that is likely to cause a change to the scene (e.g., anything that causes a “change” such as ice melting, a shadow or light moving).
  • the movable data 120 may be one or more objects 121 or a patterned set of visual points 123 that are identified as likely to move, and the region 118 may be space that includes the movable data 120.
  • the region 118 is a 3D space that includes the objects 121 or the patterned set of visual points 123. In some examples, the region 118 is the area (e.g., 3D space) identified by one or more coordinates and/or dimensions of the region 118 in the AR environment 101 that encompass the objects 121 or the patterned set of visual points 123. In some examples, the region 118 is a bounding box that includes the objects 121 or the patterned set of visual points 123.
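  • A minimal sketch of one possible representation of such a region 118, assuming an axis-aligned bounding box described by a center point and dimensions; other representations (e.g., oriented boxes) would work as well.

```python
from dataclasses import dataclass
from typing import Tuple

Point3D = Tuple[float, float, float]

@dataclass
class Region:
    """Axis-aligned 3D bounding box around movable data (one assumed way of
    expressing the coordinates and dimensions of the region)."""
    center: Point3D
    dimensions: Point3D   # width, height, depth

    def contains(self, p: Point3D) -> bool:
        # A point is inside when it is within half the extent along every axis.
        return all(abs(p[i] - self.center[i]) <= self.dimensions[i] / 2
                   for i in range(3))
```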
  • the ML models 115 include one or more trained classifiers configured to detect a classification of an object 121 in the scene 125 based on the digital representation 114.
  • the one or more trained classifiers may detect an object 121 in the scene 125 and classify the object 121 into one of a plurality of classifications.
  • the classifications may include different characterizations of objects such as chairs, laptops, desks, etc. Some of the classifications may be associated with a tag indicating that objects belonging to a corresponding classification are likely to move.
  • a classification being tagged as likely to be moved may be programmatically determined by one or more of the ML models 115.
  • the trained classifiers may indicate that objects of a particular classification move out of the scene 125 (or a different location in the scene 125) over a threshold amount, and this particular classification may be programmatically tagged as likely to be moved.
  • a classification being tagged as likely to be moved may be determined by a human programmer (e.g., it is known that objects such as pens, laptops, chairs, etc. are likely to move, and may be manually tagged as likely to move without using ML algorithms). As shown in FIG. 1A, the scene 125 includes the chair 131.
  • the movement detector 116 may detect the object representing the chair 131 and classify the chair 131 as a chair classification, and the chair classification may be tagged as likely to be moved. In some examples, the detection of the chair 131 as the chair classification is associated with a confidence level, and if the confidence level is above a threshold amount, the movement detector 116 is configured to detect the chair 131 as the chair classification. The movement detector 116 may then identify the region 118 that encompasses the chair 131.
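  • A minimal sketch of this tag-and-threshold logic, assuming a hand-written tag table and a confidence threshold of 0.7; both the table entries and the threshold are illustrative assumptions rather than values from the description.

```python
from typing import Dict

# Assumed tag table: which classifications are tagged as likely to move.
LIKELY_TO_MOVE: Dict[str, bool] = {
    "chair": True,
    "laptop": True,
    "pen": True,
    "desk": False,
}

CONFIDENCE_THRESHOLD = 0.7   # assumed value for "a threshold amount"

def is_movable(classification: str, confidence: float) -> bool:
    """Accept a detection only above the confidence threshold, then consult
    the tag associated with its classification."""
    if confidence < CONFIDENCE_THRESHOLD:
        return False
    return LIKELY_TO_MOVE.get(classification, False)
```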
  • the movement detector 116 determines a classification for a detected object 121 using a 2D or 3D image signal and one or more other signals such as information associated with the AR content 130.
  • the AR content 130 may include descriptive information that can assist in the semantic understanding of the object 121.
  • the digital representation 114 may be a set of visual feature points with depth information in space, and one or more of the set of visual feature points may be associated with the AR content 130.
  • the chair 131 is associated with the AR content 130 (e.g., “My chair”).
  • the movement detector 116 may be configured to analyze any AR content 130 associated with the objects 121 of the scene 125 and increase or decrease the confidence level associated with the classification. In this example, since the AR content 130 includes the word “Chair,” the movement detector 116 may increase the confidence level that the chair 131 is the chair classification.
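  • For illustration, the confidence adjustment based on AR content 130 could be sketched as follows; the fixed boost amount is an assumption.

```python
def adjust_confidence(confidence: float, classification: str,
                      ar_content_text: str, boost: float = 0.1) -> float:
    """Raise the classification confidence when the attached AR content
    mentions the class (e.g., "My Chair" supporting the chair classification)."""
    if classification.lower() in ar_content_text.lower():
        confidence = min(1.0, confidence + boost)
    return confidence
```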
  • the movement detector 116 is configured to identify, using the ML models 115, a patterned set of visual points 123 as likely to move.
  • the movement detector 116 may not necessarily detect the particular type of object, but rather the movement detector 116 may detect a pattern of visual points that have one or more characteristics that the ML models 115 determine are likely to move.
  • the ML model(s) may allow particularly precise classification and identification.
  • the ML models 115 include a neural network.
  • the neural network may be an interconnected group of nodes, each node representing an artificial neuron.
  • the nodes are connected to each other in layers, with the output of one layer becoming the input of a next layer.
  • A neural network receives an input at the input layer, transforms it through a series of hidden layers, and produces an output via the output layer.
  • Each layer is made up of a subset of the set of nodes.
  • the nodes in hidden layers are fully connected to all nodes in the previous layer and provide their output to all nodes in the next layer.
  • the nodes in a single layer function independently of each other (i.e., do not share connections). Nodes in the output layer provide the transformed input to the requesting process.
  • the movement analyzer 112 uses a convolutional neural network in the object classification algorithm, which is a neural network that is not fully connected. Convolutional neural networks therefore have less complexity than fully connected neural networks. Convolutional neural networks can also make use of pooling or max-pooling to reduce the dimensionality (and hence complexity) of the data that flows through the neural network and thus this can reduce the level of computation required. This makes computation of the output in a convolutional neural network faster than in fully connected neural networks.
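  • A minimal convolutional classifier of the kind described above, sketched here with PyTorch; the layer sizes, input resolution, and class count are illustrative assumptions.

```python
import torch
from torch import nn

class ObjectClassifier(nn.Module):
    """Small convolutional network that maps an image to classification scores."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),    # pooling reduces dimensionality and computation
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, start_dim=1))
```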
  • the movement analyzer 112 includes a digital representation reducer 122 configured to generate a reduced (or annotated) digital representation 124 from the digital representation 114.
  • the reduced (or annotated) digital representation 124 excludes the objects 121 or the patterned set of visual points 123 that are identified as likely to move or annotates them as likely to move.
  • the digital representation 114 is a set of visual feature points with depth information in space, and the digital representation reducer 122 may remove or annotate one or more visual feature points that are contained in the region 118 such that the objects 121 or the patterned set of visual points 123 are not included or annotated in the reduced (or annotated) digital representation 124.
  • FIG. 2 illustrates an AR system 200 for creating a 3D map 213 without one or more objects that are likely to move, thereby increasing the quality of persistent world-space mapping of the AR system 200.
  • the increased mapping quality can enable certain technical applications or features, e.g., precise indoor positioning and guidance that reliably avoid collisions.
  • the 3D map 213 includes the objects that are identified as likely to move, but the objects that are identified as likely to move are annotated in the AR system 200.
  • the AR system 200 of FIG. 2 may include any of the features of the AR system 100 of FIGS. 1A and 1B.
  • the AR system 200 includes an AR collaborative service 204, executable by one or more AR servers 202, configured to communicate, over a network 250, with a plurality of computing devices including a first computing device 206 and a second computing device 208, where a user of the first computing device 206 and a user of the second computing device 208 may share the same AR environment (e.g., the AR environment 101 of FIG. 1).
  • Each of the first computing device 206 and the second computing device 208 is configured to execute a client AR application 210.
  • the client AR application 210 is configured to communicate with the AR collaborative service 204 via one or more application programming interfaces (APIs)
  • the AR system 200 includes a movement analyzer 212.
  • the movement analyzer 212 may include any of the features discussed with reference to the movement analyzer 112 of FIGS. 1A and 1B.
  • the client AR application 210 of the first computing device 206 obtains a first digital representation (e.g., the digital representation 114 of FIG. 1B) of the scene (e.g., the scene 125), and then processes the first digital representation (using the operations of the movement analyzer 212) to obtain a first reduced (or annotated) digital representation 224-1.
  • the client AR application 210 sends the reduced (or annotated) digital representation 224-1, over the network 250, to the AR collaborative service 204.
  • the AR collaborative service 204 generates the 3D map 213 using the first reduced (or annotated) digital representation 224-1.
  • the AR collaborative service 204 includes a map generator 226 configured to generate the 3D map 213 using the first reduced (or annotated) digital representation 224-1.
  • the map generator 226 stores the 3D map 213 in a database 228 at the AR server 202.
  • the client AR application 210 of the second computing device 208 obtains a second digital representation (e.g., the digital representation 114) of the scene (e.g., the scene 125), and then processes the second digital representation (using the operations of the movement analyzer 212) to obtain a second reduced (or annotated) digital representation 224-2.
  • the client AR application 210 of the second computing device 208 sends the second reduced (or annotated) digital representation 224-2, over the network 250, to the AR collaborative service 204.
  • the AR collaborative service 204 includes a localization resolver 230 configured to compare the second reduced (or annotated) digital representation 224-2 to the 3D map 213 when attempting to localize the AR environment on the second computing device 208. In response to the comparison resulting in a match (e.g., indicating that the 3D map 213 and the second reduced (or annotated) digital representation 224-2 are from the same AR environment), the AR collaborative service 204 provides the AR environment to the client AR application 210 of the second computing device 208. Also, the map generator 226 may update the 3D map 213 using the second reduced (or annotated) digital representation 224-2.
  • In response to the comparison not resulting in a match, the AR environment is not shared with the user of the second computing device 208.
  • Since the 3D map 213 does not include movable data or the movable data is annotated in the 3D map 213 (and the second reduced digital representation 224-2 does not include movable data or the movable data is annotated in the second reduced digital representation 224-2), the accuracy of the comparison may be improved.
  • FIG. 3 illustrates an AR system 300 for creating a 3D map 313 without one or more objects that are likely to move, thereby increasing the quality of persistent world-space mapping of the AR system 300.
  • the 3D map 313 includes the objects that are identified as likely to move, but the objects that are identified as likely to move are annotated in the AR system 300.
  • the AR system 300 of FIG. 3 may include any of the features of the AR system 100 of FIGS. 1A and 1B.
  • the AR system 300 includes an AR collaborative service 304, executable by one or more AR servers 302, configured to communicate, over a network 350, with a plurality of computing devices including a first computing device 306 and a second computing device 308, where a user of the first computing device 306 and a user of the second computing device 308 may share the same AR environment (e.g., the AR environment 101 of FIG. 1).
  • Each of the first computing device 306 and the second computing device 308 is configured to execute a client AR application 310.
  • the client AR application 310 is configured to communicate with the AR collaborative service 304 via one or more application programming interfaces (APIs).
  • the AR system 300 includes a movement analyzer 312.
  • the movement analyzer 312 may include any of the features discussed with reference to the movement analyzer 112 of FIGS. 1A and 1B.
  • the client AR application 310 of the first computing device 306 obtains a first digital representation 314-1 (e.g., the digital representation 114 of FIG. 1B) of the scene (e.g., the scene 125) and sends the first digital representation 314-1, over the network 350, to the AR collaborative service 304.
  • the movement analyzer 312 is configured to process the first digital representation 314-1 (using the movement analyzer 312) to obtain a first reduced (or annotated) digital representation 324-1 that does not include movable data, or the movable data is annotated in the digital representation 324-1.
  • the AR collaborative service 304 generates the 3D map 313 using the first reduced (or annotated) digital representation 324-1.
  • the AR collaborative service 304 includes a map generator 326 configured to generate the 3D map 313 using the first reduced digital representation 324-1.
  • the map generator 326 stores the 3D map 313 at the AR server 302.
  • the client AR application 310 of the second computing device 308 obtains a second digital representation 314-2 (e.g., the digital representation 114) of the scene (e.g., the scene 125), and sends the second digital representation 314-2, over the network 350, to the AR collaborative service 304.
  • the movement analyzer 312 processes the second digital representation 314-2 to obtain a second reduced (or annotated) digital representation 324-2, which does not include movable data or the movable data is annotated in the digital representation 324-2.
  • the AR collaborative service 304 includes a localization resolver 330 configured to compare the second reduced (or annotated) digital representation 324-2 to the 3D map 313 when attempting to localize the AR environment on the second computing device 308. In response to the comparison resulting in a match (e.g., indicating that the 3D map 313 and the second reduced digital representation 324-2 are from the same AR environment), the AR collaborative service 304 provides the AR environment to the client AR application 310 of the second computing device 308. Also, the map generator 326 may update the 3D map 313 using the second reduced (or annotated) digital representation 324-2.
  • In response to the comparison not resulting in a match, the AR environment is not shared with the user of the second computing device 308.
  • Since the 3D map 313 does not include movable data (and the second reduced digital representation 324-2 does not include movable data), the accuracy of the comparison may be improved.
  • FIG. 4 illustrates an AR system 400 for creating a 3D map 432 without one or more objects that are likely to move, thereby increasing the quality of persistent world-space mapping of the AR system 400.
  • the AR system 400 of FIG. 4 may include any of the features of the previous figures.
  • the AR system 400 includes an AR collaborative service 404, executable by one or more AR servers 402, configured to communicate, over a network 450, with a plurality of computing devices including a first computing device 406 and a second computing device 408, where a user of the first computing device 406 and a user of the second computing device 408 may share the same AR environment (e.g., the AR environment 101 of FIG. 1).
  • Each of the first computing device 406 and the second computing device 408 is configured to execute a client AR application 410.
  • the client AR application 410 is configured to communicate with the AR collaborative service 404 via one or more application programming interfaces (APIs).
  • the client AR application 410 of the first computing device 406 obtains a first digital representation 414-1 (e.g., the digital representation 114 of FIG. 1B) of the scene (e.g., the scene 125) and sends the first digital representation 414-1, over the network 450, to the AR collaborative service 404.
  • the AR collaborative service 404 includes a map generator 426 configured to generate a first 3D map 413-1 based on the first digital representation 414-1.
  • the client AR application 410 of the second computing device 408 obtains a second digital representation 414-2 (e.g., the digital representation 114 of FIG. 1B) of the scene (e.g., the scene 125) and sends the second digital representation 414-2, over the network 450, to the AR collaborative service 404.
  • the map generator 426 is configured to generate a second 3D map 413-2 based on the second digital representation 414-2.
  • the AR collaborative service 404 includes a movement analyzer 412 configured to compare the first 3D map 413-1 to the second 3D map 413-2 to identify one or more objects or one or more patterned sets of visual points that are present in one of the 3D maps but not present in the other 3D map.
  • the movement analyzer 412 may identify these objects or patterned sets of visual points as likely to move.
  • the movement analyzer 412 may generate and store the 3D map 432 based on the first 3D map 413-1 and the second 3D map 413-2 in a manner that does not include the objects or patterned sets of visual points that are likely to move.
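  • A minimal sketch of this cross-map comparison, assuming both 3D maps are lists of feature points and that a small distance tolerance decides whether a point appears in both maps; points present in only one map are treated as likely to move and dropped.

```python
import math
from typing import List, Tuple

Point3D = Tuple[float, float, float]

def stable_points(map_a: List[Point3D], map_b: List[Point3D],
                  tol: float = 0.05) -> List[Point3D]:
    """Keep only points observed in both maps (within `tol` meters); points
    present in one map but not the other are excluded from the combined map."""
    return [p for p in map_a if any(math.dist(p, q) <= tol for q in map_b)]
```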
  • FIG. 5 illustrates an example of a computing device 506 configured to communicate with any of the AR systems disclosed herein.
  • the computing device 506 may be an example of the first computing device (e.g., 106, 206, 306, 406) or the second computing device (e.g., 108, 208, 308, 408).
  • the computing device 506 may include any of the features discussed with reference to the first computing device or the second computing device with reference to the previous figures.
  • the computing device 506 includes a client AR application 510 configured to execute on an operating system of the computing device 506.
  • the client AR application 510 is a software development kit (SDK) that operates in conjunction with one or more AR applications 558.
  • the AR applications 558 may be any type of AR applications (e.g., gaming, entertainment, medicine, education, etc.) executable on the computing device 506.
  • the client AR application 510 includes a motion tracker 552 configured to permit the computing device 506 to detect and track its position relative to the physical space, an environment detector 554 configured to permit the computing device 506 to detect the size and location of different types of surfaces (e.g., horizontal, vertical, angled), and a light estimator 556 to permit the computing device 506 to estimate the environment’s current lighting conditions.
  • the computing device 506 includes a display 560, one or more inertial sensors 562, and a camera 564.
  • the client AR application 510 is configured to generate a set of visual feature points 514 to be sent and stored on the AR server 102 for future AR localization.
  • the user may use the camera 564 on the computing device 506 to capture a scene from the physical space (e.g. moving the camera around to capture a specific area), and the client AR application 510 is configured to detect the set of visual feature points 514 and track the movement of the set of visual feature points 514 over time.
  • the client AR application 510 is configured to determine the position and orientation of the computing device 506 as the computing device 506 moves through the physical space.
  • the client AR application 510 may detect flat surfaces (e.g., a table or the floor) and estimate the average lighting in the area around it.
  • the set of visual feature points 514 may be an example of the digital representation 114.
  • the set of visual feature points 514 are a plurality of points (e.g., interesting points) that represent the user’s environment.
  • each visual feature point 514 is an approximation of a fixed location and orientation in the physical space, and the set of visual feature points 514 may be updated over time.
  • the set of visual feature points 514 may be referred to as an anchor or a set of persistent visual features that represent physical objects in the physical world.
  • the set of visual feature points 514 may be used to localize the AR environment for a secondary user or localize the AR environment for the computing device 506 in a subsequent session.
  • the visual feature points 514 may be used to compare and match against other visual feature points 514 captured by a secondary computing device in order to determine whether the physical space is the same as the physical space of the stored visual feature points 514 and to calculate the location of the secondary computing device within the AR environment in relation to the stored visual feature points 514.
  • AR content 130 is attached to one or more of the visual feature points 514.
  • the AR content 130 may include objects (e.g., 3D objects), annotations, or other information.
  • the user of the computing device 506 can place a napping kitten on the corner of a coffee table or annotate a painting with biographical information about the artist. Motion tracking means that the user can move around and view these objects from any angle, and even if the user turns around and leaves the room, when the user comes back, the kitten or annotation will be right where it was left.
  • the client AR application 510 includes a movement analyzer 512.
  • the movement analyzer 512 is configured to process the set of visual feature points 514 to remove one or more visual feature points 514 that are included within a 3D region encompassing an object or a set of patterned visual points that are identified as likely to move by the ML models.
  • the client AR application 510 is configured to send the reduced set of visual feature points 514 to the AR collaboration service 104 for storage thereon and/or the generation of the 3D map 113.
  • the client AR application 510 does not include the movement analyzer 512, but rather the movement analyzer 512 executes on the AR collaboration service 104 as described above. In this case, the client AR application 510 sends the full set of visual feature points 514 to the AR collaboration service 104.
  • the movement analyzer 512 may remove those visual feature points 514 from the set of visual feature points 514.
  • the movement analyzer 512 uses the ML models 115 to identify a region 118 (e.g., a bounding box) of an object 121 likely to move in a given image, and the movement analyzer 512 is configured to determine which of the visual feature points 514 are contained in that region 118, and then remove those visual feature points 514 contained in the region 118.
  • FIGS. 6A through 6C depict examples of the set of visual feature points 514 in a scene 525 of an AR environment and the removal of one or more visual feature points 514 that correspond to an object identified as likely to move according to an aspect.
  • the client AR application 510 is configured to generate the set of visual feature points 514 that represent the scene 525 of the AR environment.
  • the movement analyzer 512 is configured to detect an object 521 (e.g., the chair) as likely to move in the scene 525 in the manner described above and identify a region 518 that encompasses the object 521.
  • the movement analyzer 512 is configured to remove the visual feature points 514 that are included within the region 518 from the set of visual feature points 514.
  • FIG. 7 illustrates a flow chart 700 depicting example operations of an AR system according to an aspect. Although the operations are described with reference to the AR system 100, the operations of FIG. 7 may be applicable to any of the systems described herein.
  • Operation 702 includes obtaining a digital representation 114 of a scene 125 of an AR environment 101, where the digital representation 114 has been captured by a computing device (e.g., the first computing device 106).
  • Operation 704 includes identifying, using a machine learning (ML) model 115, a region 118 of the digital representation 114 having data 120 that is likely to move.
  • Operation 706 includes removing a portion of the digital representation 114 that corresponds to the region 118 of the digital representation 114 to obtain a reduced digital representation 124, where the reduced digital representation 124 is used to generate a three-dimensional (3D) map 113 for the AR environment 101.
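  • For illustration, operations 702 through 706 could be tied together as in the sketch below, where the ML model is represented as a callable returning axis-aligned regions and the map generator as another callable; both are assumptions made so that the sketch stays self-contained.

```python
from typing import Callable, List, Tuple

Point3D = Tuple[float, float, float]
Region3D = Tuple[Point3D, Point3D]   # axis-aligned box: (min corner, max corner)

def build_3d_map(digital_representation: List[Point3D],
                 identify_movable_regions: Callable[[List[Point3D]], List[Region3D]],
                 generate_map: Callable[[List[Point3D]], object]) -> object:
    """Operation 702: the representation is obtained by the caller.
    Operation 704: identify regions with data likely to move.
    Operation 706: remove the corresponding points and build the 3D map."""
    regions = identify_movable_regions(digital_representation)

    def inside(p: Point3D, r: Region3D) -> bool:
        (x0, y0, z0), (x1, y1, z1) = r
        return x0 <= p[0] <= x1 and y0 <= p[1] <= y1 and z0 <= p[2] <= z1

    reduced = [p for p in digital_representation
               if not any(inside(p, r) for r in regions)]
    return generate_map(reduced)
```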
  • FIG. 8 shows an example computer device 800 and an example mobile computer device 850, which may be used with the techniques described here.
  • Computing device 800 includes a processor 802, memory 804, a storage device 806, a high-speed interface 808 connecting to memory 804 and high speed expansion ports 810, and a low speed interface 812 connecting to low speed bus 814 and storage device 806.
  • Each of the components 802, 804, 806, 808, 810, and 812 are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate.
  • the processor 802 can process instructions for execution within the computing device 800, including instructions stored in the memory 804 or on the storage device 806 to display graphical information for a GUI on an external input/output device, such as display 816 coupled to high speed interface 808.
  • multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory.
  • multiple computing devices 800 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
  • the memory 804 stores information within the computing device 800.
  • the memory 804 is a volatile memory unit or units. In another implementation, the memory 804 is a non-volatile memory unit or units.
  • the memory 804 may also be another form of computer-readable medium, such as a magnetic or optical disk.
  • the storage device 806 is capable of providing mass storage for the computing device 800.
  • the storage device 806 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations.
  • a computer program product can be tangibly embodied in an information carrier.
  • the computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above.
  • the information carrier is a computer- or machine-readable medium, such as the memory 804, the storage device 806, or memory on processor 802.
  • the high speed controller 808 manages bandwidth-intensive operations for the computing device 800, while the low speed controller 812 manages lower bandwidth-intensive operations.
  • the high-speed controller 808 is coupled to memory 804, display 816 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 810, which may accept various expansion cards (not shown).
  • low-speed controller 812 is coupled to storage device 806 and low-speed expansion port 814.
  • the low-speed expansion port which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet) may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
  • the computing device 800 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 820, or multiple times in a group of such servers. It may also be implemented as part of a rack server system 824. In addition, it may be implemented in a personal computer such as a laptop computer 822. Alternatively, components from computing device 800 may be combined with other components in a mobile device (not shown), such as device 850. Each of such devices may contain one or more of computing device 800, 850, and an entire system may be made up of multiple computing devices 800, 850 communicating with each other.
  • Computing device 850 includes a processor 852, memory 864, an input/output device such as a display 854, a communication interface 866, and a transceiver 868, among other components.
  • the device 850 may also be provided with a storage device, such as a microdrive or other device, to provide additional storage.
  • Each of the components 850, 852, 864, 854, 866, and 868 are interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.
  • the processor 852 can execute instructions within the computing device 850, including instructions stored in the memory 864.
  • the processor may be implemented as a chipset of chips that include separate and multiple analog and digital processors.
  • the processor may provide, for example, for coordination of the other components of the device 850, such as control of user interfaces, applications run by device 850, and wireless communication by device 850.
  • Processor 852 may communicate with a user through control interface 858 and display interface 856 coupled to a display 854.
  • the display 854 may be, for example, a TFT LCD (Thin-Film-Transistor Liquid Crystal Display) or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology.
  • the display interface 856 may comprise appropriate circuitry for driving the display 854 to present graphical and other information to a user.
  • the control interface 858 may receive commands from a user and convert them for submission to the processor 852.
  • an external interface 862 may be provided in communication with processor 852, so as to enable near area communication of device 850 with other devices.
  • External interface 862 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used.
  • the memory 864 stores information within the computing device 850.
  • the memory 864 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units.
  • Expansion memory 874 may also be provided and connected to device 850 through expansion interface 872, which may include, for example, a SIMM (Single In Line Memory Module) card interface.
  • expansion memory 874 may provide extra storage space for device 850, or may also store applications or other information for device 850.
  • expansion memory 874 may include instructions to carry out or supplement the processes described above, and may include secure information also.
  • expansion memory 874 may be provided as a security module for device 850, and may be programmed with instructions that permit secure use of device 850.
  • secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
  • the memory may include, for example, flash memory and/or NVRAM memory, as discussed below.
  • a computer program product is tangibly embodied in an information carrier.
  • the computer program product contains instructions that, when executed, perform one or more methods, such as those described above.
  • the information carrier is a computer- or machine-readable medium, such as the memory 864, expansion memory 874, or memory on processor 852, that may be received, for example, over transceiver 868 or external interface 862.
  • Device 850 may communicate wirelessly through communication interface 866, which may include digital signal processing circuitry where necessary. Communication interface 866 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through radio-frequency transceiver 868. In addition, short-range communication may occur, such as using a Bluetooth, Wi-Fi, or other such transceiver (not shown). In addition, GPS (Global Positioning System) receiver module 870 may provide additional navigation- and location-related wireless data to device 850, which may be used as appropriate by applications running on device 850.
  • Device 850 may also communicate audibly using audio codec 860, which may receive spoken information from a user and convert it to usable digital information. Audio codec 860 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of device 850. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.) and may also include sound generated by applications operating on device 850.
  • the computing device 850 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone 880. It may also be implemented as part of a smart phone 882, personal digital assistant, or other similar mobile device.
  • Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof.
  • These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
  • the term “module” may include software and/or hardware.
  • the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer.
  • Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
  • the systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components.
  • the components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), and the Internet.
  • the computing system can include clients and servers.
  • a client and server are generally remote from each other and typically interact through a communication network.
  • the relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • the computing devices depicted in FIG. 8 can include sensors that interface with a virtual reality (VR) headset 890.
  • one or more sensors included on a computing device 850 or other computing device depicted in FIG. 8 can provide input to VR headset 890 or in general, provide input to a VR space.
  • the sensors can include, but are not limited to, a touchscreen, accelerometers, gyroscopes, pressure sensors, biometric sensors, temperature sensors, humidity sensors, and ambient light sensors.
  • the computing device 850 can use the sensors to determine an absolute position and/or a detected rotation of the computing device in the VR space that can then be used as input to the VR space.
  • the computing device 850 may be incorporated into the VR space as a virtual object, such as a controller, a laser pointer, a keyboard, a weapon, etc.
  • Positioning of the computing device/virtual object by the user when incorporated into the VR space can allow the user to position the computing device to view the virtual object in certain manners in the VR space.
  • when the virtual object represents a laser pointer, the user can manipulate the computing device as if it were an actual laser pointer.
  • the user can move the computing device left and right, up and down, in a circle, etc., and use the device in a similar fashion to using a laser pointer.
  • one or more input devices included on, or connected to, the computing device 850 can be used as input to the VR space.
  • the input devices can include, but are not limited to, a touchscreen, a keyboard, one or more buttons, a trackpad, a touchpad, a pointing device, a mouse, a trackball, a joystick, a camera, a microphone, earphones or buds with input functionality, a gaming controller, or other connectable input device.
  • a user interacting with an input device included on the computing device 850 when the computing device is incorporated into the VR space can cause a particular action to occur in the VR space.
  • a touchscreen of the computing device 850 can be rendered as a touchpad in VR space.
  • a user can interact with the touchscreen of the computing device 850.
  • the interactions are rendered, in VR headset 890 for example, as movements on the rendered touchpad in the VR space.
  • the rendered movements can control objects in the VR space.
  • one or more output devices included on the computing device 850 can provide output and/or feedback to a user of the VR headset 890 in the VR space.
  • the output and feedback can be visual, tactile, or audio.
  • the output and/or feedback can include, but is not limited to, vibrations, turning on and off or blinking and/or flashing of one or more lights or strobes, sounding an alarm, playing a chime, playing a song, and playing of an audio file.
  • the output devices can include, but are not limited to, vibration motors, vibration coils, piezoelectric devices, electrostatic devices, light emitting diodes (LEDs), strobes, and speakers.
  • the computing device 850 may appear as another object in a computer-generated, 3D environment. Interactions by the user with the computing device 850 (e.g., rotating, shaking, touching a touchscreen, swiping a finger across a touch screen) can be interpreted as interactions with the object in the VR space.
  • the computing device 850 appears as a virtual laser pointer in the computer-generated, 3D environment.
  • the user manipulates the computing device 850, the user in the VR space sees movement of the laser pointer.
  • the user receives feedback from interactions with the computing device 850 in the VR space on the computing device 850 or on the VR headset 890.
  • one or more input devices in addition to the computing device can be rendered in a computer-generated, 3D environment.
  • the rendered input devices (e.g., the rendered mouse, the rendered keyboard) can be used as rendered in the VR space to control objects in the VR space.
  • Computing device 800 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers.
  • Computing device 850 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, and other similar computing devices.
  • the components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.
  • a number of embodiments have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the specification.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

According to an aspect, a method for creating a three-dimensional map for augmented reality (AR) localization includes obtaining a digital representation of a scene of an AR environment, where the digital representation has been captured by a computing device. The method includes identifying, using a machine learning (ML) model, a region of the digital representation having visual data identified as likely to change, and removing a portion of the digital representation that corresponds to the region of the digital representation to obtain a reduced digital representation, where the reduced digital representation is used to generate a three-dimensional (3D) map for the AR environment.

Description

SYSTEM AND METHOD FOR CREATING PERSISTENT MAPPINGS IN AUGMENTED
REALITY
RELATED APPLICATIONS
[0001] This application claims priority to U.S. Non-Provisional Patent Application No. 16/396,145, filed on April 26, 2019, entitled “SYSTEM AND METHOD FOR CREATING PERSISTENT MAPPINGS IN AUGMENTED REALITY” and U.S. Non-Provisional Patent Application No. 16/395,832, filed on April 26, 2019, entitled “MANAGING CONTENT IN AUGMENTED REALITY”, the disclosures of which are incorporated by reference herein in their entirety.
TECHNICAL FIELD
[0002] This description generally relates to the creating of persistent mappings in augmented reality.
BACKGROUND
[0003] In some augmented reality (AR) systems, an AR server may receive digital information about a first user’s environment, and a three-dimensional (3D) mapping that represents an AR environment is created. The 3D mapping may provide a coordinate space in which visual information and AR objects are positioned. In response to an attempt to localize the AR environment on a second user’s computing device, the 3D mapping may be compared against digital information about the second user’s environment. However, one or more physical objects in the physical space may have moved at the time of the second user’s attempt to localize the AR environment. Therefore, despite the second user being in the same physical space, the comparison may fail because of the visual differences between the 3D mapping and the digital information received from the second user’s device.
SUMMARY
[0004] A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
[0005] According to an aspect, a method for creating a three-dimensional map for augmented reality (AR) localization includes obtaining a digital representation of a scene of an AR environment, where the digital representation has been captured by a computing device. The method includes identifying, using a machine learning (ML) model, a region of the digital representation having visual data identified as likely to change (e.g., move from the scene, disappear from the scene, or otherwise cause a change to the scene over time), and removing a portion of the digital representation that corresponds to the region of the digital representation to obtain a reduced digital representation, where the reduced digital representation is used to generate a three-dimensional (3D) map for the AR environment. According to further aspects, a corresponding AR system and a non-transitory computer-readable medium storing corresponding instructions may be provided.
[0006] According to some aspects, the method may include any of the following features (or any combination thereof). The method may include generating the 3D map based on the reduced digital representation, where the 3D map does not include the portion of the digital representation that corresponds to the region with the visual data identified as likely to change. The identifying may include detecting, using the ML model, a visual object in the digital representation that is likely to change, where the region of the digital representation is identified based on the detected visual object. The identifying may include detecting, using the ML model, a visual object in the digital representation, classifying the visual object into a classification, and identifying the visual object as likely to change based on a tag associated with the classification, where the tag indicates that objects belonging to the classification are likely to change, and the region of the digital representation is identified as a three-dimensional space that includes the object identified as likely to change. The identifying may include identifying, using the ML model, a pattern of visual points in the digital representation that are likely to change, where the pattern of visual points are excluded from the 3D map. The digital representation includes a set of visual feature points derived from the computing device, and the method includes detecting a visual object that is likely to change based on the digital representation, identifying a region of space that includes the visual object, and removing one or more visual feature points from the set that are included within the region. The digital representation is a first digital representation, and the computing device is a first computing device, and the method includes obtaining a second digital representation of at least a portion of the scene of the AR environment, where the second digital representation has been captured by a second computing device, and comparing the second digital representation with the 3D map to determine whether the second digital representation is from the same AR environment as the 3D map. The method may include obtaining a second digital representation of at least a portion of the scene of the AR environment, identifying, using the ML model, a secondary region of the second digital representation, where the secondary region has visual data identified as likely to change, removing a portion of the second digital representation that corresponds to the secondary region, and comparing the second digital representation with the 3D map to determine whether the second digital representation is from the same AR environment as the 3D map.
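The method of this aspect can be summarized as a short pipeline: obtain the digital representation, identify likely-to-change regions with an ML model, drop the corresponding portion, and build the map from what remains. The following Python sketch is illustrative only and is not the claimed implementation; the FeaturePoint and Region types and the ml_model.detect_changeable_regions call are hypothetical names introduced for this example.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FeaturePoint:      # one visual feature point with depth (hypothetical structure)
    x: float
    y: float
    z: float

@dataclass
class Region:            # 3D region (e.g., a bounding box) flagged as likely to change
    min_corner: FeaturePoint
    max_corner: FeaturePoint

    def contains(self, p: FeaturePoint) -> bool:
        return (self.min_corner.x <= p.x <= self.max_corner.x and
                self.min_corner.y <= p.y <= self.max_corner.y and
                self.min_corner.z <= p.z <= self.max_corner.z)

def build_reduced_representation(points: List[FeaturePoint], ml_model) -> List[FeaturePoint]:
    """Identify likely-to-change regions, drop their points, and keep the rest."""
    regions: List[Region] = ml_model.detect_changeable_regions(points)  # hypothetical API
    reduced = [p for p in points if not any(r.contains(p) for r in regions)]
    return reduced  # the reduced representation is what the 3D map is generated from
```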
[0007] According to an aspect, an augmented reality (AR) system configured to generate a three-dimensional (3D) map for an AR environment includes an AR collaborative service executable by at least one server, and a client AR application executable by a computing device, where the client AR application configured to communicate with the AR collaborative service via one or more application programming interfaces (APIs), and the AR collaborative service or the client AR application configured to obtain a digital representation of a scene of an AR environment, where the digital representation has been captured by the computing device, identify, using a machine learning (ML) model, a region of the digital representation having visual data that is identified as likely to change, and remove a portion of the digital representation that corresponds to the region to obtain a reduced digital representation of the scene, where the reduced digital representation is used for comparison with a three-dimensional (3D) map of the AR environment. According to further aspects, a corresponding method and a non-transitory computer-readable medium storing corresponding instructions may be provided.
[0008] According to some aspects, the AR system may include any of the above/below features (or any combination thereof). The AR collaborative service is configured to compare the reduced digital representation with the 3D map in response to an attempt to localize the AR environment on the computing device. The client AR application or the AR collaborative service is configured to detect, using the ML model, an object in the digital representation that is likely to move, where the region of the digital representation is identified based on the detected object. The AR collaborative service is configured to detect, using the ML model, an object in the digital representation, classify the object into a classification, and identify the object as likely to move based on a tag associated with the classification, where the tag indicates that objects belonging to the classification are likely to move. The client AR application is configured to identify, using the ML model, a pattern of visual points in the digital representation that are likely to move. The digital representation includes a set of visual feature points captured by the computing device, and the client AR application or the AR collaborative service is configured to detect an object that is likely to move based on the digital representation, identify a region of space that includes the object, and remove one or more visual feature points from the set that are included within the region.
[0009] According to an aspect, a non-transitory computer-readable medium storing executable instructions that when executed by at least one processor are configured to generate a three-dimensional (3D) map for an augmented reality (AR) environment, where the executable instructions includes instructions that cause the at least one processor to obtain a first digital representation of a scene of an AR environment, where the first digital representation has been captured by a first computing device, identify, using a machine learning (ML) model, a region of the first digital representation having visual data that is identified as likely to change, remove a portion of the first digital representation that corresponds to the region to obtain a reduced digital representation, and generate a three-dimensional (3D) map for the AR environment for storage on an AR server, and, optionally, compare a second digital representation of at least a portion of the scene with the 3D map in response to an attempt to localize the AR environment on a second computing device, where the second digital representation has been captured by the second computing device. According to further aspects, a corresponding AR system and a corresponding method may be provided.
[0010] According to some aspects, the non-transitory computer-readable medium may include any of the above/below features (or any combination thereof). The operations may include detect, using the ML model, an object in the first digital representation that is likely to move. The operations may include detect, using the ML model, an object in the first digital representation, classify the object into a classification, and identify the object as likely to move based on a tag associated with the classification, where the tag indicates that objects belonging to the classification are likely to move, and the region of the first digital representation is identified as a three-dimensional space that includes the object identified as likely to move. The operations may include identify, using the ML model, a pattern of points in the first digital representation that are likely to move, where the pattern of points are excluded from the 3D map. The digital representation includes a set of visual feature points captured by the first computing device, and the operations may include detect an object that is likely to move based on the first digital representation, identify a region of space that includes the object, and remove one or more visual feature points from the set that are included within the region. The operations may include detect, using the ML model, an object in the second digital representation that is likely to move.
[0011] The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] FIG. 1A illustrates an AR system for creating a 3D map according to an aspect.
[0013] FIG. 1B illustrates a movement analyzer of the AR system for detecting moving data according to an aspect.
[0014] FIG. 2 illustrates an AR system with the movement analyzer integrated on a client AR application according to an aspect.
[0015] FIG. 3 illustrates an AR system with the movement analyzer integrated at an AR server according to an aspect.
[0016] FIG. 4 illustrates an AR system for generating a 3D mapping without movable data according to an aspect.
[0017] FIG. 5 illustrates an example of a computing device of an AR system according to an aspect.
[0018] FIGS. 6A through 6C illustrate graphical depictions of visual feature points on a scene of an AR environment and the removal of one or more of the points for a region having moving data according to an aspect.
[0019] FIG. 7 illustrates a flowchart depicting example operations of an AR system according to an aspect.
[0020] FIG. 8 illustrates example computing devices of the AR system according to an aspect.
[0021 ] Like reference symbols in the various drawings indicate like elements.
DETAILED DESCRIPTION
[0022] The embodiments provide an AR system configured to create a 3D map for an AR environment without one or more visual objects (or one or more sets of patterned visual points) that are identified as likely to change (e.g., move from the scene, disappear from the scene, or cause a change to the scene). In some examples, the 3D map includes the objects that are identified as likely to change, but the objects that are identified as likely to change are annotated in the AR system. The annotation of movable objects may indicate that these objects are not used in AR localization comparison operations. For example, the AR system detects or identifies data that is likely to move from the digital information about a first user’s environment captured by the first user’s computing device, and then removes or annotates that data before updating or generating the 3D map. In addition, in an attempt to localize the AR environment on a second user’s computing device, the AR system may remove or ignore movable data from the digital information captured by the second user’s computing device when comparing the second user’s digital information to the 3D map.
[0023] In some examples, the AR environment uses machine learning models to semantically understand the type of physical object in the scene of the AR environment, and detects whether that object is likely to move. If the object is determined as likely to move, that portion of the digital information is not used to create/update the 3D map or not used in the comparison with the 3D map for AR localization. As a result, the quality of persistent world-space mapping of AR systems may be increased. In addition, the accuracy of the comparison for AR localization may be improved since relatively stationary objects are used as opposed to objects that are likely to move.
[0024] FIGS. 1A and 1B illustrate an augmented reality (AR) system 100 configured to store and share digital content in an AR environment 101 according to an aspect. The AR system 100 is configured to create a three-dimensional (3D) map 113 of the AR environment 101 without one or more visual objects (or one or more sets of patterned visual points) that are likely to change (e.g., move from the scene, disappear from the scene, or cause a change to the scene over time), thereby increasing the quality of persistent world-space mapping of the AR system 100. In some examples, the 3D map 113 includes the objects that are identified as likely to move, but the objects that are identified as likely to move are annotated in the AR system 100. In some examples, the 3D map 113 includes a coordinate space in which visual information from the physical space and AR content 130 are positioned. In some examples, the visual information and AR content 130 positions are updated in the 3D map 113 from image frame to image frame. In some examples, the 3D map 113 includes a sparse point map. The 3D map 113 is used to share the AR environment 101 with one or more users that join the AR environment 101 and to calculate where each user’s computing device is located in relation to the physical space of the AR environment 101 such that multiple users can view and interact with the AR environment 101.
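As one illustration of the sparse point map described above, the following sketch shows a hypothetical data structure holding visual feature points and the positions of placed AR content in a shared coordinate space; the class and field names are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class SparsePointMap:
    """Hypothetical sketch of a sparse 3D map: a shared coordinate space holding
    visual feature points plus the positions of placed AR content."""
    feature_points: List[Point3D] = field(default_factory=list)
    ar_content: Dict[str, Point3D] = field(default_factory=dict)  # content id -> position

    def update(self, new_points: List[Point3D]) -> None:
        # naive frame-to-frame update: append newly observed points
        self.feature_points.extend(new_points)

    def place_content(self, content_id: str, position: Point3D) -> None:
        self.ar_content[content_id] = position
```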
[0025] The AR system 100 includes an AR collaborative service 104, executable by one or more AR servers 102, configured to create a multi-user or collaborative AR experience that users can share. The AR collaborative service 104 communicates, over a network 150, with a plurality of computing devices including a first computing device 106 and a second computing device 108, where a user of the first computing device 106 and a user of the second computing device 108 may share the same AR environment 101. Each of the first computing device 106 and the second computing device 108 is configured to execute a client AR application 110.
[0026] In some examples, the client AR application 110 is a software development kit (SDK) that operates in conjunction with one or more AR applications. In some examples, in combination with one or more sensors on the first computing device 106 or the second computing device 108, the client AR application 110 is configured to detect and track a device’s position relative to the physical space, detect the size and location of different types of surfaces (e.g., horizontal, vertical, angled), and estimate the environment’s current lighting conditions. The client AR application 110 is configured to communicate with the AR collaborative service 104 via one or more application programming interfaces (APIs). Although two computing devices are illustrated in FIG. 1A, the AR collaborative service 104 may communicate and share the AR environment 101 with any number of computing devices.
[0027] The first computing device 106 may be, for example, a computing device such as a controller, or a mobile device (e.g., a smartphone, a tablet, a joystick, or other portable controllers). In some examples, the first computing device 106 includes a wearable device (e.g., a head mounted device) that is paired with, or communicates with, a mobile device for interaction in the AR environment 101. The AR environment 101 is a representation of an environment that may be generated by the first computing device 106 (and/or other virtual and/or augmented reality hardware and software). In this example, the user is viewing the AR environment 101 with the first computing device 106. Since the details and use of the second computing device 108 may be the same with respect to the first computing device 106, the details of the second computing device 108 are omitted for the sake of brevity.
[0028] The AR environment 101 may involve a physical space which is within the view of a user and a virtual space within which AR content 130 is positioned. As shown in FIG. 1A, the AR content 130 is a text description (“My Chair”) along with an arrow that points to a chair 131, where the chair 131 is a physical object in the physical space. Providing (or rendering) the AR environment 101 may then involve altering the user’s view of the physical space by displaying the AR content 130 such that it appears to the user to be present in, or overlayed onto or into, the physical space in the view of the user. This displaying of the AR content 130 is therefore according to a mapping (e.g. the 3D map 113) between the virtual space and the physical space. Overlaying of the AR content 130 may be implemented, for example, by superimposing the AR content 130 into an optical field of view of a user of the physical space, by reproducing a view of the user of the physical space on one or more display screens, and/or in other ways, for example by using heads up displays, mobile device display screens and so forth.
[0029] The first computing device 106 may send a digital representation 114 of a scene 125 of the AR environment 101 to the AR collaboration service 104. The AR collaboration service 104 may create the 3D map 113 based on the digital representation 114 from the first computing device 106, and the 3D map 113 is stored at the AR server 102. Then, a user of the second computing device 108 may wish to join the AR environment 101 (e.g., at a time where the user of the first computing device 106 is within the AR environment 101 or at a subsequent time when the user of the first computing device 106 has left the session). In order to localize the AR environment 101 on the second computing device 108, the second computing device 108 may send a digital representation 114 of at least a portion of the scene 125 of the AR environment 101. The AR collaboration service 104 may compare the digital representation 114 from the second computing device 108 to the 3D map 113. If the comparison results in a match (or substantially matches), the AR environment 101 is localized on the second computing device 108.
[0030] The accuracy of the matching may be dependent upon whether the saved area (e.g., the 3D map 113) includes objects or points that are likely to move (e.g., relative to other objects in the scene). Certain environment conditions (e.g., changes in lighting, movement of objects such as furniture, etc.) may result in visual differences in the camera frame. For example, some types of objects are more likely to be stable long-term (e.g., walls, counters, tables, shelves) while some types of objects are more likely to be moved regularly (e.g., chairs, people in the room, etc.). If the difference between the 3D map 113 (e.g., when the AR environment 101 was initially created) and the digital representation 114 from the second computing device 108 is above a threshold amount, the comparison may not result in a match, and the AR environment 101 may not be able to be localized on the second computing device 108.
[0031] As shown in FIG. 1A, the AR environment 101 includes a chair 131, and the 3D map 113 provides a 3D mapping of the AR environment 101 that includes the chair 131. However, after the creation of the 3D map 113, the chair 131 may be moved outside of the office depicted in the scene 125 of the AR environment 101. In response to an attempt to localize the AR environment 101 on the second computing device 108, the digital representation 114 sent to the AR collaboration service 104 from the second computing device 108 may not have visual features corresponding to the chair 131. When resolving the 3D map 113 against the digital representation 114 from the second computing device 108, the comparison of visual features may fail on account of the differences in visual features between when the scene 125 was initially stored and the later attempt to localize the AR environment 101.
[0032] However, the AR system 100 includes a movement analyzer 112 configured to detect objects or a set of patterned points that are likely to move from image data captured by the first computing device 106 or the second computing device 108, and then remove or annotate those objects or points when creating the 3D map 113, or ignore those objects or points when attempting to match to the 3D map 113 for AR localization of the AR environment 101 on the first computing device 106 or the second computing device 108. In some examples, the operations of the movement analyzer 112 are performed by the client AR application 110. In some examples, the operations of the movement analyzer 112 are performed by the AR collaboration service 104. In some examples, one or more operations of the movement analyzer 112 are performed by the client AR application 110 and one or more operations of the movement analyzer 112 are performed by the AR collaboration service 104.
[0033] Referring to FIGS. 1A and 1B, the movement analyzer 112 is configured to detect a digital representation 114 of the scene 125 of the AR environment 101. For example, a user may use one or more sensors on the first computing device 106 to capture the scene 125 from the physical space of the AR environment 101. In some examples, the digital representation 114 includes a 3D representation of the scene 125 of the AR environment 101. In some examples, the digital representation 114 includes visual features with depth information. In some examples, the digital representation 114 includes image data of one or more frames captured by the first computing device 106. In some examples, the digital representation 114 includes a set of visual feature points with depth in space.
[0034] The movement analyzer 112 includes a movement detector 116 configured to identify, using one or more machine learning (ML) models 115, a region 118 having movable data 120 based on an analysis of the digital representation 114, which may be 2D image data or 3D image data with depth information. In some examples, the movable data 120 includes data that is likely to cause a change to the scene (e.g., anything that causes a “change” such as ice melting, a shadow or light moving). The movable data 120 may be one or more objects 121 or a patterned set of visual points 123 that are identified as likely to move, and the region 118 may be a space that includes the movable data 120. In some examples, the region 118 is a 3D space that includes the objects 121 or the patterned set of visual points 123. In some examples, the region 118 is the area (e.g., 3D space) identified by one or more coordinates and/or dimensions of the region 118 in the AR environment 101 that encompass the objects 121 or the patterned set of visual points 123. In some examples, the region 118 is a bounding box that includes the objects 121 or the patterned set of visual points 123.
[0035] In some examples, the ML models 115 include one or more trained classifiers configured to detect a classification of an object 121 in the scene 125 based on the digital representation 114. For example, the one or more trained classifiers may detect an object 121 in the scene 125 and classify the object 121 into one of a plurality of classifications. For example, the classifications may include different characterizations of objects such as chairs, laptops, desks, etc. Some of the classifications may be associated with a tag indicating that objects belonging to a corresponding classification are likely to move.
[0036] In some examples, a classification being tagged as likely to be moved may be programmatically determined by one or more of the ML models 115. For example, the trained classifiers may indicate that objects of a particular classification move out of the scene 125 (or a different location in the scene 125) over a threshold amount, and this particular classification may be programmatically tagged as likely to be moved. In some examples, a classification being tagged as likely to be moved may be determined by a human programmer (e.g., it is known that objects such as pens, laptops, chairs, etc. are likely to move, and may be manually tagged as likely to move without using ML algorithms). As shown in FIG. 1A, the scene 125 includes the chair 131. The movement detector 116 may detect the object representing the chair 131 and classify the chair 131 as a chair classification, and the chair classification may be tagged as likely to be moved. In some examples, the detection of the chair 131 as the chair classification is associated with a confidence level, and if the confidence level is above a threshold amount, the movement detector 116 is configured to detect the chair 131 as the chair classification. The movement detector 116 may then identify the region 118 that encompasses the chair 131.
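A minimal sketch of the classification-and-tag logic described above might look as follows, assuming the classifier exposes a label and a confidence score; the class set and the threshold value are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "chair"
    confidence: float  # classifier confidence in [0, 1]

# Classifications tagged (manually or programmatically) as likely to move.
LIKELY_TO_MOVE = {"chair", "laptop", "pen", "person"}
CONFIDENCE_THRESHOLD = 0.7  # illustrative value; the text only requires "a threshold amount"

def is_movable(d: Detection) -> bool:
    # Only accept the classification when the classifier is sufficiently confident,
    # then check whether that classification carries the likely-to-move tag.
    return d.confidence >= CONFIDENCE_THRESHOLD and d.label in LIKELY_TO_MOVE
```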
[0037] In some examples, the movement detector 116 determines a classification for a detected object 121 using a 2D or 3D image signal and one or more other signals such as information associated with the AR content 130. The AR content 130 may include descriptive information that can assist in the semantic understanding of the object 121. In some examples, as indicated above, the digital representation 114 may be a set of visual feature points with depth information in space, and one or more of the set of visual feature points may be associated with the AR content 130. As shown in FIG. 1A, the chair 131 is associated with the AR content 130 (e.g., “My chair”). In some examples, the movement detector 116 may be configured to analyze any AR content 130 associated with the objects 121 of the scene 125 and increase or decrease the confidence level associated with the classification. In this example, since the AR content 130 includes the word “Chair,” the movement detector 116 may increase the confidence level that the chair 131 is the chair classification.
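The confidence adjustment based on associated AR content could be sketched as a simple heuristic like the following; the boost amount and the string-matching rule are assumptions chosen only to illustrate the idea.

```python
from typing import List

def adjust_confidence(label: str, confidence: float, ar_annotations: List[str]) -> float:
    """Hypothetical heuristic: if AR content attached near the detection mentions the
    predicted label (e.g. the annotation "My Chair" for a chair detection), boost it."""
    if any(label.lower() in text.lower() for text in ar_annotations):
        return min(1.0, confidence + 0.1)  # illustrative boost amount
    return confidence

# e.g. adjust_confidence("chair", 0.65, ["My Chair"]) -> 0.75
```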
[0038] In some examples, the movement detector 116 is configured to identify, using the ML models 115, a patterned set of visual points 123 as likely to move. For example, the movement detector 116 may not necessarily detect the particular type of object, but rather the movement detector 116 may detect a pattern of visual points that have one or more characteristics that the ML models 115 determine are likely to move. The ML model(s) may allow particularly precise classification and identification.
[0001] In some examples, the ML models 115 include a neural network. The neural network may be an interconnected group of nodes, each node representing an artificial neuron. The nodes are connected to each other in layers, with the output of one layer becoming the input of a next layer. Neural networks transform an input received by the input layer through a series of hidden layers and produce an output via the output layer. Each layer is made up of a subset of the set of nodes. The nodes in hidden layers are fully connected to all nodes in the previous layer and provide their output to all nodes in the next layer. The nodes in a single layer function independently of each other (i.e., do not share connections). Nodes in the output layer provide the transformed input to the requesting process.
[0002] In some examples, the movement analyzer 112 uses a convolutional neural network in the object classification algorithm, which is a neural network that is not fully connected. Convolutional neural networks therefore have less complexity than fully connected neural networks. Convolutional neural networks can also make use of pooling or max-pooling to reduce the dimensionality (and hence complexity) of the data that flows through the neural network and thus reduce the level of computation required. This makes computation of the output in a convolutional neural network faster than in fully connected neural networks.
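A minimal convolutional classifier of the general kind described above could be sketched as follows, here using PyTorch purely as an illustration; the layer sizes, the input resolution, and the number of classes are assumptions.

```python
import torch
from torch import nn

class MovableObjectClassifier(nn.Module):
    """Minimal convolutional classifier sketch: convolution and max-pooling layers reduce
    the spatial dimensionality before a fully connected output layer."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # sized for 64x64 RGB input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

logits = MovableObjectClassifier()(torch.randn(1, 3, 64, 64))  # shape: (1, 10)
```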
[0039] The movement analyzer 112 includes a digital representation reducer 122 configured to remove or annotate a portion of the digital representation 114 that corresponds to the region 118 to obtain a reduced (or annotated) digital representation 124. The reduced (or annotated) digital representation 124 excludes the objects 121 or the patterned set of visual points 123 that are identified as likely to move or annotates them as likely to move. In some examples, as indicated above, the digital representation 114 is a set of visual feature points with depth information in space, and the digital representation reducer 122 may remove or annotate one or more visual feature points that are contained in the region 118 such that the objects 121 or the patterned set of visual points 123 are not included in, or are annotated in, the reduced (or annotated) digital representation 124.
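The reduce-or-annotate behavior of the digital representation reducer could be sketched as follows, assuming each region exposes a simple containment test (e.g., a 3D bounding box); the names and the boolean flag are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float, float]

@dataclass
class AnnotatedPoint:
    position: Point
    likely_to_move: bool = False  # annotated points are kept but ignored during localization

def reduce_or_annotate(points: List[Point], regions, remove: bool = True) -> List[AnnotatedPoint]:
    """regions is assumed to be an iterable of objects with contains(point) -> bool."""
    out: List[AnnotatedPoint] = []
    for p in points:
        inside = any(r.contains(p) for r in regions)
        if inside and remove:
            continue                           # reduced representation: drop the point
        out.append(AnnotatedPoint(p, inside))  # annotated representation: keep but flag
    return out
```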
[0040] FIG. 2 illustrates an AR system 200 for creating a 3D map 213 without one or more objects that are likely to move, thereby increasing the quality of persistent world-space mapping of the AR system 200. The increased mapping quality can enable certain technical applications or features, e.g., precise indoor positioning and guidance that reliably avoids collisions. In some examples, the 3D map 213 includes the objects that are identified as likely to move, but the objects that are identified as likely to move are annotated in the AR system 200. The AR system 200 of FIG. 2 may include any of the features of the AR system 100 of FIGS. 1A and 1B.
[0041] The AR system 200 includes an AR collaborative service 204, executable by one or more AR servers 202, configured to communicate, over a network 250, with a plurality of computing devices including a first computing device 206 and a second computing device 208, where a user of the first computing device 206 and a user of the second computing device 208 may share the same AR environment (e.g., the AR environment 101 of FIG. 1). Each of the first computing device 206 and the second computing device 208 is configured to execute a client AR application 210. The client AR application 210 is configured to communicate with the AR collaborative service 204 via one or more application programming interfaces (APIs)
[0042] As shown in FIG. 2, the AR system 200 includes a movement analyzer 212 included within the client AR application 210. The movement analyzer 212 may include any of the features discussed with reference to the movement analyzer 112 of FIGS. 1A and 1B. The client AR application 210 of the first computing device 206 obtains a first digital representation (e.g., the digital representation 114 of FIG. 1B) of the scene (e.g., the scene 125), and then processes the first digital representation (using the operations of the movement analyzer 212) to obtain a first reduced (or annotated) digital representation 224-1. The client AR application 210 sends the reduced (or annotated) digital representation 224-1, over the network 250, to the AR collaborative service 204.
[0043] The AR collaborative service 204 generates the 3D map 213 using the first reduced (or annotated) digital representation 224-1. For example, the AR collaborative service 204 includes a map generator 226 configured to generate the 3D map 213 using the first reduced (or annotated) digital representation 224-1. The map generator 226 stores the 3D map 213 in a database 228 at the AR server 202.
[0044] The client AR application 210 of the second computing device 208 obtains a second digital representation (e.g., the digital representation 114) of the scene (e.g., the scene 125), and then processes the second digital representation (using the operations of the movement analyzer 212) to obtain a second reduced (or annotated) digital representation 224-2. The client AR application 210 of the second computing device 208 sends the second reduced (or annotated) digital representation 224-2, over the network 250, to the AR collaborative service 204.
[0045] The AR collaborative service 204 includes a localization resolver 230 configured to compare the second reduced (or annotated) digital representation 224-2 to the 3D map 213 when attempting to localize the AR environment on the second computing device 208. In response to the comparison resulting in a match (e.g., indicating that the 3D map 213 and the second reduced (or annotated) digital representation 224-2 are from the same AR environment), the AR collaborative service 204 provides the AR environment to the client AR application 210 of the second computing device 208. Also, the map generator 226 may update the 3D map 213 using the second reduced (or annotated) digital representation 224-2. In response to the comparison not resulting in a match (e.g., indicating that AR localization has failed), the AR environment is not shared with the user of the second computing device 208. However, since the 3D map 213 does not include movable data or the movable data is annotated in the 3D map 213 (and the second reduced digital representation 224-2 does not include movable data or the movable data is annotated in the second reduced digital representation 224-2), the accuracy of the comparison may be improved.
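The localization comparison performed by the localization resolver is not specified in detail here; a simple illustrative stand-in is a nearest-neighbor inlier test between the reduced representation and the stored map, as sketched below. The distance and ratio thresholds are assumptions, and a production resolver would also estimate the device pose (e.g., with RANSAC), which this sketch does not attempt.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float, float]

def localization_matches(map_points: List[Point], query_points: List[Point],
                         inlier_dist: float = 0.05, min_inlier_ratio: float = 0.6) -> bool:
    """Count query points that lie close to some map point and require the inlier
    ratio to exceed a threshold before declaring the scenes a match."""
    if not query_points:
        return False
    def close(p: Point) -> bool:
        return any(math.dist(p, m) <= inlier_dist for m in map_points)
    inliers = sum(1 for p in query_points if close(p))
    return inliers / len(query_points) >= min_inlier_ratio
```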
[0046] FIG. 3 illustrates an AR system 300 for creating a 3D map 313 without one or more objects that are likely to move, thereby increasing the quality of persistent world-space mapping of the AR system 300. In some examples, the 3D map 313 includes the objects that are identified as likely to move, but the objects that are identified as likely to move are annotated in the AR system 300. The AR system 300 of FIG. 3 may include any of the features of the AR system 100 of FIGS. 1A and 1B.
[0047] The AR system 300 includes an AR collaborative service 304, executable by one or more AR servers 302, configured to communicate, over a network 350, with a plurality of computing devices including a first computing device 306 and a second computing device 308, where a user of the first computing device 306 and a user of the second computing device 308 may share the same AR environment (e.g., the AR environment 101 of FIG. 1). Each of the first computing device 306 and the second computing device 308 is configured to execute a client AR application 310. The client AR application 310 is configured to communicate with the AR collaborative service 304 via one or more application programming interfaces (APIs).
[0048] As shown in FIG. 3, the AR system 300 includes a movement analyzer 312 included within the AR collaborative service 304 that executes on the AR server 302. The movement analyzer 312 may include any of the features discussed with reference to the movement analyzer 112 of FIGS. 1A and 1B. The client AR application 310 of the first computing device 306 obtains a first digital representation 314-1 (e.g., the digital representation 114 of FIG. 1B) of the scene (e.g., the scene 125) and sends the first digital representation 314-1, over the network 350, to the AR collaborative service 304. The movement analyzer 312 is configured to process the first digital representation 314-1 to obtain a first reduced (or annotated) digital representation 324-1 that does not include movable data, or the movable data is annotated in the digital representation 324-1. The AR collaborative service 304 generates the 3D map 313 using the first reduced (or annotated) digital representation 324-1. For example, the AR collaborative service 304 includes a map generator 326 configured to generate the 3D map 313 using the first reduced digital representation 324-1. The map generator 326 stores the 3D map 313 in a database 328 at the AR server 302.
[0049] The client AR application 310 of the second computing device 308 obtains a second digital representation 314-2 (e.g., the digital representation 114) of the scene (e.g., the scene 125), and sends the second digital representation 314-2, over the network 350, to the AR collaborative service 304. The movement analyzer 312 processes the second digital representation 314-2 to obtain a second reduced (or annotated) digital representation 324-2, which does not include movable data or the movable data is annotated in the digital representation 324-2.
[0050] The AR collaborative service 304 includes a localization resolver 330 configured to compare the second reduced (or annotated) digital representation 324-2 to the 3D map 313 when attempting to localize the AR environment on the second computing device 308. In response to the comparison resulting in a match (e.g., indicating that the 3D map 313 and the second reduced digital representation 324-2 are from the same AR environment), the AR collaborative service 304 provides the AR environment to the client AR application 310 of the second computing device 308. Also, the map generator 326 may update the 3D map 313 using the second reduced (or annotated) digital representation 324-2. In response to the comparison not resulting in a match (e.g., indicating that AR localization has failed), the AR environment is not shared with the user of the second computing device 308. However, since the 3D map 313 does not include movable data (and the second reduced digital representation 324-2 does not include movable data), the accuracy of the comparison may be improved.
[0051] FIG. 4 illustrates an AR system 400 for creating a 3D map 432 without one or more objects that are likely to move, thereby increasing the quality of persistent world-space mapping of the AR system 400. The AR system 400 of FIG. 4 may include any of the features of the previous figures.
[0052] The AR system 400 includes an AR collaborative service 404, executable by one or more AR servers 402, configured to communicate, over a network 450, with a plurality of computing devices including a first computing device 406 and a second computing device 408, where a user of the first computing device 406 and a user of the second computing device 408 may share the same AR environment (e.g., the AR environment 101 of FIG. 1). Each of the first computing device 406 and the second computing device 408 is configured to execute a client AR application 410. The client AR application 410 is configured to communicate with the AR collaborative service 404 via one or more application programming interfaces (APIs).
[0053] The client AR application 410 of the first computing device 406 obtains a first digital representation 414-1 (e.g., the digital representation 114 of FIG. 1B) of the scene (e.g., the scene 125) and sends the first digital representation 414-1, over the network 450, to the AR collaborative service 404. The AR collaborative service 404 includes a map generator 426 configured to generate a first 3D map 413-1 based on the first digital representation 414-1. The client AR application 410 of the second computing device 408 obtains a second digital representation 414-2 (e.g., the digital representation 114 of FIG. 1B) of the scene (e.g., the scene 125) and sends the second digital representation 414-2, over the network 450, to the AR collaborative service 404. The map generator 426 is configured to generate a second 3D map 413-2 based on the second digital representation 414-2.
[0054] The AR collaborative service 404 includes a movement analyzer 412 configured to compare the first 3D map 413-1 to the second 3D map 413-2 to identify one or more objects or one or more patterned sets of visual points that are present in one of the 3D maps but not present in the other 3D map. The movement analyzer 412 may identify these objects or patterned sets of visual points as likely to move. The movement analyzer 412 may generate and store the 3D map 432 based on the first 3D map 413-1 and the second 3D map 413-2 in a manner that does not include the objects or patterned sets of visual points that are likely to move.
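The map-to-map comparison described above could be sketched as a simple point-diff, keeping only points observed in both captures; the distance tolerance and the helper names are assumptions for illustration.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float, float]

def stable_points(map_a: List[Point], map_b: List[Point], tol: float = 0.05) -> List[Point]:
    """Keep only points of map_a that have a counterpart in map_b. Points seen in one
    capture but not the other are treated as likely to move and dropped."""
    def has_counterpart(p: Point) -> bool:
        return any(math.dist(p, q) <= tol for q in map_b)
    return [p for p in map_a if has_counterpart(p)]

# The merged, movement-free map could then be built from stable_points(map_a, map_b).
```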
[0055] FIG. 5 illustrates an example of a computing device 506 configured to communicate with any of the AR systems disclosed herein. The computing device 506 may be an example of the first computing device (e.g., 106, 206, 306, 406) or the second computing device (e.g., 108, 208, 308, 408). The computing device 506 may include any of the features discussed with reference to the first computing device or the second computing device with reference to the previous figures.
[0056] The computing device 506 includes a client AR application 510 configured to execute on an operating system of the computing device 506. In some examples, the client AR application 510 is a software development kit (SDK) that operates in conjunction with one or more AR applications 558. The AR applications 558 may be any type of AR applications (e.g., gaming, entertainment, medicine, education, etc.) executable on the computing device 506.
[0057] The client AR application 510 includes a motion tracker 552 configured to permit the computing device 506 to detect and track its position relative to the physical space, an environment detector 554 configured to permit the computing device 506 to detect the size and location of different types of surfaces (e.g., horizontal, vertical, angled), and a light estimator 556 to permit the computing device 506 to estimate the environment’s current lighting conditions. Although the computing device 506 is discussed with reference to the AR system 100 of FIGS. 1A and 1B, it is noted that the computing device 506 may be used within any of the AR systems described herein. The computing device 506 includes a display 560, one or more inertial sensors 562, and a camera 564.
[0058] Using the motion tracker 552, the environment detector 554, and the light estimator 556, the client AR application 510 is configured to generate a set of visual feature points 514 to be sent and stored on the AR server 102 for future AR localization. The user may use the camera 564 on the computing device 506 to capture a scene from the physical space (e.g. moving the camera around to capture a specific area), and the client AR application 510 is configured to detect the set of visual feature points 514 and track the movement of the set of visual feature points 514 over time. For example, with a combination of the movement of the set of visual feature points 514 and readings from the inertial sensors 562 on the computing device 506, the client AR application 510 is configured to determine the position and orientation of the computing device 506 as the computing device 506 moves through the physical space. In some examples, the client AR application 510 may detect flat surfaces (e.g., a table or the floor) and estimate the average lighting in the area around it.
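The frame-to-frame tracking of visual feature points can be illustrated, in a very reduced form, by averaging the displacement of matched points between frames; a real tracker fuses this visual signal with the inertial sensor readings to recover the full device pose, which the sketch below does not attempt. All names here are hypothetical.

```python
from typing import Dict, Tuple

Point2D = Tuple[float, float]

def estimate_image_motion(prev: Dict[int, Point2D], curr: Dict[int, Point2D]) -> Point2D:
    """Average the displacement of feature points (keyed by a track id) that were
    observed in both frames, as a crude proxy for camera motion in the image plane."""
    matched = [(prev[i], curr[i]) for i in prev.keys() & curr.keys()]
    if not matched:
        return (0.0, 0.0)
    dx = sum(c[0] - p[0] for p, c in matched) / len(matched)
    dy = sum(c[1] - p[1] for p, c in matched) / len(matched)
    return (dx, dy)
```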
[0059] The set of visual feature points 514 may be an example of the digital representation 114. In some examples, the set of visual feature points 514 are a plurality of points (e.g., interesting points) that represent the user’s environment. In some examples, each visual feature point 514 is an approximation of a fixed location and orientation in the physical space, and the set of visual feature points 514 may be updated over time.
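A minimal sketch of what a single visual feature point could carry follows, assuming a position estimate, an orientation estimate, a descriptor used for later matching, and a simple refinement step as the estimate is updated over time. The field names and the blending rule are illustrative assumptions, not the format used by the AR server.

```python
# Illustrative data structure only; not the SDK's or server's actual representation.
from dataclasses import dataclass
import numpy as np

@dataclass
class VisualFeaturePoint:
    position: np.ndarray      # (3,) estimated location in world coordinates, meters
    orientation: np.ndarray   # (4,) unit quaternion approximating local orientation
    descriptor: np.ndarray    # feature descriptor used for matching across sessions
    last_observed: float      # timestamp of the most recent observation

def refine_position(point: VisualFeaturePoint, new_estimate: np.ndarray,
                    weight: float = 0.1) -> None:
    """Blend a new observation into the stored estimate (simple exponential update)."""
    point.position = (1.0 - weight) * point.position + weight * new_estimate
```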
[0060] In some examples, the set of visual feature points 514 may be referred to as an anchor or a set of persistent visual features that represent physical objects in the physical world. For example, the set of visual feature points 514 may be used to localize the AR environment for a secondary user or to localize the AR environment for the computing device 506 in a subsequent session. For example, the visual feature points 514 may be compared and matched against other visual feature points 514 captured by a secondary computing device in order to determine whether the physical space is the same as the physical space of the stored visual feature points 514 and to calculate the location of the secondary computing device within the AR environment in relation to the stored visual feature points 514.
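One way to approximate the comparison step, assuming each stored and newly captured feature point carries a descriptor vector, is a ratio test over descriptor distances, with the space treated as the same once enough confident matches are found. The thresholds and function name below are hypothetical; the actual service may use a different matcher and would additionally estimate the secondary device's pose relative to the stored points.

```python
# Sketch of descriptor matching between a stored anchor and a new capture.
# Assumes (M, D) and (N, D) descriptor arrays with M >= 2; names are hypothetical.
import numpy as np

def is_same_space(stored_desc: np.ndarray, query_desc: np.ndarray,
                  ratio: float = 0.8, min_matches: int = 30) -> bool:
    """Lowe-style ratio test over L2 descriptor distances."""
    matches = 0
    for d in query_desc:
        dists = np.linalg.norm(stored_desc - d, axis=1)
        best, second = np.partition(dists, 1)[:2]   # two smallest distances
        if best < ratio * second:                   # confident, unambiguous match
            matches += 1
    return matches >= min_matches
```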
[0061] In some examples, AR content 130 is attached to one or more of the visual feature points 514. The AR content 130 may include objects (e.g., 3D objects), annotations, or other information. For example, the user of the computing device 506 can place a napping kitten on the corner of a coffee table or annotate a painting with biographical information about the artist. Motion tracking means that the user can move around and view these objects from any angle, and even if the user turns around and leaves the room, the kitten or annotation will be right where it was left when the user returns.
[0062] In some examples, the client AR application 510 includes a movement analyzer 512. The movement analyzer 512 is configured to process the set of visual feature points 514 to remove one or more visual feature points 514 that are included within a 3D region encompassing an object or a set of patterned visual points that are identified as likely to move by the ML models. The client AR application 510 is configured to send the reduced set of visual feature points 514 to the AR collaboration service 104 for storage thereon and/or the generation of the 3D map 113. In some examples, the client AR application 510 does not include the movement analyzer 512, but rather the movement analyzer 512 executes on the AR collaboration service 104 as described above. In this case, the client AR application 510 sends the full set of visual feature points 514 to the AR collaboration service 104.
[0063] In further detail, if the movement analyzer 512 determines that one or more visual feature points 514 in the set correspond to an object 121 that is likely to move, the movement analyzer 512 may remove those visual feature points 514 from the set of visual feature points 514. In some examples, the movement analyzer 512 uses the ML models 115 to identify a region 118 (e.g., a bounding box) of an object 121 likely to move in a given image, and the movement analyzer 512 is configured to determine which of the visual feature points 514 are contained in that region 118 and then remove those visual feature points 514 contained in the region 118.
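A sketch of this region-based filtering follows, under the assumption that the ML model yields 2D bounding boxes in image coordinates for objects classified as likely to move and that each visual feature point records the pixel at which it was observed. All names are illustrative.

```python
# Sketch: drop visual feature points that fall inside any "likely to move" box.
import numpy as np
from typing import List, Tuple

def keep_mask_outside_boxes(pixels: np.ndarray,
                            boxes: List[Tuple[float, float, float, float]]) -> np.ndarray:
    """Return a boolean mask that is False for points inside any box.

    pixels: (N, 2) array of (x, y) pixel observations of the feature points.
    boxes:  list of (x_min, y_min, x_max, y_max) regions likely to move.
    """
    keep = np.ones(len(pixels), dtype=bool)
    for x_min, y_min, x_max, y_max in boxes:
        inside = ((pixels[:, 0] >= x_min) & (pixels[:, 0] <= x_max) &
                  (pixels[:, 1] >= y_min) & (pixels[:, 1] <= y_max))
        keep &= ~inside                    # exclude points covered by this region
    return keep
```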
[0064] FIGS. 6A through 6C depict examples of the set of visual feature points 514 in a scene 525 of an AR environment and the removal of one or more visual feature points 514 that correspond to an object identified as likely to move according to an aspect. For example, as shown in FIG. 6A, the client AR application 510 is configured to generate the set of visual feature points 514 that represent the scene 525 of the AR environment. As shown in FIG. 6B, the movement analyzer 512 is configured to detect an object 521 (e.g., the chair) as likely to move in the scene 525 in the manner described above and to identify a region 518 that encompasses the object 521. As shown in FIG. 6C, the movement analyzer 512 is configured to remove the visual feature points 514 that are included within the region 518 from the set of visual feature points 514.
[0065] FIG. 7 illustrates a flow chart 700 depicting example operations of an AR system according to an aspect. Although the operations are described with reference to the AR system 100, the operations of FIG. 7 may be applicable to any of the systems described herein.
[0066] Operation 702 includes obtaining a digital representation 114 of a scene 125 of an AR environment 101, where the digital representation 114 has been captured by a computing device (e.g., the first computing device 106). Operation 704 includes identifying, using a machine learning (ML) model 115, a region 118 of the digital representation 114 having data 120 that is likely to move. Operation 706 includes removing a portion of the digital representation 114 that corresponds to the region 118 of the digital representation 114 to obtain a reduced digital representation 124, where the reduced digital representation 124 is used to generate a three-dimensional (3D) map 113 for the AR environment 101.
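The flow of operations 702 through 706 can be summarized as a small orchestration function in which the detector, the reduction step, and the map builder are placeholder callables rather than the concrete components described above; the function and parameter names are assumptions made for this sketch.

```python
# High-level sketch of the FIG. 7 flow; the callables are stand-ins, not the
# claimed components.
from typing import Any, Callable, Sequence

def handle_capture(digital_representation: Any,
                   identify_regions: Callable[[Any], Sequence[Any]],
                   remove_regions: Callable[[Any, Sequence[Any]], Any],
                   build_3d_map: Callable[[Any], Any]) -> Any:
    """Operation 702 corresponds to the obtained `digital_representation` passed in."""
    regions = identify_regions(digital_representation)          # 704: regions likely to move
    reduced = remove_regions(digital_representation, regions)   # 706: reduced representation
    return build_3d_map(reduced)                                # 3D map for the AR environment
```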
[0067] FIG. 8 shows an example computer device 800 and an example mobile computer device 850, which may be used with the techniques described here. Computing device 800 includes a processor 802, memory 804, a storage device 806, a high-speed interface 808 connecting to memory 804 and high-speed expansion ports 810, and a low-speed interface 812 connecting to low-speed bus 814 and storage device 806. Each of the components 802, 804, 806, 808, 810, and 812 is interconnected using various busses and may be mounted on a common motherboard or in other manners as appropriate. The processor 802 can process instructions for execution within the computing device 800, including instructions stored in the memory 804 or on the storage device 806 to display graphical information for a GUI on an external input/output device, such as display 816 coupled to the high-speed interface 808. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. In addition, multiple computing devices 800 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
[0068] The memory 804 stores information within the computing device 800.
In one implementation, the memory 804 is a volatile memory unit or units. In another implementation, the memory 804 is a non-volatile memory unit or units. The memory 804 may also be another form of computer-readable medium, such as a magnetic or optical disk.
[0069] The storage device 806 is capable of providing mass storage for the computing device 800. In one implementation, the storage device 806 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 804, the storage device 806, or memory on processor 802.
[0070] The high-speed controller 808 manages bandwidth-intensive operations for the computing device 800, while the low-speed controller 812 manages lower bandwidth-intensive operations. Such allocation of functions is exemplary only. In one implementation, the high-speed controller 808 is coupled to memory 804, display 816 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 810, which may accept various expansion cards (not shown). In this implementation, the low-speed controller 812 is coupled to storage device 806 and low-speed expansion port 814. The low-speed expansion port, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
[0071] The computing device 800 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 820, or multiple times in a group of such servers. It may also be implemented as part of a rack server system 824. In addition, it may be implemented in a personal computer such as a laptop computer 822. Alternatively, components from computing device 800 may be combined with other components in a mobile device (not shown), such as device 850. Each of such devices may contain one or more of computing device 800, 850, and an entire system may be made up of multiple computing devices 800, 850 communicating with each other.
[0072] Computing device 850 includes a processor 852, memory 864, an input/output device such as a display 854, a communication interface 866, and a transceiver 868, among other components. The device 850 may also be provided with a storage device, such as a microdrive or other device, to provide additional storage. Each of the components 850, 852, 864, 854, 866, and 868 is interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.
[0073] The processor 852 can execute instructions within the computing device 850, including instructions stored in the memory 864. The processor may be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor may provide, for example, for coordination of the other components of the device 850, such as control of user interfaces, applications run by device 850, and wireless communication by device 850.
[0074] Processor 852 may communicate with a user through control interface
858 and display interface 856 coupled to a display 854. The display 854 may be, for example, a TFT LCD (Thin-Film-Transistor Liquid Crystal Display) or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface 856 may comprise appropriate circuitry for driving the display 854 to present graphical and other information to a user. The control interface 858 may receive commands from a user and convert them for submission to the processor 852. In addition, an external interface 862 may be provided in communication with processor 852, so as to enable near area communication of device 850 with other devices. External interface 862 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used.
[0075] The memory 864 stores information within the computing device 850.
The memory 864 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. Expansion memory 874 may also be provided and connected to device 850 through expansion interface 872, which may include, for example, a SIMM (Single In Line Memory Module) card interface. Such expansion memory 874 may provide extra storage space for device 850, or may also store applications or other information for device 850. Specifically, expansion memory 874 may include instructions to carry out or supplement the processes described above, and may include secure information also. Thus, for example, expansion memory 874 may be provided as a security module for device 850, and may be programmed with instructions that permit secure use of device 850. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
[0076] The memory may include, for example, flash memory and/or NVRAM memory, as discussed below. In one implementation, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 864, expansion memory 874, or memory on processor 852, that may be received, for example, over transceiver 868 or external interface 862.
[0077] Device 850 may communicate wirelessly through communication interface 866, which may include digital signal processing circuitry where necessary. Communication interface 866 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through radio-frequency transceiver 868. In addition, short-range communication may occur, such as using a Bluetooth, Wi-Fi, or other such transceiver (not shown). In addition, GPS (Global Positioning System) receiver module 870 may provide additional navigation- and location-related wireless data to device 850, which may be used as appropriate by applications running on device 850.
[0078] Device 850 may also communicate audibly using audio codec 860, which may receive spoken information from a user and convert it to usable digital information. Audio codec 860 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of device 850. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.) and may also include sound generated by applications operating on device 850.
[0079] The computing device 850 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone 880. It may also be implemented as part of a smart phone 882, personal digital assistant, or other similar mobile device.
[0080] Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. In addition, the term “module” may include software and/or hardware.
[0081] These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor.
[0082] To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
[0083] The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), and the Internet.
[0084] The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
[0085] In some implementations, the computing devices depicted in FIG. 8 can include sensors that interface with a virtual reality (VR) headset 890. For example, one or more sensors included on a computing device 850 or other computing device depicted in FIG. 8 can provide input to the VR headset 890 or, in general, provide input to a VR space. The sensors can include, but are not limited to, a touchscreen, accelerometers, gyroscopes, pressure sensors, biometric sensors, temperature sensors, humidity sensors, and ambient light sensors. The computing device 850 can use the sensors to determine an absolute position and/or a detected rotation of the computing device in the VR space that can then be used as input to the VR space. For example, the computing device 850 may be incorporated into the VR space as a virtual object, such as a controller, a laser pointer, a keyboard, a weapon, etc. Positioning of the computing device/virtual object by the user when incorporated into the VR space can allow the user to position the computing device to view the virtual object in certain manners in the VR space. For example, if the virtual object represents a laser pointer, the user can manipulate the computing device as if it were an actual laser pointer. The user can move the computing device left and right, up and down, in a circle, etc., and use the device in a similar fashion to using a laser pointer.
[0086] In some implementations, one or more input devices included on, or connected to, the computing device 850 can be used as input to the VR space. The input devices can include, but are not limited to, a touchscreen, a keyboard, one or more buttons, a trackpad, a touchpad, a pointing device, a mouse, a trackball, a joystick, a camera, a microphone, earphones or buds with input functionality, a gaming controller, or other connectable input device. A user interacting with an input device included on the computing device 850 when the computing device is incorporated into the VR space can cause a particular action to occur in the VR space.
[0087] In some implementations, a touchscreen of the computing device 850 can be rendered as a touchpad in VR space. A user can interact with the touchscreen of the computing device 850. The interactions are rendered, in VR headset 890 for example, as movements on the rendered touchpad in the VR space. The rendered movements can control objects in the VR space.
[0088] In some implementations, one or more output devices included on the computing device 850 can provide output and/or feedback to a user of the VR headset 890 in the VR space. The output and feedback can be visual, tactile, or audio. The output and/or feedback can include, but is not limited to, vibrations, turning on and off or blinking and/or flashing of one or more lights or strobes, sounding an alarm, playing a chime, playing a song, and playing of an audio file. The output devices can include, but are not limited to, vibration motors, vibration coils, piezoelectric devices, electrostatic devices, light emitting diodes (LEDs), strobes, and speakers.
[0089] In some implementations, the computing device 850 may appear as another object in a computer-generated, 3D environment. Interactions by the user with the computing device 850 (e.g., rotating, shaking, touching a touchscreen, swiping a finger across a touch screen) can be interpreted as interactions with the object in the VR space. In the example of the laser pointer in a VR space, the computing device 850 appears as a virtual laser pointer in the computer-generated, 3D environment. As the user manipulates the computing device 850, the user in the VR space sees movement of the laser pointer. The user receives feedback from interactions with the computing device 850 in the VR space on the computing device 850 or on the VR headset 890.
[0090] In some implementations, one or more input devices in addition to the computing device (e.g., a mouse, a keyboard) can be rendered in a computer generated, 3D environment. The rendered input devices (e.g., the rendered mouse, the rendered keyboard) can be used as rendered in the VR space to control objects in the VR space.
[0091] Computing device 800 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Computing device 850 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, and other similar computing devices. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.

[0092] A number of embodiments have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the specification.
[0093] In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other embodiments are within the scope of the following claims.

Claims

WHAT IS CLAIMED IS:
1. A method for creating a three-dimensional map for augmented reality (AR) localization, the method comprising:
obtaining a digital representation of a scene of an AR environment, the digital representation having been captured by a computing device;
identifying, using a machine learning (ML) model, a region of the digital representation having visual data identified as likely to change; and
removing a portion of the digital representation that corresponds to the region of the digital representation to obtain a reduced digital representation, the reduced digital representation being used to generate a three-dimensional (3D) map for the AR environment.
2. The method of claim 1, further comprising:
generating the 3D map based on the reduced digital representation, the 3D map not including the portion of the digital representation that corresponds to the region with the visual data identified as likely to change.
3. The method of claim 1 or 2, wherein the identifying includes:
detecting, using the ML model, a visual object in the digital representation that is likely to move from the scene, wherein the region of the digital representation is identified based on the detected visual object.
4. The method of claim 1 or 2, wherein the identifying includes:
detecting, using the ML model, a visual object in the digital representation;
classifying the visual object into a classification; and
identifying the visual object as likely to change based on a tag associated with the classification, the tag indicating that visual objects belonging to the classification are likely to move,
wherein the region of the digital representation is identified as a three-dimensional space that includes the visual object identified as likely to change.
5. The method of any of claims 1 to 4, wherein the identifying includes identifying, using the ML model, a pattern of visual points in the digital representation that are likely to change, wherein the pattern of visual points are excluded from the 3D map.
6. The method of any of claims 1 to 5, wherein the digital representation includes a set of visual feature points derived from the computing device, the method further comprising:
detecting a visual object that is likely to change based on the digital representation;
identifying a region of space that includes the visual object; and
removing or annotating one or more visual feature points from the set that are included within the region.
7. The method of any of claims 1 to 6, wherein the digital representation is a first digital representation, and the computing device is a first computing device, the method further comprising:
obtaining a second digital representation of at least a portion of the scene of the AR environment, the second digital representation having been captured by a second computing device; and
comparing the second digital representation with the 3D map to determine whether the second digital representation is from the same AR environment as the 3D map.
8. The method of any of claims 1 to 6, wherein the digital representation is a first digital representation, and the computing device is a first computing device, the method further comprising:
obtaining a second digital representation of at least a portion of the scene of the AR environment, the second digital representation having been captured by a second computing device;
identifying, using the ML model, a secondary region of the second digital representation, the secondary region having visual data identified as likely to change;
removing a portion of the second digital representation that corresponds to the secondary region; and
comparing the second digital representation with the 3D map to determine whether the second digital representation is from the same AR environment as the 3D map.
9. An augmented reality (AR) system configured to generate a three-dimensional (3D) map for an AR environment, the AR system comprising:
an AR collaborative service executable by at least one server; and
a client AR application executable by a computing device, the client AR application configured to communicate with the AR collaborative service via one or more application programming interfaces (APIs), the AR collaborative service or the client AR application configured to:
obtain a digital representation of a scene of an AR environment, the digital representation having been captured by the computing device;
identify, using a machine learning (ML) model, a region of the digital representation having visual data that is identified as likely to change; and
remove a portion of the digital representation that corresponds to the region to obtain a reduced digital representation of the scene, the reduced digital representation being used for comparison with a three-dimensional (3D) map of the AR environment.
10. The AR system of claim 9, wherein the AR collaborative service is configured to compare the reduced digital representation with the 3D map in response to an attempt to localize the AR environment on the computing device.
11. The AR system of claim 9 or 10, wherein the client AR application or the AR collaborative service is configured to detect, using the ML model, a visual object in the digital representation that is likely to move, wherein the region of the digital representation is identified based on the detected visual object.
12. The AR system of claim 9 or 10, wherein the AR collaborative service is configured to:
detect, using the ML model, a visual object in the digital representation;
classify the visual object into a classification; and
identify the visual object as likely to change based on a tag associated with the classification, the tag indicating that visual objects belonging to the classification are likely to change.
13. The AR system of any of claims 9 to 12, wherein the client AR application is configured to identify, using the ML model, a pattern of visual points in the digital representation that are likely to change.
14. The AR system of any of claims 9 to 13, wherein the digital representation includes a set of visual feature points captured by the computing device, the client AR application or the AR collaborative service configured to:
detect a visual object that is likely to change based on the digital representation;
identify a region of space that includes the visual object; and
remove one or more visual feature points from the set that are included within the region.
15. A non-transitory computer-readable medium storing executable instructions that when executed by at least one processor are configured to generate a three-dimensional (3D) map for an augmented reality (AR) environment, the executable instructions including instructions that cause the at least one processor to:
obtain a first digital representation of a scene of an AR environment, the first digital representation having been captured by a first computing device;
identify, using a machine learning (ML) model, a region of the first digital representation having visual data that is identified as likely to change;
remove a portion of the first digital representation that corresponds to the region to obtain a reduced digital representation;
generate a three-dimensional (3D) map for the AR environment for storage on an AR server; and
compare a second digital representation of at least a portion of the scene with the 3D map in response to an attempt to localize the AR environment on a second computing device, the second digital representation having been captured by the second computing device.
16. The non-transitory computer-readable medium of claim 15, further comprising:
detect, using the ML model, a visual object in the first digital representation that is likely to change.
17. The non-transitory computer-readable medium of claim 15, further comprising:
detect, using the ML model, a visual object in the first digital representation;
classify the visual object into a classification; and
identify the visual object as likely to change based on a tag associated with the classification, the tag indicating that visual objects belonging to the classification are likely to change,
wherein the region of the first digital representation is identified as a three-dimensional space that includes the visual object identified as likely to change.
18. The non-transitory computer-readable medium of any of claims 15 to 17, further comprising:
identify, using the ML model, a pattern of points in the first digital representation that are likely to change, wherein the pattern of points are excluded from the 3D map.
19. The non-transitory computer-readable medium of any of claims 15 to 18, wherein the digital representation includes a set of visual feature points captured by the first computing device, further comprising:
detect a visual object that is likely to move from the scene based on the first digital representation;
identify a region of space that includes the visual object; and
remove one or more visual feature points from the set that are included within the region.
20. The non-transitory computer-readable medium of any of claims 15 to 19, further comprising:
detect, using the ML model, a visual object in the second digital representation that is likely to move.
EP19832782.7A 2019-04-26 2019-12-09 System and method for creating persistent mappings in augmented reality Pending EP3959692A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US16/395,832 US11055919B2 (en) 2019-04-26 2019-04-26 Managing content in augmented reality
US16/396,145 US11151792B2 (en) 2019-04-26 2019-04-26 System and method for creating persistent mappings in augmented reality
PCT/US2019/065235 WO2020219109A1 (en) 2019-04-26 2019-12-09 System and method for creating persistent mappings in augmented reality

Publications (1)

Publication Number Publication Date
EP3959692A1 true EP3959692A1 (en) 2022-03-02

Family

ID=69138001

Family Applications (2)

Application Number Title Priority Date Filing Date
EP19832503.7A Pending EP3959691A1 (en) 2019-04-26 2019-12-09 Managing content in augmented reality
EP19832782.7A Pending EP3959692A1 (en) 2019-04-26 2019-12-09 System and method for creating persistent mappings in augmented reality

Family Applications Before (1)

Application Number Title Priority Date Filing Date
EP19832503.7A Pending EP3959691A1 (en) 2019-04-26 2019-12-09 Managing content in augmented reality

Country Status (3)

Country Link
EP (2) EP3959691A1 (en)
CN (2) CN113614794B (en)
WO (2) WO2020219110A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120306850A1 (en) * 2011-06-02 2012-12-06 Microsoft Corporation Distributed asynchronous localization and mapping for augmented reality
US9996974B2 (en) * 2013-08-30 2018-06-12 Qualcomm Incorporated Method and apparatus for representing a physical scene
US9791917B2 (en) * 2015-03-24 2017-10-17 Intel Corporation Augmentation modification based on user interaction with augmented reality scene
EP3698233A1 (en) * 2017-10-20 2020-08-26 Google LLC Content display property management
CN108269307B (en) * 2018-01-15 2023-04-07 歌尔科技有限公司 Augmented reality interaction method and equipment
US10504282B2 (en) * 2018-03-21 2019-12-10 Zoox, Inc. Generating maps without shadows using geometry

Also Published As

Publication number Publication date
EP3959691A1 (en) 2022-03-02
WO2020219109A1 (en) 2020-10-29
CN113614794A (en) 2021-11-05
WO2020219110A1 (en) 2020-10-29
CN113614793A (en) 2021-11-05
CN113614794B (en) 2024-06-04
CN113614793B (en) 2024-08-23

Similar Documents

Publication Publication Date Title
US11151792B2 (en) System and method for creating persistent mappings in augmented reality
US11055919B2 (en) Managing content in augmented reality
US11908092B2 (en) Collaborative augmented reality
US11798237B2 (en) Method for establishing a common reference frame amongst devices for an augmented reality session
US10345925B2 (en) Methods and systems for determining positional data for three-dimensional interactions inside virtual reality environments
US11100712B2 (en) Positional recognition for augmented reality environment
US20170329503A1 (en) Editing animations using a virtual reality controller
US10055888B2 (en) Producing and consuming metadata within multi-dimensional data
CN108697935B (en) Avatars in virtual environments
JP6782846B2 (en) Collaborative manipulation of objects in virtual reality
JP7008730B2 (en) Shadow generation for image content inserted into an image
US11042749B2 (en) Augmented reality mapping systems and related methods
CN105190469A (en) Causing specific location of an object provided to a device
CN113614793B (en) System and method for creating persistent mappings in augmented reality
Rose et al. CAPTURE SHORTCUTS FOR SMART GLASSES USING ELECTROMYOGRAPHY

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20211015

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
PUAG Search results despatched under rule 164(2) epc together with communication from examining division

Free format text: ORIGINAL CODE: 0009017

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20240808

B565 Issuance of search results under rule 164(2) epc

Effective date: 20240808

RIC1 Information provided on ipc code assigned before grant

Ipc: G06T 19/00 20110101AFI20240805BHEP