WO2018213702A1 - Augmented reality system

Augmented reality system

Info

Publication number
WO2018213702A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
location
information
graphical model
parts
Prior art date
Application number
PCT/US2018/033385
Other languages
French (fr)
Inventor
Stephen PRIDEAUX-GHEE
André GOSSELIN
Neil Potter
Orit ITZHAR
Aakash CHOPRA
Per Nielsen
Vincent DEL CASTILLO
Qi Pan
Michael GERVAUTZ
Original Assignee
Ptc Inc.
Priority date
Filing date
Publication date
Priority claimed from US15/789,329 external-priority patent/US11030808B2/en
Priority claimed from US15/789,341 external-priority patent/US10755480B2/en
Priority claimed from US15/789,316 external-priority patent/US10572716B2/en
Application filed by Ptc Inc. filed Critical Ptc Inc.
Publication of WO2018213702A1 publication Critical patent/WO2018213702A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/62Semi-transparency

Definitions

  • This specification relates generally to an augmented reality system.
  • Augmented reality (AR) content is produced by superimposing computer-generated content onto depictions of real-world content, such as images or video.
  • the computer-generated content may include graphics, text, or animation, for example.
  • Example processes include obtaining an image of an object captured by a device during relative motion between the object and the device; determining a location of the device relative to the object during image capture based on one or more attributes of the object in the image; mapping a three-dimensional (3D) graphical model representing the object to the object in the image based, at least in part, on the location of the device, where the 3D graphical model includes information about the object; receiving a selection of a part of the object; and outputting, for rendering on a graphical user interface, at least some of the information from the 3D graphical model.
  • the example processes may include one or more of the following features, either alone or in combination.
  • Determining the location of the device relative to the object may include identifying a feature of the object shown in the image, with the feature being among the one or more attributes; and determining an orientation of the object relative to the device based on the feature and based on the information about the object in the 3D graphical model. The orientation is part of the location.
  • Determining the location of the device relative to the object may include accounting for a difference between a position of a camera on the device used to capture the image and a predefined reference point on the device.
  • Determining the location of the device relative to the object may include updating the location of the device as relative positions between the object and the device change. Mapping the 3D graphical model to the object in the image may be performed for updated locations of the device. Mapping the 3D graphical model to the object in the image may include associating parts of the 3D graphical model to corresponding parts of the object shown in the image. A remainder of the 3D graphical model representing parts of the object not shown in the image may be positioned relative to the parts of the 3D graphical model overlaid on the parts of the object shown in the image.
  • the example processes may also include identifying the at least some information based on the part selected, where the at least some information includes information about the part.
  • the at least some information may include information about parts internal to the object relative to the part selected.
  • Receiving the selection may include receiving a selection of a point on the image, where the point corresponds to the part as displayed in the image; and mapping the selected point to the 3D graphical model. Mapping the selected point may include determining a relative position of the device and the object; tracing a ray through the 3D graphical model based on a mapping of the 3D graphical model to the image; and identifying an intersection between the ray and the part.
  • the example method may include obtaining at least some information about one or more parts of the object that intersect the ray. At least some information may include data representing the one or more parts graphically, where the data enables rendering of the one or more parts relative to the object. The at least some information may include data representing one or more parameters relating to the one or more parts, where the data enables rendering of the one or more parameters relative to the object.
  • the example processes may also include identifying, based on the selection, the part based on one or more attributes of a pixel in the image that corresponds to the selection.
  • the information about the object in the 3D graphical model may include information about parts of the object.
  • the information about the parts may indicate which of the parts are selectable and may indicate which of the parts are selectable individually or as a group.
  • the example process may also include enabling configuration, through a user interface, of the information about the parts indicating which of the parts are selectable and indicating which of the parts are selectable individually or as a group.
  • the example process may also include drawing, based on the selection, a color graphic version of the part into a buffer; and using the color graphic version in the buffer to identify the part.
  • At least some of the information rendered from the graphical 3D model may be computer graphics that is at least partially transparent, and that at least partly overlays the image. At least some of the information rendered from the graphical 3D model may be computer graphics that is opaque, and that at least partly overlays the image. At least some of the information rendered from the graphical 3D model may be computer graphics that is in outline form, and that at least partly overlays the image.
  • An example method performed by a computing system includes: obtaining an image of an object captured by a device during relative motion between the object and the device; determining a location of the device relative to the object during image capture based on one or more attributes of the object in the image; storing, in computer memory, the image of the object and the location of the device during image capture; mapping a three-dimensional (3D) graphical model representing the object to the object in the image based, at least in part, on the location of the device, with the 3D graphical model including information about the object; receiving, at a time subsequent to capture of the image, first data representing an action to be performed for the object in the image; and in response to the first data, generating second data for use in rendering content on a display device, with the second data being based on the image stored, the location of the device stored, and at least some of the information from the 3D graphical model.
  • the example method may include one or more of the following features, either alone or in combination.
  • the second data may be based also on the action to be performed for the object in the image.
  • the content may include the image augmented based on the at least some of the information from the 3D graphical model.
  • the example method may include receiving an update to the information; and storing the update in the 3D graphical model as part of the information.
  • the content may include the image augmented based on the update and presented from a perspective of the device that is based on the location.
  • the update may be received from a sensor associated with the object. The sensor may provide the update following capture of the image by the device.
  • the update may be received in realtime, and the second data may be generated in response to receipt of the update.
  • the image may be a frame of video captured by the device during the relative motion between the object and the device.
  • the location may include a position and an orientation of the device relative to the object for each of multiple frames of the video.
  • Determining the location may include: identifying a feature of the object shown in the image, with the feature being among the one or more attributes; and determining an orientation of the object relative to the device based on the feature and based on the information about the object in the 3D graphical model, with the orientation being part of the location.
  • Determining the location of the device may include updating the location of the device as relative positions between the object and the device change. Mapping the 3D graphical model to the object may be performed for updated locations of the device.
  • Mapping the 3D graphical model to the object in the image may include associating parts of the 3D graphical model to corresponding parts of the object shown in the image.
  • a remainder of the 3D graphical model, representing parts of the object not shown in the image, may be positioned relative to the parts of the 3D graphical model overlaid on the parts of the object shown in the image.
  • the at least some information from the 3D graphical model may represent components interior to the object.
  • An example method includes: obtaining, from computer memory, information from a three-dimensional (3D) graphical model that represents an object; identifying, based on the information, a first part of the object having an attribute; performing a recognition process on the object based on features of the object, where the recognition process attaches more importance to a second part of the object than to the first part, with the second part either not having the attribute or having less of the attribute than the first part; and providing data for rendering content on a graphical user interface based, at least in part, on recognition of the object performed by the recognition process.
  • the example method may include one or more of the following features, either alone or in combination.
  • attaching more importance to the second part of the object may include ignoring information about the first part of the object during the recognition process.
  • attaching more importance to the second part of the object may include deemphasizing information about the first part of the object during the recognition process.
  • the example method may include tracking movement of the object from a first location in a first image to a second location in a second image. Tracking the movement may include: identifying, in the first image, a feature in the second part of the object, with the feature being identified based on a region in the first image that contains pixels having greater than a predefined difference; and identifying, in the second image, the feature in the second part of the object, with the feature being identified based on the region in the second image that contains the pixels having greater than the predefined difference.
  • the second location may be based on a location of the feature in the second image.
  • the feature may be a first feature
  • the tracking may include: identifying, in the first image, a second feature in the first part of the object, with the second feature being identified based on a second region in the first image that contains pixels having greater than a predefined difference; and identifying, in the second image, the second feature in the first part of the object, with the second feature being identified based on the second region in the second image that contains the pixels having greater than the predefined difference.
  • the second location may be based on both the location of the first feature in the second image and the location of the second feature in the second image. Deemphasizing may include weighting the location of the second feature in the second image less heavily than the location of the first feature in the second image.
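  • As an illustration only (not part of the original disclosure), the following minimal Python sketch shows one way such weighting could be applied when estimating the object's location from feature observations; the coordinates and weights are hypothetical.

```python
# Illustrative sketch: feature locations found on a deemphasized part (e.g., a
# reflective or flexible part) contribute less to the object's estimated
# location than features found on other parts of the object.

def weighted_object_location(features):
    """features: list of (x, y, weight) observations in the second image."""
    total_w = sum(w for _, _, w in features)
    x = sum(x * w for x, _, w in features) / total_w
    y = sum(y * w for _, y, w in features) / total_w
    return (x, y)

observations = [
    (412.0, 255.0, 1.0),   # feature on a rigid, non-reflective part
    (418.0, 249.0, 1.0),   # another full-weight feature
    (460.0, 300.0, 0.2),   # feature on the deemphasized part (weighted less)
]
print(weighted_object_location(observations))
```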
  • the attribute of the object may include an amount of reflectivity in the first part of the object, an amount of transparency in the first part of the object, and/or an amount of flexibility in the first part of the object.
  • the attribute may include an amount of the first part of the object that is coverable based on motion of one or more other parts of the object.
  • the image may be captured within a field specified for recognition of the object.
  • An example method performed by one or more processing devices includes: obtaining, from computer memory, information from a three-dimensional (3D) graphical model that represents an object; identifying, based on the information, rigid components of the object that are connected by a flexible component of the object; performing a recognition process on the object based on features of the rigid components, with the recognition process attaching more importance to the rigid components than to the flexible components; and providing data for rendering content on a graphical user interface based, at least in part, on recognition of the object performed by the recognition process.
  • the example method may include one or more of the following features, either alone or in combination.
  • the example method may include tracking movement of the object from a first location in a first image to a second location in a second image. Tracking the movement of the object from the first location in the first image to the second location in the second image may include ignoring the flexible component and not taking into account an impact of the flexible component when tracking the movement. Tracking movement of the object from the first location in the first image to the second location in the second image may include deemphasizing an impact of the flexible component when tracking the movement, but not ignoring the impact. Tracking movement of the object from the first location in the first image to the second location in the second image may include: tracking movement of the rigid components.
  • Examples of non-transitory machine-readable storage media include, e.g., read-only memory, an optical disk drive, memory disk drive, random access memory, and the like. All or part of the processes, methods, systems, and techniques described herein may be implemented as an apparatus, method, or system that includes one or more processing devices and memory storing instructions that are executable by the one or more processing devices to perform the stated operations.
  • Fig. 1 is a diagram of a display screen showing example AR content.
  • Fig. 2 is a flowchart of an example process for generating AR content.
  • Fig. 3 is a diagram of a display screen showing example AR content.
  • Fig. 4 is a diagram showing a representation of an object produced using three-dimensional (3D) graphic data next to an image of the object.
  • Fig. 5, comprised of Figs. 5A and 5B, shows, conceptually, a ray that is projected to, and through, an image of an object and that also impacts a digital twin for the object.
  • Fig. 6, comprised of Figs. 6A and 6B, shows, conceptually, a ray projected to, and through, an image of an object, and AR content generated based on that ray.
  • Fig. 7 is a block diagram of an example computer/network architecture on which the AR system described herein may be implemented.
  • Fig. 8 is a flowchart showing an example process for generating AR content.
  • Fig. 9 is a flowchart of an example process for generating AR content.
  • Fig. 10 is a diagram showing a field in which an image is to be captured.
  • Figs. 11 to 17 show examples of AR content that may be generated using, for example, stored imagery or video using the example processes described herein.
  • Fig. 18 is a flowchart of an example process for performing recognition and tracking processes on content in images.
  • Fig. 19 is a flowchart of an example process for performing recognition and tracking processes on content in images.
  • AR content is generated by superimposing computer-generated content onto actual graphics, such as an image or video of a real-life object.
  • Any appropriate computer-generated content may be used including, but not limited to, computer graphics, computer animation, and computer-generated text.
  • AR content 100 is shown on the display of tablet computing device 101.
  • AR content 100 includes an image of a loader 102 and computer graphics 103 that are rendered at an appropriate location over the image of the loader.
  • the image was captured by a camera or other appropriate image capture device.
  • the computer graphics were generated by a computing device, such as a remote server or the tablet computing device, based on information about the object displayed (the loader).
  • the computer graphics may relate to the object in some way.
  • computer graphics 103 highlight a part of the loader, namely its arm.
  • the example AR system described herein is configured to identify an object in an image captured by an image capture device, and to map a three-dimensional (3D) graphical model to the image of the object.
  • the 3D graphical model contains information about the object, such as the object's structure, current or past status, and operational capabilities.
  • the mapping of the 3D graphical model to the image associates this information from the 3D graphical model with the image.
  • a point on the image may be selected, and information from the 3D graphical model relating to that point may be retrieved and used to display computer-generated content on the image.
  • a computer graphics rendering of a selected object part may be displayed, as is the case with the arm of Fig. 1.
  • text associated with the selected part may be displayed.
  • the 3D graphical model is controlled to track relative movement of the image capture device and the object. That is, the image capture device may move relative to the object, or vice versa.
  • the 3D graphical model is also controlled to track the relative movement of the object even as the perspective of the object in the image changes vis-a-vis the image capture device.
  • the example AR system enables interaction with the object in real-time and from any appropriate orientation.
  • each instance of an object, such as loader 102, may be associated with a digital twin (DT), which is described herein.
  • An instance of an object includes a unique specimen of an object that is differentiated from other specimens of the object.
  • a loader may have a vehicle identification (ID) number that distinguishes it from all other loaders, including those that are the same make and model.
  • Different types of information may be used to identify the instance of an object, as described herein.
  • a DT is specific to an object instance and, as such, includes information identifying the object instance.
  • an object is not limited to an individual article, but rather may include, e.g., any appropriate apparatus, system, software, structure, entity, or combination of one or more of these, that can be modeled using one or more DTs.
  • a DT is an example of a type of 3D graphical model that is usable with the AR system; however, other appropriate models may also be usable.
  • An example DT includes a computer-generated representation of an object comprised of information that models the object (referred to as the physical twin, or PT) or portions thereof.
  • the DT includes data for a 3D graphical model of the object and associates information about the object to information representing the object in the 3D graphical model.
  • the DT may include, but is not limited to, data representing the structure of the object or its parts, the operational capabilities of the object or its parts, and the state(s) of the object or its parts.
  • a DT may be comprised of multiple DTs. For example, there may be a separate DT for each part of an object.
  • a part of an object may include any appropriate component, element, portion, section, or other constituent of an object, or combination thereof.
  • a DT may be generated based on design data, manufacturing data, and/or any other appropriate information (e.g., product specifications) about the object. This information may be generic to all such objects.
  • the DT may be generated using data that describes the structure and operational capabilities of the type (e.g., make and model) of the loader shown. This data may be obtained from any appropriate public or private database(s), assuming access to that data is available.
  • the DT may be generated using information obtained from, and/or managed by, systems such as, but not limited to, PLM (product lifecycle management) systems, CAD (computer-aided design) systems, SLM (service level management) systems, ALM (application lifecycle management) systems, CPM (connected product management) systems, ERP (enterprise resource planning) systems, CRM (customer relationship management) systems, and/or EAM (enterprise asset management) systems.
  • the information can cover a range of characteristics stored, e.g., in a bill of material (BOM) associated with the object (e.g., an EBOM - engineering BOM, an MBOM - manufacturing BOM, or an SBOM - service BOM), the object's service data and manuals, the object's behavior under various conditions, the object's relationship to other object(s) and artifacts connected to the object, and software that manages, monitors, and/or calculates the object's conditions and operations in different operating environments.
  • the DT may also be generated based on sensor data that is obtained for the particular instance of the object.
  • the sensor data may be obtained from readings taken from sensors placed on, or near, the actual instance of the object (e.g., loader 102).
  • the DT for loader 102 will be unique relative to DTs for other loaders, including those that are identical in structure and function to loader 102.
  • the DT may also include other information that is unique to the object, such as the object's repair history, its operational history, damage to the object, and so forth.
  • the DT for an object instance may have numerous uses including, but not limited to, generating AR content for display.
  • the example AR system described herein may superimpose computer-generated content that is based on, or represents, the DT or portions thereof onto an image of an object instance.
  • Example processes performed by the AR system identify an instance of the object, generate AR content for the object using the DT for that object, and use that AR content in various ways to enable access to information about the object.
  • Example process 200 that uses the DT to augment actual graphics, such as images or video, is shown in Fig. 2.
  • Example process 200 may be performed by the AR system described herein using any appropriate hardware.
  • an image of an object is captured (201) by an image capture device - a camera in this example - during relative motion between the device and the object.
  • the object may be any appropriate apparatus, system, structure, entity, or combination of one or more of these that can be captured in an image.
  • An example of an object is loader 102 of Fig. 1.
  • the camera that captures the image may be a still camera or a video camera.
  • the camera may be part of a mobile computing device, such as a tablet computer or a smartphone.
  • the relative motion between the camera and the object includes the object remaining stationary while the camera moves. In some implementations, the relative motion between the camera and the object includes the object moving while the camera remains stationary. In some implementations, the relative motion between the camera and the object includes both the object and the camera moving. In any case, the relative motion is evidenced by the object occupying, in different images, different locations in the image frame. Multiple images may be captured during the relative motion and, as described below, a DT may be mapped to (e.g., associated with) the object in each image. As described below, in some implementations, the DT may track motion of the object in real-time, thereby allowing for interaction with the object via an image from different perspectives.
  • real-time may not mean that two actions are simultaneous, but rather may include actions that occur on a continuous basis or track each other in time, taking into account delays associated with processing, data transmission, hardware, and the like.
  • tablet computer 101 may be used to capture the image of loader 102 at a first time, T1.
  • the image may be part of a video stream comprised of frames of images that are captured by walking around the loader.
  • the image may be part of a video stream comprised of frames of images that are captured while the camera is stationary but the loader moves.
  • the tablet computer 101 may be used to capture a different image of loader 102 at a second, different time, T2.
  • process 200 identifies the object instance in the captured image and retrieves (202) a DT for the object instance - in the example of Fig. 1 , loader 102.
  • any appropriate identifying information may be used to identify the object instance.
  • the identifying information may be obtained from the object itself, from the image of the object, from a database, or from any other appropriate source.
  • the identifying information may be, or include, any combination of unique or semi-unique identifiers, such as a Bluetooth address, a media access control (MAC) address, an Internet Protocol (IP) address, a serial number, a quick response (QR) code or other type of bar code, a subnet address, a subscriber identification module (SIM), or the like.
  • Tags, such as RFIDs, attached to the object may also be used to provide identifying information.
  • the identifying information may be, or include, global positioning system (GPS) or other coordinates that defines the location of the object.
  • unique features of the object in the image may be used to identify the object instance.
  • a database may store information identifying markings, wear, damage, or other distinctive features of an object instance, together with a unique identifier for the object.
  • the AR system may compare information from the captured image to the stored information or a stored image. Comparison may be performed on a mobile device or on a remote computer. The result of the comparison may identify the object.
  • the DT for that object may be located (e.g., in memory, a location on a network, or elsewhere) using the obtained object identifier.
  • the DT corresponding to that identifier may be retrieved (202) for use by process 200.
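  • As an illustration only (not part of the original disclosure), the following Python sketch shows one way identifying information could be resolved to an object instance and its DT retrieved; the store, index keys, and values are hypothetical.

```python
# Illustrative sketch: identifying information read from the object or its
# image (serial number, QR code, etc.) is resolved to a unique object ID, and
# that ID is used to fetch the matching digital twin.

DT_STORE = {
    "LOADER-SN-004217": {"model": "loader", "parts": {"arm": {}, "front_end": {}}},
}

IDENTIFIER_INDEX = {
    ("serial_number", "SN-004217"): "LOADER-SN-004217",
    ("qr_code", "QR-88213"): "LOADER-SN-004217",
}

def retrieve_dt(identifier_type: str, value: str):
    """Look up the object instance by any known identifier and return its DT."""
    object_id = IDENTIFIER_INDEX.get((identifier_type, value))
    return DT_STORE.get(object_id) if object_id else None

dt = retrieve_dt("qr_code", "QR-88213")    # -> the loader's digital twin
```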
  • Process 200 determines (203) a location of the camera relative to the object during image capture.
  • the location of the camera relative to the object can be specified, for example, by the distance between the camera and the object as well as the relative orientations of the camera and object. Other determinants of the relative location of the camera and the object, however, can be used.
  • the relative locations can be determined using known computer vision techniques for object recognition and tracking.
  • the location may be updated periodically or intermittently when relative motion between the object and the camera is detected. Location may be determined based on one or more attributes of the object in the image and based on information in the DT for the object. For example, a size of the object in the image - e.g., a length and/or width taken relative to appropriate reference points - may be determined. For example, in the image, the object may be five centimeters tall. Information in the DT specifies the actual size of the object in the real-world with one or more of the same dimensions as in the image. For example, in the real-world, the object may be three meters tall. In an example implementation, knowing the size of the object in the image and the size of the object in the real world, it is possible to determine the distance between the camera and the object when the image was captured. This distance is one aspect of the location of the camera.
  • the distance between the camera and the object is determined relative to a predefined reference point on the camera, rather than relative to a lens used to capture the image.
  • For example, taking the case of some smartphones, the camera used to capture images is typically in an upper corner of the smartphone. Obtaining the distance relative to a predefined reference, such as a center point, on the smartphone may provide for greater accuracy in determining the location. Accordingly, when determining the distance, the offset between the predefined reference and the camera on the smartphone may be taken into account, and the distance may be corrected based on this offset.
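  • As an illustration only (not part of the original disclosure), the following Python sketch shows a pinhole-camera estimate of the camera-to-object distance from the object's apparent size and its real-world size from the DT, plus a correction for the lens-to-reference-point offset; the focal length, sizes, and offset values are hypothetical.

```python
# Illustrative sketch: estimate distance from apparent vs. real-world size,
# then shift the result from the camera lens to the device's reference point.
import numpy as np

def estimate_distance(real_height_m: float,
                      pixel_height: float,
                      focal_length_px: float) -> float:
    """Distance (meters) from similar triangles: Z = f * H / h."""
    return focal_length_px * real_height_m / pixel_height

def correct_for_reference_point(camera_position: np.ndarray,
                                lens_offset: np.ndarray) -> np.ndarray:
    """Shift the estimated camera position from the lens to a predefined
    reference point on the device (e.g., the center of the device body)."""
    return camera_position - lens_offset

# Hypothetical numbers: a 3 m tall loader appears 450 px tall with f = 1500 px.
distance = estimate_distance(real_height_m=3.0, pixel_height=450.0,
                             focal_length_px=1500.0)          # -> 10.0 m
camera_pos = np.array([0.0, 0.0, distance])
ref_pos = correct_for_reference_point(camera_pos,
                                      lens_offset=np.array([0.03, 0.06, 0.0]))
```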
  • process 200 identifies one or more features of the object, such as wheel 106 in the loader of Fig. 1. In some implementations, such features may be identified based on the content of the image.
  • a change in pixel color may be indicative of a feature of an object.
  • the change in pixel color may be averaged or otherwise processed over a distance before a feature of the object is confirmed.
  • sets of pixels of the image may be compared to known images in order to identify features. Any appropriate feature identification process may be used.
  • the orientation of the object in the image relative to the camera may be determined based on the features of the object identified in the image.
  • the features may be compared to features represented by 3D graphics data in the DT.
  • one or more 3D features from the DT may be projected into two-dimensional (2D) space, and their resulting 2D projections may be compared to one or more features of the object identified in the image.
  • Features of the object from the image and the 3D graphical model (from the DT) that match are aligned. That is, the 3D graphical model is oriented in 3D coordinate space so that its features align to identified features of the image.
  • the 3D graphical model may be at specified angle(s) relative to axes in the 3D coordinate space. These angle(s) define the orientation of the 3D graphical model and, thus, also define the orientation of the object in the image relative to the camera that captured the image. Other appropriate methods of identifying the orientation of the object in the image may also be used, or may be used in conjunction with those described herein.
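  • As an illustration only (not part of the original disclosure), the following Python sketch shows one standard way orientation can be recovered by aligning 3D feature points from the model with the same features found in the 2D image - a perspective-n-point solve via OpenCV - which is offered as an assumption about how such alignment could be done; all coordinates and camera parameters are hypothetical.

```python
# Illustrative sketch: align DT model features with detected image features
# to recover the object's orientation relative to the camera.
import numpy as np
import cv2

# Hypothetical correspondences: 3D feature coordinates from the DT (meters)
# and the pixel locations where those features were found in the image.
model_points = np.array([[0.0, 0.0, 0.0],
                         [1.2, 0.0, 0.0],
                         [1.2, 0.8, 0.0],
                         [0.0, 0.8, 0.0],
                         [0.6, 0.4, 0.5],
                         [0.3, 0.2, 0.9]], dtype=np.float64)
image_points = np.array([[320.0, 410.0], [540.0, 405.0], [545.0, 290.0],
                         [322.0, 288.0], [430.0, 300.0], [372.0, 240.0]],
                        dtype=np.float64)

f = 1500.0                                   # hypothetical focal length in pixels
camera_matrix = np.array([[f, 0, 640.0],
                          [0, f, 360.0],
                          [0, 0, 1.0]])
dist_coeffs = np.zeros(5)                    # assume an undistorted image

ok, rvec, tvec = cv2.solvePnP(model_points, image_points,
                              camera_matrix, dist_coeffs)
rotation, _ = cv2.Rodrigues(rvec)            # 3x3 orientation in camera space
# `rotation` and `tvec` together give the object's pose relative to the camera.
```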
  • Process 200 maps (204) the 3D graphical model defined by the DT to the object in the image based, at least in part, on the determined (203) location of the camera relative to the object.
  • the location may include the distance between the object in the image and the camera that captured the image, and an orientation of the object relative to the camera that captured the image. Other factors than these may also be used to specify the location.
  • mapping may include associating data from the DT, such as 3D graphics data and text, with corresponding parts of the object in the image. In the example of loader 102 of Fig. 1, data from the DT relating to its arm may be associated with the arm; data from the DT relating to front-end 108 may be associated with front-end 108; and so forth.
  • the associating process may include storing pointers or other constructs that relate data from the DT with corresponding pixels in the image of the object. This association may further identify where, in the image, data from the DT is to be rendered when generating AR content.
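  • As an illustration only (not part of the original disclosure), the following Python sketch shows the kind of association such a mapping could store - a record of where each selectable DT part projects into the current image so a later pixel selection can be traced back to DT data; the class and field names are hypothetical.

```python
# Illustrative sketch: per-part associations between DT entries and the image
# regions where those parts appear in a given frame.
from dataclasses import dataclass, field

@dataclass
class PartAssociation:
    part_id: str                 # key into the DT (e.g., "loader.arm")
    pixel_bbox: tuple            # (x_min, y_min, x_max, y_max) in the image
    frame_index: int             # which video frame this mapping applies to

@dataclass
class ImageToDtMap:
    associations: dict = field(default_factory=dict)

    def add(self, assoc: PartAssociation) -> None:
        self.associations[assoc.part_id] = assoc

    def part_at(self, x: int, y: int) -> str | None:
        """Return the DT part whose projected region contains pixel (x, y)."""
        for part_id, a in self.associations.items():
            x0, y0, x1, y1 = a.pixel_bbox
            if x0 <= x <= x1 and y0 <= y <= y1:
                return part_id
        return None

mapping = ImageToDtMap()
mapping.add(PartAssociation("loader.arm", (410, 120, 780, 340), frame_index=0))
mapping.add(PartAssociation("loader.front_end", (60, 200, 400, 460), frame_index=0))
print(mapping.part_at(500, 200))   # -> "loader.arm"
```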
  • data from the DT - such as 3D graphics or text - is mapped to the image.
  • Fig. 4 shows, conceptually, 3D graphics 110 for the loader beside an actual image 112 of the loader.
  • the DT comprising the 3D graphics data may be stored in association with the image of the loader, as described above, and that association may be used in obtaining information about the loader from the image.
  • because data in the DT relates features of the object in 3D, using the DT and the image of the object it is also possible to position 3D graphics for objects that are not visible in the image at appropriate locations.
  • the location of the camera relative to the object may change in real-time as the relative positions between the object and the camera change.
  • the camera may be controlled to capture video of the object moving; the camera may be moved and capture video while the object remains stationary; or both the camera and the object may move while the camera captures video.
  • the loader may move from the position shown in Fig. 1 to the position shown in Fig. 3.
  • the AR system may be configured so that the DT - e.g., 3D graphics data and information defined by the DT - tracks that relative movement. That is, the DT may be moved so that appropriate content from the DT tracks corresponding features of the moving object.
  • the DT may be moved continuously with the object by adjusting the associations between data representing the object in an image frame and data representing the same parts of the object in the DT. For example, if a part of the object moves to coordinate XY in an image frame of video, the AR system may adjust the association between the DT and the image to reflect that data representing the moved part in the DT is also associated with coordinate XY.
  • movement of the object can be used to predict its future location in a series of images - e.g., in frame-by-frame video - and the associations between DT data and image data may be adjusted to maintain correspondence between parts of the object in the image and their counterparts in the DT.
  • Take arm 113 of Fig. 3 as an example. In this example, movement of the camera may result in relative motion of arm 113 in the image frame. Movement in one direction may be a factor in determining future movement of the object in that same direction. The system may therefore predict how to adjust the associations based on the prior movement.
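  • As an illustration only (not part of the original disclosure), the following Python sketch shows one simple way such prediction could work - a constant-velocity estimate of where a tracked part will appear in the next frame so the DT-to-image association can be adjusted ahead of time; the coordinates are hypothetical.

```python
# Illustrative sketch: predict the next 2D position of a tracked part from
# its last observed motion, and pre-position the DT overlay there.

def predict_next_position(prev_xy, curr_xy):
    """Constant-velocity prediction of the next 2D position."""
    vx = curr_xy[0] - prev_xy[0]
    vy = curr_xy[1] - prev_xy[1]
    return (curr_xy[0] + vx, curr_xy[1] + vy)

# The arm moved from (430, 250) to (445, 252) between frames, so predict
# roughly (460, 254) for the next frame.
predicted = predict_next_position((430, 250), (445, 252))
print(predicted)
```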
  • the mapping of the DT to the object associates attributes in the DT with the object. This applies not only to the object as a whole, but rather to any parts of the object for which the DT contains information. Included within the information about the object is information about whether individual parts of the object are selectable individually or as a group. In some implementations, to be selectable, a part may be separately defined within the DT and information, including 3D graphics, for the part, may be separately retrievable in response to an input, such as user or programmatic selection. In some implementations, selectability may be based on one or more additional or other criteria.
  • a user interface may be generated to configure information in the DT to indicate which of the parts are selectable and which of the parts are selectable individually or as a group.
  • a DT may be generated at the time that the PT (object) is created.
  • the AR system may obtain, via a user interface, information indicating that an object having a given configuration and a given serial number has been manufactured.
  • the AR system may create, or tag, a DT for the object based on information such as that described herein.
  • Operational information about the instance of the object may not be available prior to its use; however, that information can be incorporated into the DT as the information is obtained.
  • sensors on the (actual, real-world) object may be a source of operational information that can be relayed to the DT as that information is obtained.
  • a user may also specify in the DT, through the user interface, which parts of the object are selectable, either individually or as a group. This specification may be implemented by storing appropriate data, such as a tag or other identifier(s), in association with data representing the part.
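  • As an illustration only (not part of the original disclosure), the following Python sketch shows the kind of per-part DT record such a specification could produce, including selectability flags and operational readings relayed from sensors on the real-world object; the field names are hypothetical.

```python
# Illustrative sketch: per-part DT metadata with selectability flags and a
# place to record sensor readings as they arrive from the physical twin.
from dataclasses import dataclass, field

@dataclass
class DtPart:
    part_id: str
    selectable: bool = True
    selectable_individually: bool = True
    group: str | None = None            # e.g., "hydraulics"
    sensor_readings: dict = field(default_factory=dict)

    def apply_sensor_update(self, name: str, value) -> None:
        """Record an operational reading (e.g., pressure) as it is received."""
        self.sensor_readings[name] = value

arm = DtPart("loader.arm", selectable=True, selectable_individually=True,
             group="hydraulics")
arm.apply_sensor_update("hydraulic_pressure_bar", 182.5)
```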
  • process 200 receives (205) data representing a selection of a part of the object.
  • the data may represent a selection of a point on the image that represents the part of the object.
  • the selection may include a user-initiated selection, a programmatic selection, or any other type of selection.
  • a user may select a point in the image that corresponds to the loader 102 by touching the image at an appropriate point.
  • Data for the resulting selection is sent to the AR system, where that data is identified as representing a selection of a particular object or part on the loader represented in the image.
  • the selection may trigger display of information.
  • the user interface showing the object can be augmented with a set of visual crosshairs or a target that can remain stationary, such as in the center, relative to the user interface (not illustrated).
  • the user can select a part of the object by manipulating the camera's field of view such that the target points to any point of interest on the object.
  • the process 200 can be configured to continually and/or repeatedly analyze the point in the image under the target to identify any part or parts of the object that correspond to the point under the target.
  • the target can be configured to be movable within the user interface by the user, and/or the process can be configured to analyze a point under the target for detection of a part of the object upon active user input, such as a keyboard or mouse click.
  • the point selected is identified by the system, and information in the DT relating to an object or part at that point is identified.
  • the user may be prompted, and specify, whether the part, a group of parts, or the entire object is being selected.
  • the information is retrieved from the DT and is output (206) for rendering on a graphical user interface as part of AR content that may contain all or part of the original image.
  • 3D graphics data for the selected object or part may be retrieved and rendered over all or part of the object or part.
  • text data relating to the selected object or part may be retrieved and rendered proximate to the object or part.
  • the text may specify values of one or more operational parameters (e.g., temperature) or attributes (e.g., capabilities) of the part.
  • both 3D graphics data and text data relating to the selected object or part may be retrieved and rendered with the object or part.
  • the resulting AR content may be used to control the object in the image.
  • the DT may be associated with the actual real-world object, e.g., through one or more computer networks.
  • a user may interact with the displayed AR content to send data through the network to control or interrogate the object, among other things. Examples of user interaction with displayed AR content that may be employed herein are described in U.S. Patent Publication No. 2016/0328883 entitled "Augmented Reality System", which is incorporated herein by reference.
  • any appropriate method may be used by the AR system to identify the object or part selected.
  • ray tracing may be used to select the object or part.
  • example ray 302 (shown as a dashed line) radiates from within the field of view 301 of a camera 303 and intersects a 2D image 306 of a loader at point 308.
  • Fig. 5B shows a close-up view of point 308.
  • the intersection point - in this case point 308 in image 306 - also relates to a corresponding point 309 on the DT 310 associated with object 313.
  • point 308 of image 306 relates, via ray 302, to point 309 on DT 310.
  • Selection of point 308 in image 306 thus results in selection of point 309 on DT 310.
  • selection of point 308 on image 306 may cause ray 302 to be projected from that point to, and through, the 3D graphical model defined by the DT 310.
  • Parts of the object defined by the 3D graphical model are identified based on their intersection with ray 302.
  • the ray is a mathematical and programmatic construct, not a physical manifestation.
  • a ray may intersect, and travel through, a 3D graphical model defined by the DT. That is, because the image and DT are associated as described herein, the ray can be programmatically projected to, and through, appropriate locations on the 3D graphical model contained in the DT. Accordingly, any part or component that intersects the ray may be selectable, and data therefor retrievable to generate AR content.
  • ray 302 travels through DT 310. By passing through DT 310, ray 302 intersects the exterior of the object 313 and also one or more selectable parts that are interior to object 313. For example, referring to Figs. 6A and 6B, ray 302 may intersect part 320 that is interior to the 3D graphical model of DT 310. This interior part may be selected, and rendered at an appropriate location as computer-generated graphics 320 on image 306, as shown in Fig. 6B. In some implementations, computer-generated graphics 320 may be partially transparent.
  • the user may be prompted with a list of all parts - both interior and exterior to object 313 - that the ray intersects.
  • the prompt may be a pop-up box or any other appropriate type of computer graphic.
  • the user may then select one or more of the parts.
  • the selection may include the type of data to display for each part (e.g., 3D graphics, text, etc.) or that information may be determined as described herein based on the type of the selection.
  • Corresponding identifiers for the selected parts are retrieved, and information for those selected parts is identified in the DT based on the identifiers.
  • the system retrieves appropriate data for the selected part and outputs that data for rendering as AR content at appropriate positions on the original image.
  • internal parts may be rendered in outline form or in different colors, with each different color reflecting a depth of the part within the object along a ray.
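  • As an illustration only (not part of the original disclosure), the following Python sketch shows one way a ray projected from a selected image point could be tested against the DT's 3D model to collect every intersected part - exterior and interior - ordered by depth; axis-aligned bounding boxes stand in for real part geometry, and all names and coordinates are hypothetical.

```python
# Illustrative sketch: slab-test a ray against per-part bounding boxes and
# return the intersected parts nearest-first.
import numpy as np

def ray_hits_box(origin, direction, box_min, box_max):
    """Slab test: return the entry distance along the ray, or None if missed."""
    inv = 1.0 / direction                      # assumes no zero components
    t1 = (box_min - origin) * inv
    t2 = (box_max - origin) * inv
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    if t_far < max(t_near, 0.0):
        return None
    return t_near

def parts_along_ray(origin, direction, part_bounds):
    hits = []
    for part_id, (bmin, bmax) in part_bounds.items():
        t = ray_hits_box(origin, direction, np.array(bmin), np.array(bmax))
        if t is not None:
            hits.append((t, part_id))
    return [p for _, p in sorted(hits)]        # nearest part first

bounds = {"loader.body": ((0, 0, 0), (3.0, 2.0, 1.5)),
          "loader.pump": ((1.0, 0.5, 0.4), (1.4, 0.9, 0.8))}   # interior part
ray_origin = np.array([1.2, 0.7, -5.0])
ray_dir = np.array([0.01, 0.02, 1.0])
print(parts_along_ray(ray_origin, ray_dir, bounds))  # ['loader.body', 'loader.pump']
```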
  • methods other than ray tracing may be used to identify parts that are selected.
  • different parts of an image may be rendered using different colored pixels. Selection of a part may be identified based on the pixel that is selected. Implementations such as this may employ a dual-buffer scheme comprised of a front buffer and a back buffer. A current image is viewed from the front buffer while a subsequent image is being drawn to the back buffer. At an appropriate time, the back buffer becomes the front buffer, and vice versa, so that the subsequent image can be viewed.
  • an image is generated based on data written to the front buffer. Parts of that image are drawn in different colors into the back buffer. The parts may be distinguished, and identified, based on characteristics of the image, e.g., pixel transitions and the like.
  • a user selects a part of the object in the image, and the colored part from the back buffer is identified corresponding to (e.g., at a same location as) the selection.
  • the DT for the object is identified beforehand, as described herein.
  • the selected color part is then compared to parts in the 3D graphical model for the object in order to identify the part that was selected.
  • Information from the DT may then be used to render graphical and/or textual content in association with the selected part. For example, a graphical overlay may be presented over the selected part or text from the DT may be displayed next to the part.
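  • As an illustration only (not part of the original disclosure), the following Python sketch shows the color-buffer picking idea described above: each selectable part is drawn into an offscreen buffer in a unique flat color, and the color under the selected pixel identifies the part. The rendering itself is faked with a plain array; buffer dimensions, colors, and part names are hypothetical.

```python
# Illustrative sketch: identify the selected part from the unique color drawn
# for it in an offscreen (back) buffer.
import numpy as np

PART_COLORS = {"loader.arm": (255, 0, 0),
               "loader.front_end": (0, 255, 0)}
COLOR_TO_PART = {color: part for part, color in PART_COLORS.items()}

# Hypothetical 720x1280 back buffer with the arm drawn as a red rectangle.
back_buffer = np.zeros((720, 1280, 3), dtype=np.uint8)
back_buffer[120:340, 410:780] = PART_COLORS["loader.arm"]

def pick_part(buffer: np.ndarray, x: int, y: int) -> str | None:
    """Return the DT part whose color was drawn at the selected pixel."""
    color = tuple(int(c) for c in buffer[y, x])
    return COLOR_TO_PART.get(color)

print(pick_part(back_buffer, x=500, y=200))   # -> "loader.arm"
```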
  • the DT contains information indicating whether a part is selectable individually or as a group.
  • selection of a part 309 is interpreted by the AR system based on whether DT 310 indicates that part 309 is selectable, and based on whether part 309 is selectable individually or as a group. If the part is selectable, and selectable individually, then corresponding information from the DT is retrieved and output for rendering as AR content with the image.
  • 3D computer graphics data may be output and rendered over the image so that the 3D graphics data overlays a corresponding part of the image. An example of this is the 3D graphics version of the loader arm overlaid on the images of Figs. 1 and 3.
  • text data may be output and rendered on the image so that the text is displayed over or alongside the image.
  • the text can be rendered on the image alongside the part of interest, e.g., the loader arm.
  • If the part is selectable, and selectable as a group, information about the group is retrieved and output for rendering as AR content with the image.
  • the information may be any appropriate type of information, such as 3D graphics, text, and so forth.
  • the user may be prompted to indicate whether a part, multiple parts, or an entire object is selected, in which case appropriate AR content is retrieved and displayed.
  • the system may be configured to recognize certain actions as selecting a part, multiple parts, or an entire object.
  • different types of selections may trigger displays of different types of data.
  • the type of data displayed may be triggered based on the duration of a selection. For example, a first-duration selection (e.g., one that lasts for a first period of time) may trigger display of one type of data, a second-duration selection (e.g., one that lasts for a second period of time) may trigger display of another type of data, and a third-duration selection (e.g., one that lasts for a third period of time) may trigger display of yet another type of data.
  • the type of selection may not be based on temporal considerations, but rather may be based on other factors. For example, one type of selection may trigger display of one type of data (e.g., 3D graphics), while a different type of selection may trigger display of a second type of data (e.g., text).
  • the system may be configured to associate any appropriate type of selection with display of one or more appropriate types of data to generate AR content.
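  • As an illustration only (not part of the original disclosure), the following Python sketch shows one way selection characteristics could be mapped to the type of DT data displayed; the duration thresholds, gesture names, and data types are hypothetical.

```python
# Illustrative sketch: choose which kind of DT data to display based on the
# selection's gesture type and/or press duration.

def data_type_for_selection(gesture: str, duration_s: float) -> str:
    if gesture == "double_tap":
        return "text"                 # a different selection type shows text
    # duration-based dispatch for an ordinary press
    if duration_s < 0.5:
        return "3d_graphics"
    if duration_s < 1.5:
        return "text"
    return "graphics_and_text"

print(data_type_for_selection("press", 0.3))       # -> "3d_graphics"
print(data_type_for_selection("double_tap", 0.1))  # -> "text"
```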
  • the AR system obtains the DT for an object and uses the DT to generate graphics or text to superimpose onto an image of an object.
  • any appropriate content including, but not limited to, animation; video; and non-computer-generated images, video or text, may be obtained from a DT or elsewhere and superimposed onto an image to generate AR content.
  • the AR content may include audio, such as computer-generated or real-life audio, that is presented in conjunction with an image and/or graphics.
  • the data received (205) may represent a selection from a menu.
  • a menu may be displayed overlaid on the image or separate from the image.
  • the menu may be a drop-down menu or a pop-up menu that is triggered for display by selecting an appropriate area of the image.
  • the menu may list, textually, parts contained in the object, including both those that are visible in the image and those that are not visible in the image (e.g., internal parts).
  • the object instance may be identified beforehand in the manner described herein, and a list of its selectable parts from the DT displayed on the menu. A user may select one or more of the listed parts. Data representing that selection is obtained by process 200, which uses that data to obtain information about the selected part from the object's DT.
  • the information may be used to generate AR content from the image and the information about the part.
  • graphics - which may be, e.g., transparent, opaque, outline, or a combination thereof - may be retrieved from the DT for the object instance and displayed over the part selected.
  • other information such as text, may also be displayed.
  • the data received (205) may represent a selection of computer-generated graphics that are displayed overlaid on the image.
  • the object instance displayed in the image may be identified beforehand in the manner described herein.
  • Computer graphics from the DT for selectable parts of the object may be overlaid onto the image, as appropriate, or may be displayed separately.
  • the computer graphics can be displayed in a partially transparent fashion such that both the overlaid computer graphics and the underlying image are visible to the user simultaneously.
  • a user may select (205) one or more of the displayed parts by selecting (e.g., touching on) the computer graphics displayed for that part.
  • the computer graphics represents both internal and external parts of the object.
  • the computer graphics may be displayed using navigable layers that may be reached, for selection, through interaction with one or more appropriate controls. For example, one or more layers containing internal object parts may be selected, and individual parts may be selected from that layer. Other methods may also be used for selecting internal parts. In any event, data representing the selection is obtained by process 200, which uses that data to obtain information about the part from the object's DT.
  • the information may be used to generate AR content from the image and the information about the part.
  • computer graphics (which may be, e.g., transparent, opaque, outline, or a combination thereof) for the selected part or parts may be retained, and remain overlaid on the image, while the computer graphics for the non-selected parts may be eliminated.
  • other information such as text, may also be displayed based on the selection.
  • a menu may be displayed overlaid on the image or separate from the image.
  • the menu may be a drop-down menu or a pop-up menu that is triggered for display by selecting an appropriate area of the image.
  • the menu may show, graphically, parts contained in the object, including both those that are visible in the image and those that are not visible in the image (e.g., internal parts).
  • the object instance may be identified beforehand in the manner described herein, and computer graphics that represent its selectable parts displayed on the menu. A user may select one or more of the displayed parts.
  • Data representing that selection is obtained by process 200, which uses that data to obtain information about the selected part from the object's DT.
  • the information may be used to generate AR content from the image and the information about the part.
  • For example, computer graphics (which may be, e.g., transparent, opaque, outline, or a combination thereof) may be displayed over the part or parts selected from the menu.
  • other information such as text, may also be displayed.
  • Fig. 7 shows an example computer/network architecture 400 on which the example AR system and the example processes described herein may be implemented. The AR system and processes are not limited to use with the Fig. 7 architecture, however, and may be implemented on any appropriate computer architecture and/or network architecture.
  • example AR system 400 includes a front-end 401 and a back-end 402.
  • Front-end 401 may include one or more mobile computing devices (or simply, mobile devices).
  • a mobile device may include any appropriate device capable of displaying digital imagery including, but not limited to, digital (AR) glasses, a smartphone, a digital camera, a tablet computing device, and so forth.
  • a mobile device 404 may include one or more processing devices 405 (e.g., microprocessors) and memory 406 storing instructions 407 that are executable by the one or more processing devices and images and/or video 440 that can be accessed and processed as described herein to generate AR content at a time subsequent to image capture.
  • the instructions are part of one or more computer programs that are used to implement at least part of the AR system.
  • the instructions may be part of an application (or "app") that performs operations including, for example, displaying AR content to a user.
  • Mobile device 404 also includes one or more sensing mechanisms, such as a camera for capturing actual graphics, such as images and video.
  • Mobile device 404 may also be connected to, and accessible over, a wireless network, such as a long term evolution (LTE) network or a Wi-Fi network.
  • LTE long term evolution
  • the subject 410 of AR content may be any appropriate object, e.g., device, system, or entity, examples of which are described herein.
  • Back-end 402 may include one or more computing systems 412a, 412b, examples of which include servers, desktop computers, and mobile devices.
  • a back-end computing system may include one or more processing devices 415 (e.g., microprocessors) and memory 416 storing instructions 417 that are executable by the one or more processing devices.
  • the instructions are part of one or more computer programs that may be used to implement at least part of the AR system. For example, the instructions may be part of a computer program to generate DTs, to analyze DT content, to communicate with other systems 420 and databases 421 containing device information, and so forth.
  • a back-end computing system may also be connected to, and accessible over, a wired or wireless network.
  • the AR system described herein may not include the back-end 402, but rather may be implemented solely on the front-end.
  • Front-end 401 and back-end 402 may communicate with each other, and with other systems, such as those described herein, over one or more computer networks, which may include wireless and/or wired networks.
  • a front-end device may include a local computing system (e.g., 404) to render AR content and a back-end device may include a remote computing system (e.g., 412a, 412b).
  • the capabilities of these different devices may dictate where and/or how a DT, and thus AR content, is generated. For example, the DT and AR content may be generated locally; the DT and AR content may be generated remotely and only displayed locally; or the DT and AR content may be generated using a combination of local and remote processing resources.
  • the local computing system may have no onboard sensing capability and be capable only of external monitoring; in some implementations, the local computing system may include basic onboard sensing and no processing capability; in some implementations, the local computing system may include onboard sensing and basic processing capability; and in some implementations, the local computing system may include onboard sensing and processing capability equivalent at least to that of a desktop computer.
  • the remote computing system may be capable of advanced servicing and data processing.
  • Fig. 8 shows an example process 500 for producing AR content from image data and 3D graphics data in the DT.
  • Process 500 may be performed, e.g., on the architecture of Fig. 7.
  • a declarative model is generated (501) for an object.
  • the declarative model may be generated in computer code, and may include information to describe structures and functions of the object.
  • the information may include semantic data that is stored in association with actual design data.
  • the declarative model of the object may be annotated to identify, among other things, features and attributes of the object.
  • the annotations may include attributes of those features, such as size, shape, color, etc. Any appropriate techniques may be used to annotate the model.
  • metadata may be associated with specific features in the model.
  • a look-up table or other appropriate construct may be used to associate coordinates of the model with corresponding annotations.
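  • As an illustration only (not part of the original disclosure), the following Python sketch shows what such a look-up table could look like - annotations keyed by model coordinates, with a simple tolerance-based lookup; the coordinates, keys, and fields are hypothetical.

```python
# Illustrative sketch: associate declarative-model coordinates with annotations
# and look up the annotation nearest a given model point.

annotations = {
    # (x, y, z) of a feature in the declarative model -> its annotation
    (1.2, 0.8, 0.4): {"feature": "wheel", "size_m": 0.6, "color": "black"},
    (0.4, 1.5, 0.2): {"feature": "arm_pivot", "shape": "cylindrical"},
}

def annotation_near(point, table, tol=0.05):
    """Return the annotation whose model coordinates are within `tol` of `point`."""
    for coords, meta in table.items():
        if all(abs(a - b) <= tol for a, b in zip(coords, point)):
            return meta
    return None

print(annotation_near((1.21, 0.79, 0.41), annotations))   # -> wheel annotation
```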
  • the computer code defining the declarative model is compiled (502) to produce a compiled model.
  • the compiled model is comprised of intermediate object code that can be read by an AR player.
  • the declarative model, and thus the compiled model, defines the DT, or at least a part of the DT, for the object.
  • the AR player may be executable on a front-end device of the AR system, and comprises computer code that is executable to generate AR content based on the compiled model and on an image (or other graphic) of the object.
  • the AR system links (504) information from the compiled model to corresponding information in an image (e.g., the image of the object), and generates (505) AR content based on the linked information.
  • the AR system outputs (506) data representing the AR content for rendering on a display screen of a computing device, such as a tablet computing device.
  • the AR player may identify objects and their attributes that were selected as described herein.
  • the compiled model may be read to locate the selected objects in the compiled model. Any appropriate number of attributes may be used to correlate features from the image to features in the compiled model.
  • the AR system links the information from the compiled model to the object shown in the image.
  • the compiled model may contain information describing the make, model, tread, and so forth of a tire.
  • the compiled model may also contain sensor readings, or other information. That information is linked to the tire in the image. That information may be used to generate AR content, as described herein.
  • the AR player may generate AR content by rendering computer graphics generated from data in the DT over appropriate locations of the image. For example, the AR player may identify an element of a graphic in the manner described above, obtain information about that graphic from annotations and/or other information available in the compiled model, and generate the graphic based on information from the compiled model and/or sensor readings.
  • the computer graphics that form part of the AR content may overlay the same element shown in an image to enhance or explain an aspect of the element. In some implementations, the computer graphics do not overlay the element, but rather are adjacent to, or reference, the element.
  • the AR content may be generated for an image or video, e.g., on a frame-by-frame basis. Thus, the AR content may be static (unchanging) or dynamic (changeable over time). In the case of video, features in frames of video may be identified using appropriate object identification and object tracking techniques.
  • the computer graphics portion of the AR content may track movement frame-by-frame of the actual object during playback of the video.
  • the video may be real-time video, although that is not a requirement.
  • the DT may be generated or updated in real-time, and the resulting computer graphics superimposed on frames in real-time. Updating the DT may include changing the declarative model and the compiled model, and/or other data used to define the DT, as appropriate.
  • AR content is generated by superimposing computer- generated content onto actual graphics, such as an image or video of a real-life object.
  • Any appropriate computer-generated content may be used including, but not limited to, computer graphics, computer animation, and computer-generated text.
  • the actual graphic such as an image or video of an object is stored in computer memory.
  • a location such as the position and orientation, of the device that captured the image is also stored.
  • a graphical model such as the digital twin (DT) is mapped to the object in the image, and is used to generate content following capture and storage of the image.
  • a computing device in the AR system may receive a command from a user or other system to access the image, to replay a video of which the image is part, to obtain information about the object in the image, or take any other appropriate action.
  • One or more processes executing in the AR system may then generate AR content based on the image, a location of the device, the action, and information in the graphical model that represents the object.
  • a technician may capture a video of an object, such as a printer, by walking around the printer with a video camera in-hand.
  • the video - comprised of sequential image frames - is stored in computer memory.
  • the printer in the video is recognized using one or more appropriate computer vision techniques.
  • the recognition may include identifying the location of the video camera that captured the video, including its position and orientation relative to the printer, and storing that information in computer memory.
  • a graphical model containing information about the printer is mapped to the printer in the video, as described herein.
  • the mapping may include associating information in the graphical model to corresponding parts of the printer, and storing those associations in memory.
  • the resulting mapping enables the information from the graphical model to be used to augment the video of the printer.
  • the information may represent computer graphics that may be overlaid on the printer during presentation of the video.
  • the computer graphics may display interior components of the printer, exterior components of the printer, readings or text relating to the operation of the printer, and so forth. Any appropriate information may be displayed.
  • video or individual images of the printer may be accessed, augmented, and presented at any time following image capture.
  • video of the printer may be presented to a user at a time after the video was captured, and may be replayed to identify information about the printer even after the technician has left the vicinity of the printer.
  • the printer may be connected to a network, and may include sensors associated with one or more of its components. Information from the sensors - e.g., sensor readings - may be incorporated into the graphical model in real-time. Accordingly, even after the technician has left the vicinity of the printer, the technician may use the video and the graphical model to obtain current information about the printer. For example, the technician may replay the video, which may be augmented with current sensor readings, such as an out-of-paper indication or a paper blockage indication. The technician may use the video and the graphical model, remotely or in the vicinity of the printer, to identify locations of any problem, to diagnose the problem, to repair the problem, and/or to discuss, over any communications medium, repair with a third party in the vicinity of the printer.
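  • The following is a minimal, hypothetical sketch of how live sensor readings might be folded into a stored DT and surfaced when stored video is replayed; the class names and fields are assumptions used for illustration only.

```python
# Hypothetical sketch: a digital twin (DT) that accepts live sensor readings and
# is queried when previously stored video frames are replayed.

import time

class DigitalTwin:
    def __init__(self, object_id):
        self.object_id = object_id
        self.sensor_state = {}          # latest reading per sensor name

    def update_sensor(self, name, value):
        # Incorporate a reading into the DT as it arrives (e.g., over a network).
        self.sensor_state[name] = {"value": value, "timestamp": time.time()}

    def current_overlays(self):
        # Text overlays reflecting the DT's current state, regardless of when
        # the underlying video was captured.
        return [f"{name}: {entry['value']}" for name, entry in self.sensor_state.items()]

def replay_frame(frame_pixels, dt):
    """Pair a stored frame with overlays generated from the DT's current state."""
    return {"frame": frame_pixels, "overlays": dt.current_overlays()}

if __name__ == "__main__":
    printer_dt = DigitalTwin("printer-001")
    printer_dt.update_sensor("paper_tray", "out of paper")
    stored_frame = "<pixels captured earlier>"
    print(replay_frame(stored_frame, printer_dt)["overlays"])
```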
  • one or more image capture devices may be located in the vicinity of the object. These image capture devices may send information to the AR system to augment the original video or image.
  • the object - also referred to as the subject - may be a beach.
  • the image capture devices may capture images of the water, and send those images to the AR system.
  • the images of the water may be correlated to the original image or video and may be used to augment the original image or video to identify a current location of the water.
  • This information may be augmented, as appropriate, with information from the graphical model, such as a prior or current temperature of the water, current or predicted future weather conditions at the beach, and so forth, as appropriate.
  • actions may be taken with respect to stored video.
  • stored video may be presented, and a user may select a part of an object in the video.
  • information about an object in the image may be presented including, for example, current sensor information, components interior to the selected part, and so forth. Selection may be performed as described above - for example, with respect to Fig. 5.
  • the example AR system described herein is configured to identify an object in an image captured by an image capture device, and to map a three-dimensional (3D) graphical model to the image of the object.
  • the 3D graphical model contains information about the object, such as the object's structure, current or past status, and operational capabilities.
  • the mapping of the 3D graphical model to the image associates this information from the 3D graphical model with the image.
  • an action may be taken with respect to the image currently or at a later date.
  • the image (which may be part of a video), the location of the image capture device during capture, and associations to the 3D graphical model are stored in computer memory, and may be used to access the image or any appropriate content at a later date. For example, a stored video may be accessed and played on a computing device. Information from the 3D graphical model may be accessed and retrieved to augment the video.
  • the information may include past or present sensor readings and, in some cases, updates to the 3D graphical model may require further mapping to the video. In some cases, the information may include past or present sensor locations.
  • a point on the image may be selected, and information from the 3D graphical model relating to that point may be retrieved and used to display computer-generated content on the image.
  • a computer graphics rendering of a selected object part may be displayed, as is the case with the arm of Fig. 1 .
  • text associated with the selected part may be displayed.
  • the 3D graphical model is controlled to track relative movement of the image capture device and the object in stored images or video.
  • the image capture device may move relative to the object, or vice versa, during image or video capture.
  • the 3D graphical model also tracks the relative movement of the object even as the perspective of the object in the image changes vis-a-vis the image capture device.
  • the example AR system enables interaction with the object from any appropriate orientation.
  • the DT for an object may also be generated based on sensor data that is obtained for the particular instance of the object.
  • the sensor data may be obtained from readings taken from sensors placed on, or near, the actual instance of the object (e.g., loader 102 of Fig. 1 ).
  • since that sensor data is unique to loader 102, the DT for loader 102 will be unique relative to DTs for other loaders, including those that are identical in structure and function to loader 102.
  • the DT may also include other information that is unique to the object, such as the object's repair history, its operational history, damage to the object, and so forth.
  • the DT may be updated periodically, intermittently, in response to changes in sensor readings, or at any appropriate time. Updates to the DT may be incorporated into the DT, where appropriate, and used to augment an image, such as the loader of Fig. 1 .
  • video showing operation of the loader may be captured and stored.
  • a DT may be associated with the loader. Sensors on the loader (not shown) may be used to monitor information such as fuel level, tire wear, and so forth. Values for such information may be received by the AR system from the sensors, and may update the DT for the loader.
  • the video of the loader when the video of the loader is played at a future date (e.g., at some point in time after its capture), information from the sensors may be used to augment images in the video.
  • the sensor information may be received in real-time or at least at some point following the initial capture and storage of the video. Accordingly, even though the video may have been captured at some point in the past, the sensor information may be current or more up-to-date than any information obtained at the time the image was captured.
  • the stored video may be used both to access, from the DT, information about the structure of the loader and information about its current status.
  • the video may be played to recreate a scene at the time video or imagery was captured, and to augment that scene with current information.
  • sensor or other data may be extrapolated or generated based on current or past data to predict future information. This future information may be incorporated into imagery or video, as appropriate.
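  • One simple way such a prediction could be made is linear extrapolation over timestamped readings, as in the sketch below; the technique and all values are illustrative assumptions, not prescribed by this description.

```python
# Illustrative sketch: linear extrapolation of timestamped sensor readings to
# predict a future value that could be rendered as part of the AR content.

def extrapolate(readings, future_t):
    """readings: list of (t, value) pairs; returns the predicted value at future_t
    using a least-squares line fit (falls back to the last value if too few points)."""
    if len(readings) < 2:
        return readings[-1][1] if readings else None
    n = len(readings)
    sum_t = sum(t for t, _ in readings)
    sum_v = sum(v for _, v in readings)
    sum_tt = sum(t * t for t, _ in readings)
    sum_tv = sum(t * v for t, v in readings)
    denom = n * sum_tt - sum_t * sum_t
    if denom == 0:
        return sum_v / n
    slope = (n * sum_tv - sum_t * sum_v) / denom
    intercept = (sum_v - slope * sum_t) / n
    return slope * future_t + intercept

if __name__ == "__main__":
    fuel_readings = [(0, 95.0), (60, 93.5), (120, 92.1)]   # (seconds, percent)
    print(round(extrapolate(fuel_readings, 300), 1))        # predicted fuel level
```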
  • the updates may include updated imagery.
  • updates to the original image or object obtained using on-location or other cameras may be received following original image capture. These updates may be incorporated into the DT, and used to augment the original image.
  • current video of water in a static image may be received, and that video may be incorporated into the image's DT, and used to augment the image.
  • the original static image may, by virtue of these updates, have a video component that reflects the current and changing state of the water, as opposed to the originally-captured image of the water.
  • This is an example of AR content that is generated from real-life, or actual, video content only, rather than from an actual image and computer-generated imagery.
  • AR content such as this may be augmented with computer-generated imagery, e.g., to show the temperature of the water, current or predicted temperature of the air, the time, and so forth.
  • Example process 900 that uses the DT to augment actual graphics, such as images or video, is shown in Fig. 9.
  • Example process 900 may be performed by the AR system described herein using any appropriate hardware.
  • an image of an object is captured (901 ) by an image capture device - a camera in this example - during relative motion between the device and the object.
  • the object may be any appropriate apparatus, system, structure, entity, or combination of one or more of these that can be captured in an image.
  • An example of an object is loader 102 of Fig. 1.
  • the camera that captures the image may be a still camera or a video camera.
  • the camera may be part of a mobile computing device, such as a tablet computer or a smartphone.
  • Data representing the image is stored (902) in computer memory.
  • the image may be one frame of multiple frames that comprise the video.
  • process 900 requires that the camera be within a predefined location relative to the object during image capture.
  • in order to determine the location of an object relative to the camera as described below, in some implementations, the camera needs to be within a predefined field of view (represented by a rectangle 510 defined by intersecting sets of parallel lines) relative to the object when the image is captured.
  • in some implementations, recognition may be performed regardless of where the camera is positioned during image capture. When the camera is within the predefined location, the location(s) of components of the object may be estimated, and information from those parts may be used in determining the location of the object in the image.
  • image marker-based tracking may be used to identify the location based on image artifacts, or other active scanning techniques may be used to scan and construct a 3D map of the environment, from which location can be determined. Using this tracking ability, and identifying information as described below, a DT can be associated with the object.
  • the relative motion between the camera and the object includes the object remaining stationary while the camera moves. In some implementations, the relative motion between the camera and the object includes the object moving while the camera remains stationary. In some implementations, the relative motion between the camera and the object includes both the object and the camera moving. In any case, the relative motion is evident by the object occupying, in different images, different locations in the image frame. Multiple images may be captured and stored (902) during relative motion and, as described below, a DT may be mapped to (e.g., associated with) the object in each image. As described below, in some implementations, the DT may track motion of the object, thereby allowing for interaction with the object via an image from different perspectives.
  • real-time information may be received from an object (or subject) of the image, and that information may be incorporated into the DT in real-time and used to augment stored video.
  • real-time may not mean that two actions are simultaneous, but rather may include actions that occur on a continuous basis or track each other in time, taking into account delays associated with processing, data transmission, hardware, and the like.
  • tablet computer 101 may be used to capture the image of loader 102 at a first time, T1.
  • the image may be part of a video stream comprised of frames of images that are captured by walking around the loader.
  • the image may be part of a video stream comprised of frames of images that are captured while the camera is stationary but the loader moves.
  • the tablet computer 101 may be used to capture a different image of loader 102 at a second, different time, T2.
  • process 900 identifies the object instance in the captured image and retrieves (903) a DT for the object instance - in the example of Fig. 1 , loader 102.
  • any appropriate identifying information may be used to identify the object instance.
  • the identifying information may be obtained from the object itself, from the image of the object, from a database, or from any other appropriate source.
  • the identifying information may be, or include, any combination of unique or semi-unique identifiers, such as a Bluetooth address, a media access control (MAC) address, an Internet Protocol (IP) address, a serial number, a quick response (QR) code or other type of bar code, a subnet address, a subscriber identification module (SIM), or the like.
  • Tags such as RFIDs, may be used for identification.
  • the identifying information may be, or include, global positioning system (GPS) or other coordinates that defines the location of the object.
  • unique features of the object in the image may be used to identify the object instance.
  • a database may store information identifying markings, wear, damage, image artifacts as described above, or other distinctive features of an object instance, together with a unique identifier for the object.
  • the AR system may compare information from the captured image to the stored information or a stored image. Comparison may be performed on a mobile device or on a remote computer. The result of the comparison may identify the object. After the object is identified, the DT for that object may be located (e.g., in memory, a location on a network, or elsewhere) using the obtained object identifier. The DT corresponding to that identifier may be retrieved (903) for use by process 900.
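  • A minimal sketch of this identify-then-retrieve flow appears below; the identifier formats and the in-memory registry are assumptions used purely for illustration.

```python
# Illustrative sketch: resolving an object instance from identifying information
# (e.g., a serial number or QR payload) and retrieving the DT stored for it.

DT_REGISTRY = {
    # object identifier -> the DT record for that instance (here, an in-memory dict)
    "LOADER-102-SN-0042": {"model": "loader", "graphics": "<3D data>", "sensors": {}},
}

def identify_instance(identifiers):
    """Return the first identifier that resolves to a known object instance."""
    for candidate in identifiers:
        if candidate in DT_REGISTRY:
            return candidate
    return None

def retrieve_dt(object_id):
    return DT_REGISTRY.get(object_id)

if __name__ == "__main__":
    # Identifiers might come from a QR code in the image, a MAC address, and so forth.
    found = identify_instance(["00:1B:44:11:3A:B7", "LOADER-102-SN-0042"])
    print(found, retrieve_dt(found) is not None)
```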
  • Process 900 determines (904) a location of the camera relative to the object during image capture.
  • the location of the camera relative to the object can be specified, for example, by the distance between the camera and the object as well as the relative orientations of the camera and object. Other determinants of the relative location of the camera and the object, however, can be used.
  • the relative locations can be determined using known computer vision techniques for object recognition and tracking.
  • the location may be updated periodically or intermittently when relative motion between the object and the camera is detected.
  • For each image - including a frame of video - the location of the camera relative to the object, as determined herein, is stored (905) in computer memory.
  • the stored information may be used, as described herein, to implement or update mapping of the DT to the object in the image based on movement of the object.
  • location may be determined based on one or more attributes of the object in the stored image and based on information in the DT for the object. For example, a size of the object in the image - e.g., a length and/or width taken relative to appropriate reference points - may be determined. For example, in the image, the object may be five centimeters tall. Information in the DT specifies the actual size of the object in the real-world with one or more of the same dimensions as in the image. For example, in the real-world, the object may be three meters tall. In an example implementation, knowing the size of the object in the image and the size of the object in the real world, it is possible to determine the distance between the camera and the object when the image was captured. This distance is one aspect of the location of the camera.
  • the distance between the camera and the object is determined relative to a predefined reference point on the camera, rather than relative to a lens used to capture the image.
  • For example, taking the case of some smartphones, the camera used to capture images is typically in an upper corner of the smartphone. Obtaining the distance relative to a predefined reference point, such as a center point, on the smartphone may provide for greater accuracy in determining the location. Accordingly, when determining the distance, the offset between the predefined reference point and the camera on the smartphone may be taken into account, and the distance may be corrected based on this offset.
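  • Below is a minimal sketch of this distance estimate under a simple pinhole-camera assumption, including a correction for the offset between the lens and a predefined reference point on the device; the focal length and offset values are hypothetical.

```python
# Illustrative sketch (pinhole-camera assumption): estimate camera-to-object
# distance from the object's known real-world height and its height in the image,
# then correct the estimate toward a predefined reference point on the device.

def estimate_distance(real_height_m, image_height_px, focal_length_px):
    """distance ~= focal_length * real_height / projected_height."""
    return focal_length_px * real_height_m / image_height_px

def correct_for_reference_offset(distance_m, lens_to_reference_offset_m):
    # Simple 1-D correction: shift the measurement from the lens position to the
    # device's predefined reference point (e.g., its center).
    return distance_m + lens_to_reference_offset_m

if __name__ == "__main__":
    # Hypothetical values: a 3 m tall object spans 600 px with a 1500 px focal length.
    d = estimate_distance(real_height_m=3.0, image_height_px=600, focal_length_px=1500)
    print(round(d, 2), "m from lens")
    print(round(correct_for_reference_offset(d, 0.07), 2), "m from device reference point")
```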
  • process 900 identifies one or more features of the object in the stored image, such as wheel 106 in the loader of Fig. 1 . Such features may be identified based on the content of the image.
  • a change in pixel color may be indicative of a feature of an object.
  • the change in pixel color may be averaged or otherwise processed over a distance before a feature of the object is confirmed.
  • sets of pixels of the image may be compared to known images in order to identify features. Any appropriate feature identification process may be used.
  • the orientation of the object in the image relative to the camera may be determined based on the features of the object identified in the image.
  • the features may be compared to features represented by 3D graphics data in the DT.
  • one or more 3D features from the DT may be projected into two-dimensional (2D) space, and their resulting 2D projections may be compared to one or more features of the object identified in the image.
  • Features of the object from the image and the 3D graphical model (from the DT) that match are aligned. That is, the 3D graphical model is oriented in 3D coordinate space so that its features align to identified features of the image.
  • the 3D graphical model may be at specified angle(s) relative to axes in the 3D coordinate space. These angle(s) define the orientation of the 3D graphical model and, thus, also define the orientation of the object in the image relative to the camera that captured the image. Other appropriate methods of identifying the orientation of the object in the image may also be used, or may be used in conjunction with those described herein.
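  • One way to realize this projection-and-alignment step is with a perspective-n-point solver such as OpenCV's solvePnP; the sketch below simulates image features by projecting hypothetical model points with a known pose and then recovers that pose. All coordinates, intrinsics, and pose values are assumed for illustration.

```python
# Illustrative sketch using OpenCV: project 3D feature points from the model into
# 2D, then recover the model's orientation relative to the camera by aligning
# 2D image features with their 3D counterparts (perspective-n-point).

import cv2
import numpy as np

# Hypothetical 3D feature points from the graphical model (metres, model frame).
model_points = np.array([
    [0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.0, 0.0],
    [0.0, 1.0, 0.0], [0.3, 0.2, 0.8], [1.2, 0.8, 0.8],
], dtype=np.float64)

# Hypothetical camera intrinsics (pinhole, no distortion).
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Simulate the features as they would appear in the captured image for some
# "true" pose of the object relative to the camera.
true_rvec = np.array([[0.1], [0.4], [0.0]])
true_tvec = np.array([[0.2], [-0.1], [6.0]])
image_points, _ = cv2.projectPoints(model_points, true_rvec, true_tvec, K, dist)

# Align: solve for the rotation/translation that maps model features onto the
# identified image features. The recovered rvec defines the object's orientation.
ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, dist)
print(ok, rvec.ravel().round(3), tvec.ravel().round(3))
```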
  • the location (e.g., position and orientation) of the camera relative to the object is stored (905). In the case of video, which is comprised of multiple image frames in sequence, the location of the camera is stored for each image frame.
  • Process 900 maps (906) the 3D graphical model defined by the DT to the object in the image based, at least in part, on the determined (904) location of the camera relative to the object.
  • the location may include the distance between the object in the image and the camera that captured the image, and an orientation of the object relative to the camera that captured the image. Other factors than these may also be used to specify the location.
  • mapping may include associating data from the DT, such as 3D graphics data and text, with corresponding parts of the object in the image. In the example of loader 102 of Fig. 1, data from the DT relating to its arm may be associated with the arm; data from the DT relating to front-end 108 may be associated with front-end 108; and so forth.
  • the associating process may include storing pointers or other constructs that relate data from the DT with corresponding pixels in the image of the object. This association may further identify where, in the image, data from the DT is to be rendered when generating AR content.
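  • One simple way to realize such associations is a per-frame table relating DT part identifiers to the pixel regions where those parts appear, as in the hypothetical sketch below; the part names and regions are illustrative.

```python
# Illustrative sketch: storing, per image frame, associations ("pointers") between
# DT part identifiers and the pixel regions where those parts appear, so that DT
# content can later be rendered at the right place when AR content is generated.

from dataclasses import dataclass
from typing import Dict, Tuple

Region = Tuple[int, int, int, int]          # x, y, width, height in pixels

@dataclass
class FrameAssociation:
    frame_index: int
    regions: Dict[str, Region]              # DT part id -> region in this frame

# Associations accumulated while mapping the DT to a stored video.
associations: Dict[int, FrameAssociation] = {}

def associate(frame_index, part_id, region):
    entry = associations.setdefault(frame_index, FrameAssociation(frame_index, {}))
    entry.regions[part_id] = region

if __name__ == "__main__":
    associate(0, "arm", (400, 120, 350, 200))
    associate(0, "front_end", (150, 300, 300, 250))
    associate(1, "arm", (410, 118, 350, 200))   # the arm has shifted slightly in frame 1
    print(associations[1].regions["arm"])
```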
  • data from the DT - such as 3D graphics or text - is mapped to the image.
  • Fig. 4 shows, conceptually, 3D graphics 110 for the loader beside an actual image 112 of the loader.
  • the DT comprising the 3D graphics data may be stored in association with the image of the loader, as described above, and that association may be used in obtaining information about the loader from the image.
  • because data in the DT relates features of the object in 3D, using the DT and the image of the object it is also possible to position 3D graphics for objects that are not visible in the image at appropriate locations. More specifically, in the example of Fig. 4 above, because image 112 is 2D, only a projection of the object into 2D space is visible.
  • data - e.g., 3D graphics data - from the DT is associated with the image of the object in 2D.
  • the DT specifies the entire structure of the object using 3D graphics data.
  • the location of the camera relative to the object may change in the stored video as the relative positions between the object and the camera change.
  • the camera may be controlled to capture video of the object moving; the camera may be moved and capture video while the object remains stationary; or both the camera and the object may move while the camera captures video.
  • the loader may move from the position shown in Fig. 1 to the position shown in Fig. 3.
  • the AR system may be configured so that the DT - e.g., 3D graphics data and information defined by the DT - tracks that relative movement. That is, the DT may be moved so that appropriate content from the DT tracks corresponding features of the moving object.
  • the DT may be moved continuously with the object in the stored video by adjusting the associations between data representing the object in an image frame and data representing the same parts of the object in the DT.
  • for example, if a part of the object moves to a coordinate XY in a subsequent image, the AR system may adjust the association between the DT and the image to reflect that data representing the moved part in the DT is also associated with coordinate XY.
  • movement of the object can be used to predict its future location in a series of images - e.g., in frame-by-frame video - and the associations between DT data and image data may be adjusted to maintain correspondence between parts of the object in the image and their counterparts in the DT.
  • Take arm 113 of Fig. 3 as an example.
  • movement of the camera may result in relative motion of arm 113 in the image frame. Movement in one direction may be a factor in determining future movement of the object in that same direction.
  • the system may therefore predict how to adjust the associations for future movement in the video based on the prior movement.
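  • The sketch below illustrates one simple prediction scheme consistent with this idea: a constant-velocity estimate of a part's next position from its positions in prior frames. The numbers are illustrative.

```python
# Illustrative sketch: predict where a tracked part (e.g., the loader's arm) will
# appear in the next frame from its positions in the two most recent frames,
# assuming roughly constant velocity between frames.

def predict_next(prev_pos, curr_pos):
    """prev_pos, curr_pos: (x, y) pixel positions in consecutive frames."""
    vx = curr_pos[0] - prev_pos[0]
    vy = curr_pos[1] - prev_pos[1]
    return (curr_pos[0] + vx, curr_pos[1] + vy)

if __name__ == "__main__":
    arm_frame_1 = (420, 150)
    arm_frame_2 = (432, 146)
    # Predicted position used to pre-adjust the DT-to-image associations for frame 3.
    print(predict_next(arm_frame_1, arm_frame_2))   # -> (444, 142)
```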
  • a 3D graphical model representing the object and stored as part of the DT is mapped to each image, e.g., in a video sequence, and information representing the mappings is stored (907) in computer memory.
  • information, as described herein, mapping the 3D graphical model is stored for each image, and that information is retrievable and usable to generate AR content for the image at any appropriate time.
  • the video and mapping information may be stored at an initial time, and the video and mappings may be used at any point following the initial time to generate AR content using the video and information from the DT resulting from the mapping.
  • the location, including the position and orientation, of the image capture device may be stored for each image.
  • mapping may be performed dynamically using the stored location. For example, as an image is retrieved from storage, stored location information for the image capture device is also retrieved. That stored location information is used, together with any other appropriate information, to map a 3D graphical model of the object from the object's DT to the image in the manner described herein. Each time the image changes, as is the case for video, that mapping process may be performed or updated.
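  • A hypothetical sketch of this dynamic, replay-time mapping loop is shown below; the stored pose format and the map_dt_to_frame helper are assumptions standing in for the mapping steps described above.

```python
# Illustrative sketch: mapping is deferred until replay. For each stored frame,
# the stored camera pose is loaded and the DT is mapped to the frame on the fly,
# rather than reading precomputed mapping information from memory.

def map_dt_to_frame(dt, frame, camera_pose):
    # Placeholder for the mapping described above: use the stored camera position
    # and orientation to associate DT parts with regions of this frame.
    return {"frame_id": frame["id"], "pose": camera_pose, "associations": "<computed here>"}

def replay(frames, camera_poses, dt):
    """frames: stored video frames; camera_poses: stored pose per frame id."""
    for frame in frames:
        pose = camera_poses[frame["id"]]            # stored at capture time
        mapping = map_dt_to_frame(dt, frame, pose)  # recomputed each time the frame is shown
        yield frame, mapping

if __name__ == "__main__":
    frames = [{"id": 0, "pixels": "..."}, {"id": 1, "pixels": "..."}]
    poses = {0: {"position": (0, 0, 5), "orientation": (0, 0, 0)},
             1: {"position": (0.1, 0, 5), "orientation": (0, 2, 0)}}
    for f, m in replay(frames, poses, dt={"object": "loader-102"}):
        print(f["id"], m["pose"]["position"])
```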
  • the mapping of the DT to the object associates attributes in the DT with the object. This applies not only to the object as a whole, but rather to any parts of the object for which the DT contains information. Included within the information about the object is information about whether individual parts of the object are selectable individually or as a group.
  • a part may be separately defined within the DT and information, including 3D graphics, for the part, may be separately retrievable in response to an input, such as user or programmatic selection.
  • selectability of a part may be based on one or more additional or other criteria.
  • a user interface may be generated to configure information in the DT to indicate which of the parts are selectable and which of the parts are selectable individually or as a group.
  • a DT may be generated at the time that the PT (object) is created.
  • the AR system may obtain, via a user interface, information indicating that an object having a given configuration and a given serial number has been manufactured.
  • the AR system may create, or tag, a DT for the object based on information such as that described herein.
  • Operational information about the instance of the object may not be available prior to its use; however, that information can be incorporated into the DT as the information is obtained.
  • sensors on the (actual, real-world) object may be a source of operational information that can be relayed to the DT as that information is obtained.
  • a user may also specify in the DT, through the user interface, which parts of the object are selectable, either individually or as a group. This specification may be implemented by storing appropriate data, such as a tag or other identifier(s), in association with data representing the part. Referring back to Fig. 9, following storage, process 900 receives (908) data representing an action to be performed with respect to the stored video.
  • data is generated (909) for use in rendering content on a display device.
  • the generated data is based on one or more of: the stored image, the stored location of the image capture device, at least some information from the retrieved DT, or the action to be taken.
  • the action to be performed may include replaying video from a prior point in time, and augmenting that video with 3D graphics at selected points in the video or where otherwise appropriate.
  • the video may be retrieved and played from the perspective of the image capture device. This perspective is quantified using information identifying the location of the image capture device relative to an object in the video.
  • mapping information for each frame of video is stored.
  • that mapping information may be used to correlate the 3D graphics to the corresponding part of the image, and may be used to generate AR content that includes the image and 3D graphics.
  • the location (e.g., position and orientation) of the image capture device may be stored for each image, including for frames of video. Accordingly, in some implementations, the mapping process may be performed dynamically as each image is retrieved. For example, rather than performing mapping beforehand and storing the mapping information in memory, mapping may be performed as each frame of video is played. Performing mapping dynamically may have advantages in some cases.
  • the mapping may be performed using a combination of stored mapping information and dynamic mapping. For example, parts of an object that do not change may be mapped beforehand and mapping information therefor stored. Other parts of the object that do change, and for which the DT may change over time, may be mapped dynamically.
  • the data may represent an instruction to play the video, to move to a particular image in the video, to display 3D graphical content for all or part of the video, to identify updated sensor information for parts of an object shown in the video, to access to the object's BOM, to access the object's service history, to access the object's operating history, to access the object's current operating conditions, to generate data based on determined sensor values, and so forth.
  • the data may represent a selection of a point on an image that represents a part of the object. The selection may include a user-initiated selection, a programmatic selection, or any other type of selection.
  • For example, as shown in the figure, a user may select a point in the image that corresponds to loader 102 by touching the image at an appropriate point.
  • Data for the resulting selection is sent to the AR system, where that data is identified as representing a selection of a particular object or part on the loader represented in the image.
  • the selection may trigger display of information.
  • the user interface showing the object can be augmented with a set of visual crosshairs or a target that can remain stationary, such as in the center, relative to the user interface (not illustrated).
  • the user can select a part of the object by manipulating the crosshairs such that the target points to any point of interest on the object.
  • the process 900 can be configured to continually and/or repeatedly analyze the point in the image under the target to identify any part or parts of the object that correspond to the point under the target.
  • the target can be configured to be movable within the user interface by the user, and/or the process can be configured to analyze a point under the target for detection of a part of the object upon active user input, such as a keyboard or mouse click.
  • the point selected is identified by the system, and information in the DT relating to an object or part at that point is identified. The user may be prompted, and specify, whether the part, a group of parts, or the entire object is being selected. The information is retrieved from the DT and is output for rendering on a graphical user interface as part of AR content that may contain all or part of the original image.
  • 3D graphics data for the selected object or part in stored video or other storage imagery may be retrieved and rendered over all or part of the object or part.
  • text data relating to the selected object or part may be retrieved and rendered proximate to the object or part.
  • the text may specify values of one or more operational parameters (e.g., temperature) or attributes (e.g., capabilities) of the part.
  • both 3D graphics data and text data relating to the selected object or part may be retrieved and rendered with the object or part.
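  • A hypothetical sketch of this selection-to-rendering step is shown below: a selected point is resolved to a part via the stored mapping, and the part's 3D graphics and text entries from the DT are assembled into content for rendering. The names and payload shape are illustrative assumptions.

```python
# Illustrative sketch: resolve a selected image point to a part, then pull that
# part's 3D graphics and text (e.g., operational parameters) from the DT to build
# the content that will be rendered over or next to the part.

# Hypothetical per-frame mapping: part id -> bounding box (x, y, w, h) in pixels.
FRAME_MAPPING = {"arm": (400, 120, 350, 200), "front_end": (150, 300, 300, 250)}

# Hypothetical DT content keyed by part id.
DT_PARTS = {
    "arm": {"graphics": "<3D mesh for arm>", "text": {"hydraulic_temp_C": 61}},
    "front_end": {"graphics": "<3D mesh for front end>", "text": {"load_kg": 850}},
}

def hit_test(point, mapping):
    px, py = point
    for part_id, (x, y, w, h) in mapping.items():
        if x <= px < x + w and y <= py < y + h:
            return part_id
    return None

def build_ar_payload(point):
    part_id = hit_test(point, FRAME_MAPPING)
    if part_id is None:
        return None
    entry = DT_PARTS[part_id]
    return {
        "part": part_id,
        "overlay_graphics": entry["graphics"],                              # rendered over the part
        "adjacent_text": [f"{k}: {v}" for k, v in entry["text"].items()],   # rendered beside it
    }

if __name__ == "__main__":
    print(build_ar_payload((500, 200)))   # a selection that lands on the arm
```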
  • the resulting AR content may be used to control the object in the image using previously stored video or imagery.
  • the DT may be associated with the actual real-world object, e.g., through one or more computer networks.
  • a user may interact with the displayed AR content to send data through the network to control or interrogate the object, among other things.
  • Figs. 11 to 17 show examples of AR content that may be generated using stored imagery or video according to the example processes described herein.
  • Fig. 11 shows the results of an action taken with respect to stored image 511 of loader 102.
  • the action is to display, on image 511 , interior parts 512 of the loader and current sensor reading 513.
  • the graphic depicting the current sensor reading is located near the part being read (e.g., the vent), and includes an arrow pointing to that part.
  • a graphic such as this, or any other appropriate graphic, may be used to represent any sensor reading.
  • the content augmenting the image is obtained from the loader's DT, as described herein.
  • Fig. 12 shows the results of an action taken with respect to stored image 511 of loader 102.
  • the action is to display, on image 511 , the loader and its components in outline form.
  • the content augmenting the image is obtained from the loader's DT, as described herein.
  • Fig. 13 shows the results of an action taken with respect to stored image 511 of loader 102.
  • the action is to display, on image 511 , the loader and its components in shadow form together with its interior parts 515 in color.
  • the content augmenting the image is obtained from the loader's DT, as described herein.
  • Fig. 14 shows the results of an action taken with respect to stored image 511 of loader 102.
  • the action is to display, on image 511 , the loader and its components in shadow form together with two sensor readings 520, 521 .
  • the content augmenting the image is obtained from the loader's DT, as described herein.
  • Fig. 15 shows the results of an action taken with respect to stored image 511 of loader 102.
  • the action is to display, on image 511 , a selected circular region 524 of the loader and components in that circular region in shadow form together with interior components 525 in that circular region in color.
  • the interior components within the selected circular region are displayed.
  • the remainder of the image - the part not in the circular region - retains its original characteristics and is not augmented.
  • the content augmenting the image is obtained from the loader's DT, as described herein.
  • Fig. 16 shows the results of an action taken with respect to stored image 511 of loader 102.
  • the action is to display, on image 511 , the loader and its components in outline form, together with current sensor reading 527.
  • the content augmenting the image is obtained from the loader's DT, as described herein.
  • Fig. 17 shows the results of an action taken with respect to stored image 530 of loader 102, which is different from image 511 and may be part of an image sequence containing image 511 , although that is not a requirement.
  • the action is to display, on image 530, the loader and its components in outline form, together with current sensor reading 531.
  • the content augmenting the image is obtained from the loader's DT, as described herein.
  • Fig. 7 shows an example computer/network architecture 400 on which the example AR system and the example processes, may be implemented.
  • the AR system and processes are not limited to use with the Fig. 7 architecture, and may be implemented on any appropriate computer architecture and/or network architecture.
  • An augmented reality (AR) system is an example of a computer graphics system in which the processes may be used.
  • the processes may be used in any appropriate technological context or computer graphics system, and are not limited to use with an AR system or to use with the example AR system described herein.
  • "uncertain" content includes, but is not limited to, content of an image that does not necessarily represent an object containing that content.
  • an image may include an object having parts that are specular, or reflective.
  • the parts of the image that are reflective may constitute uncertain content. For example, in the case of a loader (e.g., Fig. 1), the windshield may display reflected content, such as trees or clouds, due to its reflectivity.
  • the recognition processes may have difficulty deciding that the object is a loader.
  • specular highlights, such as the sun on a wet road, can cause glare and other such effects that can not only mask the real features of an object, but can also generate some of the high-contrast features that content recognition processes rely upon for accurate recognition. If these features disappear when the angle of light changes (which is what happens with specular highlights), then recognition processes may be confused about the content of the image.
  • hinged parts that are connected for movement, such as arm 103 of the loader of Fig. 1, can cause uncertainty because they can block out areas of an image by virtue of their movement. Accordingly, parts of an object that have one or more of the foregoing attributes may also hinder content recognition processes.
  • the example processes described herein identify parts of an image that constitute uncertain content and, during recognition and tracking processes, place more importance on parts of the object that do not include uncertain content or that include less uncertain content than other parts of the object.
  • placing more importance may include deemphasizing information from parts of the object that have more than a defined amount of an attribute, such as reflectivity, transparency, or flexibility, and/or emphasizing parts of the object that have less than a defined amount of the attribute.
  • deemphasizing a part includes ignoring information about the part. For example, information in the image from the deemphasized part may not be taken into account during recognition and tracking.
  • deemphasizing a part includes applying less weight to information representing the part than to information representing other parts of the object, e.g., other parts that do not have, or have less of, an attribute such as transparency, reflectivity, or flexibility.
  • information in the image from the deemphasized part may have a smaller weighting factor applied than information from the other parts of the image.
  • emphasizing a part includes applying greater weight to information representing parts of the object that do not have, or have less of, an attribute, such as transparency, reflectivity, or flexibility, than to information representing other parts of the object having more of the attribute.
  • information in the image from the emphasized parts may have a larger weighting factor applied than information from the other parts.
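  • A minimal sketch of this weighting idea follows: features falling in regions flagged by the model as reflective, transparent, or flexible receive a smaller weight (or no weight) in the match score. The thresholds and weights are illustrative assumptions.

```python
# Illustrative sketch: score candidate feature matches while deemphasizing
# features that fall inside regions flagged as "uncertain" (e.g., windows,
# chrome, hinged parts) by the object's model.

def in_any_region(point, regions):
    px, py = point
    return any(x <= px < x + w and y <= py < y + h for (x, y, w, h) in regions)

def weighted_match_score(matched_features, uncertain_regions,
                         uncertain_weight=0.2, ignore_uncertain=False):
    """matched_features: list of ((x, y), match_quality in [0, 1])."""
    score = 0.0
    for point, quality in matched_features:
        if in_any_region(point, uncertain_regions):
            if ignore_uncertain:
                continue                      # give no weight at all
            score += uncertain_weight * quality
        else:
            score += quality                  # full weight for certain content
    return score

if __name__ == "__main__":
    window_region = [(300, 50, 120, 90)]      # flagged by the DT as reflective
    features = [((320, 80), 0.9), ((100, 200), 0.8), ((140, 260), 0.7)]
    print(weighted_match_score(features, window_region))          # window match deemphasized
    print(weighted_match_score(features, window_region, ignore_uncertain=True))
```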
  • a content recognition process (which may, or may not, be part of an object tracking process) may receive an image, and may attempt to recognize an object, such as a loader, in the image. This may include identifying enough parts of the object to associate, with the object, a graphical model identifying features of, and information about, the object.
  • the content recognition process may identify, using information from this model, parts of the object that have uncertain content. In this example, information from those parts is deemphasized relative to other parts of the object that do not have uncertain content or that have uncertain content that is less pronounced (e.g., less of the uncertain content).
  • a window 116 may be highly reflective and also refractive (at certain angles), whereas shiny paint on front-end 108 may be reflective, but less so than the window.
  • the recognition processes may give greater weight, during recognition, to information (e.g., pixels) representing the loader's front-end than to information representing the window.
  • the window is deemphasized in the recognition process relative to the front-end (or conversely, the front-end is emphasized over the window).
  • recognition and tracking processes may give no weight to information representing the window, and base recognition solely on other parts of the object that exhibit less than a threshold amount of reflectivity.
  • content, and objects that are part of that content may include any appropriate structures, matter, features, etc. in an image.
  • water which may be highly reflective in certain light, may be deemphasized relative to flora in the scene.
  • Recognition processes include, but are not limited to, initial recognition of an object and tracking motion of that object in a series of frames, such as image frames of video containing that object.
  • Example tracking processes perform recognition on an image-by-image (e.g., frame-by-frame) basis, as described herein.
  • AR content may be generated by superimposing computer-generated content onto actual graphics, such as an image or video of a real-life object. Any appropriate computer-generated content may be used.
  • the DT for an object instance may have numerous uses including, but not limited to, performing content recognition and tracking and generating AR content, as described herein.
  • the example AR system described herein may superimpose computer-generated content that is based on, or that represents, the DT or portions thereof onto an image of an object instance.
  • Example processes performed by the AR system identify an instance of the object, generate AR content for the object using the DT for that object, and use that AR content in various ways to enable access to information about the object.
  • Example process 1800 that uses the DT to recognize and track objects in images or video is shown in Fig. 18.
  • Example process 1800 may be performed by the AR system described herein using any appropriate hardware.
  • an image of an object is captured (1801 ) by an image capture device - a camera in this example.
  • the object may be any appropriate apparatus, system, structure, entity, or combination of one or more of these that can be captured in an image.
  • the camera that captures the image may be a still camera or a video camera.
  • the camera may be part of a mobile computing device, such as a tablet computer or a smartphone, or it may be a stand-alone camera.
  • Process 1800 identifies (1802) the object instance in the captured image, and retrieves (1803) a DT for the object instance - in the example of Fig. 1 , loader 102.
  • a DT is used in the examples presented herein; however, any appropriate computer-aided design (CAD) or other computer-readable construct may be used in addition to, or instead of, the DT.
  • Any appropriate identifying information may be used to identify the object instance. The identifying information may be obtained from the object itself, from the image of the object, from a database, or from any other appropriate source.
  • the identifying information may be, or include, any combination of unique or semi-unique identifiers, such as a Bluetooth address, a media access control (MAC) address, an Internet Protocol (IP) address, a serial number, a quick response (QR) code or other type of bar code, a subnet address, a subscriber identification module (SIM), or the like.
  • Tags such as RFIDs, may be used for identification.
  • the identifying information may be, or include, global positioning system (GPS) or other coordinates that define the location of the object.
  • unique features of the object in the image may be used to identify the object instance.
  • a database may store information identifying markings, wear, damage, or other distinctive features of an object instance, together with a unique identifier for the object.
  • the AR system may compare information from the captured image to the stored information or a stored image. Comparison may be performed on a mobile device or on a remote computer. The result of the comparison may identify the object. After the object is identified, the DT for that object may be located (e.g., in memory, a location on a network, or elsewhere) using the obtained object identifier.
  • Process 1800 performs (1804) a recognition process on the object.
  • the recognition process includes identifying features, structures, locations, orientations, etc. of the object based on one or more images of the object captured using the camera.
  • the recognition process requires that the camera be within a predefined location relative to the object during image capture. For example, as shown in Fig. 10, in order to perform initial object recognition, in some implementations, the camera needs to be within a predefined field of view (represented by a rectangle 510 defined by intersecting sets of parallel lines) relative to the object when the image is captured. In some implementations, recognition may be performed regardless of where the camera is positioned during image capture.
  • the recognition process (1804) includes identifying edges of the object. For example, edges in the object may be recognized based on regions of the object that contain adjacent pixels having greater than a predefined difference.
  • adjacent pixel regions may be analyzed to determine differences in the luminance and/or chrominance of those pixel regions. Adjacent regions having more than a predefined difference in luminance and/or chrominance may be characterized as edges of the object.
  • the pixels may be analyzed (e.g., averaged) over a region that spans at least a predefined minimum number of pixels.
  • the change in pixel characteristics may be averaged or otherwise processed over a distance before a feature of the object is confirmed.
  • dark-light transitions may be identified; sharp edges may be identified; corners may be identified; changes in contrast may be identified; and so forth.
  • sets of pixels of the image may be compared to known images in order to identify features. Any appropriate feature identification process may be used.
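  • The sketch below illustrates the kind of comparison described above on a toy luminance grid: adjacent pixel regions are averaged, and neighbouring averages differing by more than a threshold are marked as edge candidates. The block size and threshold are illustrative.

```python
# Illustrative sketch: mark edge candidates where the average luminance of
# adjacent pixel regions differs by more than a predefined threshold.

def block_means(luma, block):
    """Average luminance over non-overlapping block x block regions."""
    h, w = len(luma), len(luma[0])
    means = []
    for by in range(0, h - block + 1, block):
        row = []
        for bx in range(0, w - block + 1, block):
            vals = [luma[y][x] for y in range(by, by + block) for x in range(bx, bx + block)]
            row.append(sum(vals) / len(vals))
        means.append(row)
    return means

def edge_candidates(luma, block=2, threshold=40):
    means = block_means(luma, block)
    edges = []
    for r in range(len(means)):
        for c in range(len(means[0]) - 1):
            if abs(means[r][c] - means[r][c + 1]) > threshold:   # horizontal neighbour
                edges.append((r, c))
    return edges

if __name__ == "__main__":
    # Toy 4x8 luminance image: dark on the left, bright on the right.
    luma = [[20] * 4 + [200] * 4 for _ in range(4)]
    print(edge_candidates(luma))   # edge between block columns 1 and 2
```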
  • process 1800 identifies parts of the DT or model that have one or more attributes, such as reflectivity, transparency, or flexibility, or more than a predefined amount of one or more of these attributes. Because the initial recognition process is performed within a predefined location (e.g., field of view 510) relative to the object, the approximate locations of these parts of the object may be obtained based on the DT for the object. For example, in the case of a loader, a DT for the loader may specify the locations of windows, the reflectivity of its paint, and the locations of any other parts that are likely to adversely impact the recognition process.
  • the location(s) of these parts may be estimated, and information from those parts may be deemphasized when performing recognition, including identifying object edges.
  • features identified from edges of the object may be correlated to expected locations of the parts of the object that have one or more attributes. For example, an edge may be detected at a location where a window is expected. This edge may represent, for example, the structure of the loader that holds the window. By detecting this edge, the location of the window may be confirmed.
  • object recognition includes identifying edges or other distinguishing features of objects based on pixel transitions.
  • the recognition processes may identify those features (e.g., based on pixel transitions) in all parts of an object or image.
  • Features identified in parts of the object that contain uncertain content may be weighted less than features identified in other parts of the object containing no, or less, uncertain content. Any appropriate weighting factor or technique may be used to weight the edges.
  • the recognition process (1804) also identifies an orientation of the object in the image.
  • the orientation of the object in the image may be determined based on the edges (or other features) of the object identified in the image. For example, the edges may be compared to edges represented by 3D graphics data in the DT. To make the comparison, one or more 3D features from the DT may be projected into two-dimensional (2D) space, and their resulting 2D projections may be compared to one or more features of the object identified in the image. Edges of the object from the image and the 3D graphical model (from the DT) that match are aligned. That is, the 3D graphical model is oriented in 3D coordinate space so that its features align to identified features of the image.
  • the 3D graphical model may be at specified angle(s) relative to axes in the 3D coordinate space. These angles(s) define the orientation of the 3D graphical model and, thus, also define the orientation of the object in the image relative to the camera that captured the image. Other appropriate methods of identifying the orientation of the object in the image may also be used, or may be used in conjunction with those described herein.
  • the recognition process compares identified edges or other features of the object to edges or other features defined in the DT for that object. Based on the number and weighting of the matches, the recognition process is able to recognize the object in the image.
  • Process 1800 stores (1805) data representing the features (e.g., edges) of the object in computer memory, and uses the data in subsequent applications, including for generating AR content.
  • the recognition process (1804) can also be used to perform the identification process (1802). For example, a plurality of different candidate objects, each with a different associated DT, can be compared to the captured image of step 1802.
  • the recognition process (1804) can be applied to the image for each DT, taking into account, for example, the one or more attributes of each DT, in an attempt to recognize and therefore identify one of the candidate objects.
  • the identification, in this case, can be a general object or a specific object instance, depending on whether the DT defines a general class of objects or a specific object.
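  • As a sketch, identification via recognition can be framed as scoring the image against each candidate DT and keeping the best-scoring candidate above a minimum confidence; the scoring function and threshold below are illustrative assumptions standing in for the recognition process described above.

```python
# Illustrative sketch: identify an object by applying the recognition step once
# per candidate DT and selecting the candidate whose model best explains the image.

def recognize_with_dt(image_features, dt):
    # Placeholder for the recognition process described above (feature matching
    # with the DT's attributes, e.g., deemphasizing its uncertain regions).
    expected = set(dt["expected_features"])
    observed = set(image_features)
    return len(expected & observed) / max(len(expected), 1)

def identify(image_features, candidate_dts, min_score=0.5):
    best_id, best_score = None, 0.0
    for dt_id, dt in candidate_dts.items():
        score = recognize_with_dt(image_features, dt)
        if score > best_score:
            best_id, best_score = dt_id, score
    return best_id if best_score >= min_score else None

if __name__ == "__main__":
    candidates = {
        "loader-102": {"expected_features": ["arm", "bucket", "wheel", "cab"]},
        "printer-7":  {"expected_features": ["tray", "rollers", "panel"]},
    }
    print(identify(["wheel", "arm", "cab"], candidates))   # -> "loader-102"
```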
  • mapping may include associating data from the DT, such as 3D graphics data and text, with recognized parts of the object in the image.
  • data from the DT relating to its arm (covered by graphics 103) may be associated with the arm; data from the DT relating to front-end 108 may be associated with front-end 108; and so forth.
  • the associating process may include storing pointers or other constructs that relate data from the DT with corresponding pixels in the image of the object. This association may further identify where, in the image, data from the DT is to be rendered when generating AR content.
  • data from the DT - such as 3D graphics or text - is mapped to the image.
  • the DT comprising the 3D graphics data may be stored in association with the image of the loader, and that association may be used in obtaining information about the loader from the image.
  • the mapping may include associating parts of an object having attributes that result in uncertain content with corresponding information from the DT, and using those associations to track movement of the object between image frames.
  • the DT may contain information including, but not limited to, locations of windows, locations of chrome fixtures, the reflectivity of the loader's paint, and any other appropriate information about attributes that may affect the ability of the system to recognize the loader.
  • a location of the camera relative to the object may change as the relative positions between the object and the camera change.
  • the camera may be controlled to capture video of the object moving; the camera may be moved and capture video while the object remains stationary; or both the camera and the object may move while the camera captures video.
  • the relative motion between the camera and the object includes the object remaining stationary while the camera moves.
  • the relative motion between the camera and the object includes the object moving while the camera remains stationary.
  • the relative motion between the camera and the object includes both the object and the camera moving. In any case, the relative motion is evident by the object occupying, in different (e.g., first and second) images, different locations in the image frame.
  • multiple images may be captured during the relative motion and, as described below, the same DT may be mapped to (e.g., associated with) the object in each image.
  • the motion of the object may be tracked between image frames in real-time, and the DT may track the object's motion in realtime, thereby allowing for interaction with the object via an image from different perspectives and in real-time.
  • real-time may not mean that two actions are simultaneous, but rather may include actions that occur on a continuous basis or track each other in time, taking into account delays associated with processing, data transmission, hardware, and the like.
  • tablet computer 101 may be used to capture the image of loader 102 at a first time, T1.
  • the image may be part of a video stream comprised of frames of images that are captured by walking around the loader.
  • the image may be part of a video stream comprised of frames of images that are captured while the camera is stationary but the loader moves.
  • the tablet computer 101 may be used to capture a different image of loader 102 at a second, different time, T2. As is clear from Figs. 1 and 3, the two images were taken from different perspectives.
  • a recognition process is performed (1808a) to recognize the object in the first image.
  • Recognition of the object in the first image may include identifying the position and orientation of the object in the first image.
  • the recognition process may contain operations included in, or identical to, those performed in recognition process 1804.
  • the first image from which movement of the object may be tracked may be the original image upon which initial recognition was based, or it may be an image that follows the original image in a sequence of images (e.g., in frames of video) and that contains the object moved from its original position.
  • if the first image is the original image, recognition process 1808a need not be performed, since recognition has already been performed for the original image in operation 1804.
  • tracking may be performed between consecutive images in the sequence or between non-consecutive images in the sequence. For example, if the sequence contains frame A, frame B immediately following A, frame C immediately following B, and frame D immediately following C, tracking may be performed from frame A to frame B (consecutive images), from frame A to frame D (non-consecutive images), and so forth.
  • during recognition process 1808a, features, such as edges, in the first image are identified based on a region in the first image that contains pixels having greater than a predefined difference. Using these features, the 3D graphical model is mapped to the object at its new location.
  • the DT that represents the instance of the object is retrieved from computer memory.
  • the information includes, among other things, information identifying parts of the object that contain uncertain content.
  • the information may include parts of the object, such as locations of windows, locations of chrome fixtures, the reflectivity of the loader's paint, and any other appropriate information about attributes that may affect the ability of the system to recognize the loader.
  • because process 1800 knows the location of the object within the image based on the features already recognized, process 1800 also knows the locations of the parts containing uncertain content. Thus, for each image frame used in tracking, the process transforms information from the DT into the image coordinate space as described herein, and then uses that information to identify the regions of the image that can be deemed problematic because these regions contain uncertain content, such as specular or flexible items. With these regions identified, the process may remove these points from a pass of the tracking process, or weight them less (a minimal sketch of this weighting appears after this list). Accordingly, as described above, the recognition process deemphasizes information from regions of the image deemed to contain uncertain content, e.g., by weighting that information less in its recognition analysis or by ignoring that information. Regions of the first image deemed not to contain uncertain content, or less than a threshold amount of uncertain content, are weighted more heavily in the recognition analysis.
  • Process 1800 tracks movement of the object from a first location in the first image to a second location in a second, different image.
  • the tracking process includes recognizing that the object has moved from the first image to the second image. Recognizing motion of the object includes performing (1808b), for the second image, a recognition process that places more importance on parts of the object that do not include uncertain content or that include less uncertain content than other parts of the object.
  • features, such as edges, in the second image are identified based on a region in the second image that contains pixels having greater than a predefined difference. Using these features, the 3D graphical model is mapped to the object at its new location. As explained above, the DT that represents the instance of the object is retrieved from computer memory.
  • the information includes, among other things, information identifying parts of the object that contain uncertain content.
  • the information may include parts of the object, such as locations of windows, locations of chrome fixtures, the reflectivity of the loader's paint, and any other appropriate information about attributes that may affect the ability of the system to recognize the loader.
  • because process 1800 knows the location of the object within the image based on the features already recognized, process 1800 also knows the locations of these parts. Thus, for each image frame used in tracking, the process transforms information from the DT into the image coordinate space as described herein, and then uses that information to identify the regions of the image that can be deemed problematic because these regions contain uncertain content, such as specular or flexible items. With these regions identified, the process may remove these points from a pass of the tracking process, or weight them less. Accordingly, as described above, the recognition process deemphasizes information from regions of the image deemed to contain uncertain content, e.g., by weighting that information less in its recognition analysis or by ignoring that information. Regions of the second image deemed not to contain uncertain content, or less than a threshold amount of uncertain content, are weighted more heavily in the recognition analysis.
  • the loader may move from the position shown in Fig. 1 to the position shown in Fig. 3. Movement of the object between positions may be tracked as described herein. Also, during the movement, the AR system may be configured so that the DT - e.g., 3D graphics data and information defined by the DT - also tracks that relative movement. That is, the DT may be moved so that appropriate content from the DT tracks corresponding features of the moving object. In some implementations, the DT may be moved continuously with the object by adjusting the associations between data representing the object in an image frame and data representing the same parts of the object in the DT.
  • the AR system may adjust the association between the DT and the image to reflect that data representing the moved part in the DT is also associated with coordinate XY. Accordingly, during tracking processes, recognition occurs as described herein based on features, such as edges of the object, and the DT, which moves along with the object, is used to identify uncertain content. This uncertain content is deemphasized or ignored in the recognition process.
  • the prior location of an object in a prior image may also be used to predict a current location of the object.
  • This information along with features, such as edges, that are weighted based on the amount of uncertain content they contain, may be used to determine the current location of the object, and to recognize the object.
  • movement of the object can be used to predict its future location in a series of images - e.g., in frame-by-frame video - and the associations between DT data and image data may be adjusted to maintain correspondence between parts of the object in the image and their counterparts in the DT.
  • movement of the camera may result in relative motion of arm 113 in the image frame. Movement in one direction may be a factor in determining future movement of the object in that same direction, and thus in recognizing a future location of the arm.
  • the system may also predict how to adjust the associations based on the prior movement.
  • Process 1800 provides (1809) 3D graphical content for rendering, on a graphical user interface, in association with the recognized object. For example, as the object moves from a first location to a second location, process 1800 also provides appropriate 3D graphical content for rendering relative to the object at the second location, as described herein. For example, the content may overlay the image of the object or otherwise augment the image of the object.
  • Fig. 19 shows an example process 1900 for treating rigid content differently than flexible content during recognition and tracking.
  • operations 1901 incorporate all or some features of, or are identical to, operations 1801 to 1807 of process 1800.
  • the recognition process (1904) may include recognizing rigid components of the object based on the object's DT.
  • the rigid components include parts of the object that have less than a predefined degree of flexibility.
  • the identification and recognition may be performed in the same manner as described above with respect to process 1800.
  • the recognition process (1904) may include recognizing flexible or movable parts of the object based on the object's DT.
  • the flexible or movable components include parts of the object that have more than a predefined degree of flexibility or are movable within a range of motion.
  • the recognition may be performed in the same manner as described above.
  • the DT for the object contains information identifying the rigid components of the object, and identifying the flexible or movable parts of the object.
  • process 1900 tracks (1908) movement of the object primarily by tracking movement of the rigid components individually from first locations in the first image to second locations in a second image.
  • the tracking process includes recognizing that the rigid components have moved from the first image to the second image. Recognizing motion of the rigid components includes performing a recognition process of the type described herein to identify the rigid components based on identified edges and content included in the DT for the object.
  • a recognition process is performed (1908a) to recognize the object in the first image.
  • Recognition of the object in the first image may include identifying the position and orientation of the rigid components in the first image.
  • the recognition process may contain operations included in, or identical to, those performed in recognition process 1904.
  • the first image from which movement of the object may be tracked may be the original image upon which initial recognition was based, or it may be an image that follows the original image in a sequence of images (e.g., in frames of video) and that contains the object moved from its original position. If the first image is the original image, recognition process 1908a need not be performed, since recognition has already been performed for the original image in operation 1904. Furthermore, tracking may be performed between consecutive images in the sequence or between non-consecutive images in the sequence, as described.
  • during recognition process 1908a, features, such as edges, in the rigid components are identified based on a region in the first image that contains pixels having greater than a predefined difference.
  • the 3D graphical model is mapped to the object at its new location.
  • constituents of the DT representing the rigid components may be mapped to locations of the rigid components.
  • the DT that represents the instance of the object is retrieved from computer memory.
  • the information includes, among other things, information identifying parts of the object that contain uncertain content.
  • the information may include parts of the object, such as flexible connections, e.g., wires or hoses.
  • because process 1908a knows the location of the rigid components within the image based on the features already recognized, process 1908a can predict the locations of any flexible or connector components based on information from the DT.
  • the process transforms information from the DT into the image coordinate space as described herein, and then uses that information to identify the regions of the image that can be deemed problematic because these regions contain uncertain content.
  • the recognition process deemphasizes information from regions of the image deemed to contain uncertain content, e.g., by weighting that information less in its recognition analysis or by ignoring that information. Regions of the first image deemed not to contain uncertain content, or less than a threshold amount of uncertain content, are weighted more heavily in the recognition analysis.
  • Process 1900 tracks movement of the object from a first location in the first image to a second location in a second, different image.
  • the tracking process includes recognizing that the object has moved from the first image to the second image. Recognizing motion of the object includes performing (1908b), for the second image, a recognition process that places more importance on parts of the object that do not include uncertain content (e.g., rigid components) or that include less uncertain content than other parts of the object.
  • during recognition process 1908b, features, such as edges, in the second image are identified based on a region in the second image that contains pixels having greater than a predefined difference.
  • the 3D graphical model is mapped to the object at its new location.
  • the DT that represents the instance of the object is retrieved from computer memory.
  • the information includes, among other things, information identifying parts of the object that contain uncertain content.
  • the information may include parts of the object, such as flexible connections, e.g., wires or hoses.
  • because process 1908b knows the location of the object within the image based on the features already recognized, process 1908b also knows the locations of these parts. Thus, for each image frame used in tracking, the process transforms information from the DT into the image coordinate space as described herein, and then uses that information to identify the regions of the image that can be deemed problematic because these regions contain uncertain content, such as flexible components. With these regions identified, the process may remove these points from a pass of the tracking process, or weight them less.
  • the recognition process deemphasizes information from regions of the image deemed to contain uncertain content, e.g., by weighting that information less in its recognition analysis or by ignoring that information. Regions of the second image deemed not to contain uncertain content, or less than a threshold amount of uncertain content, are weighted more heavily in the recognition analysis.
  • a region of uncertainty may be defined on the area of the object that may be at least partially obscured as a result of motion. Such area(s) may be treated as containing uncertain content, which may be deemphasized during recognition as described herein.
  • Fig. 7 shows an example computer/network architecture 400 on which the example AR system and the example processes may be implemented.
  • the AR system and processes are not limited to use with the Fig. 7 architecture, and may be implemented on any appropriate computer architecture and/or network architecture.
  • Fig. 8 shows an example process 500 for producing AR content from image data and 3D graphics data in the DT. Process 500 may be performed, e.g., on the architecture of Fig. 7.
  • Computing systems that may be used to implement all or part of the front-end and/or back-end of the AR system may include various forms of digital computers. Examples of digital computers include, but are not limited to, laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, smart televisions and other appropriate computers. Mobile devices may be used to implement all or part of the front-end and/or back-end of the AR system. Mobile devices include, but are not limited to, tablet computing devices, personal digital assistants, cellular telephones, smartphones, digital cameras, digital glasses and other portable computing devices.
  • the computing devices described herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the technology.
  • a computer program product, e.g., a computer program tangibly embodied in one or more information carriers, e.g., in one or more tangible machine-readable storage media, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers
  • a computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, part, subroutine, or other unit suitable for use in a computing environment.
  • a computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a network.
  • Actions associated with implementing the processes can be performed by one or more programmable processors executing one or more computer programs to perform the functions described herein. All or part of the processes can be implemented as special purpose logic circuitry, e.g., an FPGA (field programmable gate array) and/or an ASIC (application-specific integrated circuit).
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read-only storage area or a random access storage area or both.
  • Elements of a computer include one or more processors for executing instructions and one or more storage area devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from, or transfer data to, or both, one or more machine-readable storage media, such as mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks.
  • Non-transitory machine-readable storage media suitable for embodying computer program instructions and data include all forms of non-volatile storage area, including by way of example, semiconductor storage area devices, e.g., EPROM, EEPROM, and flash storage area devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • Each computing device, such as a tablet computer, may include a hard drive for storing data and computer programs, and a processing device (e.g., a microprocessor) for executing those computer programs.
  • Each computing device may include an image capture device, such as a still camera or video camera.
  • the image capture device may be built-in or simply accessible to the computing device.
  • Each computing device may include a graphics system, including a display screen.
  • a display screen, such as an LCD (liquid crystal display) or a CRT (cathode ray tube), displays, to a user, images that are generated by the graphics system of the computing device.
  • display may occur on a computer display, e.g., a monitor.
  • if the computer display is LCD-based, the orientation of liquid crystals can be changed by the application of biasing voltages in a physical transformation that is visually apparent to the user.
  • if the computer display is a CRT, the state of a fluorescent screen can be changed by the impact of electrons in a physical transformation that is also visually apparent.
  • Each display screen may be touch-sensitive, allowing a user to enter information onto the display screen via a virtual keyboard.
  • a physical QWERTY keyboard and scroll wheel may be provided for entering information onto the display screen.
  • Each computing device, and computer programs executed thereon, may also be configured to accept voice commands, and to perform functions in response to such commands. For example, the example processes described herein may be initiated at a client, to the extent possible, via voice commands.
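The weighting strategy referred to in the list above can be illustrated with a short sketch. The following Python fragment is a minimal, hypothetical illustration, not the implementation described in this document, of how regions marked as uncertain content in a DT (already transformed into image coordinates) might be used to down-weight or drop feature points before a recognition or tracking pass. The data structures, field names, and penalty values are assumptions made for the example.

    # Sketch: deemphasize image features that fall inside regions the DT marks
    # as uncertain content (e.g., windows, chrome, flexible hoses). Assumes the
    # DT regions have already been transformed into image (pixel) coordinates.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Feature:
        x: float          # pixel column
        y: float          # pixel row
        strength: float   # e.g., edge gradient magnitude

    @dataclass
    class UncertainRegion:
        x0: float         # axis-aligned box in pixel coordinates
        y0: float
        x1: float
        y1: float
        penalty: float    # 0.0 = ignore entirely, 1.0 = full weight

    def weight_features(features: List[Feature],
                        regions: List[UncertainRegion]) -> List[Tuple[Feature, float]]:
        """Return (feature, weight) pairs; features inside uncertain regions
        contribute less (or nothing) to the recognition/tracking pass."""
        weighted = []
        for f in features:
            w = 1.0
            for r in regions:
                if r.x0 <= f.x <= r.x1 and r.y0 <= f.y <= r.y1:
                    w = min(w, r.penalty)
            if w > 0.0:                  # drop features that are fully ignored
                weighted.append((f, w * f.strength))
        return weighted

    # Example: a chrome fixture occupies one box; edges found there get 20% weight.
    features = [Feature(100, 80, 1.0), Feature(210, 150, 0.9)]
    regions = [UncertainRegion(200, 140, 260, 200, penalty=0.2)]
    print(weight_features(features, regions))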

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An example method is performed by a computing system, and includes: obtaining an image of an object captured by a device during relative motion between the object and the device; determining a location of the device relative to the object during image capture based on one or more attributes of the object in the image; mapping a three-dimensional (3D) graphical model representing the object to the object in the image based, at least in part, on the location of the device, where the 3D graphical model includes information about the object; receiving a selection of a part of the object; and outputting, for rendering on a user interface, at least some information from the 3D graphical model based on the part selected.

Description

AUGMENTED REALITY SYSTEM
TECHNICAL FIELD
This specification relates generally to an augmented reality system.
BACKGROUND
Augmented reality (AR) content is produced by superimposing computer- generated content onto depictions of real-world content, such as images or video. The computer-generated content may include graphics, text, or animation, for example.
SUMMARY
Example processes include obtaining an image of an object captured by a device during relative motion between the object and the device; determining a location of the device relative to the object during image capture based on one or more attributes of the object in the image; mapping a three-dimensional (3D) graphical model representing the object to the object in the image based, at least in part, on the location of the device, where the 3D graphical model includes
information about the object; receiving a selection of a part of the object; and outputting, for rendering on a user interface, at least some information from the 3D graphical model based on the part selected. The example processes may include one or more of the following features, either alone or in combination.
In an example, receiving the selection includes receiving a selection of a point on the image, where the point corresponds to the part as displayed in the image. In an example, receiving the selection includes displaying, along with the image, a menu including the part; and receiving the selection based on selection of the part in the menu. In an example, receiving the selection includes displaying, along with the image, computer graphics showing the part; and receiving the selection based on a selection of the computer graphics. Determining the location of the device relative to the object may include obtaining a first size of the object shown in the image, with the first size being among the one or more attributes; obtaining a second size of the object from the 3D graphical model; and comparing the first size to the second size to determine a distance between the device and the object. The distance is part of the location.
Determining the location of the device relative to the object may include identifying a feature of the object shown in the image, with the feature being among the one or more attributes; and determining an orientation of the object relative to the device based on the feature and based on the information about the object in the 3D graphical model. The orientation is part of the location.
Determining the location of the device relative to the object may include accounting for a difference between a position of a camera on the device used to capture the image and a predefined reference point on the device.
Determining the location of the device relative to the object may include updating the location of the device as relative positions between the object and the device change. Mapping the 3D graphical model to the object in the image may be performed for updated locations of the device. Mapping the 3D graphical model to the object in the image may include associating parts of the 3D graphical model to corresponding parts of the object shown in the image. A remainder of the 3D graphical model representing parts of the object not shown in the image may be positioned relative to the parts of the 3D graphical model overlaid on the parts of the object shown in the image.
The example processes may also include identifying the at least some information based on the part selected, where the at least some information includes information about the part. The at least some information may include information about parts internal to the object relative to the part selected.
Receiving the selection may include receiving a selection of a point on the image, where the point corresponds to the part as displayed in the image; and mapping the selected point to the 3D graphical model. Mapping the selected point may include determining a relative position of the device and the object; tracing a ray through the 3D graphical model based on a mapping of the 3D graphical model to the image; and identifying an intersection between the ray and the part. The example method may include obtaining at least some information about one or more parts of the object that intersect the ray. At least some information may include data representing the one or more parts graphically, where the data enables rendering of the one or more parts relative to the object. The at least some information may include data representing one or more parameters relating to the one or more parts, where the data enables rendering of the one or more parameters relative to the object.
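One conventional way to realize the ray-based selection described above is to convert the selected pixel into a ray in camera coordinates and test that ray against the parts of the 3D graphical model. The Python sketch below assumes a pinhole camera and approximates each part with a bounding sphere; the part names, dimensions, and camera parameters are illustrative assumptions rather than values from this document.

    # Sketch: map a selected pixel to a ray and find which part of the 3D
    # graphical model the ray hits first. Parts are approximated here as
    # bounding spheres; a real model would intersect against meshes.
    import numpy as np

    def pixel_to_ray(u, v, fx, fy, cx, cy):
        """Ray direction in camera coordinates for pixel (u, v), pinhole model."""
        d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
        return d / np.linalg.norm(d)

    def ray_sphere_t(origin, direction, center, radius):
        """Distance along the ray to a sphere, or None if the ray misses it."""
        oc = origin - center
        b = 2.0 * np.dot(direction, oc)
        c = np.dot(oc, oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0:
            return None
        t = (-b - np.sqrt(disc)) / 2.0
        return t if t > 0 else None

    def pick_part(u, v, intrinsics, parts, cam_from_model):
        """Return the name of the closest part intersected by the selection ray."""
        origin = np.zeros(3)                  # camera center in camera coordinates
        direction = pixel_to_ray(u, v, *intrinsics)
        best = (None, np.inf)
        for name, center_model, radius in parts:
            center_cam = cam_from_model[:3, :3] @ center_model + cam_from_model[:3, 3]
            t = ray_sphere_t(origin, direction, center_cam, radius)
            if t is not None and t < best[1]:
                best = (name, t)
        return best[0]

    # Hypothetical loader parts positioned in model coordinates (meters).
    parts = [("arm", np.array([0.0, 1.0, 0.0]), 0.5),
             ("wheel", np.array([1.2, -0.5, 0.3]), 0.4)]
    cam_from_model = np.eye(4); cam_from_model[2, 3] = 4.0   # model 4 m in front
    print(pick_part(320, 350, (500.0, 500.0, 320.0, 240.0), parts, cam_from_model))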
The example processes may also include identifying, based on the selection, the part based on one or more attributes of a pixel in the image that corresponds to the selection. The information about the object in the 3D graphical model may include information about parts of the object. The information about the parts may indicate which of the parts are selectable and may indicate which of the parts are selectable individually or as a group.
The example process may also include enabling configuration, through a user interface, of the information about the parts indicating which of the parts are selectable and indicating which of the parts are selectable individually or as a group. The example process may also include drawing, based on the selection, a color graphic version of the part into a buffer; and using the color graphic version in the buffer to identify the part.
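The buffer-based identification described above is commonly realized as color picking: each selectable part is drawn into an offscreen buffer in a unique flat color, and the color under the selected pixel identifies the part. The sketch below assumes such a buffer has already been rendered; the color-to-part mapping is hypothetical.

    # Sketch: identify a selected part by reading an offscreen "ID buffer" in
    # which each selectable part was rendered with a unique flat color.
    import numpy as np

    # Hypothetical mapping from flat RGB color to part name.
    COLOR_TO_PART = {(255, 0, 0): "arm", (0, 255, 0): "bucket", (0, 0, 255): "wheel"}

    def part_under_selection(id_buffer: np.ndarray, u: int, v: int):
        """id_buffer is an H x W x 3 array produced by rendering each selectable
        part in its assigned color; (u, v) is the selected pixel (column, row)."""
        color = tuple(int(c) for c in id_buffer[v, u])
        return COLOR_TO_PART.get(color)      # None if no selectable part here

    # Tiny 4x4 buffer where a 2x2 block of red pixels represents the arm.
    buf = np.zeros((4, 4, 3), dtype=np.uint8)
    buf[1:3, 1:3] = (255, 0, 0)
    print(part_under_selection(buf, 2, 2))   # -> "arm"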
At least some of the information rendered from the graphical 3D model may be computer graphics that is at least partially transparent, and that at least partly overlays the image. At least some of the information rendered from the graphical 3D model may be computer graphics that is opaque, and that at least partly overlays the image. At least some of the information rendered from the graphical 3D model may be computer graphics that is in outline form, and that at least partly overlays the image.
An example method performed by a computing system includes: obtaining an image of an object captured by a device during relative motion between the object and the device; determining a location of the device relative to the object during image capture based on one or more attributes of the object in the image; storing, in computer memory, the image of the object and the location of the device during image capture; mapping a three-dimensional (3D) graphical model representing the object to the object in the image based, at least in part, on the location of the device, with the 3D graphical model including information about the object; receiving, at a time subsequent to capture of the image, first data representing an action to be performed for the object in the image; and in response to the first data, generating second data for use in rendering content on a display device, with the second data being based on the image stored, the location of the device stored, and at least some of the information from the 3D graphical model. The example method may include one or more of the following features, either alone or in combination.
The second data may be based also on the action to be performed for the object in the image. The content may include the image augmented based on the at least some of the information from the 3D graphical model.
The example method may include receiving an update to the information; and storing the update in the 3D graphical model as part of the information. The content may include the image augmented based on the update and presented from a perspective of the device that is based on the location. The update may be received from a sensor associated with the object. The sensor may provide the update following capture of the image by the device. The update may be received in real-time, and the second data may be generated in response to receipt of the update.
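As a rough illustration of storing a captured image together with the device location and later re-augmenting it when new information arrives, the following sketch caches frames with their poses and rebuilds render data from the current DT state. The record layout, update format, and filtering rule are assumptions made for the example, not the system's actual data model.

    # Sketch: cache each captured frame with the device location computed for it,
    # then re-augment the stored frame when new DT information (e.g., a sensor
    # update) arrives after capture.
    from dataclasses import dataclass, field
    from typing import Any, Dict, List

    @dataclass
    class StoredFrame:
        image: Any                  # pixel data captured by the device
        device_pose: Any            # location/orientation at capture time

    @dataclass
    class DigitalTwin:
        info: Dict[str, Any] = field(default_factory=dict)

        def apply_update(self, key: str, value: Any) -> None:
            self.info[key] = value  # e.g., a new sensor reading

    def augment_stored_frame(frame: StoredFrame, twin: DigitalTwin,
                             action: str) -> Dict[str, Any]:
        """Build 'second data' for rendering: the stored image, the stored pose,
        and whatever DT information the requested action calls for."""
        return {
            "image": frame.image,
            "pose": frame.device_pose,
            "overlay": {k: v for k, v in twin.info.items()
                        if action in k or action == "*"},
        }

    frames: List[StoredFrame] = [StoredFrame(image="frame_0001",
                                             device_pose=(1.0, 0.0, 4.0))]
    twin = DigitalTwin({"temperature_status": "ok"})
    twin.apply_update("temperature_status", "overheating")   # late sensor update
    print(augment_stored_frame(frames[0], twin, action="*"))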
The image may be a frame of video captured by the device during the relative motion between the object and the device. The location may include a position and an orientation of the device relative to the object for each of multiple frames of the video. The content may include the video augmented with at least some of the information and presented from a perspective of the device. Determining the location may include: obtaining a first size of the object shown in the image, with the first size being among the one or more attributes;
obtaining a second size of the object from the 3D graphical model; and comparing the first size to the second size to determine a distance between the device and the object, with the distance being part of the location. Determining the location may include: identifying a feature of the object shown in the image, with the feature being among the one or more attributes; and determining an orientation of the object relative to the device based on the feature and based on the information about the object in the 3D graphical model, with the orientation being part of the location.
Determining the location of the device may include updating the location of the device as relative positions between the object and the device change. Mapping the 3D graphical model to the object may be performed for updated locations of the device.
Mapping the 3D graphical model to the object in the image may include associating parts of the 3D graphical model to corresponding parts of the object shown in the image. A remainder of the 3D graphical model may represent parts of the object not shown in the image being positioned relative to the parts of the 3D graphical model overlaid on the parts of the object shown in the image.
The at least some information from the 3D graphical model may represent components interior to the object.
An example method includes: obtaining, from computer memory, information from a three-dimensional (3D) graphical model that represents an object; identifying, based on the information, a first part of the object having an attribute; performing a recognition process on the object based on features of the object, where the recognition process attaches more importance to a second part of the object than to the first part, with the second part either not having the attribute or having less of the attribute than the first part; and providing data for rendering content on a graphical user interface based, at least in part, on recognition of the object performed by the recognition process. The example method may include one or more of the following features, either alone or in combination.
In the example method, attaching more importance to the second part of the object may include ignoring information about the first part of the object during the recognition process. In the example method, attaching more importance to the second part of the object may include deemphasizing information about the first part of the object during the recognition process.
The example method may include tracking movement of the object from a first location to a second location. Tracking the movement may include: identifying, in the first image, a feature in the second part of the object, with the feature being identified based on a region in the second image that contains pixels having greater than a predefined difference; and identifying, in the second image, the feature in the second part of the object, with the feature being identified based on the region in the second image that contains the pixels having greater than the predefined difference. The second location may be based on a location of the feature in the second image.
The feature may be a first feature, and the tracking may include: identifying, in the first image, a second feature in the first part of the object, with the second feature being identified based on a second region in the second image that contains pixels having greater than a predefined difference; and identifying, in the second image, the second feature in the first part of the object, with the second feature being identified based on the second region in the second image that contains the pixels having greater than the predefined difference. The second location may be based on both the location of the first feature in the second image and the location of the second feature in the second image. Deemphasizing may include weighting the location of the second feature in the second image less heavily than the location of the first feature in the second image.
The attribute of the object may include an amount of reflectivity in the first part of the object, an amount of transparency in the first part of the object, and/or an amount of flexibility in the first part of the object. The attribute may include an amount of the first part of the object that is coverable based on motion of one or more other parts of the object. The image may be captured within a field specified for recognition of the object.
An example method performed by one or more processing devices includes: obtaining, from computer memory, information from a three-dimensional (3D) graphical model that represents an object; identifying, based on the information, rigid components of the object that are connected by a flexible component of the object; performing a recognition process on the object based on features of the rigid components, with the recognition process attaching more importance to the rigid components than to the flexible components; and providing data for rendering content on a graphical user interface based, at least in part, on recognition of the object performed by the recognition process. The example method may include one or more of the following features, either alone or in combination.
The example method may include tracking movement of the object from a first location in the first image to a second location in a second image. Tracking the movement of the object from the first location in a first image to the second location in a second image may include ignoring the flexible component and not taking into account an impact of the flexible component when tracking the movement. Tracking movement of the object from the first location in the first image to the second location in the second image may include deemphasizing an impact of the flexible component when tracking the movement, but not ignoring the impact. Tracking movement of the object from the first location in the first image to the second location in the second image may include: tracking movement of the rigid
components individually; and predicting a location of the flexible component based on locations of the rigid components following movement.
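A minimal sketch of the rigid-versus-flexible treatment described above, under the simplifying assumption that a flexible connector lies near the straight line between the rigid attachment points it joins; the 2D coordinates and slack factor are illustrative values, not part of the described method.

    # Sketch: track rigid components individually and predict where a flexible
    # connector (e.g., a hose) lies from the tracked rigid endpoints.
    import numpy as np

    def predict_flexible_region(rigid_a_anchor, rigid_b_anchor, slack=0.15):
        """Given the tracked image positions of two rigid attachment points,
        return a bounding box in which the flexible connector is expected to
        lie, padded by 'slack' (fraction of the span) to allow for sag."""
        a = np.asarray(rigid_a_anchor, dtype=float)
        b = np.asarray(rigid_b_anchor, dtype=float)
        lo = np.minimum(a, b)
        hi = np.maximum(a, b)
        pad = slack * np.linalg.norm(b - a)
        return (lo - pad, hi + pad)

    # Rigid arm anchor tracked at (120, 90) px, rigid body anchor at (200, 160) px.
    lo, hi = predict_flexible_region((120, 90), (200, 160))
    print("flexible connector expected within", lo, hi)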
Any two or more of the features described in this specification, including in this summary section, may be combined to form implementations not specifically described in this specification. All or part of the processes, methods, systems, and techniques described herein may be implemented as a computer program product that includes
instructions that are stored on one or more non-transitory machine-readable storage media, and that are executable on one or more processing devices. Examples of non-transitory machine-readable storage media include, e.g., read-only memory, an optical disk drive, memory disk drive, random access memory, and the like. All or part of the processes, methods, systems, and techniques described herein may be implemented as an apparatus, method, or system that includes one or more processing devices and memory storing instructions that are executable by the one or more processing devices to perform the stated operations.
The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features and advantages will be apparent from the description and drawings, and from the claims.
DESCRIPTION OF THE DRAWINGS
Fig. 1 is a diagram of a display screen showing example AR content.
Fig. 2 is a flowchart of an example process for generating AR content.
Fig. 3 is a diagram of a display screen showing example AR content.
Fig. 4 is a diagram showing a representation of an object produced using three-dimensional (3D) graphic data next to an image of the object.
Fig. 5, comprised of Figs. 5A and 5B, shows, conceptually, a ray that is projected to, and through, an image of an object and that also impacts a digital twin for the object.
Fig. 6, comprised of Figs. 6A and 6B, shows, conceptually, a ray projected to, and through, an image of an object, and AR content generated based on that ray.
Fig. 7 is a block diagram of an example computer/network architecture on which the AR system described herein may be implemented.
Fig. 8 is a flowchart showing an example process for generating AR content.
Fig. 9 is a flowchart of an example process for generating AR content.
Fig. 10 is a diagram showing a field in which an image is to be captured.
Figs. 11 to 17 show examples of AR content that may be generated using, for example, stored imagery or video using the example processes described herein.
Fig. 18 is a flowchart of an example process for performing recognition and tracking processes on content in images.
Fig. 19 is a flowchart of an example process for performing recognition and tracking processes on content in images.
Like reference numerals in different figures indicate like elements.
DETAILED DESCRIPTION
DISPLAYING CONTENT IN AN AUGMENTED REALITY SYSTEM
Described herein are example implementations of an augmented reality (AR) system. In some examples, AR content is generated by superimposing computer- generated content onto actual graphics, such as an image or video of a real-life object. Any appropriate computer-generated content may be used including, but not limited to, computer graphics, computer animation, and computer-generated text.
Referring to Fig. 1, example AR content 100 is shown on the display of tablet computing device 101. In this example, AR content 100 includes an image of a loader 102 and computer graphics 103 that are rendered at an appropriate location over the image of the loader. The image was captured by a camera or other appropriate image capture device. The computer graphics were generated by a computing device, such as a remote server or the tablet computing device, based on information about the object displayed (the loader). As described herein, the computer graphics may relate to the object in some way. For example, in Fig. 1, computer graphics 103 highlight a part of the loader, namely its arm.
The example AR system described herein is configured to identify an object in an image captured by an image capture device, and to map a three-dimensional (3D) graphical model to the image of the object. In an example, the 3D graphical model contains information about the object, such as the object's structure, current or past status, and operational capabilities. The mapping of the 3D graphical model to the image associates this information from the 3D graphical model with the image. As a result of this mapping, a point on the image may be selected, and information from the 3D graphical model relating to that point may be retrieved and used to display computer-generated content on the image. In an example, a computer graphics rendering of a selected object part may be displayed, as is the case with the arm of Fig. 1 . In another example, text associated with the selected part may be displayed. In the example AR system, the 3D graphical model is controlled to track relative movement of the image capture device and the object. That is, the image capture device may move relative to the object, or vice versa. During that
movement, the 3D graphical model is also controlled to track the relative movement of the object even as the perspective of the object in the image changes vis-a-vis the image capture device. As a result, the example AR system enables interaction with the object in real-time and from any appropriate orientation.
In the AR system, each instance of an object, such as loader 102, has a digital twin (DT), which is described herein. An instance of an object (or object instance) includes a unique specimen of an object that is differentiated from other specimens of the object. For example, a loader may have a vehicle identification (ID) number that distinguishes it from all other loaders, including those that are the same make and model. Different types of information may be used to identify the instance of an object, as described herein. A DT is specific to an object instance and, as such, includes information identifying the object instance. In some
implementations, there may be a single DT for each corresponding object instance. As used herein, an object is not limited to an individual article, but rather may include, e.g., any appropriate apparatus, system, software, structure, entity, or combination of one or more of these, that can be modeled using one or more DTs.
In this regard, a DT is an example of a type of 3D graphical model that is usable with the AR system; however, other appropriate models may also be usable. An example DT includes a computer-generated representation of an object comprised of information that models the object (referred to as the physical twin, or PT) or portions thereof. The DT includes data for a 3D graphical model of the object and associates information about the object to information representing the object in the 3D graphical model. For example, the DT may include, but is not limited to, data representing the structure of the object or its parts, the operational capabilities of the object or its parts, and the state(s) of the object or its parts. In some
implementations, a DT may be comprised of multiple DTs. For example, there may be separate DT for each part of an object. In some examples, a part of an object may include any appropriate component, element, portion, section, or other constituent of an object, or combination thereof.
A DT may be generated based on design data, manufacturing data, and/or any other appropriate information (e.g., product specifications) about the object. This information may be generic to all such objects. In the loader example of Fig. 1 , the DT may be generated using data that describes the structure and operational capabilities of the type (e.g., make and model) of the loader shown. This data may be obtained from any appropriate public or private database(s), assuming
permissions have been granted. For example, the DT may be generated using information obtained from, and/or are managed by, systems such as, but not limited to, PLM (product lifecycle management) systems, CAD (computer-aided design) systems, SLM (service level management) systems, ALM (application lifecycle management) systems, CPM (connected product management) systems, ERP (enterprise resource planning) systems, CRM (customer relationship management) systems, and/or EAM (enterprise asset management) systems. The information can cover a range of characteristics stored, e.g., in a bill of material (BOM) associated with the object (e.g., an EBOM - engineering BOM, an MBOM - manufacturing BOM, or an SBOM - service BOM), the object's service data and manuals, the object's behavior under various conditions, the object's relationship to other object(s) and artifacts connected to the object, and software that manages, monitors, and/or calculates the object's conditions and operations in different operating environments.
The DT may also be generated based on sensor data that is obtained for the particular instance of the object. For example, the sensor data may be obtained from readings taken from sensors placed on, or near, the actual instance of the object (e.g., loader 102). In this example, since that sensor data is unique to loader 102, the DT for loader 102 will be unique relative to DTs for other loaders, including those that are identical in structure and function to loader 102. The DT may also include other information that is unique to the object, such as the object's repair history, its operational history, damage to the object, and so forth.
The DT for an object instance may have numerous uses including, but not limited to, generating AR content for display. For example, the example AR system described herein may superimpose computer-generated content that is based on, or represents, the DT or portions thereof onto an image of an object instance. Example processes performed by the AR system identify an instance of the object, generate AR content for the object using the DT for that object, and use that AR content in various ways to enable access to information about the object.
An example process 200 that uses the DT to augment actual graphics, such as images or video, is shown in Fig. 2. Example process 200 may be performed by the AR system described herein using any appropriate hardware.
According to process 200, an image of an object is captured (201) by an image capture device - a camera in this example - during relative motion between the device and the object. As noted, the object may be any appropriate apparatus, system, structure, entity, or combination of one or more of these that can be captured in an image. An example of an object is loader 102 of Fig. 1. The camera that captures the image may be a still camera or a video camera. The camera may be part of a mobile computing device, such as a tablet computer or a smartphone.
In some implementations, the relative motion between the camera and the object includes the object remaining stationary while the camera moves. In some implementations, the relative motion between the camera and the object includes the object moving while the camera remains stationary. In some implementations, the relative motion between the camera and the object includes both the object and the camera moving. In any case, the relative motion is evidenced by the object occupying, in different images, different locations in the image frame. Multiple images may be captured during the relative motion and, as described below, a DT may be mapped to (e.g., associated with) the object in each image. As described below, in some implementations, the DT may track motion of the object in real-time, thereby allowing for interaction with the object via an image from different
perspectives and in real-time. In this regard, in some implementations, real-time may not mean that two actions are simultaneous, but rather may include actions that occur on a continuous basis or track each other in time, taking into account delays associated with processing, data transmission, hardware, and the like.
In an example, tablet computer 101 may be used to capture the image of loader 102 at a first time, T1. For example, the image may be part of a video stream comprised of frames of images that are captured by walking around the loader. In another example, the image may be part of a video stream comprised of frames of images that are captured while the camera is stationary but the loader moves.
Referring also to Fig. 3, in this example, the tablet computer 101 may be used to capture a different image of loader 102 at a second, different time, T2. As is clear from Figs. 1 and 3, the two images were taken from different perspectives.
Referring back to Fig. 2, process 200 identifies the object instance in the captured image and retrieves (202) a DT for the object instance - in the example of Fig. 1 , loader 102. In this regard, any appropriate identifying information may be used to identify the object instance. The identifying information may be obtained from the object itself, from the image of the object, from a database, or from any other appropriate source. For example, the identifying information may be, or include, any combination of unique or semi-unique identifiers, such as a Bluetooth address, a media access control (MAC) address, an Internet Protocol (IP) address, a serial number, a quick response (QR) code or other type of bar code, a subnet address, a subscriber identification module (SIM), or the like. Tags, such as RFIDs, may be used for identification. For objects that do not move, the identifying information may be, or include, global positioning system (GPS) or other coordinates that defines the location of the object. For objects that do not include intelligence or other specific identifiers (like bar codes or readable serial numbers), unique features of the object in the image may be used to identify the object instance. For example, a database may store information identifying markings, wear, damage, or other distinctive features of an object instance, together with a unique identifier for the object. The AR system may compare information from the captured image to the stored information or a stored image. Comparison may be performed on a mobile device or on a remote computer. The result of the comparison may identify the object. After the object is identified, the DT for that object may be located (e.g., in memory, a location on a network, or elsewhere) using the obtained object identifier. The DT corresponding to that identifier may be retrieved (202) for use by process 200.
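As a simple illustration of resolving an object instance to its DT, the sketch below keeps a registry keyed by a unique identifier such as a decoded QR-code payload or serial number. The registry class, identifier format, and twin contents are assumptions made for the example; any appropriate storage (local memory, a network service, or a database) could play this role.

    # Sketch: resolve an object-instance identifier (e.g., a serial number or a
    # QR-code payload) to the digital twin stored for that specific instance.
    from typing import Dict, Optional

    class DTRegistry:
        def __init__(self) -> None:
            self._by_id: Dict[str, dict] = {}

        def register(self, instance_id: str, twin: dict) -> None:
            self._by_id[instance_id] = twin      # one DT per object instance

        def lookup(self, instance_id: str) -> Optional[dict]:
            return self._by_id.get(instance_id)

    registry = DTRegistry()
    registry.register("LOADER-102-SN-0001",
                      {"model": "loader", "repair_history": [],
                       "graphics": "loader_102_model"})

    # Identifier decoded from, e.g., a QR code found in the captured image.
    twin = registry.lookup("LOADER-102-SN-0001")
    print(twin["model"] if twin else "unknown instance")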
Process 200 determines (203) a location of the camera relative to the object during image capture. The location of the camera relative to the object can be specified, for example, by the distance between the camera and the object as well as the relative orientations of the camera and object. Other determinants of the relative location of the camera and the object, however, can be used. For example, the relative locations can be determined using known computer vision techniques for object recognition and tracking.
The location may be updated periodically or intermittently when relative motion between the object and the camera is detected. Location may be determined based on one or more attributes of the object in the image and based on information in the DT for the object. For example, a size of the object in the image - e.g., a length and/or width taken relative to appropriate reference points - may be determined. For example, in the image, the object may be five centimeters tall. Information in the DT specifies the actual size of the object in the real-world with one or more of the same dimensions as in the image. For example, in the real-world, the object may be three meters tall. In an example implementation, knowing the size of the object in the image and the size of the object in the real world, it is possible to determine the distance between the camera and the object when the image was captured. This distance is one aspect of the location of the camera.
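Under a simple pinhole-camera assumption, the size comparison described above reduces to one line of arithmetic: the distance is the focal length (in pixels) times the real-world height from the DT divided by the height measured in the image. The focal length and measured heights below are illustrative values, not values from this document.

    # Sketch: estimate camera-to-object distance by comparing the object's height
    # in the image with its real-world height from the DT, using a pinhole model.

    def distance_from_size(real_height_m: float,
                           image_height_px: float,
                           focal_length_px: float) -> float:
        """Pinhole relation: image_height_px = focal_length_px * real_height_m / Z."""
        return focal_length_px * real_height_m / image_height_px

    # DT says the loader is 3.0 m tall; it spans 450 px in an image taken with a
    # camera whose focal length is 1500 px.
    print(round(distance_from_size(3.0, 450.0, 1500.0), 2), "m")   # -> 10.0 m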
In some implementations, the distance between the camera and the object is determined relative to a predefined reference point on the camera, rather than relative to a lens used to capture the image. For example, taking the case of some smartphones, the camera used to capture images is typically in an upper corner of the smartphone. Obtaining the distance relative to a predefined reference, such as a center point, on the smartphone may provide for greater accuracy in determining the location. Accordingly, when determining the distance, the offset between the predefined reference and the camera on the smartphone may be taken into account, and the distance may be corrected based on this offset.
The location of the camera relative to the object is also based on the orientation of the object relative to the camera during image capture. In an example implementation, to identify the orientation, process 200 identifies one or more features of the object, such as wheel 106 in the loader of Fig. 1 . In some
implementations, such features may be identified based on the content of the image. In an example, a change in pixel color may be indicative of a feature of an object. In another example, the change in pixel color may be averaged or otherwise processed over a distance before a feature of the object is confirmed. In another example, sets of pixels of the image may be compared to known images in order to identify features. Any appropriate feature identification process may be used.
The orientation of the object in the image relative to the camera may be determined based on the features of the object identified in the image. For example, the features may be compared to features represented by 3D graphics data in the DT. To make the comparison, one or more 3D features from the DT may be projected into two-dimensional (2D) space, and their resulting 2D projections may be compared to one or more features of the object identified in the image. Features of the object from the image and the 3D graphical model (from the DT) that match are aligned. That is, the 3D graphical model is oriented in 3D coordinate space so that its features align to identified features of the image. In a state of alignment with the object in the image, the 3D graphical model may be at specified angle(s) relative to axes in the 3D coordinate space. These angle(s) define the orientation of the 3D graphical model and, thus, also define the orientation of the object in the image relative to the camera that captured the image. Other appropriate methods of identifying the orientation of the object in the image may also be used, or may be used in conjunction with those described herein.
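One conventional way to perform the alignment described above, assuming OpenCV is available and that 2D-3D feature correspondences have already been established, is a perspective-n-point solve. The sketch below is illustrative only; the feature coordinates and camera intrinsics are made-up values, and the alignment method used by the system described here need not be this one.

    # Sketch: recover the object's orientation relative to the camera by aligning
    # 3D feature locations from the DT with their matched 2D image features.
    import numpy as np
    import cv2

    # 3D feature points from the DT (meters, model coordinates), e.g., wheel hubs.
    model_points = np.array([[0.0, 0.0, 0.0],
                             [1.8, 0.0, 0.0],
                             [1.8, 1.1, 0.0],
                             [0.0, 1.1, 0.0]], dtype=np.float64)

    # Matching 2D features identified in the image (pixels).
    image_points = np.array([[310.0, 420.0],
                             [560.0, 415.0],
                             [555.0, 250.0],
                             [318.0, 255.0]], dtype=np.float64)

    camera_matrix = np.array([[1500.0, 0.0, 640.0],
                              [0.0, 1500.0, 360.0],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.zeros(4)                  # assume negligible lens distortion

    ok, rvec, tvec = cv2.solvePnP(model_points, image_points,
                                  camera_matrix, dist_coeffs)
    if ok:
        rotation, _ = cv2.Rodrigues(rvec)      # 3x3 orientation of the model
        print("orientation:\n", rotation, "\nposition:", tvec.ravel())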
Process 200 maps (204) the 3D graphical model defined by the DT to the object in the image based, at least in part, on the determined (203) location of the camera relative to the object. As explained above, the location may include the distance between the object in the image and the camera that captured the image, and an orientation of the object relative to the camera that captured the image. Other factors than these may also be used to specify the location. In some implementations, mapping may include associating data from the DT, such as 3D graphics data and text, with corresponding parts of the object in the image. In the example of loader 102 of Fig. 1 , data from the DT relating to its arm (covered by graphics 103) may be associated with the arm; data from the DT relating to front-end 108 may be associated with front-end 108; and so forth. In this example, the associating process may include storing pointers or other constructs that relate data from the DT with corresponding pixels in the image of the object. This association may further identify where, in the image, data from the DT is to be rendered when generating AR content. In the example of the loader of Fig. 1 , data from the DT - such as 3D graphics or text - is mapped to the image. Fig. 4 shows, conceptually, 3D graphics 110 for the loader beside an actual image 112 of the loader. In the AR system described herein, the DT comprising the 3D graphics data may be stored in association with the image of the loader, as described above, and that association may be used in obtaining information about the loader from the image.
Furthermore, because data in the DT relates features of the object in 3D, using the DT and the image of the object, it is also possible to position 3D graphics for objects that are not visible in the image at appropriate locations. More
specifically, in the example of Fig. 4 above, because image 112 is 2D, only a projection of the object into 2D space is visible. Using the techniques described herein, data - e.g., 3D graphics data - from the DT is associated with the image of the object in 2D. However, the DT specifies the entire structure of the object using 3D graphics data. Accordingly, by knowing where some of the 3D graphics data fits relative to the object (i.e., where the 3D data fits relative to parts of the object that are visible in the image), it is possible to appropriately position the remaining 3D graphics for the object, including for parts of the object that are not visible or shown in the image. Implementations employing these features are described below.
The location of the camera relative to the object may change in real-time as the relative positions between the object and the camera change. For example, the camera may be controlled to capture video of the object moving; the camera may be moved and capture video while the object remains stationary; or both the camera and the object may move while the camera captures video. Referring to Figs. 1 and 3, for example, the loader may move from the position shown in Fig. 1 to the position shown in Fig. 3. During the resulting relative movement, the AR system may be configured so that the DT - e.g., 3D graphics data and information defined by the DT - tracks that relative movement. That is, the DT may be moved so that appropriate content from the DT tracks corresponding features of the moving object. In some implementations, the DT may be moved continuously with the object by adjusting the associations between data representing the object in an image frame and data representing the same parts of the object in the DT. For example, if a part of the object moves to coordinate XY in an image frame of video, the AR system may adjust the association between the DT and the image to reflect that data representing the moved part in the DT is also associated with coordinate XY. In some implementations, movement of the object can be used to predict its future location in a series of images - e.g., in frame-by-frame video - and the associations between DT data and image data may be adjusted to maintain correspondence between parts of the object in the image and their counterparts in the DT. Take arm 113 of Fig. 3 as an example. In this example, movement of the camera may result in relative motion of arm 113 in the image frame. Movement in one direction may be a factor in determining future movement of the object in that same direction. The system may therefore predict how to adjust the associations based on the prior movement.
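A minimal sketch of using prior movement to predict a part's next image-frame location, under a simple constant-velocity assumption between frames; the tracked coordinates below are illustrative, and a real tracker could use a richer motion model.

    # Sketch: predict where a tracked part will appear in the next frame from its
    # motion in prior frames, so the DT-to-image associations can be updated
    # before the next recognition pass.
    import numpy as np

    def predict_next_position(history):
        """history: list of (x, y) pixel positions of a part in successive frames."""
        pts = np.asarray(history, dtype=float)
        if len(pts) < 2:
            return tuple(pts[-1])              # no motion estimate yet
        velocity = pts[-1] - pts[-2]           # displacement per frame
        return tuple(pts[-1] + velocity)

    # Arm 113 observed at these pixel coordinates over three frames.
    arm_track = [(400, 300), (408, 298), (416, 296)]
    print(predict_next_position(arm_track))    # -> (424.0, 294.0)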
The mapping of the DT to the object associates attributes in the DT with the object. This applies not only to the object as a whole, but rather to any parts of the object for which the DT contains information. Included within the information about the object is information about whether individual parts of the object are selectable individually or as a group. In some implementations, to be selectable, a part may be separately defined within the DT and information for the part, including 3D graphics, may be separately retrievable in response to an input, such as user or programmatic selection. In some implementations, selectability may be based on one or more additional or other criteria.
In some implementations, a user interface may be generated to configure information in the DT to indicate which of the parts are selectable and which of the parts are selectable individually or as a group. In this regard, in some
implementations, a DT may be generated at the time that the PT (object) is created. For example, the AR system may obtain, via a user interface, information indicating that an object having a given configuration and a given serial number has been manufactured. In response to appropriate instructions, the AR system may create, or tag, a DT for the object based on information such as that described herein.
Operational information about the instance of the object may not be available prior to its use; however, that information can be incorporated into the DT as the information is obtained. For example, sensors on the (actual, real-world) object may be a source of operational information that can be relayed to the DT as that information is obtained. A user may also specify in the DT, through the user interface, which parts of the object are selectable, either individually or as a group. This specification may be implemented by storing appropriate data, such as a tag or other identifier(s), in association with data representing the part.
Referring back to Fig. 2, process 200 receives (205) data representing a selection of a part of the object. For example, the data may represent a selection of a point on the image that represents the part of the object. The selection may include a user-initiated selection, a programmatic selection, or any other type of selection. For example, as shown in Fig. 1 , a user may select a point in the image that corresponds to the loader 102 by touching the image at an appropriate point. Data for the resulting selection is sent to the AR system, where that data is identified as representing a selection of a particular object or part on the loader represented in the image. The selection may trigger display of information.
In some implementations, instead of or in addition to the user selecting a point of the image by touching a screen or selecting with a pointer, the user interface showing the object can be augmented with a set of visual crosshairs or a target that can remain stationary, such as in the center, relative to the user interface (not illustrated). The user can select a part of the object by manipulating the camera's field of view such that the target points to any point of interest on the object. The process 200 can be configured to continually and/or repeatedly analyze the point in the image under the target to identify any part or parts of the object that correspond to the point under the target. In some implementations, the target can be configured to be movable within the user interface by the user, and/or the process can be configured to analyze a point under the target for detection of a part of the object upon active user input, such as a keyboard or mouse click.
In this example, in some implementations, the point selected is identified by the system, and information in the DT relating to an object or part at that point is identified. The user may be prompted, and specify, whether the part, a group of parts, or the entire object is being selected. The information is retrieved from the DT and is output (206) for rendering on a graphical user interface as part of AR content that may contain all or part of the original image. In an example, 3D graphics data for the selected object or part may be retrieved and rendered over all or part of the object or part. In an example, text data relating to the selected object or part may be retrieved and rendered proximate to the object or part. For example, the text may specify values of one or more operational parameters (e.g., temperature) or attributes (e.g., capabilities) of the part. In an example, both 3D graphics data and text data relating to the selected object or part may be retrieved and rendered with the object or part. In some implementations, the resulting AR content may be used to control the object in the image. For example, the DT may be associated with the actual real-world object, e.g., through one or more computer networks. A user may interact with the displayed AR content to send data through the network to control or interrogate the object, among other things. Examples of user interaction with displayed AR content that may be employed herein are described in U.S. Patent Publication No. 2016/0328883 entitled "Augmented Reality System", which is incorporated herein by reference.
Any appropriate method may be used by the AR system to identify the object or part selected. In some implementations, ray tracing may be used to select the object or part. For example, as shown in Fig. 5A, example ray 302 (shown as a dashed line) radiates from within the field of view 301 of a camera 303 and intersects a 2D image 306 of a loader at point 308. Fig. 5B shows a close-up view of point 308. As shown conceptually in Fig. 5, the intersection point - in this case point 308 in image 306 - also relates to a corresponding point 309 on the DT 310 associated with object 313. That is, point 308 of image 306 relates, via ray 302, to point 309 on DT 310. Selection of point 308 in image 306 thus results in selection of point 309 on DT 310. That is, selection of point 308 on image 306 may cause ray 302 to be projected from that point to, and through, the 3D graphical model defined by the DT 310. Parts of the object defined by the 3D graphical model are identified based on their intersection with ray 302. Notably, the ray is a mathematical and programmatic construct, not a physical manifestation.
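By way of illustration only, the Python sketch below shows one way a ray projected from a selected point could be tested against parts of the 3D graphical model; the part names and bounding boxes are hypothetical, and axis-aligned bounding boxes stand in for the actual part geometry.

    def ray_intersects_aabb(origin, direction, box_min, box_max):
        """Slab test: returns True if the ray hits the axis-aligned bounding box."""
        t_near, t_far = -float("inf"), float("inf")
        for o, d, lo, hi in zip(origin, direction, box_min, box_max):
            if abs(d) < 1e-9:
                if o < lo or o > hi:      # ray parallel to this slab and outside it
                    return False
                continue
            t1, t2 = (lo - o) / d, (hi - o) / d
            t_near, t_far = max(t_near, min(t1, t2)), min(t_far, max(t1, t2))
        return t_near <= t_far and t_far >= 0.0

    # Hypothetical parts of the 3D model, each with an axis-aligned bounding box.
    parts = {
        "exterior_panel": ((0.0, 0.0, 1.0), (2.0, 2.0, 1.2)),
        "interior_pump":  ((0.8, 0.8, 1.4), (1.2, 1.2, 1.8)),
    }
    ray_origin, ray_direction = (1.0, 1.0, 0.0), (0.0, 0.0, 1.0)

    hits = [name for name, (lo, hi) in parts.items()
            if ray_intersects_aabb(ray_origin, ray_direction, lo, hi)]
    print(hits)   # both the exterior panel and the interior pump intersect the ray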
As noted, a ray may intersect, and travel through, a 3D graphical model defined by the DT. That is, because the image and DT are associated as described herein, the ray can be programmatically projected to, and through, appropriate locations on the 3D graphical model contained in the DT. Accordingly, any part or component that intersects the ray may be selectable, and data therefor retrievable to generate AR content. For example, ray 302 travels through DT 310. By passing through DT 310, ray 302 intersects the exterior of the object 313 and also one or more selectable parts that are interior to object 313. For example, referring to Figs. 6A and 6B, ray 302 may intersect part 320 that is interior to the 3D graphical model of DT 310. This interior part may be selected, and rendered at an appropriate location as computer-generated graphics 320 on image 306, as shown in Fig. 6B. In some implementations computer-generated graphics 320 may be partially
transparent or in outline form. That may be the case for any of the computer graphics generated herein for display as AR content on an image.
In some implementations, upon selection of a point on image 306, the user may be prompted with a list of all parts - both interior and exterior to object 313 - that the ray intersects. For example, the prompt may be a pop-up box or any other appropriate type of computer graphic. The user may then select one or more of the parts. The selection may include the type of data to display for each part (e.g., 3D graphics, text, etc.) or that information may be determined as described herein based on the type of the selection. Corresponding identifiers for the selected parts are retrieved, and information for those selected parts is identified in the DT based on the identifiers. The system retrieves appropriate data for the selected part and outputs that data for rendering as AR content at appropriate positions on the original image. In some implementations, internal parts may be rendered in outline form or in different colors, with each different color reflecting a depth of the part within the object along a ray. In some implementations, methods other than ray tracing may be used to identify parts that are selected. For example, in some implementations, different parts of an image may be rendered using different colored pixels. Selection of a part may be identified based on the pixel that is selected. Implementations such as this may employ a dual-buffer scheme comprised of a front buffer and a back buffer. A current image is viewed from the front buffer while a subsequent image is being drawn to the back buffer. At an appropriate time, the back buffer becomes the front buffer, and vice versa, so that the subsequent image can be viewed. In an example operation, an image is generated based on data written to the front buffer. Parts of that image are drawn in different colors into the back buffer. The parts may be distinguished, and identified, based on characteristics of the image, e.g., pixel transitions and the like. A user selects a part of the object in the image, and the colored part in the back buffer corresponding to (e.g., at the same location as) the selection is identified. The DT for the object is identified beforehand, as described herein. The selected colored part is then compared to parts in the 3D graphical model for the object in order to identify which part was selected. Information from the DT may then be used to render graphical and/or textual content in association with the selected part. For example, a graphical overlay may be presented over the selected part, or text from the DT may be displayed next to the part.
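The following Python sketch illustrates the color-picking idea described above using a toy 4x4 back buffer with made-up colors: each selectable part is drawn into the back buffer in a unique flat color, and the selection is resolved by reading the back-buffer pixel under the selected point.

    # Hypothetical color-to-part table; (0, 0, 0) marks background pixels.
    PART_BY_COLOR = {
        (255, 0, 0): "arm",
        (0, 255, 0): "front_end",
        (0, 0, 0):   None,
    }
    back_buffer = [[(0, 0, 0)] * 4 for _ in range(4)]
    for x in range(2):
        back_buffer[1][x] = (255, 0, 0)      # the arm occupies two pixels in row 1
    back_buffer[2][3] = (0, 255, 0)          # the front end occupies one pixel in row 2

    def pick_part(x, y):
        """Map the selected pixel back to the part drawn in that color."""
        return PART_BY_COLOR.get(back_buffer[y][x])

    print(pick_part(0, 1))   # 'arm'
    print(pick_part(3, 2))   # 'front_end'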
As explained above, the DT contains information indicating whether a part is selectable individually or as a group. In the example of Fig. 5, selection of a part 309 is interpreted by the AR system based on whether DT 310 indicates that part 309 is selectable, and based on whether part 309 is selectable individually or as a group. If the part is selectable, and selectable individually, then corresponding information from the DT is retrieved and output for rendering as AR content with the image. In some implementations, 3D computer graphics data may be output and rendered over the image so that the 3D graphics data overlays a corresponding part of the image. An example of this is the 3D graphics version of the loader arm overlaid on the images of Figs. 1 and 3. In some implementations, text data may be output and rendered on the image so that the text is displayed over or alongside the image. In an example, the text can be rendered on the image alongside the part of interest, e.g. the loader arm. If the part is selectable, and selectable as a group, then information about the group is retrieved and output for rendering as AR content with the image. The information may be any appropriate type of information, such as 3D graphics, text, and so forth. As described herein, in some implementations, the user may be prompted to indicate whether a part, multiple parts, or an entire object is selected, in which case appropriate AR content is retrieved and displayed. In some implementations, the system may be configured to recognize certain actions as selecting a part, multiple parts, or an entire object.
In some implementations, different types of selections may trigger displays of different types of data. For example, the type of data displayed may be triggered based on the duration of a selection. For example, a first-duration selection (e.g., one that lasts for a first period of time) may trigger display of 3D graphics, a second-duration selection (e.g., one that lasts for a second period of time) may trigger display of text, and a third-duration selection (e.g., one that lasts for a third period of time) may trigger display of both 3D graphics and text. In some implementations, the type of selection may not be based on temporal considerations, but rather may be based on other factors. For example, if the selection is a swipe-type-touch, one type of data (e.g., 3D graphics) may be displayed, whereas if the selection is a tap-type-touch, a second type of data (e.g., text) may be displayed. The system may be configured to associate any appropriate type of selection with display of one or more appropriate types of data to generate AR content.
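A minimal Python sketch of this kind of dispatch follows; the gesture names, duration thresholds, and content types are illustrative assumptions rather than fixed values.

    def content_for_selection(gesture: str, duration_s: float = 0.0) -> set:
        """Map a selection gesture and its duration to the AR content types to show."""
        if gesture == "swipe":
            return {"3d_graphics"}
        if gesture == "tap":
            if duration_s < 0.5:
                return {"text"}
            if duration_s < 1.5:
                return {"3d_graphics"}
            return {"3d_graphics", "text"}      # a long press shows both
        return set()

    print(content_for_selection("tap", 0.2))    # {'text'}
    print(content_for_selection("tap", 2.0))    # both 3D graphics and text
    print(content_for_selection("swipe"))       # {'3d_graphics'}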
In the examples presented above, the AR system obtains the DT for an object and uses the DT to generate graphics or text to superimpose onto an image of an object. However, any appropriate content including, but not limited to, animation; video; and non-computer-generated images, video or text, may be obtained from a DT or elsewhere and superimposed onto an image to generate AR content. In some implementations, the AR content may include audio, such as computer-generated or real-life audio, that is presented in conjunction with an image and/or graphics.
Referring back to operation 205, in some implementations, the data received (205) may represent a selection from a menu. For example, in some
implementations, a menu may be displayed overlaid on the image or separate from the image. For example, the menu may be a drop-down menu or a pop-up menu that is triggered for display by selecting an appropriate area of the image. In any case, the menu may list, textually, parts contained in the object, including both those that are visible in the image and those that are not visible in the image (e.g., internal parts). For example, the object instance may be identified beforehand in the manner described herein, and a list of its selectable parts from the DT displayed on the menu. A user may select one or more of the listed parts. Data representing that selection is obtained by process 200, which uses that data to obtain information about the selected part from the object's DT. As described herein, the information may be used to generate AR content from the image and the information about the part. For example, as described, graphics - which may be, e.g., transparent, opaque, outline, or a combination thereof - may be retrieved from the DT for the object instance and displayed over the part selected. As described herein, other information, such as text, may also be displayed.
Referring back to operation 205, in some implementations, the data received (205) may represent a selection of computer-generated graphics that are displayed overlaid on the image. For example, in some implementations, the object instance displayed in the image may be identified beforehand in the manner described herein. Computer graphics from the DT for selectable parts of the object may be overlaid onto the image, as appropriate, or may be displayed separately. The computer graphics can be displayed in a partially transparent fashion such that both the overlaid computer graphics and the underlying image are visible to the user simultaneously. A user may select (205) one or more of the displayed parts by selecting (e.g., touching on) the computer graphics displayed for that part. In some implementations, the computer graphics represent both internal and external parts of the object. As such, the computer graphics may be displayed using navigable layers that may be reached, for selection, through interaction with one or more appropriate controls. For example, one or more layers containing internal object parts may be selected, and individual parts may be selected from that layer. Other methods may also be used for selecting internal parts. In any event, data representing the part selected is obtained by process 200, which uses that data to obtain information about the part from the object's DT. As described herein, the information may be used to generate AR content from the image and the information about the part. In this example, computer graphics (which may be, e.g., transparent, opaque, outline, or a combination thereof) for the selected part or parts may be retained, and remain overlaid on the image. Computer graphics for the other, unselected parts may be eliminated from the display. As described herein, other information, such as text, may also be displayed based on the selection.
Referring back to operation 205, in some implementations, the data received
(205) may represent a selection of computer-generated graphics that are displayed in a menu associated with the image. For example, in some implementations, a menu may be displayed overlaid on the image or separate from the image. As above, the menu may be a drop-down menu or a pop-up menu that is triggered for display by selecting an appropriate area of the image. In any case, the menu may show, graphically, parts contained in the object, including both those that are visible in the image and those that are not visible in the image (e.g., internal parts). For example, the object instance may be identified beforehand in the manner described herein, and computer graphics that represent its selectable parts displayed on the menu. A user may select one or more of the displayed parts. Data representing that selection is obtained by process 200, which uses that data to obtain information about the selected part from the object's DT. As described herein, the information may be used to generate AR content from the image and the information about the part. For example, as described, computer graphics (which may be, e.g., transparent, opaque, outline, or a combination thereof) may be retrieved from the DT for the object instance and displayed over the part selected. As described herein, other information, such as text, may also be displayed.
Fig. 7 shows an example computer/network architecture 400 on which the example AR system and the example processes described herein may be
implemented. The AR system and processes, however, are not limited to use with the Fig. 7 architecture, and may be implemented on any appropriate computer architecture and/or network architecture.
In Fig. 7, example AR system 400 includes a front-end 401 and a back-end 402. Front-end 401 may include one or more mobile computing devices (or simply, mobile devices). A mobile device may include any appropriate device capable of displaying digital imagery including, but not limited to, digital (AR) glasses, a smartphone, a digital camera, a tablet computing device, and so forth. A mobile device 404 may include one or more processing devices 405 (e.g., microprocessors) and memory 406 storing instructions 407 that are executable by the one or more processing devices and images and/or video 440 that can be accessed and processed as described herein to generate AR content at a time subsequent to image capture. The instructions are part of one or more computer programs that are used to implement at least part of the AR system. For example, the instructions may be part of an application (or "app") that performs operations including, for example, displaying AR content to a user. Mobile device 404 also includes one or more sensing mechanisms, such as a camera for capturing actual graphics, such as images and video. Mobile device 404 may also be connected to, and accessible over, a wireless network, such as a long term evolution (LTE) network or a Wi-Fi network. The subject 410 of AR content may be any appropriate object, e.g., device, system, or entity, examples of which are described herein.
Back-end 402 may include one or more computing systems 412a, 412b examples of which include servers, desktop computers, and mobile devices. A back- end computing system may include one or more processing devices 415 (e.g., microprocessors) and memory 416 storing instructions 417 that are executable by the one or more processing devices. The instructions are part of one or more computer programs that may be used to implement at least part of the AR system. For example, the instructions may be part of a computer program to generate DTs, to analyze DT content, to communicate with other systems 420 and databases 421 containing device information, and so forth. A back-end computing system may also be connected to, and accessible over, a wired or wireless network. In some implementations, the AR system described herein may not include the back-end 402, but rather may be implemented solely on the front-end.
Front-end 401 and back-end 402 may communicate with each other, and with other systems, such as those described herein, over one or more computer networks, which may include wireless and/or wired networks.
In some implementations, a front-end device may include a local computing system (e.g., 404) to render AR content and a back-end device may include a remote computing system (e.g., 412a, 412b). The capabilities of these different devices may dictate where and/or how a DT, and thus AR content, is generated. For example, the DT and AR content may be generated locally; the DT and AR content may be generated remotely and only displayed locally; or the DT and AR content may be generated using a combination of local and remote processing resources. In some implementations, the local computing system may have no onboard sensing capability and be capable only of external monitoring; in some implementations, the local computing system may include basic onboard sensing and no processing capability; in some implementations, the local computing system may include onboard sensing and basic processing capability; and in some implementations, the local computing system may include onboard sensing and processing capability equivalent at least to that of a desktop computer. In some implementations, there may be no remote computing device, but rather only mobile-to-mobile device connection; in some implementations, the remote computing system may be capable of only signal exchange, but not processing; in some implementations, the remote computing system may be capable of device and data management, basic processing, and routing to integrated peripheral systems; and in some
implementations, the remote computing system may be capable of advanced servicing and data processing.
Fig. 8 shows an example process 500 for producing AR content from image data and 3D graphics data in the DT. Process 500 may be performed, e.g., on the architecture of Fig. 7. According to process 500, a declarative model is generated (501 ) for an object. The declarative model may be generated in computer code, and may include information to describe structures and functions of the object. The information may include semantic data that is stored in association with actual design data. In some examples, the declarative model of the object may be annotated to identify, among other things, features and attributes of the object. The annotations may include attributes of those features, such as size, shape, color, etc. Any appropriate techniques may be used to annotate the model. For example, metadata may be associated with specific features in the model. In some
implementations, a look-up table (LUT) or other appropriate construct may be used to associate coordinates of the model with corresponding annotations.
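By way of example, a look-up table of this kind might be organized as in the following Python sketch, in which the coordinates, annotations, and matching tolerance are hypothetical.

    ANNOTATION_LUT = {
        (1.20, 0.45, 0.80): {"feature": "wheel", "size_m": 0.9, "color": "black"},
        (0.00, 1.10, 1.75): {"feature": "arm",   "size_m": 2.4, "color": "yellow"},
    }

    def annotation_at(coord, tolerance=0.05):
        """Return the annotation whose stored model coordinate is within tolerance."""
        for stored, note in ANNOTATION_LUT.items():
            if all(abs(a - b) <= tolerance for a, b in zip(stored, coord)):
                return note
        return None

    print(annotation_at((1.22, 0.44, 0.81)))   # the wheel annotation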
The computer code defining the declarative model is compiled (502) to produce a compiled model. The compiled model is comprised of intermediate object code that can be read by an AR player. The declarative model, and thus the compiled model, defines the DT, or at least a part of the DT, for the object. In this example, the AR player may be executable on a front-end device of the AR system, and comprises computer code that is executable to generate AR content based on the compiled model and on an image (or other graphic) of the object.
To generate AR content for an object, the AR system links (504) information from the compiled model to corresponding information in an image (e.g., the image of the object), and generates (505) AR content based on the linked information. The AR system outputs (506) data representing the AR content for rendering on a display screen of a computing device, such as a tablet computing device. By way of example, the AR player may identify objects and their attributes that were selected as described herein. The compiled model may be read to locate the selected objects in the compiled model. Any appropriate number of attributes may be used to correlate features from the image to features in the compiled model. The AR system links the information from the compiled model to the object shown in the image. For example, the compiled model may contain information describing the make, model, tread, and so forth of a tire. The compiled model may also contain sensor readings or other information. That information is linked to the tire in the image. That information may be used to generate AR content, as described herein.
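The attribute-based linking step might, for example, resemble the following Python sketch; the feature attributes, entry names, and single-attribute matching rule are assumptions made for illustration only.

    # Hypothetical compiled-model entries describing parts and their attributes.
    compiled_model = [
        {"id": "tire_fl", "shape": "circle", "approx_diameter_m": 0.9,
         "info": {"make": "Acme", "tread": "all-terrain"}},
        {"id": "bucket", "shape": "trapezoid", "approx_width_m": 2.1,
         "info": {"capacity_m3": 1.2}},
    ]

    def link_feature(image_feature):
        """Return the compiled-model entry whose attributes match the detected feature."""
        for entry in compiled_model:
            if entry["shape"] == image_feature["shape"]:
                return entry
        return None

    detected = {"shape": "circle", "pixel_region": (40, 200, 90, 90)}
    linked = link_feature(detected)
    print(linked["id"], linked["info"])   # tire_fl {'make': 'Acme', 'tread': 'all-terrain'}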
The AR player may generate AR content by rendering computer graphics generated from data in the DT over appropriate locations of the image. For example, the AR player may identify an element of a graphic in the manner described above, obtain information about that graphic from annotations and/or other information available in the compiled model, and generate the graphic based on information from the compiled model and/or sensor readings.
In some implementations, the computer graphics that form part of the AR content may overlay the same element shown in an image to enhance or explain an aspect of the element. In some implementations, the computer graphics do not overlay the element, but rather are adjacent to, or reference, the element. As noted, the AR content may be generated for an image or video, e.g., on a frame-by-frame basis. Thus, the AR content may be static (unchanging) or dynamic (changeable over time). In the case of video, features in frames of video may be identified using appropriate object identification and object tracking techniques. The computer graphics portion of the AR content may track movement frame-by-frame of the actual object during playback of the video. The video may be real-time video, although that is not a requirement. In the case of real-time video, the DT may be generated or updated in real-time, and the resulting computer graphics superimposed on frames in real-time. Updating the DT may include changing the declarative model and the compiled model, and/or other data used to define the DT, as appropriate.

GENERATING TIME-DELAYED AUGMENTED REALITY CONTENT
Also described herein are example implementations of an AR system. As explained, in some examples, AR content is generated by superimposing computer- generated content onto actual graphics, such as an image or video of a real-life object. Any appropriate computer-generated content may be used including, but not limited to, computer graphics, computer animation, and computer-generated text.
In the example AR system, the actual graphic, such as an image or video of an object is stored in computer memory. A location, such as the position and orientation, of the device that captured the image is also stored. A graphical model, such as the digital twin (DT), is mapped to the object in the image, and is used to generate content following capture and storage of the image. For example, at some time after capture and storage, a computing device in the AR system may receive a command from a user or other system to access the image, to replay a video of which the image is part, to obtain information about the object in the image, or take any other appropriate action. One or more processes executing in the AR system may then generate AR content based on the image, a location of the device, the action, and information in the graphical model that represents the object.
By way of example, a technician may capture a video of an object, such as a printer, by walking around the printer with a video camera in-hand. The video - comprised of sequential image frames - is stored in computer memory. The printer in the video is recognized using one or more appropriate computer vision
techniques. The recognition may include identifying the location of the video camera that captured the video, including its position and orientation relative to the printer, and storing that information in computer memory. A graphical model containing information about the printer is mapped to the printer in the video, as described herein. The mapping may include associating information in the graphical model to corresponding parts of the printer, and storing those associations in memory. The resulting mapping enables the information from the graphical model to be used to augment the video of the printer. For example, the information may represent computer graphics that may be overlaid on the printer during presentation of the video. The computer graphics may display interior components of the printer, exterior components of the printer, readings or text relating to the operation of the printer, and so forth. Any appropriate information may be displayed.
Because the video, the location of the video camera during capture, and the graphical model are stored in computer memory, video or individual images of the printer may be accessed, augmented, and presented at any time following image capture. For example, video of the printer may be presented to a user at a time after the video was captured, and may be replayed to identify information about the printer even after the technician has left the vicinity of the printer. In some
implementations, the printer may be connected to a network, and may include sensors associated with one or more of its components. Information from the sensors - e.g., sensor readings - may be incorporated into the graphical model in real-time. Accordingly, even after the technician has left the vicinity of the printer, the technician may use the video and the graphical model to obtain current information about the printer. For example, the technician may replay the video, which may be augmented with current sensor readings, such as an out-of-paper indication or a paper blockage indication. The technician may use the video and the graphical model, remotely or in the vicinity of the printer, to identify locations of any problem, to diagnose the problem, to repair the problem, and/or to discuss, over any communications medium, repair with a third party in the vicinity of the printer.
In some implementations, one or more image capture devices may be located in the vicinity of the object. These image capture devices may send information to the AR system to augment the original video or image. For example, the object - also referred to as the subject - may be a beach. The image capture devices may capture images of the water, and send those images to the AR system. The images of the water may be correlated to the original image or video and may be used to augment the original image or video to identify a current location of the water. This information may be augmented, as appropriate, with information from the graphical model, such as a prior or current temperature of the water, current or predicted future weather conditions at the beach, and so forth, as appropriate.
In some implementations, actions may be taken with respect to stored video. For example, stored video may be presented, and a user may select a part of an object in the video. In response to the selection, information about an object in the image may be presented including, for example, current sensor information, components interior to the selected part, and so forth. Selection may be performed as described above - for example, with respect to Fig. 5.
The example AR system described herein is configured to identify an object in an image captured by an image capture device, and to map a three-dimensional (3D) graphical model to the image of the object. In an example, the 3D graphical model contains information about the object, such as the object's structure, current or past status, and operational capabilities. The mapping of the 3D graphical model to the image associates this information from the 3D graphical model with the image. As a result of this mapping, an action may be taken with respect to the image currently or at a later date. More specifically, as described, the image (which may be part of a video), the location of the image capture device during capture, and associations to the 3D graphical model are stored in computer memory, and may be used to access the image or any appropriate content at a later date. For example, a stored video may be accessed and played on a computing device. Information from the 3D graphical model may be accessed and retrieved to augment the video.
In some cases, the information may include past or present sensor readings and, in some cases, updates to the 3D graphical model may require further mapping to the video. In some cases, the information may include past or present sensor locations. In an example that includes playing stored video, a point on the image may be selected, and information from the 3D graphical model relating to that point may be retrieved and used to display computer-generated content on the image. In an example, a computer graphics rendering of a selected object part may be displayed, as is the case with the arm of Fig. 1 . In another example, text associated with the selected part may be displayed. In the example AR system, the 3D graphical model is controlled to track relative movement of the image capture device and the object in stored images or video. That is, the image capture device may move relative to the object, or vice versa, during image or video capture. During that movement in the stored images or video, the 3D graphical model also tracks the relative movement of the object even as the perspective of the object in the image changes vis-a-vis the image capture device. As a result, the example AR system enables interaction with the object from any appropriate orientation.
As explained above, the DT for an object may also be generated based on sensor data that is obtained for the particular instance of the object. For example, the sensor data may be obtained from readings taken from sensors placed on, or near, the actual instance of the object (e.g., loader 102 of Fig. 1 ). In this example, since that sensor data is unique to loader 102, the DT for loader 102 will be unique relative to DTs for other loaders, including those that are identical in structure and function to loader 102. The DT may also include other information that is unique to the object, such as the object's repair history, its operational history, damage to the object, and so forth. In some implementations, the DT may be updated periodically, intermittently, in response to changes in sensor readings, or at any appropriate time. Updates to the DT may be incorporated into the DT, where appropriate, and used to augment an image, such as the loader of Fig. 1 . In the case of loader 102, video showing operation of the loader may be captured and stored. Following capture and recognition of the loader, a DT may be associated with the loader. Sensors on the loader (not shown) may be used to monitor information such as fuel level, tire wear, and so forth. Values for such information may be received by the AR system from the sensors, and may update the DT for the loader. Accordingly, when the video of the loader is played at a future date (e.g., at some point in time after its capture), information from the sensors may be used to augment images in the video. The sensor information may be received in real-time or at least at some point following the initial capture and storage of the video. Accordingly, even though the video may have been captured at some point in the past, the sensor information may be current or more up-to-date than any information obtained at the time the image was captured. As a result, the stored video may be used both to access, from the DT, information about the structure of the loader and information about its current status. Thus, in some implementations, the video may be played to recreate a scene at the time video or imagery was captured, and to augment that scene with current information. In some implementations, sensor or other data may be extrapolated or generated based on current or past data to predict future information. This future information may be incorporated into imagery or video, as appropriate.
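An illustrative Python sketch of folding newly received sensor readings into the DT for a specific object instance follows; the sensor names, values, and DT layout are hypothetical placeholders, not the actual DT format.

    import time

    dt_loader_102 = {
        "serial": "SN-102-884",
        "sensors": {},          # latest reading per sensor
        "history": [],          # (timestamp, sensor, value) records
    }

    def update_dt(dt, sensor, value, timestamp=None):
        """Record the reading and keep the latest value available for augmentation."""
        timestamp = timestamp if timestamp is not None else time.time()
        dt["sensors"][sensor] = value
        dt["history"].append((timestamp, sensor, value))

    update_dt(dt_loader_102, "fuel_level_pct", 62)
    update_dt(dt_loader_102, "tire_wear_pct", 18)
    print(dt_loader_102["sensors"])   # {'fuel_level_pct': 62, 'tire_wear_pct': 18}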
In some implementations, the updates may include updated imagery. For example, updates to the original image or object obtained using on-location or other cameras may be received following original image capture. These updates may be incorporated into the DT, and used to augment the original image. For example, in the beach case described above, current video of water in a static image may be received, and that video may be incorporated into the image's DT, and used to augment the image. Thus, the original static image may, by virtue of the
augmentation, show flowing water. That is, the original static image may have a video component that reflects the current, and changing state of the water, as opposed to the originally-captured image of the water. This is an example of AR content that is generated from real-life, or actual video content only, rather than from an actual image and computer-generated imagery. AR content such as this may be augmented with computer-generated imagery, e.g., to show the temperature of the water, current or predicted temperature of the air, the time, and so forth.
An example process 900 that uses the DT to augment actual graphics, such as images or video, is shown in Fig. 9. Example process 900 may be performed by the AR system described herein using any appropriate hardware.
According to process 900, an image of an object is captured (901 ) by an image capture device - a camera in this example - during relative motion between the device and the object. As noted, the object may be any appropriate apparatus, system, structure, entity, or combination of one or more of these that can be captured in an image. An example of an object is loader 102 of Fig. 1. The camera that captures the image may be a still camera or a video camera. The camera may be part of a mobile computing device, such as a tablet computer or a smartphone. Data representing the image is stored (902) in computer memory. In the case of video, the image may be one frame of multiple frames that comprise the video.
In some implementations, process 900 requires that the camera be within a predefined location relative to the object during image capture. For example, as shown in Fig. 10, in order to determine the location of an object relative to the camera as described below, in some implementations, the camera needs to be within a predefined field of view (represented by a rectangle 510 defined by intersecting sets of parallel lines) relative to the object when the image is captured. In some implementations, recognition may be performed regardless of where the camera is positioned during image capture. Because the camera is within the predefined location, the location(s) of components of the object may be estimated, and information from those parts may be used in determining the location of the object in the image. For example, in some implementations, image marker-based tracking may be used to identify the location based on image artifacts, or other active scanning techniques may be used to scan and construct a 3D map of the environment, from which the location can be determined. Using this tracking ability, and identifying information as described below, a DT can be associated with the object.
In some implementations, the relative motion between the camera and the object includes the object remaining stationary while the camera moves. In some implementations, the relative motion between the camera and the object includes the object moving while the camera remains stationary. In some implementations, the relative motion between the camera and the object includes both the object and the camera moving. In any case, the relative motion is evident by the object occupying, in different images, different locations in the image frame. Multiple images may be captured and stored (902) during relative motion and, as described below, a DT may be mapped to (e.g., associated with) the object in each image. As described below, in some implementations, the DT may track motion of the object, thereby allowing for interaction with the object via an image from different
perspectives in the stored video. In some implementations, real-time information may be received from an object (or subject) of the image, and that information may be incorporated into the DT in real-time and used to augment stored video. In this regard, in some implementations, real-time may not mean that two actions are simultaneous, but rather may include actions that occur on a continuous basis or track each other in time, taking into account delays associated with processing, data transmission, hardware, and the like.
In Fig. 1, tablet computer 101 may be used to capture the image of loader 102 at a first time, T1. For example, the image may be part of a video stream comprised of frames of images that are captured by walking around the loader. In another example, the image may be part of a video stream comprised of frames of images that are captured while the camera is stationary but the loader moves.
Referring also to Fig. 3, in this example, the tablet computer 101 may be used to capture a different image of loader 102 at a second, different time, T2. As is clear from Figs. 1 and 3, the two images were taken from different perspectives.
Referring back to Fig. 9, process 900 identifies the object instance in the captured image and retrieves (903) a DT for the object instance - in the example of Fig. 1, loader 102. In this regard, any appropriate identifying information may be used to identify the object instance. The identifying information may be obtained from the object itself, from the image of the object, from a database, or from any other appropriate source. For example, the identifying information may be, or include, any combination of unique or semi-unique identifiers, such as a Bluetooth address, a media access control (MAC) address, an Internet Protocol (IP) address, a serial number, a quick response (QR) code or other type of bar code, a subnet address, a subscriber identification module (SIM), or the like. Tags, such as RFIDs, may be used for identification. For objects that do not move, the identifying information may be, or include, global positioning system (GPS) or other coordinates that define the location of the object. For objects that do not include intelligence or other specific identifiers (like bar codes or readable serial numbers), unique features of the object in the image may be used to identify the object instance. For example, a database may store information identifying markings, wear, damage, image artifacts as described above, or other distinctive features of an object instance, together with a unique identifier for the object. The AR system may compare information from the captured image to the stored information or a stored image. Comparison may be performed on a mobile device or on a remote computer. The result of the comparison may identify the object. After the object is identified, the DT for that object may be located (e.g., in memory, a location on a network, or elsewhere) using the obtained object identifier. The DT corresponding to that identifier may be retrieved (903) for use by process 900.
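The identifier-to-DT lookup described above could, for example, be organized as in the following Python sketch, in which the registry contents and identifier formats are hypothetical.

    # Hypothetical registry mapping known identifiers to DT references.
    DT_REGISTRY = {
        "SN-102-884": "dt://loaders/102",
        "00:1B:44:11:3A:B7": "dt://printers/7",
    }

    def retrieve_dt(*identifiers):
        """Return the first DT reference matching any supplied identifier."""
        for ident in identifiers:
            if ident in DT_REGISTRY:
                return DT_REGISTRY[ident]
        return None

    print(retrieve_dt("unknown-qr", "SN-102-884"))   # dt://loaders/102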
Process 900 determines (904) a location of the camera relative to the object during image capture. The location of the camera relative to the object can be specified, for example, by the distance between the camera and the object as well as the relative orientations of the camera and object. Other determinants of the relative location of the camera and the object, however, can be used. For example, the relative locations can be determined using known computer vision techniques for object recognition and tracking. The location may be updated periodically or intermittently when relative motion between the object and the camera is detected. For each image - including a frame of video - the location of the camera relative to the object, as determined herein, is stored (905) in computer memory. The stored information may be used, as described herein, to implement or update mapping of the DT to the object in the image based on movement of the object.
In some implementations, location may be determined based on one or more attributes of the object in the stored image and based on information in the DT for the object. For example, a size of the object in the image - e.g., a length and/or width taken relative to appropriate reference points - may be determined. For example, in the image, the object may be five centimeters tall. Information in the DT specifies the actual size of the object in the real-world with one or more of the same dimensions as in the image. For example, in the real-world, the object may be three meters tall. In an example implementation, knowing the size of the object in the image and the size of the object in the real world, it is possible to determine the distance between the camera and the object when the image was captured. This distance is one aspect of the location of the camera.
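As a concrete illustration, the following Python sketch estimates that distance using a simple pinhole-camera relationship; the focal length and projected size are assumed example values, and a real implementation may use any appropriate camera model.

    def estimate_distance(real_height_m, focal_length_mm, image_height_mm):
        """distance = real size * focal length / size projected onto the sensor."""
        return real_height_m * focal_length_mm / image_height_mm

    # Assume the object is 3 m tall in the real world (from the DT), projects to
    # 5 mm on the sensor, and the camera focal length is 28 mm.
    print(round(estimate_distance(3.0, 28.0, 5.0), 2))   # 16.8 metres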
In some implementations, the distance between the camera and the object is determined relative to a predefined reference point on the camera, rather than relative to a lens used to capture the image. For example, taking the case of some smartphones, the camera used to capture images is typically in an upper corner of the smartphone. Obtaining the distance relative to a predefined reference, such as a center point, on the smartphone may provide for greater accuracy in determining the location. Accordingly, when determining the distance, the offset between the predefined reference and the camera on the smartphone may be taken into account, and the distance may be corrected based on this offset.
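A minimal Python sketch of that correction is shown below; the offset values are hypothetical and simply shift the measured vector from the lens to the device's reference point.

    def correct_for_offset(object_from_lens, lens_from_reference):
        """Vector from the device's reference point to the object."""
        return tuple(o + l for o, l in zip(object_from_lens, lens_from_reference))

    # Assume the lens sits 0.06 m to the right of and 0.07 m above the device centre.
    corrected = correct_for_offset((2.5, 0.4, 16.8), (0.06, 0.07, 0.0))
    print(tuple(round(v, 2) for v in corrected))   # (2.56, 0.47, 16.8)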
The location of the camera relative to the object is also based on the orientation of the object relative to the camera during image capture. In an example implementation, to identify the orientation, process 900 identifies one or more features of the object in the stored image, such as wheel 106 in the loader of Fig. 1 . Such features may be identified based on the content of the image. In an example, a change in pixel color may be indicative of a feature of an object. In another example, the change in pixel color may be averaged or otherwise processed over a distance before a feature of the object is confirmed. In another example, sets of pixels of the image may be compared to known images in order to identify features. Any appropriate feature identification process may be used.
The orientation of the object in the image relative to the camera may be determined based on the features of the object identified in the image. For example, the features may be compared to features represented by 3D graphics data in the DT. To make the comparison, one or more 3D features from the DT may be projected into two-dimensional (2D) space, and their resulting 2D projections may be compared to one or more features of the object identified in the image. Features of the object from the image and the 3D graphical model (from the DT) that match are aligned. That is, the 3D graphical model is oriented in 3D coordinate space so that its features align to identified features of the image. In a state of alignment with the object in the image, the 3D graphical model may be at specified angle(s) relative to axes in the 3D coordinate space. These angle(s) define the orientation of the 3D graphical model and, thus, also define the orientation of the object in the image relative to the camera that captured the image. Other appropriate methods of identifying the orientation of the object in the image may also be used, or may be used in conjunction with those described herein. As noted, the location (e.g., position and orientation) of the camera relative to the object is stored (905). In the case of video, which is comprised of multiple image frames in sequence, the location of the camera is stored for each image frame.
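One simple way to perform such an alignment is sketched below in Python; the model features, detected image features, orthographic projection, and single-axis (yaw) search are all simplifying assumptions made for illustration.

    import math

    # Hypothetical 3D feature points from the model, e.g., a wheel hub and a marker.
    MODEL_FEATURES_3D = [(1.0, 0.0, 0.0), (0.0, 0.0, 1.0)]

    def project(points_3d, yaw_rad):
        """Rotate about the vertical axis, then drop depth (orthographic projection)."""
        c, s = math.cos(yaw_rad), math.sin(yaw_rad)
        return [(c * x - s * z, y) for x, y, z in points_3d]

    def alignment_error(projected, detected):
        return sum((px - dx) ** 2 + (py - dy) ** 2
                   for (px, py), (dx, dy) in zip(projected, detected))

    detected_2d = [(0.707, 0.0), (-0.707, 0.0)]   # feature locations found in the image
    best_yaw = min((math.radians(a) for a in range(0, 360, 5)),
                   key=lambda yaw: alignment_error(project(MODEL_FEATURES_3D, yaw),
                                                   detected_2d))
    print(round(math.degrees(best_yaw)))   # 45 (degrees)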
Process 900 maps (906) the 3D graphical model defined by the DT to the object in the image based, at least in part, on the determined (904) location of the camera relative to the object. As explained above, the location may include the distance between the object in the image and the camera that captured the image, and an orientation of the object relative to the camera that captured the image. Other factors than these may also be used to specify the location. In some implementations, mapping may include associating data from the DT, such as 3D graphics data and text, with corresponding parts of the object in the image. In the example of loader 102 of Fig. 1 , data from the DT relating to its arm (covered by graphics 103) may be associated with the arm; data from the DT relating to front-end 108 may be associated with front-end 108; and so forth. In this example, the associating process may include storing pointers or other constructs that relate data from the DT with corresponding pixels in the image of the object. This association may further identify where, in the image, data from the DT is to be rendered when generating AR content. In the example of the loader of Fig. 1 , data from the DT - such as 3D graphics or text - is mapped to the image. As noted, Fig. 4 shows, conceptually, 3D graphics 110 for the loader beside an actual image 112 of the loader. In the AR system described herein, the DT comprising the 3D graphics data may be stored in association with the image of the loader, as described above, and that association may be used in obtaining information about the loader from the image.
Furthermore, because data in the DT relates features of the object in 3D, using the DT and the image of the object, it is also possible to position 3D graphics for objects that are not visible in the image at appropriate locations. More specifically, in the example of Fig. 4 above, because image 112 is 2D, only a projection of the object into 2D space is visible. Using the techniques described herein, data - e.g., 3D graphics data - from the DT is associated with the image of the object in 2D. However, the DT specifies the entire structure of the object using 3D graphics data. Accordingly, by knowing where some of the 3D graphics data fits relative to the object (i.e., where the 3D data fits relative to parts of the object that are visible in the image), it is possible to appropriately position the remaining 3D graphics for the object, including for parts of the object that are not visible or shown in the image. Implementations employing these features are described below.
The location of the camera relative to the object may change in the stored video as the relative positions between the object and the camera change. For example, the camera may be controlled to capture video of the object moving; the camera may be moved and capture video while the object remains stationary; or both the camera and the object may move while the camera captures video.
Referring to Figs. 1 and 3, for example, the loader may move from the position shown in Fig. 1 to the position shown in Fig. 3. During the resulting relative movement, the AR system may be configured so that the DT - e.g., 3D graphics data and information defined by the DT - tracks that relative movement. That is, the DT may be moved so that appropriate content from the DT tracks corresponding features of the moving object. In some implementations, the DT may be moved continuously with the object in the stored video by adjusting the associations between data representing the object in an image frame and data representing the same parts of the object in the DT. For example, if a part of the object moves to coordinate XY in an image frame of video, the AR system may adjust the association between the DT and the image to reflect that data representing the moved part in the DT is also associated with coordinate XY. In some implementations, movement of the object can be used to predict its future location in a series of images - e.g., in frame-by-frame video - and the associations between DT data and image data may be adjusted to maintain correspondence between parts of the object in the image and their counterparts in the DT. Take arm 113 of Fig. 3 as an example. In this example, movement of the camera may result in relative motion of arm 113 in the image frame. Movement in one direction may be a factor in determining future movement of the object in that same direction. The system may therefore predict how to adjust the associations for future movement in the video based on the prior movement.
In some implementations, a 3D graphical model representing the object and stored as part of the DT is mapped to each image, e.g., in a video sequence, and information representing the mappings is stored (907) in computer memory. For example, in some implementations, information mapping the 3D graphical model, as described herein, is stored for each image, and that information is retrievable and usable to generate AR content for the image at any appropriate time. In an example, the video and mapping information may be stored at an initial time, and the video and mappings may be used at any point following the initial time to generate AR content using the video and information from the DT resulting from the mapping. In some implementations, the location, including the position and orientation, of the image capture device may be stored for each image. For stored video, mapping may be performed dynamically using the stored location. For example, as an image is retrieved from storage, stored location information for the image capture device is also retrieved. That stored location information is used, together with any other appropriate information, to map a 3D graphical model of the object from the object's DT to the image in the manner described herein. Each time the image changes, as is the case for video, that mapping process may be performed or updated.
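The per-frame storage that supports this dynamic mapping might, for example, look like the following Python sketch, in which the frame names and camera poses are hypothetical placeholders.

    # Hypothetical per-frame records: the stored image plus the capture device's pose.
    stored_frames = {
        0: {"image": "frame_000.png",
            "camera_pose": {"position": (0.0, 1.5, 4.0), "yaw_deg": 10}},
        1: {"image": "frame_001.png",
            "camera_pose": {"position": (0.2, 1.5, 3.9), "yaw_deg": 12}},
    }

    def map_dt_to_frame(frame_index):
        """Retrieve the stored pose so the DT-to-image mapping can be recomputed."""
        frame = stored_frames[frame_index]
        # A full system would re-project the 3D model using this stored pose;
        # here we simply return the inputs that dynamic mapping would consume.
        return frame["image"], frame["camera_pose"]

    print(map_dt_to_frame(1))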
In this regard, the mapping of the DT to the object associates attributes in the DT with the object. This applies not only to the object as a whole, but rather to any parts of the object for which the DT contains information. Included within the information about the object is information about whether individual parts of the object are selectable individually or as a group. In some implementations, to be selectable, a part may be separately defined within the DT and information for the part, including 3D graphics, may be separately retrievable in response to an input, such as user or programmatic selection. In some implementations, selectability of a part may be based on one or more additional or other criteria.
In some implementations, a user interface may be generated to configure information in the DT to indicate which of the parts are selectable and which of the parts are selectable individually or as a group. In this regard, in some
implementations, a DT may be generated at the time that the PT (object) is created. For example, the AR system may obtain, via a user interface, information indicating that an object having a given configuration and a given serial number has been manufactured. In response to appropriate instructions, the AR system may create, or tag, a DT for the object based on information such as that described herein.
Operational information about the instance of the object may not be available prior to its use; however, that information can be incorporated into the DT as the information is obtained. For example, sensors on the (actual, real-world) object may be a source of operational information that can be relayed to the DT as that information is obtained. A user may also specify in the DT, through the user interface, which parts of the object are selectable, either individually or as a group. This specification may be implemented by storing appropriate data, such as a tag or other identifier(s), in association with data representing the part.

Referring back to Fig. 9, following storage, process 900 receives (908) data representing an action to be performed with respect to the stored video. In response to this received data, data is generated (909) for use in rendering content on a display device. In this example, the generated data is based on one or more of: the stored image, the stored location of the image capture device, at least some information from the retrieved DT, or the action to be taken. For example, the action to be performed may include replaying video from a prior point in time, and augmenting that video with 3D graphics at selected points in the video or where otherwise appropriate. In this example, the video may be retrieved and played from the perspective of the image capture device. This perspective is quantified using information identifying the location of the image capture device relative to an object in the video.
In response to an action, such as selecting part of the image as described herein, 3D graphics for the selected part may be retrieved from the object's DT. As noted, in some implementations, mapping information for each frame of video is stored. In this case, that mapping information may be used to correlate the 3D graphics to the corresponding part of the image, and may be used to generate AR content that includes the image and 3D graphics. In some implementations, the location (e.g., position and orientation) of the image capture device may be stored for each image, including for frames of video. Accordingly, in some
implementations, the mapping process may be performed dynamically as each image is retrieved. For example, rather than performing mapping beforehand and storing the mapping information in memory, as each frame of video is played, mapping may be performed. Performing mapping dynamically may have
advantages in cases where an object's DT changes over time. In some
implementations, the mapping may be performed using a combination of stored mapping information and dynamic mapping. For example, parts of an object that do not change may be mapped beforehand and mapping information therefor stored. Other parts of the object that do change, and for which the DT may change over time, may be mapped dynamically.
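The following Python sketch illustrates, under stated assumptions, how stored mapping information and dynamic mapping might be combined: parts flagged as unchanging reuse mappings stored beforehand, while parts whose DT data may change are remapped for each frame. The "static" flag, the map_part placeholder, and the data shapes are hypothetical and are not part of the described processes.

```python
def map_part(frame, part):
    """Placeholder for the per-part mapping step described herein (illustrative only)."""
    return {"part": part.get("name"), "frame": frame}

def map_frame(frame, dt_parts, stored_static_mappings):
    """Combine stored mappings for unchanging parts with dynamic mapping
    for parts whose DT data may change over time."""
    mappings = {}
    for name, part in dt_parts.items():
        if part.get("static", True):
            # Parts that do not change reuse mapping information stored beforehand.
            mappings[name] = stored_static_mappings[name]
        else:
            # Parts whose DT may change over time are remapped dynamically per frame.
            mappings[name] = map_part(frame, part)
    return mappings
```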
In operation 908, any appropriate action may be taken. For example, the data may represent an instruction to play the video, to move to a particular image in the video, to display 3D graphical content for all or part of the video, to identify updated sensor information for parts of an object shown in the video, to access the object's BOM, to access the object's service history, to access the object's operating history, to access the object's current operating conditions, to generate data based on determined sensor values, and so forth. In an example, the data may represent a selection of a point on an image that represents the part of the object. The selection may include a user-initiated selection, a programmatic selection, or any other type of selection. For example, as shown in Fig. 1 , a user may select a point in the image that corresponds to the loader 102 by touching the image at an appropriate point. Data for the resulting selection is sent to the AR system, where that data is identified as representing a selection of a particular object or part on the loader represented in the image. The selection may trigger display of information.
In some implementations, instead of or in addition to the user selecting a point of the image by touching a screen or selecting with a pointer, the user interface showing the object can be augmented with a set of visual crosshairs or a target that can remain stationary, such as in the center, relative to the user interface (not illustrated). The user can select a part of the object by manipulating the crosshairs such that the target points to any point of interest on the object. The process 900 can be configured to continually and/or repeatedly analyze the point in the image under the target to identify any part or parts of the object that correspond to the point under the target. In some implementations, the target can be configured to be movable within the user interface by the user, and/or the process can be configured to analyze a point under the target for detection of a part of the object upon active user input, such as a keyboard or mouse click. In an example, in some implementations, the point selected is identified by the system, and information in the DT relating to an object or part at that point is identified. The user may be prompted, and specify, whether the part, a group of parts, or the entire object is being selected. The information is retrieved from the DT and is output for rendering on a graphical user interface as part of AR content that may contain all or part of the original image. In an example, 3D graphics data for the selected object or part in stored video or other storage imagery may be retrieved and rendered over all or part of the object or part. In an example, text data relating to the selected object or part may be retrieved and rendered proximate to the object or part. For example, the text may specify values of one or more operational parameters (e.g., temperature) or attributes (e.g., capabilities) of the part. In an example, both 3D graphics data and text data relating to the selected object or part may be retrieved and rendered with the object or part.
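By way of illustration only, the sketch below shows one possible way to identify the part under a stationary (or user-moved) target. It assumes a per-frame lookup table from pixel coordinates to part names, which in practice would be derived from the DT-to-image mapping described above; the function and variable names are hypothetical.

```python
def part_under_target(frame_width, frame_height, part_at_pixel, target=None):
    """Return the part (if any) under a stationary or user-moved target."""
    if target is None:
        # Default: crosshairs remain fixed at the center of the user interface.
        target = (frame_width // 2, frame_height // 2)
    return part_at_pixel.get(target)  # None if no part lies under the target

# Example usage with a toy lookup table keyed by (x, y) pixel coordinates.
lookup = {(320, 240): "vent", (100, 80): "arm"}
print(part_under_target(640, 480, lookup))              # -> "vent" (center target)
print(part_under_target(640, 480, lookup, (100, 80)))   # -> "arm" (user-moved target)
```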
In some implementations, the resulting AR content may be used to control the object in the image using previously stored video or imagery. For example, the DT may be associated with the actual real-world object, e.g., through one or more computer networks. A user may interact with the displayed AR content to send data through the network to control or interrogate the object, among other things.
Examples of user interaction with displayed AR content that may be employed herein are described in U.S. Patent Publication No. 2016/0328883 entitled
"Augmented Reality System", which is incorporated herein by reference.
Figs. 11 to 17 show examples of AR content that may be generated using stored imagery or video according to the example processes described herein.
Although examples are presented, any appropriate content may be generated using the example processes described herein.
Fig. 11 shows the results of an action taken with respect to stored image 511 of loader 102. In this example, the action is to display, on image 511 , interior parts 512 of the loader and current sensor reading 513. As shown, the graphic depicting the current sensor reading is located near the part being read (e.g., the vent), and includes an arrow pointing to that part. A graphic such as this, or any other appropriate graphic, may be used to represent any sensor reading. The content augmenting the image is obtained from the loader's DT, as described herein.
Fig. 12 shows the results of an action taken with respect to stored image 511 of loader 102. In this example, the action is to display, on image 511 , the loader and its components in outline form. The content augmenting the image is obtained from the loader's DT, as described herein.
Fig. 13 shows the results of an action taken with respect to stored image 511 of loader 102. In this example, the action is to display, on image 511 , the loader and its components in shadow form together with its interior parts 515 in color. The content augmenting the image is obtained from the loader's DT, as described herein.
Fig. 14 shows the results of an action taken with respect to stored image 511 of loader 102. In this example, the action is to display, on image 511 , the loader and its components in shadow form together with two sensor readings 520, 521 . The content augmenting the image is obtained from the loader's DT, as described herein.
Fig. 15 shows the results of an action taken with respect to stored image 511 of loader 102. In this example, the action is to display, on image 511 , a selected circular region 524 of the loader and components in that circular region in shadow form together with interior components 525 in that circular region in color. As shown, only the interior components within the selected circular region are displayed. The remainder of the image - the part not in the circular region - retains its original characteristics and is not augmented. The content augmenting the image is obtained from the loader's DT, as described herein.
Fig. 16 shows the results of an action taken with respect to stored image 511 of loader 102. In this example, the action is to display, on image 511 , the loader and its components in outline form, together with current sensor reading 527. The content augmenting the image is obtained from the loader's DT, as described herein.
Fig. 17 shows the results of an action taken with respect to stored image 530 of loader 102, which is different from image 511 and may be part of an image sequence containing image 511 , although that is not a requirement. In this example, the action is to display, on image 530, the loader and its components in outline form, together with current sensor reading 531. The content augmenting the image is obtained from the loader's DT, as described herein.
As noted, Fig. 7 shows an example computer/network architecture 400 on which the example AR system and the example processes may be implemented. The AR system and processes, however, are not limited to use with the Fig. 7 architecture, and may be implemented on any appropriate computer architecture and/or network architecture. As noted, Fig. 8 shows an example process 500 for producing AR content from image data and 3D graphics data in the DT. Process 500 may be performed, e.g., on the architecture of Fig. 7.
PROCESSING UNCERTAIN CONTENT IN A COMPUTER GRAPHICS SYSTEM
Also described herein are example processes, which may be performed by a computer graphics system, for recognizing and processing uncertain content. An augmented reality (AR) system is an example of a computer graphics system in which the processes may be used. However, the processes may be used in any appropriate technological context or computer graphics system, and are not limited to use with an AR system or to use with the example AR system described herein.
In this regard, when recognizing and tracking content in images using computer vision processes, a factor that affects these operations is lighting and, in particular, the effects produced when light is cast upon reflective or translucent (e.g., polished or glass) surfaces. The specular highlights that result can create artificial visual artifacts that can confuse content recognition processes into thinking that the processes are seeing some natural feature in an image when, in fact, with a simple and sometimes small change in viewing angle, that feature
moves/changes/disappears because the specular highlights change. Reflections and refractions in transparent materials can cause similar problems, creating artificial features that can result in false positives during recognition and tracking. Likewise, flexible components, such as cables and hoses, introduce areas of uncertainty both due to the tolerances of fitting such items (e.g., how they lie at rest relative to an object) and due to how those components move in position as rigid components of the object also move. The processes described herein use
computer-aided design (CAD) information to identify moving parts of objects that may introduce uncertainty in content recognition results. By providing computer vision (CV) processes information about spatial regions where such uncertainty exists in an image, the CV processes can adapt and choose not to use information from parts of the object that produce such uncertainty, or deemphasize information from those parts of the object.
Accordingly, in some implementations, "uncertain" content includes, but is not limited to, content of an image that does not necessarily represent an object containing that content. For example, an image may include an object having parts that are specular, or reflective. When recognizing an object in the image - e.g., as part of an AR process - the parts of the image that are reflective may be
deemphasized for reasons explained above. Take the case of an object, such as a loader (e.g., Fig. 1 ), that contains a windshield 116 that reflects objects in certain light. When attempting to recognize and to track the loader, the windshield may display reflected content, such as trees or clouds, due to its reflectivity. As a result of this reflected content, the recognition processes may have difficulty deciding that the object is a loader. In other examples, specular highlights, such as the sun on a wet road, can cause glare and other such effects that can not only mask the real features of an object, but can also generate some of the high-contrast features that content recognition processes rely upon for accurate recognition. If these features disappear when the angle of light changes (which is what happens with specular highlights), then recognition processes may be confused about the content of the image.
Other attributes that can impact recognition processes include, but are not limited to, transparency, flexibility, refractivity, translucence, and movability. Consider the preceding loader example. In some light, windows may not reflect, but rather may be transparent. An image, therefore, may show what is on the other side of a window inside the operator cab, such as the seat and controls, which may impact the ability to recognize the loader in the image. In another example, flexible
components, such as wires, hoses, or the like, may be difficult to track during motion, since their shapes and orientations may change between consecutive image frames. In another example, hinged parts that are connected for movement, such as arm 103 of the loader of Fig. 1 , can cause uncertainty because they can block out areas of an image by virtue of their movement. Accordingly, parts of an object that have one or more of the foregoing attributes may also hinder content recognition processes.
The example processes described herein identify parts of an image that constitute uncertain content and, during recognition and tracking processes, place more importance on parts of the object that do not include uncertain content or that include less uncertain content than other parts of the object. In some
implementations, placing more importance may include deemphasizing information from parts of the object that have more than a defined amount of an attribute, such as reflectivity, transparency, or flexibility, and/or emphasizing parts of the object that have less than a defined amount of the attribute. In some examples, deemphasizing a part includes ignoring information about the part. For example, information in the image from the deemphasized part may not be taken into account during recognition and tracking. In some examples, deemphasizing a part includes applying less weight to information representing the part than to information representing other parts of the object, e.g., other parts that do not have, or have less of, an attribute such as transparency, reflectivity, or flexibility. For example, information in the image from the deemphasized part may have a smaller weighting factor applied to it than information from the other parts of the image. In some examples, emphasizing a part includes applying greater weight to information representing parts of the object that do not have, or have less of, an attribute, such as transparency, reflectivity, or flexibility, than to information representing other parts of the object having more of the attribute. For example, information in the image from the emphasized parts may have a larger weighting factor applied to them than information from the other parts.
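The following Python sketch illustrates the emphasis/deemphasis idea in its simplest form. It assumes that each feature carries the attribute levels (on a 0 to 1 scale) of the region in which it was found; the thresholds and weight values are illustrative assumptions, not values specified by the processes described herein.

```python
UNCERTAIN_ATTRS = ("reflectivity", "transparency", "flexibility")

def feature_weight(region_attrs: dict, threshold: float = 0.5) -> float:
    """Return a weight: 0 to ignore, less than 1 to deemphasize, 1 to keep fully."""
    worst = max(region_attrs.get(a, 0.0) for a in UNCERTAIN_ATTRS)
    if worst >= 0.9:
        return 0.0          # ignore highly uncertain regions entirely
    if worst >= threshold:
        return 0.25         # deemphasize moderately uncertain regions
    return 1.0              # give full weight to regions with little uncertain content

# A window may be highly reflective; shiny paint is reflective, but less so.
print(feature_weight({"reflectivity": 0.95}))  # 0.0  (ignored)
print(feature_weight({"reflectivity": 0.6}))   # 0.25 (deemphasized)
print(feature_weight({"reflectivity": 0.2}))   # 1.0  (full weight)
```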
By way of example, a content recognition process (which may, or may not, be part of an object tracking process) may receive an image, and may attempt to recognize an object, such as a loader, in the image. This may include identifying enough parts of the object to associate, with the object, a graphical model identifying features of, and information about, the object. The content recognition process may identify, using information from this model, parts of the object that have uncertain content. In this example, information from those parts is deemphasized relative to other parts of the object that do not have uncertain content or that have uncertain content that is less pronounced (e.g., less of the uncertain content). In the loader example above, a window 116 may be highly reflective and also refractive (at certain angles), whereas shiny paint on front-end 108 may be reflective, but less so than the window. Accordingly, the recognition processes may give greater weight, during recognition, to information (e.g., pixels) representing the loader's front-end than to information representing the window. As a result, the window is deemphasized in the recognition process relative to the front-end (or conversely, the front-end is emphasized over the window). However, in this example, information from both the window and the front-end is considered. In some examples, recognition and tracking processes may give no weight to information representing the window, and base recognition solely on other parts of the object that exhibit less than a threshold amount of reflectivity.
For the purposes of the processes described herein, content, and objects that are part of that content, may include any appropriate structures, matter, features, etc. in an image. In an example, to recognize a scene, water, which may be highly reflective in certain light, may be deemphasized relative to flora in the scene.
Recognition processes include, but are not limited to, initial recognition of an object and tracking motion of that object in a series of frames, such as image frames of video containing that object. Example tracking processes perform recognition on an image-by-image (e.g., frame-by-frame) basis, as described herein.
As noted, the example processes are usable in an AR system, such as the AR system shown in Fig. 7. AR content may be generated by superimposing computer-generated content onto actual graphics, such as an image or video of a real-life object. Any appropriate computer-generated content may be used.
As noted, the DT for an object instance may have numerous uses including, but not limited to, performing content recognition and tracking and generating AR content, as described herein. For example, the example AR system described herein may superimpose computer-generated content that is based on, or that represents, the DT or portions thereof onto an image of an object instance. Example processes performed by the AR system identify an instance of the object, generate AR content for the object using the DT for that object, and use that AR content in various ways to enable access to information about the object.
An example process 1800 that uses the DT to recognize and track objects in images or video is shown in Fig. 18. Example process 1800 may be performed by the AR system described herein using any appropriate hardware.
According to process 1800, an image of an object is captured (1801 ) by an image capture device - a camera in this example. The object may be any
appropriate apparatus, system, structure, entity, or combination of one or more of these that can be captured in an image. An example of an object is loader 102 of Fig. 1 . The camera that captures the image may be a still camera or a video camera. The camera may be part of a mobile computing device, such as a tablet computer or a smartphone, or it may be a stand-alone camera.
Process 1800 identifies (1802) the object instance in the captured image, and retrieves (1803) a DT for the object instance - in the example of Fig. 1 , loader 102. A DT is used in the examples presented herein; however, any appropriate computer-aided design (CAD) or other computer-readable construct may be used in addition to, or instead of, the DT. Any appropriate identifying information may be used to identify the object instance. The identifying information may be obtained from the object itself, from the image of the object, from a database, or from any other appropriate source. For example, the identifying information may be, or include, any combination of unique or semi-unique identifiers, such as a Bluetooth address, a media access control (MAC) address, an Internet Protocol (IP) address, a serial number, a quick response (QR) code or other type of bar code, a subnet address, a subscriber identification module (SIM), or the like. Tags, such as RFIDs, may be used for identification. For objects that do not move, the identifying information may be, or include, global positioning system (GPS) or other coordinates that define the location of the object. For objects that do not include intelligence or other specific identifiers (like bar codes or readable serial numbers), unique features of the object in the image may be used to identify the object instance. For example, a database may store information identifying markings, wear, damage, or other distinctive features of an object instance, together with a unique identifier for the object. The AR system may compare information from the captured image to the stored information or a stored image. Comparison may be performed on a mobile device or on a remote computer. The result of the comparison may identify the object. After the object is identified, the DT for that object may be located (e.g., in memory, a location on a network, or elsewhere) using the obtained object identifier.
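A minimal sketch of locating a DT from an obtained identifier is shown below. The in-memory registry, the identifier strings, and the example URLs are hypothetical placeholders; in practice the identifiers could be any of the serial numbers, addresses, codes, tags, or coordinates listed above, and the DT could reside in memory, on a network, or elsewhere.

```python
dt_registry = {
    "SN:LDR-0042": "https://example.com/dt/loader/0042",          # hypothetical location
    "MAC:00:1B:44:11:3A:B7": "https://example.com/dt/loader/0042",
}

def locate_dt(identifiers):
    """Return the DT location for the first identifier found in the registry."""
    for ident in identifiers:
        if ident in dt_registry:
            return dt_registry[ident]
    return None  # fall back, e.g., to feature-based comparison against stored images

print(locate_dt(["QR:unknown", "SN:LDR-0042"]))
```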
Process 1800 performs (1804) a recognition process on the object. In some implementations, the recognition process includes identifying features, structures, locations, orientations, etc. of the object based on one or more images of the object captured using the camera. In some implementations, the recognition process requires that the camera be within a predefined location relative to the object during image capture. For example, as shown in Fig. 10, in order to perform initial object recognition, in some implementations, the camera needs to be within a predefined field of view (represented by a rectangle 510 defined by intersecting sets of parallel lines) relative to the object when the image is captured. In some implementations, recognition may be performed regardless of where the camera is positioned during image capture.
In some implementations, the recognition process (1804) includes
recognizing features of the object based, e.g., on regions of the object containing pixel transitions. For example, edges (example features) in the object may be recognized based on regions of the object that contain adjacent pixels having greater than a predefined difference. In an example, adjacent pixel regions may be analyzed to determine differences in the luminance and/or chrominance of those pixel regions. Adjacent regions having more than a predefined difference in luminance and/or chrominance may be characterized as edges of the object. In some implementations, the pixels may be analyzed (e.g., averaged) over a region that spans at least a predefined minimum number of pixels. In some examples, the change in pixel characteristics may be averaged or otherwise processed over a distance before a feature of the object is confirmed. In some implementations, dark-light transitions may be identified; sharp edges may be identified; corners may be identified; changes in contrast may be identified; and so forth. In some examples, sets of pixels of the image may be compared to known images in order to identify features. Any appropriate feature identification process may be used.
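By way of illustration only, the sketch below marks edge pixels where an adjacent pixel differs in luminance by more than a predefined amount. The grayscale 2D list representation, the threshold value, and the one-pixel neighborhood are simplifying assumptions made for this sketch; production feature detectors would typically average over regions and consider chrominance as well, as described above.

```python
def detect_edges(luminance, threshold=40):
    """Mark pixels whose right or lower neighbor differs in luminance by more
    than a predefined difference."""
    h, w = len(luminance), len(luminance[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            right = abs(luminance[y][x] - luminance[y][x + 1]) if x + 1 < w else 0
            down = abs(luminance[y][x] - luminance[y + 1][x]) if y + 1 < h else 0
            edges[y][x] = max(right, down) > threshold
    return edges

image = [[10, 10, 200, 200],
         [10, 10, 200, 200],
         [10, 10, 200, 200]]
print(detect_edges(image))  # True at the dark-light transition column
```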
During recognition (1804), process 1800 identifies parts of the DT or model that have one or more attributes, such as reflectivity, transparency, or flexibility, or more than a predefined amount of one or more of these attributes. Because the initial recognition process is performed within a predefined location (e.g., field of view 510) relative to the object, the approximate locations of these parts of the object may be obtained based on the DT for the object. For example, in the case of a loader, a DT for the loader may specify the locations of windows, the reflectivity of its paint, and the locations of any other parts that are likely to adversely impact the recognition process. Because the camera is within the predefined location, the location(s) of these parts may be estimated, and information from those parts may be deemphasized when performing recognition, including identifying object edges. Furthermore, features identified from edges of the object may be correlated to expected locations of the parts of the object that have one or more attributes. For example, an edge may be detected at a location where a window is expected. This edge may represent, for example, the structure of the loader that holds the window. By detecting this edge, the location of the window may be confirmed.
As noted, in some implementations, object recognition includes identifying edges or other distinguishing features of objects based on pixel transitions. In some implementations, the recognition processes may identify those features (e.g., based on pixel transitions) in all parts of an object or image. Features identified in parts of the object that contain uncertain content may be weighted less than features identified in other parts of the object containing no, or less, uncertain content. Any appropriate weighting factor or technique may be used to weight the edges.
In some implementations, the recognition process (1804) also identifies an orientation of the object in the image. The orientation of the object in the image may be determined based on the edges (or other features) of the object identified in the image. For example, the edges may be compared to edges represented by 3D graphics data in the DT. To make the comparison, one or more 3D features from the DT may be projected into two-dimensional (2D) space, and their resulting 2D projections may be compared to one or more features of the object identified in the image. Edges of the object from the image and the 3D graphical model (from the DT) that match are aligned. That is, the 3D graphical model is oriented in 3D coordinate space so that its features align to identified features of the image. In a state of alignment with the object in the image, the 3D graphical model may be at specified angle(s) relative to axes in the 3D coordinate space. These angle(s) define the orientation of the 3D graphical model and, thus, also define the orientation of the object in the image relative to the camera that captured the image. Other appropriate methods of identifying the orientation of the object in the image may also be used, or may be used in conjunction with those described herein. In some implementations, the recognition process compares identified edges or other features of the object to edges or other features defined in the DT for that object. Based on a number and weighting of the matches, the recognition process is able to recognize the object in the image. Process 1800 stores (1805) data representing the features (e.g., edges) of the object in computer memory, and uses the data in subsequent applications, including for generating AR content.
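A minimal sketch of the project-and-align idea is shown below, under strong simplifying assumptions: a pinhole camera with unit focal length, rotation about a single (yaw) axis, known point correspondences, and a brute-force search over candidate angles. None of these assumptions come from the processes described herein; they serve only to make the projection-comparison step concrete.

```python
import math

def project(point3d, yaw):
    """Rotate a 3D model point about the vertical axis, then project it to 2D."""
    x, y, z = point3d
    xr = x * math.cos(yaw) + z * math.sin(yaw)
    zr = -x * math.sin(yaw) + z * math.cos(yaw)
    zr += 5.0                       # push the model in front of the camera
    return (xr / zr, y / zr)        # pinhole projection, focal length 1

def alignment_error(model_points, image_points, yaw):
    """Sum of squared distances between projected model points and the
    corresponding features identified in the image."""
    err = 0.0
    for p3, p2 in zip(model_points, image_points):
        px, py = project(p3, yaw)
        err += (px - p2[0]) ** 2 + (py - p2[1]) ** 2
    return err

def estimate_yaw(model_points, image_points, steps=360):
    """Brute-force search for the yaw angle that best aligns model to image."""
    candidates = (i * 2 * math.pi / steps for i in range(steps))
    return min(candidates, key=lambda a: alignment_error(model_points, image_points, a))
```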
In some implementations, the recognition process (1804) can also be used to perform the identification process (1802). For example, a plurality of different candidate objects, each with a different associated DT, can be compared to the captured image of step 1802. The recognition process (1804) can be applied to the image for each DT, taking into account, for example, the one or more attributes of each DT, in an attempt to recognize and therefore identify one of the candidate objects. The identification, in this case, can be a general object or a specific object instance depending on whether the DT defines a general class of objects or a specific object.
Process 1800 maps (1806) the 3D graphical model defined by the DT to the object in the image. In some implementations, mapping may include associating data from the DT, such as 3D graphics data and text, with recognized parts of the object in the image. In the example of loader 102 of Fig. 1 , data from the DT relating to its arm (covered by graphics 103) may be associated with the arm; data from the DT relating to front-end 108 may be associated with front-end 108; and so forth. In this example, the associating process may include storing pointers or other constructs that relate data from the DT with corresponding pixels in the image of the object. This association may further identify where, in the image, data from the DT is to be rendered when generating AR content. In the example of the loader of Fig. 1 , data from the DT - such as 3D graphics or text - is mapped to the image. The DT comprising the 3D graphics data may be stored in association with the image of the loader, and that association may be used in obtaining information about the loader from the image. The mapping may include associating parts of an object having attributes that result in uncertain content with corresponding information from the DT, and using those associations to track movement of the object between image frames. In the case of a loader, for example, the DT may contain information including, but not limited to, locations of windows, locations of chrome fixtures, the reflectivity of the loader's paint, and any other appropriate information about attributes that may affect the ability of the system to recognize the loader.
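The following Python sketch illustrates, under stated assumptions, the association step: each recognized part of the object in the image is related to the DT data for that part, along with the pixels where that data should be rendered. The data shapes (pixel lists, a dictionary-based DT) and the function name are hypothetical; the "dt_ref" entry stands in for the pointers or other constructs described above.

```python
from typing import Dict, List, Tuple

Pixel = Tuple[int, int]

def associate(recognized_regions: Dict[str, List[Pixel]], dt: dict) -> dict:
    """Relate DT data for each recognized part to the pixels depicting that part,
    recording where in the image DT content should be rendered."""
    associations = {}
    for part, pixels in recognized_regions.items():
        if part in dt["parts"]:
            associations[part] = {
                "pixels": pixels,             # where the part appears in the image
                "dt_ref": dt["parts"][part],  # pointer to the DT data for the part
            }
    return associations
```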
During movement, a location of the camera relative to the object may change as the relative positions between the object and the camera change. For example, the camera may be controlled to capture video of the object moving; the camera may be moved and capture video while the object remains stationary; or both the camera and the object may move while the camera captures video. In this regard, in some implementations, the relative motion between the camera and the object includes the object remaining stationary while the camera moves. In some implementations, the relative motion between the camera and the object includes the object moving while the camera remains stationary. In some implementations, the relative motion between the camera and the object includes both the object and the camera moving. In any case, the relative motion is evidenced by the object occupying, in different (e.g., first and second) images, different locations in the image frame. Multiple images may be captured during the relative motion and, as described below, the same DT may be mapped to (e.g., associated with) the object in each image. In some implementations, the motion of the object may be tracked between image frames in real-time, and the DT may track the object's motion in real-time, thereby allowing for interaction with the object via an image from different perspectives and in real-time. In this regard, in some implementations, real-time may not mean that two actions are simultaneous, but rather may include actions that occur on a continuous basis or track each other in time, taking into account delays associated with processing, data transmission, hardware, and the like. As previously explained, tablet computer 101 may be used to capture the image of loader 102 at a first time, T1. For example, the image may be part of a video stream comprised of frames of images that are captured by walking around the loader. In another example, the image may be part of a video stream comprised of frames of images that are captured while the camera is stationary but the loader moves. Referring also to Fig. 3, in this example, the tablet computer 101 may be used to capture a different image of loader 102 at a second, different time, T2. As is clear from Figs. 1 and 3, the two images were taken from different perspectives.
To track (1808) movement of the object between a first image and a second, subsequent image that follows the first image in time, a recognition process is performed (1808a) to recognize the object in the first image. Recognition of the object in the first image may include identifying the position and orientation of the object in the first image. The recognition process may contain operations included in, or identical to, those performed in recognition process 1804. In this regard, the first image from which movement of the object may be tracked may be the original image upon which initial recognition was based, or it may be an image that follows the original image in a sequence of images (e.g., in frames of video) and that contains the object moved from its original position. If the first image is the original image, recognition process 1808a need not be performed, since recognition has already been performed for the original image in operation 1804. In this regard, tracking may be performed between consecutive images in the sequence or between non-consecutive images in the sequence. For example, if the sequence contains frames A, B following immediately from A, C following immediately from B, and D following immediately from C, tracking may be performed from frame A to frame B, from frame A to frame D, and so forth.
To perform recognition process 1808a, features, such as edges, in the first image, are identified based on a region in the first image that contains pixels having greater than a predefined difference. Using these features, the 3D graphical model is mapped to the object at its new location. As explained above, the DT that represents the instance of the object is retrieved from computer memory. The information includes, among other things, information identifying parts of the object that contain uncertain content. For example, the information may include parts of the object, such as locations of windows, locations of chrome fixtures, the reflectivity of the loader's paint, and any other appropriate information about attributes that may affect the ability of the system to recognize the loader.
Because process 1800 knows the location of the object within the image based on the features already recognized, process 1800 also knows the locations of the parts containing uncertain content. Thus, for each frame of an image that is used in tracking, the process transforms information from the DT into the image coordinate space as described herein, and then uses that information to identify the regions of the image that can be deemed problematic because these regions contain uncertain content, such as specular or flexible items. With these regions identified, the process may remove these points from a pass of the tracking process, or weight them less. Accordingly, as described above, the recognition process deemphasizes information from regions of the image deemed to contain uncertain content, e.g., by weighting that information less in its recognition analysis or by ignoring that information. Regions of the first image deemed not to contain uncertain content, or less than a threshold amount of uncertain content, are weighted more heavily in the recognition analysis.
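By way of illustration only, the sketch below removes or down-weights tracked points that fall inside regions flagged as uncertain. The axis-aligned boxes in image coordinates, the keep_weight parameter, and the function name are assumptions made for this sketch; in practice the uncertain regions would come from transforming DT information into the image coordinate space as described above.

```python
def filter_track_points(points, uncertain_boxes, keep_weight=0.0):
    """Return (point, weight) pairs; points inside an uncertain region are
    dropped (weight 0) or deemphasized, others keep full weight."""
    weighted = []
    for (x, y) in points:
        inside = any(x0 <= x <= x1 and y0 <= y <= y1
                     for (x0, y0, x1, y1) in uncertain_boxes)
        weight = keep_weight if inside else 1.0
        if weight > 0.0:
            weighted.append(((x, y), weight))
    return weighted

# Points falling on the window region (a box) are excluded from the tracking pass.
points = [(10, 10), (50, 60), (200, 120)]
window_box = [(40, 40, 120, 100)]
print(filter_track_points(points, window_box))  # the point at (50, 60) is dropped
```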
Process 1800 tracks movement of the object from a first location in the first image to a second location in a second, different image. The tracking process includes recognizing that the object has moved from the first image to the second image. Recognizing motion of the object includes performing (1808b), for the second image, a recognition process that places more importance on parts of the object that do not include uncertain content or that include less uncertain content than other parts of the object. To perform recognition process 1808b, features, such as edges, in the second image, are identified based on a region in the second image that contains pixels having greater than a predefined difference. Using these features, the 3D graphical model is mapped to the object at its new location. As explained above, the DT that represents the instance of the object is retrieved from computer memory. The information includes, among other things, information identifying parts of the object that contain uncertain content. For example, the information may include parts of the object, such as locations of windows, locations of chrome fixtures, the reflectivity of the loader's paint, and any other appropriate information about attributes that may affect the ability of the system to recognize the loader.
Because process 1800 knows the location of the object within the image based on the features already recognized, process 1800 also knows the locations of these parts. Thus, for each frame of an image that is used in tracking, the process transforms information from the DT into the image coordinate space as described herein, and then uses that information to identify the regions of the image that can be deemed problematic because these regions contain uncertain content, such as specular or flexible items. With these regions identified, the process may remove these points from a pass of the tracking process, or weight them less. Accordingly, as described above, the recognition process deemphasizes information from regions of the image deemed to contain uncertain content, e.g., by weighting that information less in its recognition analysis or by ignoring that information. Regions of the second image deemed not to contain uncertain content, or less than a threshold amount of uncertain content, are weighted more heavily in the recognition analysis.
Referring to Figs. 1 and 3, for example, the loader may move from the position shown in Fig. 1 to the position shown in Fig. 3. Movement of the object between positions may be tracked as described herein. Also, during the movement, the AR system may be configured so that the DT - e.g., 3D graphics data and information defined by the DT - also tracks that relative movement. That is, the DT may be moved so that appropriate content from the DT tracks corresponding features of the moving object. In some implementations, the DT may be moved continuously with the object by adjusting the associations between data representing the object in an image frame and data representing the same parts of the object in the DT. For example, if a part of the object moves to coordinate XY in an image frame of video, the AR system may adjust the association between the DT and the image to reflect that data representing the moved part in the DT is also associated with coordinate XY. Accordingly, during tracking processes, recognition occurs as described herein based on features, such as edges of the object, and the DT, which moves along with the object, is used to identify uncertain content. This uncertain content is deemphasized or ignored in the recognition process.
In some implementations, the prior location of an object in a prior image may also be used to predict a current location of the object. This information, along with features, such as edges, that are weighted based on the amount of uncertain content they contain, may be used to determine the current location of the object, and to recognize the object. Thus, in some examples, movement of the object can predict its future location in a series of images - e.g., in frame-by-frame video - and the associations between DT data and image data may be adjusted to maintain correspondence between parts of the object in the image and their counterparts in the DT. Take arm 113 of Fig. 3 as an example. In this example, movement of the camera may result in relative motion of arm 113 in the image frame. Movement in one direction may be a factor in determining future movement of the object in that same direction, and thus in recognizing a future location of the arm. The system may also predict how to adjust the associations based on the prior movement.
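A minimal sketch of the prediction step, under a constant-velocity assumption, is shown below. The two-point extrapolation is an illustrative simplification; a real tracker would combine such a prediction with the weighted feature matches described above, and the function name is hypothetical.

```python
def predict_next(prev, curr):
    """Extrapolate the next (x, y) image location from the last two locations."""
    return (2 * curr[0] - prev[0], 2 * curr[1] - prev[1])

# The arm moved from (100, 80) to (110, 78); predict where it will appear next.
print(predict_next((100, 80), (110, 78)))  # -> (120, 76)
```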
Process 1800 provides (1809) 3D graphical content for rendering, on a graphical user interface, in association with the recognized object. For example, as the object moves from a first location to a second location, process 1800 also provides appropriate 3D graphical content for rendering relative to the object at the second location, as described herein. For example, the content may overlay the image of the object or otherwise augment the image of the object.
Fig. 19 shows an example process 1900 for treating rigid content differently than flexible content during recognition and tracking. In process 1900, operations 1901 incorporate all or some features of, or are identical to, operations 1801 to 1807 of process 1800. In some implementations, the recognition process (1904) may include recognizing rigid components of the object based on the object's DT. The rigid components include parts of the object that have less than a predefined degree of flexibility. The identification and recognition may be performed in the same manner as described above with respect to process 1800. In some
implementations, the recognition process (1904) may include recognizing flexible or movable parts of the object based on the object's DT. The flexible or movable components include parts of the object that have more than a predefined degree of flexibility or are movable within a range of motion. The recognition may be performed in the same manner as described above.
The DT for the object contains information identifying the rigid components of the object, and identifying the flexible or movable components of the object. In some
implementations, process 1900 tracks (1908) movement of the object primarily by tracking movement of the rigid components individually from first locations in the first image to second locations in a second image. The tracking process includes recognizing that the rigid components have moved from the first image to the second image. Recognizing motion of the rigid components includes performing a recognition process of the type described herein to identify the rigid components based on identified edges and content included in the DT for the object. When tracking (1908) the motion, more importance is placed on movement of the rigid components than on movement of the flexible components. Placing more
importance may include deemphasizing impacts on motion of the flexible parts or connection mechanisms of the object and/or emphasizing impacts on motion of the rigid parts of the object.
To track (1908) movement of the object between a first image and a second, subsequent image that follows the first image in time, a recognition process is performed (1908a) to recognize the object in the first image. Recognition of the object in the first image may include identifying the position and orientation of the rigid components in the first image. The recognition process may contain operations included in, or identical to, those performed in recognition process 1904. In this regard, the first image from which movement of the object may be tracked may be the original image upon which initial recognition was based, or it may be an image that follows the original image in a sequence of images (e.g., in frames of video) and that contains the object moved from its original position. If the first image is the original image, recognition process 1908a need not be performed, since recognition has already been performed for the original image in operation 1904. Furthermore, tracking may be performed between consecutive images in the sequence or between non-consecutive images in the sequence, as described.
To perform recognition process 1908a, features, such as edges, in the rigid components, are identified based on a region in the first image that contains pixels having greater than a predefined difference. Using these features, the 3D graphical model is mapped to the object at its new location. For example, constituents of the DT representing the rigid components may be mapped to locations of the rigid components. As explained above, the DT that represents the instance of the object is retrieved from computer memory. The information includes, among other things, information identifying parts of the object that contain uncertain content. For example, the information may include parts of the object, such as flexible
connections (e.g., wires or hoses).
Because process 1908a knows the location of the rigid components within the image based on the features already recognized, process 1908a can predict the locations of any flexible or connector components based on information from the DT. Thus, for each frame of an image that is used in tracking, the process transforms information from the DT into the image coordinate space as described herein, and then uses that information to identify the regions of the image that can be deemed problematic because these regions contain uncertain content. Accordingly, as described above, the recognition process deemphasizes information from regions of the image deemed to contain uncertain content, e.g., by weighting that information less in its recognition analysis or by ignoring that information. Regions of the first image deemed not to contain uncertain content, or less than a threshold amount of uncertain content, are weighted more heavily in the recognition analysis.
Process 1900 tracks movement of the object from a first location in the image to a second location in a second, different image. The tracking process includes recognizing that the object has moved from the first image to the second image. Recognizing motion of the object includes performing (1908b), for the second image, a recognition process that places more importance on parts of the object that do not include uncertain content (e.g., rigid components) or that include less uncertain content than other parts of the object. To perform recognition process 1908b, features, such as edges, in the second image, are identified based on a region in the second image that contains pixels having greater than a predefined difference.
Using these features, the 3D graphical model is mapped to the object at its new location. As explained above, the DT that represents the instance of the object is retrieved from computer memory. The information includes, among other things, information identifying parts of the object that contain uncertain content. For example, the information may include parts of the object, such as flexible
components.
Because process 1908b knows the location of the object within the image based on the features already recognized, process 1908b also knows the locations of these parts. Thus, for each frame of an image that is used in tracking, the process transforms information from the DT into the image coordinate space as described herein, and then uses that information to identify the regions of the image that can be deemed problematic because these regions contain uncertain content, such as flexible components. With these regions identified, the process may remove these points from a pass of the tracking process, or weight them less.
Accordingly, as described above, the recognition process deemphasizes information from regions of the image deemed to contain uncertain content, e.g., by weighting that information less in its recognition analysis or by ignoring that information. Regions of the second image deemed not to contain uncertain content, or less than a threshold amount of uncertain content, are weighted more heavily in the
recognition analysis.
In some implementations involving components that are movable to mask or hide other parts of an object, a region of uncertainty may be defined on the area of the object that may be at least partially obscured as a result of motion. Such area(s) may be treated as containing uncertain content, which may be deemphasized during recognition as described herein.
As noted, Fig. 7 shows an example computer/network architecture 400 on which the example AR system and the example processes may be implemented. The AR system and processes, however, are not limited to use with the Fig. 7 architecture, and may be implemented on any appropriate computer architecture and/or network architecture. As noted, Fig. 8 shows an example process 500 for producing AR content from image data and 3D graphics data in the DT. Process 500 may be performed, e.g., on the architecture of Fig. 7.
COMPUTER-BASED IMPLEMENTATIONS
Computing systems that may be used to implement all or part of the front-end and/or back-end of the AR system may include various forms of digital computers. Examples of digital computers include, but are not limited to, laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, smart televisions and other appropriate computers. Mobile devices may be used to implement all or part of the front-end and/or back-end of the AR system. Mobile devices include, but are not limited to, tablet computing devices, personal digital assistants, cellular telephones, smartphones, digital cameras, digital glasses and other portable computing devices. The computing devices described herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the technology. All or part of the processes described herein and their various modifications (hereinafter referred to as "the processes") can be implemented, at least in part, via a computer program product, e.g., a computer program tangibly embodied in one or more information carriers, e.g., in one or more tangible machine-readable storage media, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers.
A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, part, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a network.
Actions associated with implementing the processes can be performed by one or more programmable processors executing one or more computer programs to perform the functions of the processes. All or part of the processes can be implemented as special purpose logic circuitry, e.g., an FPGA (field
programmable gate array) and/or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only storage area or a random access storage area or both. Elements of a computer (including a server) include one or more processors for executing instructions and one or more storage area devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from, or transfer data to, or both, one or more machine-readable storage media, such as mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. Non-transitory machine-readable storage media suitable for embodying computer program instructions and data include all forms of non-volatile storage area, including, by way of example, semiconductor storage area devices, e.g., EPROM, EEPROM, and flash storage area devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
Each computing device, such as a tablet computer, may include a hard drive for storing data and computer programs, and a processing device (e.g., a
microprocessor) and memory (e.g., RAM) for executing computer programs. Each computing device may include an image capture device, such as a still camera or video camera. The image capture device may be built-in or simply accessible to the computing device.
Each computing device may include a graphics system, including a display screen. A display screen, such as an LCD or a CRT (Cathode Ray Tube), displays, to a user, images that are generated by the graphics system of the computing device. As is well known, display on a computer display (e.g., a monitor) physically transforms the computer display. For example, if the computer display is LCD-based, the orientation of liquid crystals can be changed by the application of biasing voltages in a physical transformation that is visually apparent to the user. As another example, if the computer display is a CRT, the state of a fluorescent screen can be changed by the impact of electrons in a physical transformation that is also visually apparent. Each display screen may be touch-sensitive, allowing a user to enter information onto the display screen via a virtual keyboard. On some computing devices, such as a desktop or smartphone, a physical QWERTY keyboard and scroll wheel may be provided for entering information onto the display screen. Each computing device, and computer programs executed thereon, may also be configured to accept voice commands, and to perform functions in response to such commands. For example, the example processes described herein may be initiated at a client, to the extent possible, via voice commands.
Elements of different implementations described herein may be combined to form other implementations not specifically set forth above. Elements may be left out of the processes, computer programs, user interfaces, etc. described herein without adversely affecting their operation or the operation of the system in general. Furthermore, various separate elements may be combined into one or more individual elements to perform the functions described herein.
Other implementations not specifically described herein are also within the scope of the following claims.
What is claimed is:

Claims

1 . A method performed by a computing system, comprising:
obtaining an image of an object captured by a device during relative motion between the object and the device;
determining a location of the device relative to the object during image capture based on one or more attributes of the object in the image;
mapping a three-dimensional (3D) graphical model representing the object to the object in the image based, at least in part, on the location of the device, and the 3D graphical model comprising information about the object;
receiving a selection of a part of the object; and
outputting, for rendering on a user interface, at least some information from the 3D graphical model based on the part selected.
2. The method of claim 1 , wherein determining the location comprises:
obtaining a first size of the object shown in the image, the first size being among the one or more attributes;
obtaining a second size of the object from the 3D graphical model; and comparing the first size to the second size to determine a distance between the device and the object, the distance being part of the location.
3. The method of claim 1 , wherein determining the location comprises:
identifying a feature of the object shown in the image, the feature being among the one or more attributes; and
determining an orientation of the object relative to the device based on the feature and based on the information about the object in the 3D graphical model, the orientation being part of the location.
4. The method of claim 1 , wherein determining the location of the device comprises accounting for a difference between a position of a camera on the device used to capture the image and a predefined reference point on the device.
5. The method of claim 1 , wherein determining the location of the device comprises updating the location of the device as relative positions between the object and the device change; and
wherein mapping the 3D graphical model to the object in the image is performed for updated locations of the device.
6. The method of claim 1 , wherein mapping the 3D graphical model to the object in the image comprises associating parts of the 3D graphical model to corresponding parts of the object shown in the image, with a remainder of the 3D graphical model representing parts of the object not shown in the image being positioned relative to the parts of the 3D graphical model overlaid on the parts of the object shown in the image.
7. The method of claim 1 , further comprising:
identifying the at least some information based on the part selected, wherein the at least some information comprises information about the part selected.
8. The method of claim 1 , further comprising:
identifying at least some information based on the part selected, wherein the at least some information comprises information about parts internal to the object relative to the part selected.
9. The method of claim 1 , wherein receiving the selection comprises:
receiving a selection of a point on the image, the point corresponding to the part as displayed in the image; and
mapping the selected point to the 3D graphical model.
10. The method of claim 9, wherein mapping the selected point comprises: tracing a ray through the 3D graphical model based on a mapping of the 3D graphical model to the image and based on the location of the device relative to the object; and
identifying an intersection between the ray and the part.
11. The method of claim 10, further comprising obtaining at least some information about one or more parts of the object that intersect the ray.
12. The method of claim 11, wherein the at least some information comprises data representing the one or more parts graphically, the data enabling rendering of the one or more parts relative to the object.
13. The method of claim 11, wherein the at least some information comprises data representing one or more parameters relating to the one or more parts, the data enabling rendering of the one or more parameters relative to the object.
14. The method of claim 1, wherein the information about the object in the 3D graphical model comprises information about parts of the object, the information about the parts indicating which of the parts are selectable and indicating which of the parts are selectable individually or as a group.
15. The method of claim 14, further comprising:
enabling configuration, through a user interface, of the information about the parts indicating which of the parts are selectable and indicating which of the parts are selectable individually or as a group.
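One plausible shape for the selectability metadata of claims 14 and 15 is a small per-part table that a user interface could edit; the schema and part names below are assumptions made for illustration.

```python
# Illustrative sketch: per-part metadata recording whether a part is selectable
# and whether it is selected individually or as part of a group.
selectability = {
    "housing": {"selectable": True,  "group": None},
    "bolt_1":  {"selectable": True,  "group": "fasteners"},
    "bolt_2":  {"selectable": True,  "group": "fasteners"},
    "gasket":  {"selectable": False, "group": None},
}

def resolve_selection(part: str) -> list[str]:
    """Return the parts that a selection of `part` should resolve to."""
    meta = selectability.get(part)
    if meta is None or not meta["selectable"]:
        return []
    if meta["group"]:
        return [p for p, m in selectability.items() if m["group"] == meta["group"]]
    return [part]

print(resolve_selection("bolt_1"))   # the whole "fasteners" group
print(resolve_selection("gasket"))   # not selectable -> []
```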
16. The method of claim 1, further comprising:
based on the selection, drawing a color graphic version of the part into a buffer; and using the color graphic version to identify the part.
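Claim 16 echoes the familiar color-ID picking technique: each selectable part is drawn into an offscreen buffer in a unique flat color, and the color found under the selected pixel names the part. In the sketch below a numpy array stands in for that buffer and the part names are placeholders.

```python
# Illustrative sketch of color-ID picking: parts are "drawn" into an offscreen
# buffer as unique 24-bit colors; reading the pixel under the selection
# identifies the part (0 means background).
import numpy as np

parts = ["housing", "valve", "pump"]
id_to_color = {i + 1: ((i + 1) & 0xFF, ((i + 1) >> 8) & 0xFF, ((i + 1) >> 16) & 0xFF)
               for i in range(len(parts))}

buffer = np.zeros((480, 640, 3), dtype=np.uint8)   # offscreen pick buffer
buffer[100:300, 200:400] = id_to_color[1]          # housing's footprint
buffer[150:250, 250:350] = id_to_color[2]          # valve drawn on top of it

def pick(x, y):
    r, g, b = (int(c) for c in buffer[y, x])
    part_id = r | (g << 8) | (b << 16)
    return parts[part_id - 1] if part_id else None

print(pick(300, 200))   # falls on the valve
print(pick(210, 110))   # housing only
print(pick(10, 10))     # background -> None
```

Because every part receives its own color, the lookup stays exact even where parts overlap on screen.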
17. The method of claim 1, wherein the at least some information comprises computer graphics that is at least partially transparent.
18. The method of claim 1, wherein receiving the selection comprises:
outputting data representing a menu containing the part; and
receiving the selection based on selection of the part in the menu.
19. The method of claim 1, wherein receiving the selection comprises:
outputting data representing computer graphics showing the part; and receiving the selection based on selection of the computer graphics.
20. One or more non-transitory machine-readable storage media storing instructions that are executable by one or more processing devices to perform operations comprising:
obtaining an image of an object captured by a device during relative motion between the object and the device;
determining a location of the device relative to the object during image capture based on one or more attributes of the object in the image;
mapping a three-dimensional (3D) graphical model representing the object to the object in the image based, at least in part, on the location of the device, and the 3D graphical model comprising information about the object;
receiving a selection of a part of the object; and
outputting, for rendering on a user interface, at least some information from the 3D graphical model based on the part selected.
21. The one or more non-transitory machine-readable storage media of claim 20, wherein determining the location comprises: obtaining a first size of the object shown in the image, the first size being among the one or more attributes;
obtaining a second size of the object from the 3D graphical model; and comparing the first size to the second size to determine a distance between the device and the object, the distance being part of the location.
22. The one or more non-transitory machine-readable storage media of claim 20, wherein determining the location comprises:
identifying a feature of the object shown in the image, the feature being among the one or more attributes; and
determining an orientation of the object relative to the device based on the feature and based on the information about the object in the 3D graphical model, the orientation being part of the location.
23. The one or more non-transitory machine-readable storage media of claim 20, wherein determining the location of the device comprises accounting for a difference between a position of a camera on the device used to capture the image and a predefined reference point on the device.
24. The one or more non-transitory machine-readable storage media of claim 20, wherein determining the location of the device comprises updating the location of the device as relative positions between the object and the device change; and
wherein mapping the 3D graphical model to the object in the image is performed for updated locations of the device.
25. The one or more non-transitory machine-readable storage media of claim 20, wherein mapping the 3D graphical model to the object in the image comprises associating parts of the 3D graphical model to corresponding parts of the object shown in the image, with a remainder of the 3D graphical model representing parts of the object not shown in the image being positioned relative to the parts of the 3D graphical model overlaid on the parts of the object shown in the image.
26. The one or more non-transitory machine-readable storage media of claim 20, wherein the operations comprise:
identifying the at least some information based on the part selected, wherein the at least some information comprises information about the part selected.
27. The one or more non-transitory machine-readable storage media of claim 20, wherein the operations comprise:
identifying the at least some information based on the part selected, wherein the at least some information comprises information about parts internal to the object relative to the part selected.
28. The one or more non-transitory machine-readable storage media of claim 27, wherein receiving the selection comprises:
receiving a selection of a point on the image, the point corresponding to the part as displayed in the image; and
mapping the selected point to the 3D graphical model.
29. The one or more non-transitory machine-readable storage media of claim 28, wherein mapping the selected point comprises:
tracing a ray through the 3D graphical model based on a mapping of the 3D graphical model to the image and based on the location of the device relative to the object; and
identifying an intersection between the ray and the part.
30. The one or more non-transitory machine-readable storage media of claim 29, wherein the operations comprise obtaining at least some information about one or more parts of the object that intersect the ray.
31. The one or more non-transitory machine-readable storage media of claim 29, wherein the at least some information comprises data representing the one or more parts graphically, the data enabling rendering of the one or more parts relative to the object.
32. The one or more non-transitory machine-readable storage media of claim 29, wherein the at least some information comprises data representing one or more parameters relating to the one or more parts, the data enabling rendering of the one or more parameters relative to the object.
33. The one or more non-transitory machine-readable storage media of claim 20, wherein the operations comprise:
based on the selection, identifying the part based on one or more attributes of a pixel in the image that correspond to the selection.
34. The one or more non-transitory machine-readable storage media of claim 20, wherein the information about the object in the 3D graphical model comprises information about parts of the object, the information about the parts indicating which of the parts are selectable and indicating which of the parts are selectable individually or as a group.
35. The one or more non-transitory machine-readable storage media of claim 34, wherein the operations comprise: enabling configuration, through a user interface, of the information about the parts indicating which of the parts are selectable and indicating which of the parts are selectable individually or as a group.
36. The one or more non-transitory machine-readable storage media of claim 20, wherein the operations comprise:
based on the selection, drawing a color graphic version of the part into a buffer; and
using the color graphic version to identify the part.
37. The one or more non-transitory machine-readable storage media of claim 20, wherein the at least some information comprises computer graphics that is at least partially transparent.
38. The one or more non-transitory machine-readable storage media of claim 20, wherein receiving the selection comprises:
outputting data representing a menu containing the part; and
receiving the selection based on selection of the part in the menu.
39. The one or more non-transitory machine-readable storage media of claim 20, wherein receiving the selection comprises:
outputting data representing computer graphics showing the part; and receiving the selection based on selection of the computer graphics.
40. A system comprising:
one or more non-transitory machine-readable storage media storing instructions that are executable; and
one or more processing devices to execute the instructions to perform operations comprising: obtaining an image of an object captured by a device during relative motion between the object and the device;
determining a location of the device relative to the object during image capture based on one or more attributes of the object in the image;
mapping a three-dimensional (3D) graphical model representing the object to the object in the image based, at least in part, on the location of the device, and the 3D graphical model comprising information about the object; receiving a selection of a part of the object; and
outputting, for rendering on a user interface, at least some information from the 3D graphical model based on the part selected.
41. A method performed by a computing system, comprising:
obtaining an image of an object captured by a device during relative motion between the object and the device;
determining a location of the device relative to the object during image capture based on one or more attributes of the object in the image;
storing, in computer memory, the image of the object and the location of the device during image capture;
mapping a three-dimensional (3D) graphical model representing the object to the object in the image based, at least in part, on the location of the device, the 3D graphical model comprising information about the object;
receiving, at a time subsequent to capture of the image, first data representing an action to be performed for the object in the image; and
in response to the first data, generating second data for use in rendering content on a display device, the second data being based on the image stored, the location of the device stored, and at least some of the information.
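Claims 41 through 60 hinge on keeping the captured image and the device's location together so that augmentation can be generated after the fact. The sketch below shows one way such a record could be kept and replayed; the field names, intrinsics, and marker drawing are assumptions, not the application's API.

```python
# Illustrative sketch: each frame is stored with the device pose at capture
# time, so content can be rendered later from that same perspective.
from dataclasses import dataclass, field
import time
import numpy as np

K = np.array([[1000.0, 0.0, 320.0],      # assumed camera intrinsics
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])

@dataclass
class CapturedFrame:
    image: np.ndarray        # pixels as captured by the device
    pose: np.ndarray         # 4x4 object-to-camera transform at capture time
    captured_at: float = field(default_factory=time.time)

store: list[CapturedFrame] = []

def render_delayed(frame: CapturedFrame, point_on_object: np.ndarray) -> np.ndarray:
    """Overlay a marker at a 3D point on the object, using the stored pose so
    the augmentation lands where it would have at capture time."""
    p_cam = frame.pose @ np.append(point_on_object, 1.0)
    u, v, w = K @ p_cam[:3]
    x, y = int(u / w), int(v / w)
    out = frame.image.copy()
    out[max(y - 3, 0):y + 3, max(x - 3, 0):x + 3] = (0, 255, 0)   # green marker
    return out

# Capture now, augment later: the object sat 2 m in front of the camera.
pose = np.eye(4); pose[2, 3] = 2.0
store.append(CapturedFrame(np.zeros((480, 640, 3), dtype=np.uint8), pose))
augmented = render_delayed(store[0], np.array([0.1, 0.0, 0.0]))
print("augmented frame shape:", augmented.shape)
```

An update arriving after capture (as in claims 44 to 46) would simply change what is drawn at the projected point; the stored image and pose are reused unchanged.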
42. The method of claim 41, wherein the second data is based also on the action to be performed for the object in the image.
43. The method of claim 41, wherein the content comprises the image augmented based on the at least some of the information.
44. The method of claim 41, further comprising:
receiving an update to the information; and
storing the update in the 3D graphical model as part of the information;
wherein the content comprises the image augmented based on the update and presented from a perspective of the device that is based on the location.
45. The method of claim 44, wherein the update is received from a sensor associated with the object, the sensor providing the update following capture of the image by the device.
46. The method of claim 44, wherein the update is received in real-time, and the second data is generated in response to receipt of the update.
47. The method of claim 41, wherein the image is a frame of video captured by the device during the relative motion between the object and the device;
wherein the location comprises a position and an orientation of the device relative to the object for each of multiple frames of the video; and
wherein the content comprises the video augmented with at least some of the information and presented from a perspective of the device.
48. The method of claim 41, wherein determining the location comprises: obtaining a first size of the object shown in the image, the first size being among the one or more attributes;
obtaining a second size of the object from the 3D graphical model; and comparing the first size to the second size to determine a distance between the device and the object, the distance being part of the location.
49. The method of claim 41, wherein determining the location comprises: identifying a feature of the object shown in the image, the feature being among the one or more attributes; and
determining an orientation of the object relative to the device based on the feature and based on the information about the object in the 3D graphical model, the orientation being part of the location.
50. The method of claim 41, wherein determining the location of the device comprises updating the location of the device as relative positions between the object and the device change; and
wherein mapping the 3D graphical model to the object in the image is performed for updated locations of the device.
51. The method of claim 41, wherein mapping the 3D graphical model to the object in the image comprises associating parts of the 3D graphical model to corresponding parts of the object shown in the image, with a remainder of the 3D graphical model representing parts of the object not shown in the image being positioned relative to the parts of the 3D graphical model overlaid on the parts of the object shown in the image.
52. The method of claim 41, wherein the at least some information represents components interior to the object.
53. One or more non-transitory machine-readable storage media storing instructions that are executable by one or more processing devices to perform operations comprising: obtaining an image of an object captured by a device during relative motion between the object and the device;
determining a location of the device relative to the object during image capture based on one or more attributes of the object in the image;
storing, in computer memory, the image of the object and the location of the device during image capture;
mapping a three-dimensional (3D) graphical model representing the object to the object in the image based, at least in part, on the location of the device, the 3D graphical model comprising information about the object;
receiving, at a time subsequent to capture of the image, first data representing an action to be performed for the object in the image; and
in response to the first data, generating second data for use in rendering content on a display device, the second data being based on the image stored, the location of the device stored, and at least some of the information.
54. The one or more non-transitory machine-readable storage media of claim 53, wherein determining the location comprises:
obtaining a first size of the object shown in the image, the first size being among the one or more attributes;
obtaining a second size of the object from the 3D graphical model; and comparing the first size to the second size to determine a distance between the device and the object, the distance being part of the location.
55. The one or more non-transitory machine-readable storage media of claim 53, wherein determining the location comprises:
identifying a feature of the object shown in the image, the feature being among the one or more attributes; and determining an orientation of the object relative to the device based on the feature and based on the information about the object in the 3D graphical model, the orientation being part of the location.
56. The one or more non-transitory machine-readable storage media of claim 53, wherein determining the location of the device comprises updating the location of the device as relative positions between the object and the device change; and
wherein mapping the 3D graphical model to the object in the image is performed for updated locations of the device.
57. A system comprising:
one or more non-transitory machine-readable storage media storing instructions that are executable; and
one or more processing devices to execute the instructions to perform operations comprising:
obtaining an image of an object captured by a device during relative motion between the object and the device;
determining a location of the device relative to the object during image capture based on one or more attributes of the object in the image;
storing, in computer memory, the image of the object and the location of the device during image capture;
mapping a three-dimensional (3D) graphical model representing the object to the object in the image based, at least in part, on the location of the device, the 3D graphical model comprising information about the object;
receiving, at a time subsequent to capture of the image, first data representing an action to be performed for the object in the image; and
in response to the first data, generating second data for use in rendering content on a display device, the second data being based on the image stored, the location of the device stored, and at least some of the information.
58. The system of claim 57, wherein determining the location comprises: obtaining a first size of the object shown in the image, the first size being among the one or more attributes;
obtaining a second size of the object from the 3D graphical model; and comparing the first size to the second size to determine a distance between the device and the object, the distance being part of the location.
59. The system of claim 57, wherein determining the location comprises: identifying a feature of the object shown in the image, the feature being among the one or more attributes; and
determining an orientation of the object relative to the device based on the feature and based on the information about the object in the 3D graphical model, the orientation being part of the location.
60. The system of claim 57, wherein determining the location of the device comprises updating the location of the device as relative positions between the object and the device change; and
wherein mapping the 3D graphical model to the object in the image is performed for updated locations of the device.
61. A method performed by one or more processing devices, comprising: obtaining, from computer memory, information from a three-dimensional (3D) graphical model that represents an object;
identifying, based on the information, a first part of the object having an attribute;
performing a recognition process on the object based on features of the object, the recognition process attaching more importance to a second part of the object than to the first part, the second part either not having the attribute or having less of the attribute than the first part; and
providing data for rendering content on a graphical user interface based, at least in part, on recognition of the object performed by the recognition process.
62. The method of claim 61, wherein attaching more importance to the second part of the object comprises ignoring information about the first part of the object during the recognition process.
63. The method of claim 61, wherein attaching more importance to the second part of the object comprises deemphasizing information about the first part of the object during the recognition process.
64. The method of claim 61, further comprising tracking movement of the object from a first location to a second location;
wherein tracking the movement comprises:
identifying, in the first image, a feature in the second part of the object, the feature being identified based on a region in the second image that contains pixels having greater than a predefined difference; and
identifying, in the second image, the feature in the second part of the object, the feature being identified based on the region in the second image that contains the pixels having greater than the predefined difference; and wherein the second location is based on a location of the feature in the second image.
65. The method of claim 64, wherein the feature is a first feature;
wherein the tracking further comprises:
identifying, in the first image, a second feature in the first part of the object, the second feature being identified based on a second region in the second image that contains pixels having greater than a predefined difference; and
identifying, in the second image, the second feature in the first part of the object, the second feature being identified based on the second region in the second image that contains the pixels having greater than the predefined difference;
wherein the second location is based on both the location of the first feature in the second image and the location of the second feature in the second image; and wherein deemphasizing comprises weighting the location of the second feature in the second image less heavily than the location of the first feature in the second image.
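The deemphasis in claims 63 to 65 can be pictured as a weighted combination of tracked feature locations, where features that fall on the problematic first part carry a smaller weight (a weight of zero would correspond to ignoring them, as in claim 62). The feature data and weight below are assumed for illustration.

```python
# Illustrative sketch: features on the deemphasized part still contribute to
# the tracked location of the object, but with a reduced weight.
import numpy as np

features = [
    # (feature location in the second image, lies on the deemphasized part?)
    (np.array([312.0, 204.0]), False),
    (np.array([305.0, 210.0]), False),
    (np.array([340.0, 190.0]), True),    # e.g., found on a reflective cover
]

DEEMPHASIZED_WEIGHT = 0.2   # assumed; 0.0 would ignore the part entirely

weights = np.array([DEEMPHASIZED_WEIGHT if on_part else 1.0
                    for _, on_part in features])
locations = np.stack([loc for loc, _ in features])

second_location = (weights[:, None] * locations).sum(axis=0) / weights.sum()
print("tracked object location in the second image:", second_location)
```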
66. The method of claim 61, wherein the attribute comprises an amount of reflectivity in the first part of the object.
67. The method of claim 61, wherein the attribute comprises an amount of transparency in the first part of the object.
68. The method of claim 61, wherein the attribute comprises an amount of flexibility in the first part of the object.
69. The method of claim 61, wherein the attribute comprises an amount of the first part of the object that is coverable based on motion of one or more other parts of the object.
70. The method of claim 61, wherein the image is captured within a field specified for recognition of the object.
71. A method performed by one or more processing devices, comprising: obtaining, from computer memory, information from a three-dimensional (3D) graphical model that represents an object;
identifying, based on the information, rigid components of the object that are connected by a flexible component of the object;
performing a recognition process on the object based on features of the rigid components, the recognition process attaching more importance to the rigid components than to the flexible components; and
providing data for rendering content on a graphical user interface based, at least in part, on recognition of the object performed by the recognition process.
72. The method of claim 71, further comprising tracking movement of the object from a first location in the first image to a second location in a second image.
73. The method of claim 72, wherein tracking the movement of the object from the first location in a first image to the second location in a second image comprises ignoring the flexible component and not taking into account an impact of the flexible component when tracking the movement.
74. The method of claim 73, wherein tracking movement of the object from the first location in the first image to the second location in the second image comprises deemphasizing an impact of the flexible component when tracking the movement, but not ignoring the impact.
75. The method of claim 72, wherein tracking movement of the object from the first location in the first image to the second location in the second image comprises:
tracking movement of the rigid components individually; and
predicting a location of the flexible component based on locations of the rigid components following movement.
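Claim 75's prediction step can be illustrated by interpolating the flexible component between the attachment points carried by the two tracked rigid components; the poses and offsets below are placeholders.

```python
# Illustrative sketch: the rigid components are tracked individually and the
# flexible component between them is predicted, not tracked, by interpolating
# between its attachment points on each rigid part.
import numpy as np

def predict_flexible_path(pose_a, pose_b, attach_a, attach_b, samples=5):
    """Predict points along the flexible component from the 4x4 poses of the
    rigid components it connects."""
    end_a = (pose_a @ np.append(attach_a, 1.0))[:3]
    end_b = (pose_b @ np.append(attach_b, 1.0))[:3]
    t = np.linspace(0.0, 1.0, samples)[:, None]
    return (1.0 - t) * end_a + t * end_b      # straight-line estimate

pose_a = np.eye(4); pose_a[:3, 3] = [0.0, 0.0, 1.0]    # rigid part A after motion
pose_b = np.eye(4); pose_b[:3, 3] = [0.4, 0.1, 1.2]    # rigid part B after motion
path = predict_flexible_path(pose_a, pose_b,
                             attach_a=np.array([0.05, 0.0, 0.0]),
                             attach_b=np.array([-0.05, 0.0, 0.0]))
print(path.round(3))
```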
76. One or more non-transitory machine-readable storage media storing instructions that are executable by one or more processing devices to perform operations comprising:
obtaining, from computer memory, information from a three-dimensional (3D) graphical model that represents an object;
identifying, based on the information, a first part of the object having an attribute;
performing a recognition process on the object based on features of the object, the recognition process attaching more importance to a second part of the object than to the first part, the second part either not having the attribute or having less of the attribute than the first part; and
providing data for rendering content on a graphical user interface based, at least in part, on recognition of the object performed by the recognition process.
77. The one or more non-transitory machine-readable storage media of claim 76, wherein the operations comprise tracking movement of the object from a first location to a second location; and
wherein tracking the movement comprises:
identifying, in the first image, a feature in the second part of the object, the feature being identified based on a region in the second image that contains pixels having greater than a predefined difference; and
identifying, in the second image, the feature in the second part of the object, the feature being identified based on the region in the second image that contains the pixels having greater than the predefined difference; and wherein the second location is based on a location of the feature in the second image.
78. One or more non-transitory machine-readable storage media storing instructions that are executable by one or more processing devices to perform operations comprising:
obtaining, from computer memory, information from a three-dimensional (3D) graphical model that represents an object;
identifying, based on the information, rigid components of the object that are connected by a flexible component of the object;
performing a recognition process on the object based on features of the rigid components, the recognition process attaching more importance to the rigid components than to the flexible components; and
providing data for rendering content on a graphical user interface based, at least in part, on recognition of the object performed by the recognition process.
79. A system comprising:
one or more non-transitory machine-readable storage media storing instructions that are executable; and
one or more processing devices to execute the instructions to perform operations comprising:
obtaining, from computer memory, information from a three- dimensional (3D) graphical model that represents an object;
identifying, based on the information, a first part of the object having an attribute;
performing a recognition process on the object based on features of the object, the recognition process attaching more importance to a second part of the object than to the first part, the second part either not having the attribute or having less of the attribute than the first part; and
providing data for rendering content on a graphical user interface based, at least in part, on recognition of the object performed by the recognition process.
80. The system of claim 79, wherein the operations comprise tracking movement of the object from a first location to a second location; and
wherein tracking the movement comprises:
identifying, in the first image, a feature in the second part of the object, the feature being identified based on a region in the second image that contains pixels having greater than a predefined difference; and
identifying, in the second image, the feature in the second part of the object, the feature being identified based on the region in the second image that contains the pixels having greater than the predefined difference; and wherein the second location is based on a location of the feature in the second image.
81. A system comprising:
one or more non-transitory machine-readable storage media storing instructions that are executable; and
one or more processing devices to execute the instructions to perform operations comprising:
obtaining, from computer memory, information from a three- dimensional (3D) graphical model that represents an object;
identifying, based on the information, rigid components of the object that are connected by a flexible component of the object;
performing a recognition process on the object based on features of the rigid components, the recognition process attaching more importance to the rigid components than to the flexible components; and
providing data for rendering content on a graphical user interface based, at least in part, on recognition of the object performed by the recognition process.
PCT/US2018/033385 2017-05-19 2018-05-18 Augmented reality system WO2018213702A1 (en)

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
US201762508948P 2017-05-19 2017-05-19
US62/508,948 2017-05-19
US201762509359P 2017-05-22 2017-05-22
US62/509,359 2017-05-22
US15/789,341 2017-10-20
US15/789,329 US11030808B2 (en) 2017-10-20 2017-10-20 Generating time-delayed augmented reality content
US15/789,316 2017-10-20
US15/789,341 US10755480B2 (en) 2017-05-19 2017-10-20 Displaying content in an augmented reality system
US15/789,329 2017-10-20
US15/789,316 US10572716B2 (en) 2017-10-20 2017-10-20 Processing uncertain content in a computer graphics system

Publications (1)

Publication Number Publication Date
WO2018213702A1 true WO2018213702A1 (en) 2018-11-22

Family

ID=64274689

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/033385 WO2018213702A1 (en) 2017-05-19 2018-05-18 Augmented reality system

Country Status (1)

Country Link
WO (1) WO2018213702A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160005211A1 (en) * 2014-07-01 2016-01-07 Qualcomm Incorporated System and method of three-dimensional model generation
WO2016064435A1 (en) * 2014-10-24 2016-04-28 Usens, Inc. System and method for immersive and interactive multimedia generation
US20160328883A1 (en) 2015-05-05 2016-11-10 PTC, Inc. Augmented reality system

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10431005B2 (en) 2015-05-05 2019-10-01 Ptc Inc. Augmented reality system
US10922893B2 (en) 2015-05-05 2021-02-16 Ptc Inc. Augmented reality system
US11461981B2 (en) 2015-05-05 2022-10-04 Ptc Inc. Augmented reality system
US11810260B2 (en) 2015-05-05 2023-11-07 Ptc Inc. Augmented reality system
US10755480B2 (en) 2017-05-19 2020-08-25 Ptc Inc. Displaying content in an augmented reality system
US10572716B2 (en) 2017-10-20 2020-02-25 Ptc Inc. Processing uncertain content in a computer graphics system
US11030808B2 (en) 2017-10-20 2021-06-08 Ptc Inc. Generating time-delayed augmented reality content
US11188739B2 (en) 2017-10-20 2021-11-30 Ptc Inc. Processing uncertain content in a computer graphics system
WO2020190389A1 (en) * 2019-03-19 2020-09-24 Microsoft Technology Licensing, Llc Generation of digital twins of physical environments
CN113673894A (en) * 2021-08-27 2021-11-19 东华大学 Multi-person cooperation AR assembly method and system based on digital twin
CN113673894B (en) * 2021-08-27 2024-02-02 东华大学 Multi-person cooperation AR assembly method and system based on digital twinning
CN115017137A (en) * 2022-06-30 2022-09-06 北京亚控科技发展有限公司 Digital twinning method, device and equipment for personnel full life cycle

Similar Documents

Publication Publication Date Title
US11188739B2 (en) Processing uncertain content in a computer graphics system
US20200388080A1 (en) Displaying content in an augmented reality system
US11238644B2 (en) Image processing method and apparatus, storage medium, and computer device
US11030808B2 (en) Generating time-delayed augmented reality content
WO2018213702A1 (en) Augmented reality system
US11657419B2 (en) Systems and methods for building a virtual representation of a location
US11257233B2 (en) Volumetric depth video recording and playback
US10540812B1 (en) Handling real-world light sources in virtual, augmented, and mixed reality (xR) applications
US20130300740A1 (en) System and Method for Displaying Data Having Spatial Coordinates
Zollmann et al. Image-based ghostings for single layer occlusions in augmented reality
CN104871214A (en) User interface for augmented reality enabled devices
US9535498B2 (en) Transparent display field of view region determination
US11748937B2 (en) Sub-pixel data simulation system
US11682168B1 (en) Method and system for virtual area visualization
US11562545B2 (en) Method and device for providing augmented reality, and computer program
JP7475022B2 (en) Method and device for generating 3D maps of indoor spaces
WO2023102637A1 (en) Interactive visualizations for industrial inspections
Röhlig et al. Visibility widgets for unveiling occluded data in 3d terrain visualization
Eskandari et al. Diminished reality in architectural and environmental design: Literature review of techniques, applications, and challenges
WO2023047653A1 (en) Information processing device and information processing method
US20220343588A1 (en) Method and electronic device for selective magnification in three dimensional rendering systems
Buschmann et al. Challenges and approaches for the visualization of movement trajectories in 3D geovirtual environments
Agrawal Augmented Reality, an Emerging Technology and its View Management Problem.

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18730575

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23.03.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18730575

Country of ref document: EP

Kind code of ref document: A1