NO343601B1 - Method and system for augmented reality assembly guidance - Google Patents

Method and system for augmented reality assembly guidance

Info

Publication number
NO343601B1
Authority
NO
Norway
Prior art keywords
virtual
augmented reality
component
assembly
space
Prior art date
Application number
NO20180179A
Other languages
Norwegian (no)
Other versions
NO20180179A1 (en)
Inventor
Angel Israel Losada Salvador
Conny Gillström
Original Assignee
Kitron Asa
Priority date
Filing date
Publication date
Application filed by Kitron Asa filed Critical Kitron Asa
Priority to NO20180179A priority Critical patent/NO343601B1/en
Priority to PCT/NO2019/050032 priority patent/WO2019151877A1/en
Publication of NO20180179A1 publication Critical patent/NO20180179A1/en
Publication of NO343601B1 publication Critical patent/NO343601B1/en


Classifications

    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/0425 Digitisers characterised by opto-electronic transducing means using a single imaging device, such as a video camera, for tracking the absolute position of one or more objects with respect to an imaged reference surface
    • G06F3/14 Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06T19/006 Mixed reality
    • G05B19/409 Numerical control [NC] characterised by using manual input [MDI] or a control panel
    • G05B2219/32014 Augmented reality assists operator in maintenance, repair, programming, assembly; use of head-mounted display with 2D/3D display and voice feedback, voice and gesture command

Description

METHOD AND SYSTEM FOR AUGMENTED REALITY ASSEMBLY GUIDANCE
TECHNICAL FIELD
[0001] The present invention relates to augmented reality systems for providing instructions to an operator performing a production task such as the assembly of parts. In particular, the invention relates to systems and methods for creating augmented reality instructions that can be used in an augmented reality workstation to provide such instructions.
BACKGROUND
[0002] Augmented reality (AR) is a presentation of a physical environment augmented by additional information generated by a computer system. The physical environment can be viewed directly or indirectly as video on a display, and this view is supplemented by additional information that is viewable and appears as if it is part of or present in the physical environment. The physical environment can be viewed directly through goggles or a window or windshield, where virtual information is added in a head-up display (HUD), or the physical environment is viewed on a video display with the virtual elements added. The video display may be in the form of augmented reality (AR) goggles that are substantially virtual reality (VR) goggles with a forward-looking camera attached, or it may be a separate display. A number of other types of displays are known in the art. The added information may be constructive or destructive. Constructive information is additional information that is added to the view of the physical environment, while destructive information is information that somehow masks or obscures information from the physical environment.
[0003] The visual information may be supplemented by audio, haptic, somatosensory and olfactory information or stimulus.
[0004] Augmented reality (AR) and virtual reality (VR) have been popularized for recreational purposes such as video games, provision of information about locations, buildings or sights to tourists, information about items in museums, virtual presentation of proposed architecture in a real environment, etc. However, development of AR solutions for more practical purposes has accelerated, and one area where AR is expected to find extended use in the years to come is in the field of manufacture and maintenance. In particular, it is believed that AR assisted production and assembly will facilitate training of personnel and increase efficiency, quality and quality control.
[0005] However, most AR workstations that have been developed are designed for specific purposes, both with respect to the physical environment and to the AR instructions themselves. Consequently, it is expensive to configure AR workstations, program them for a specific task or set of tasks, and reconfigure and reprogram them for new tasks. Such operations often require expertise in computer graphics, 3D modeling, presentation design, etc. A person who is skilled in the required task, for example the assembly of a mechanical part from individual components, cannot be expected to be able to share knowledge and skills by programming an AR workstation.
[0006] US patent application with publication number 2017/0316610 describes an assembly instruction system with at least one depth camera, database, processor and display. The database stores multiple images of known objects and an assembly tree. Object images are compared with stored known object images, objects are identified and displayed as augmented reality images with virtual arrows added.
[0007] US patent 8,817,047 describes a portable device with a camera unit configured to capture an image and a display unit configured to display a virtual image. The device detects a marker object from the image and displays the virtual image corresponding to the marker object based on a position of the marker object.
[0008] European patent application published as EP 2642451A2 discloses a system and a method of augmented reality interaction for repositioning a virtual object on an image of a surface.
[0009] European patent application published as EP3239933A2 discloses a method for specifying a position of a virtual object based on a position of a map point that is defined in a first map and indicates three-dimensional coordinates of a feature point.
[0010] EP2945374A2 discloses a method of displaying augmented reality content on a physical surface.
[0011] Consequently there is a need for AR workstations that are more readily programmable and reconfigurable to assist in different tasks.
SUMMARY OF THE DISCLOSURE
[0012] The present invention provides systems and methods that address and alleviate the needs outlined above.
[0013] In a first aspect the invention provides an augmented reality workstation for creating augmented reality guidance for an assembly operation in a manufacturing environment. The augmented reality workstation includes a camera configured to capture a work area of the workstation, a computer system configured to receive a video feed from the camera and to combine the video feed with virtual graphical elements to generate an augmented reality video signal, a display configured to receive the augmented reality video signal from the computer system and display an augmented reality view of the work area, and a user interface.
[0014] The computer system includes an augmented reality management module configured to detect a characteristic feature in the video feed, the characteristic feature being associated with a known position in the work area and a reference position in the virtual 3D space, receive 3D descriptions of assembly components, associate the 3D descriptions of assembly components with respective positions in the virtual 3D space, change the positions in the virtual 3D space based on parameters received from the user interface, and generate the virtual graphical elements based on the 3D descriptions and position them in the video signal based on their respective positions in the virtual 3D space. The computer system also includes an augmented reality instruction file creator configured to save the 3D descriptions of assembly components, positions in the virtual 3D space associated with the 3D descriptions at defined points of an assembly operation, and a position in the virtual 3D space associated with the characteristic feature.
[0015] The characteristic feature may be a 2D marker such as a QR code or some other matrix code, or it may be a contrasting color or shape that is part of the first component.
[0016] In an embodiment, the user interface is configured to receive user input identifying a 3D description of a specific assembly component, user input specifying a change in the position in the virtual 3D space with which the 3D description of a specific assembly component is associated, and user input indicative of a completion of an assembly step.
[0017] In some embodiments, the augmented reality workstation further comprises a 3D model generator configured to generate a negative representation of at least a part of a 3D description and output the negative representation in a format that is usable by a 3D printer.
[0018] In some embodiments the augmented reality workstation comprises an image recognition module configured to detect and identify the characteristic feature and to calculate at least one of a distance and a viewing angle from the camera to the characteristic feature based on at least one of the size and the proportions of the characteristic feature in the video feed.
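To make the geometry behind this calculation concrete, the following is a minimal sketch of how the apparent size and proportions of a square marker in the video feed could yield a distance and a tilt estimate. It assumes a calibrated pinhole camera and a marker of known physical size; the function and parameter names are illustrative only, not taken from the patent.

```python
import math

def estimate_distance_and_angle(pixel_width, pixel_height,
                                marker_size_m=0.05, focal_length_px=1000.0):
    """Rough estimate of marker distance and tilt from its apparent size.

    pixel_width / pixel_height: apparent side lengths of the square marker
    in the video frame; marker_size_m and focal_length_px are assumed
    calibration values that a real system would measure.
    """
    # Apparent size shrinks linearly with distance: size_px = f * size_m / Z
    distance_m = focal_length_px * marker_size_m / pixel_width

    # A square marker seen at an angle is foreshortened along one axis;
    # the ratio of the apparent side lengths approximates cos(tilt).
    ratio = min(pixel_height / pixel_width, 1.0)
    tilt_deg = math.degrees(math.acos(ratio))
    return distance_m, tilt_deg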
[0019] In some embodiments the augmented reality management module is further configured to receive 2D descriptions of assembly components, associate the 2D descriptions with respective positions in the virtual 3D space and change the position in virtual 3D space based on parameters received from the user interface. This may allow an operator to provide pictures or capture pictures of a component with the camera when 3D descriptions are not available. In some embodiments the augmented reality management module is further configured to generate an estimated 3D description based on the 2D description.
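As a rough illustration of the "flat 3D representation" mentioned above, the sketch below computes the four corner vertices of a quad that a renderer could texture with the 2D picture of a component, so that even a component without a proper 3D model gets a placeholder in the virtual 3D space. Dimensions and names are assumptions made for the example.

```python
import numpy as np

def flat_quad_from_image(width_m, height_m, position):
    """Corner vertices of a flat quad centred on `position` and lying in the
    work-surface plane; a renderer can texture it with the component image."""
    hw, hh = width_m / 2.0, height_m / 2.0
    corners = np.array([[-hw, -hh, 0.0], [hw, -hh, 0.0],
                        [hw,  hh, 0.0], [-hw,  hh, 0.0]])
    return corners + np.asarray(position, dtype=float)
```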
[0020] According to another aspect of the invention an augmented reality workstation for providing augmented reality guidance in a manufacturing environment is provided. The augmented reality workstation includes a camera configured to capture a work area of the workstation, a computer system configured to receive a video feed from the camera and to combine the video feed with virtual graphical elements to generate an augmented reality video signal, a display configured to receive the augmented reality video signal from the computer system and display an augmented reality view of the work area, and a user interface.
[0021] The computer system includes an augmented reality management module configured to detect a characteristic feature in the video feed, the characteristic feature being associated with a known position in the work area and a reference position in the virtual 3D space, receive 3D descriptions of assembly components, associate the 3D descriptions of assembly components with a position in the virtual 3D space, and change the position in the virtual 3D space based on parameters received from the user interface, and generate video output constituting a combination of the video feed and graphical representations of the 3D descriptions of assembly components as the augmented reality video signal.
[0022] The computer system also includes an augmented reality instruction file interpreter configured to extract information from an augmented reality instruction file and provide the extracted information to the augmented reality management module, the extracted information including 3D descriptions of assembly components, positions in the virtual 3D space associated with the 3D descriptions at defined points of an assembly operation, and a position in the virtual 3D space associated with the characteristic feature.
[0023] The augmented reality workstation may further comprise a user interface configured to receive user input indicating the completion of a current assembly step and the transition to a next assembly step.
[0024] In some embodiments the augmented reality workstation also includes a log file creator module configured to store at least one of: a time stamp associated with a beginning of an assembly step, a time stamp associated with a completion of an assembly step, a video segment of the work area during performance of an assembly step, a user identity of an operator logged into the system during performance of an assembly step, a product code associated with an assembly component added to an assembly during the performance of an assembly step, a serial number associated with the completed assembly, and a version number associated with an augmented reality instruction file.
[0025] In another aspect of the invention, a method is provided for creating augmented reality guidance for an assembly process in a manufacturing environment. The method, which is performed by a computer system and connected devices, includes providing a video feed of a work area including a characteristic feature and a first component, maintaining a representation of a virtual 3D space in memory, detecting the characteristic feature in the video feed, associating the characteristic feature with a reference position in the virtual 3D space, loading a 3D description of the first component, associating the 3D description of the first component with a first position in the virtual 3D space, and creating a combined video signal from the video feed and the 3D description of the first component, wherein the position of the 3D description of the first component in the combined video signal is determined from its position relative to the reference position in the virtual 3D space. At this point the system may receive input from a user interface representing an adjustment of the position of the 3D description of the first component to a second position in the virtual 3D space corresponding to the position of the first component in the video feed, and update the video signal based on the adjustment. This enables an operator to determine the appropriate position of the 3D description in the virtual 3D space simply by viewing the augmented reality view on a display and making appropriate adjustments.
[0026] After the first component has been positioned, the method includes, for a next component, receiving input from the user interface representing an identification of the next component, loading a 3D description of the next component, associating the 3D description of the next component with a starting position in the virtual 3D space, receiving user input defining a final position in the virtual 3D space, wherein the final position in the virtual 3D space is a position where the next component is attached to the first component, receiving input from a user interface indicating the completion of an assembly step, and saving a description of the assembly step including at least the identification of the next component, the 3D description of the next component, the starting position, and the final position.
[0027] The steps for a next component may be performed for additional next components until receiving input from a user interface indicating the completion of an assembly operation. At that time the method will save a description of the entire assembly operation including at least the saved descriptions of assembly steps, the 3D description of the first component, the first position, and the reference position associated with the characteristic feature in the virtual 3D space. It should be noted that the steps for a next component, or the steps in general, do not have to be performed in the sequence described. For example, all components may be identified and their 3D descriptions loaded before they are handled in individual assembly steps.
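One possible, much-simplified way to capture what paragraphs [0026] and [0027] say must be saved, namely the per-step component identities, 3D descriptions, starting and final positions, plus the first component and the reference position, is sketched below. The dataclasses, field names and JSON serialization are hypothetical choices, not part of the invention.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AssemblyStep:
    component_id: str
    model_file: str          # path to the 3D description of the component
    start_position: list     # [x, y, z] in the virtual 3D space
    final_position: list     # [x, y, z] where it attaches to the assembly

@dataclass
class AssemblyOperation:
    reference_position: list  # virtual-space position tied to the characteristic feature
    first_component_model: str
    first_position: list
    steps: list = field(default_factory=list)  # AssemblyStep entries

def save_instruction_file(operation: AssemblyOperation, path: str) -> None:
    """Serialize the authored operation; asdict() also converts nested steps."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(asdict(operation), f, indent=2)
```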
[0028] In some embodiments the step of loading a 3D representation of a next component includes loading a 2D representation of the component, and converting the 2D representation to a 3D representation. The conversion of a 2D representation to a 3D representation may include generating a flat 3D representation with a position and an orientation in virtual 3D space, or generating a 3D estimate based on the 2D representation.
[0029] In some embodiments the characteristic feature is one of a 2D marker and a contrasting color or shape that is part of the first component.
[0030] According to yet another aspect of the invention, a method performed in a computer system provides augmented reality guidance for an assembly process in a manufacturing environment. The method comprises providing a video feed of a work area including a characteristic feature and a first component, maintaining a representation of a virtual 3D space in memory, detecting the characteristic feature in the video feed, loading an augmented reality instruction file, extracting from the augmented reality instruction file a predefined reference position in the virtual 3D space and associating the characteristic feature with the reference position, extracting a 3D description of a first component and a position relative to the reference position from the augmented reality instruction file and associating the 3D description of the first component with a first position in the virtual 3D space based on the position relative to the characteristic feature, and creating a combined video signal constituting a combination of the video feed and the 3D description of the first component, wherein the position of the 3D description of the first component in the combined video signal is determined from its position relative to the reference position in the virtual 3D space.
[0031] After the first component has been positioned, the method proceeds by, for a next component, extracting a 3D description of the next component from the augmented reality instruction file and positioning the 3D description in a starting position in the virtual 3D space, adding the 3D description of the next component to the combined video signal, extracting from the augmented reality instruction file a predefined final position relative to the reference position in the virtual 3D space, generating an animation illustrating the movement of the next component from the starting position to the final position in the combined video signal by gradually updating the position of the next component in the virtual 3D space, receiving input from a user interface (601) representing the completion of an assembly step, and saving information relating to the completion of the assembly step in a log file.
[0032] The steps for a next component may be repeated until input indicating the completion of an assembly operation is received from a user interface, at which time information relating to the completion of the assembly operation may be saved in the log file. It should, however, be noted that the steps for a next component, or the method steps in general, do not have to be performed in the sequence described. For example, the step of extracting can be performed for all components before the individual assembly steps are performed.
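The animation described in paragraph [0031], gradually updating the next component's position from its starting position to its final position, can be as simple as interpolating between the two points. Linear interpolation is an assumption here; any easing curve would serve equally well.

```python
import numpy as np

def animate_insertion(start, final, n_frames=60):
    """Yield intermediate virtual-space positions so that the next component
    appears to glide from its bin to the point where it attaches."""
    start, final = np.asarray(start, float), np.asarray(final, float)
    for i in range(n_frames + 1):
        t = i / n_frames
        yield (1.0 - t) * start + t * final
```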
[0033] Input from the user interface representing the completion of an assembly step may include the recognition of a 1D or a 2D code.
[0034] Saving information relating to the completion of the assembly step may include storing at least one of: a time stamp associated with a beginning of an assembly step, a time stamp associated with a completion of an assembly step, a video segment of the work area during performance of an assembly step, a user identity of an operator logged into the system during performance of an assembly step, a product code associated with an assembly component added to an assembly during the performance of an assembly step, a serial number associated with the completed assembly, and a version number associated with an augmented reality instruction file.
BRIEF DESCRIPTION OF THE DRAWINGS
[0035] FIG.1 is an illustration of an embodiment of an AR workstation;
[0036] FIG.2 is an example of how AR information can be presented on a display during an assembly operation;
[0037] FIG.3 is a flowchart illustrating how an assembly operation can be guided by an AR workstation in accordance with the invention;
[0038] FIG.4 is a flowchart illustrating how an AR instruction file can be created in accordance with the invention;
[0039] FIG.5 is a block diagram illustrating the modules included in an embodiment of a workstation according to the invention;
[0040] FIG.6A is a block diagram illustrating modules and information flow in a workstation operating in an AR creation mode in accordance with the invention; and
[0041] FIG.6B is a block diagram illustrating modules and information flow in a workstation operating in an AR workstation mode in accordance with the invention.
DETAILED DESCRIPTION
[0042] Reference is first made to FIG.1, which is an illustration of an embodiment of a workstation 100 consistent with the principles of the invention. The workstation 100 includes a table 101 with a work area 102 upon which an operator 103 can perform assembly tasks in accordance with instructions provided by the AR system. The work area 102 includes a number of markers 104, the purpose of which will be described in further detail below. In the center of the work area a main structural component 105 has been positioned. The main structural component 105 can be thought of as the part to which all other components are attached. However, it will be realized that many assemblies do not include one part that can readily be identified as a main component, and in such cases the component positioned in the center of the work area at the beginning of an assembly operation is simply a first component 105.
[0043] The illustrated embodiment further includes a display 106 and a camera 107. The display 106 and the camera 107 are connected to a computer 108, which is configured to receive video from the camera 107 and combine this video with augmented reality information that provides the operator 103 with the required assembly instructions. In some embodiments the computer 108 may be integrated into the display 106, which may be a tablet computer. The display 106 may have a touch screen. However, other forms of user input are possible instead of or in addition to the touch screen, such as, but not limited to, a mouse, a 3D mouse, a graphics tablet, a microphone combined with voice recognition, a trackball, and a keyboard. The exemplary embodiments described herein will concentrate on the use of a display 106 with a touch screen for user input, but all the embodiments described herein can be modified to include different types of user input devices.
[0044] The workstation 100 further includes a number of parts distributed in a number of boxes, trays or bins 109. The parts bins or component bins 109 may be provided with markers similar to the markers 104 provided on the work area 102, as will be described further below.
Also included are a number of tools 110. As will be described in further detail below, the computer 108 may identify different parts or components, their location and also the required tool or tools for a particular task by showing relevant information on the display 106.
Supplementary instructions may be provided in other ways, for example by audio.
[0045] In some embodiments of the invention a fixture or jig 111 may also be provided. This jig 111 may be positioned and shaped such that it ensures the correct positioning of the main structural component relative to the markers 104. A jig may, for example, be manufactured through 3D printing of a negative representation of the volume of the main structural component from a CAD representation of that component.
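As an illustration only (the patent does not prescribe any particular tooling), such a negative shape could be produced from a mesh exported from CAD with an open-source library such as trimesh. The file names and blank dimensions below are made up, and trimesh's boolean operations require an external backend such as Blender, OpenSCAD or manifold3d.

```python
import trimesh

component = trimesh.load("main_component.stl")             # mesh exported from CAD
blank = trimesh.creation.box(extents=(0.30, 0.20, 0.05))   # jig blank, metres
blank.apply_translation(component.bounding_box.centroid)   # centre blank on the part

# Boolean subtraction leaves a cavity matching the component. A real jig
# would also offset the blank so the cavity opens upward; this is simplified.
jig = blank.difference(component)
jig.export("jig_for_3d_printing.stl")
```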
[0046] It will be readily understood by those with skill in the art that the invention is not limited to the specific configuration illustrated in FIG.1. As such, it is consistent with the principles of the invention to arrange the various elements differently, to include additional workstation components such as shelves, drawers, work lights, additional displays, projectors, etc. The workstation may also include additional cameras in order to provide different viewing angles, close up details, to capture gestures performed by the operator 103, etc. In particular, in embodiments where the operator is wearing goggles, the goggles may be provided with one or more cameras that capture the view of the workstation as seen by the operator 103.
[0047] Turning now to FIG.2 an exemplary configuration of information on the display 106 during an assembly operation will be described. The display 106 shows the view of the work area 102 captured by the camera 107. In the center of the work area 102 the main structural component 105 is shown positioned in the jig 111 and surrounded by the markers 104. The display 106 also shows the component bins 109. The component bins 109 are provided with their own markers 204. Various components 205a-205e are provided in the various bins.
[0048] In addition to the real items that are shown in the live video of the work area 102, the display shows a number of additional graphical elements which are virtual in the sense that they are not captured by the camera 107. Instead, they are generated by the computer 108 and combined with the video provided by the camera 107 in order to provide the operator 103 with assembly instructions. These elements may include traditional user input controls such as forward to next task 210, back to previous task 211, audio on/off/repeat 212, pause 213, and reload or repeat step 214. These controls are simply touch elements that can be invoked by the operator 103 by touching the screen of the display 106. However, it is consistent with the principles of the invention to replace or supplement the display 106 with augmented reality goggles, in which case the user control elements 210-214 could be heads-up elements that appear to float in the air somewhere in the operator's field of view and that can be invoked, for example, through hand gestures.
[0049] In a side bar 215 on the left of the display 106 the tools to be used as part of the current operation or task can be shown. In this example, an image of pliers 216 is shown, and the operator should be able to identify this as the appropriate tool 216 to select from the available selection of tools 110 next to the work area 102 for the current assembly step. The tools may be shown as a picture or an illustration, or even as an animated 3D representation, or they may simply be identified by name. A tool may also be presented together with a specific tool ID number or a setting to use with the tool, such as a torque value for a torque screwdriver.
[0050] Three elements in FIG.2 are shown inside the video image as if they were present in or adjacent to the work area 102. An arrow 217 points at the bin 109a that holds the components 205a of the type that is to be attached to the main structural component 105 as part of the current task. A 3D graphical representation 218 of that part is shown superimposed on top of the bin 109a. A corresponding 3D representation 218 is shown inside the work area 102 next to the main structural component 105 and in a position and orientation that corresponds to how the actual component should be positioned and oriented when it is attached to the main structural component 105. This 3D representation may be animated. It may, for example, be shown as moving from the bin 109a to the position shown in the drawing, and from there to a position where it is attached to the main structural component 105. This animation may be repeated, for example at regular intervals or as a result of the operator 103 invoking the repeat control 214.
[0051] When the operator 103 clicks next task 210, this is interpreted by the AR workstation as a confirmation that the current task has been completed. This may then be stored in an event log. The log may include such information as article number, serial number, the ID of the operator logged in, a time stamp and the revision number of the AR workstation application. A different selection of data items is, of course, possible. In some embodiments the AR workstation 100 may even store the actual video sequence of the assembly operation as captured by the camera 107. Such an event log may be entered in a database and used for traceability of the manufacturing process, quality control, analysis of the efficiency of the assembly operation, and identification of bottlenecks and possible improvements.
[0052] Some embodiments of the invention may enable remote access from the workstation 100 to a remote location. This may, for example, provide the operator 103 with the ability to consult a technical specialist or a supervisor or to look up additional information in an online repository of relevant information. Some embodiments may also enable connections to be established from a remote location to the AR workstation 100, for example by a supervisor who wishes to check in on and evaluate the work performed by the operator 103.
[0053] Communication between the AR workstation 100 and the remote location may typically involve audio and video. In some embodiments the operator at the AR workstation may be able to highlight, draw on, or by other means indicate an area of interest to the remote party. If the display 106 is a touch screen the operator 103 may be able to draw directly on the screen using a finger or a stylus. Additional user interface controls may be provided for this purpose. These user interface control elements are not shown in the drawing, but may include elements for establishing a connection, elements for selecting drawing tools, an element for taking a screen shot, on-screen tools for editing an image, etc. Provided that the computer at the remote location has corresponding capabilities, a remote operator may be able to perform the same operations from the remote location.
[0054] In some embodiments, drawings made on the screen or through the use of input devices in the manner described above may be interpreted by the AR application on the computer 108 as 3D virtual objects and given a position and orientation in 3D space. In other words, some embodiments of the invention may allow an operator 103 with the required access rights to draw new components and include them in the assembly instructions. This will be described in further detail below as part of the description of the AR creator mode.
[0055] An AR workstation with the capabilities described above will be suitable for training of new operators, particularly with the remote access that gives easy access to the advice of a mentor. Furthermore, the generic interface can be used for any product assembly, provided that the parts are not too big to fit in the workspace and be captured by the camera 107, not too small to be seen on the display 106 or worked on without special tools, and do not have other special requirements that cannot be met by the illustrated embodiment of an AR workstation, such as protection from heat, extreme cold, or radiation, or use of specialized tools that require a specialized environment. Modifications may, however, be made to the workstation to expand the range of possible operations, or to prepare the workstation for more specialized work.
[0056] The generic nature of the workstation further creates the possibility of cost efficient creation of new assembly instructions, something that will be described in further detail below.
[0057] The AR workstation computer 108 uses markers 104 to determine positioning. The virtual objects described in the instructions have their positions defined relative to these markers, and they will be shown on the display 106 in positions determined relative to the markers 104. The main structural component 105 may be shown as a transparent representation of itself in a position that corresponds to the position the component will have when it is positioned in the jig 111. The transparent representation will enable the operator 103 to verify that the position of the component 105 in the jig 111 is consistent with the representation in virtual 3D space. If the video representation of the physical component and the virtual representation of the same are not perfectly aligned, the operator 103 should adjust the position of the physical main structural component 105 such that it overlaps with the transparent version on the display 106. In embodiments where a jig 111 is made as a negative shape as described above, it may be possible to ensure that the main structural component 105 can only be positioned in one way and that the jig 111 has a fixed position relative to the markers 104, for example by having the markers 104 actually attached to or printed on the jig 111. In this way the main structural component 105 will automatically be in the correct position with respect to the markers 104 when it is placed in the jig 111.
[0058] It should be noted that one marker is sufficient for calibration of the virtual 3D space relative to the physical space, but the provision of additional markers will make the system more robust, e.g. to situations where one or more markers are obscured from view by the operator 103 either by accident or by necessity.
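A sketch of the kind of redundancy referred to here: if the offset of every marker from a common work-area origin is known, any visible subset of markers can be used to recover the reference position, and averaging the per-marker estimates tolerates markers that are momentarily covered. The data layout is an assumption made for the example.

```python
import numpy as np

def reference_origin(detections, layout):
    """Estimate the work-area origin from whichever markers are visible.

    detections: {marker_id: observed_position} for markers seen in this frame
    layout:     {marker_id: known_offset_from_origin}, fixed when the work
                area was prepared (both expressed in the same 3D frame)
    """
    estimates = [np.asarray(pos) - np.asarray(layout[mid])
                 for mid, pos in detections.items() if mid in layout]
    if not estimates:
        raise ValueError("no known marker visible in this frame")
    return np.mean(estimates, axis=0)
```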
[0059] In some embodiments, or for some AR assembly instructions, the main structural component 105 may have a marker attached to it and this marker may be used as a reference. In this case, i.e. if all calibration of the virtual 3D space is done relative to the marker on the component 105, there will be no incorrect position for the main structural component, and all other virtual representations (the augmented part of the reality) will be positioned relative to the position of the video representation of the actual physical main structural component on the display 106. In this case there may be no need for a jig, at least not for purposes of correct positioning of the main structural component 105.
[0060] These two alternatives may, of course, be combined such that there is a primary marker on the main structural component 105 and secondary markers 104 on the work area 102.
[0061] In order to be able to identify the markers 104 the computer 108 may be programmed to have image or pattern recognition capabilities. These capabilities may in some embodiments also be used to recognize markers placed on the bins 109.
[0062] When starting the AR workstation the operator 103 will scan a barcode or 2D matrix code representing the article number of the product to be assembled, or otherwise provide this information to the computer 108. The AR assembly instruction for the product in question will be loaded from a database of assembly instructions. The database may be installed on the computer 108 or it may be accessible from a remote location. The latest revision of these instructions will be loaded, but previous revisions may be stored as well for reference, for example in order to compare with information in an event log. The event log may include the revision number of the AR instructions that were used for that particular assembly.
[0063] The scanning of the bar code or 2D matrix code holding the article and/or serial number of the product may in some embodiments be performed using the camera 107 and image recognition software. In other embodiments, a separate bar code reader or RFID/NFC reader may be provided, or the user may be required to enter the serial number or select the product from a menu.
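Purely as an example of the "camera and image recognition software" option, a frame from the camera 107 could be scanned for a barcode or QR code with off-the-shelf libraries such as OpenCV and pyzbar; the library choice is an assumption and is not part of the patent.

```python
import cv2
from pyzbar.pyzbar import decode

def read_article_number(frame):
    """Return the first barcode or QR code found in a video frame, or None.
    (Data Matrix codes would need a different decoder, e.g. pylibdmtx.)"""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    results = decode(gray)
    return results[0].data.decode("utf-8") if results else None
```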
[0064] During performance of an assembly operation, the completion of every step may be logged as a result of the operator 103 clicking or touching the next task user interface control element 210. Such a log entry may include the article number, serial number, the ID of the operator that is currently logged in, a time stamp, and the revision number of the AR workstation application. The logged data may be transferred to a manufacturing execution system (MES) or some other database for traceability.
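A minimal sketch of what one such log entry could look like before it is handed to an MES or other database; the class and field names are illustrative assumptions rather than definitions from the patent.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class StepLogEntry:
    article_number: str
    serial_number: str
    operator_id: str
    instruction_revision: str
    step_index: int
    completed_at: str           # ISO 8601 timestamp

def log_step(article, serial, operator, revision, step_index, sink):
    """Append a traceability record to `sink` (a list, queue or MES client)."""
    entry = StepLogEntry(article, serial, operator, revision, step_index,
                         datetime.now(timezone.utc).isoformat())
    sink.append(asdict(entry))
```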
[0065] In some embodiments, video of the actual assembly operation as captured by the camera 107 may be included in the log.
[0066] FIG.3 is a flowchart illustrating the performance of an assembly operation using the workstation described above.
[0067] In a first step 301, the prepared AR instructions are loaded. How the AR instruction file is created will be described in further detail below.
[0068] In a next step, the main structural component 105 is positioned in the jig 111, or directly on the work area 102 in embodiments where no jig 111 is provided. In some embodiments, additional features may assist in the correct placement of the main structural component 105, for example a transparent representation of that component on the display 106 with which the video image must be aligned.
[0069] When this has been done, the system is ready to start presentation of assembly steps and will do so upon receipt of user input representing an instruction to start in step 303. This will typically be provided by the operator 103 touching the correct user input control 210. In response, the system will initiate a next step 304 wherein virtual elements that are relevant to the current assembly step are positioned in their correct positions in the representation of 3D space in the memory of the computer 108. The virtual 3D representations of the various components are extracted from the AR instructions and they may all be displayed from the beginning of the assembly operation, e.g. in association with their respective component bins 109, or they may become visible only when they are needed in the appropriate process step. The relationship between real space on the top of the workstation table 101 and the virtual 3D space described in the computer is determined by the position of the markers 104. All virtual elements are positioned in 3D space relative to them. How this is done will be described in further detail below.
[0070] When virtual components have been positioned in the virtual 3D space, their position on the display 106 can be determined such that they can be rendered in an AR representation of the table surface on the display 106 in step 305. This allows the operator 103 to view the work area with virtual components, metadata and other information elements included during performance of the assembly step. With reference to FIG.2 these elements may include the representation of the appropriate tool 216, the arrow 217 pointing at the bin 109a holding the relevant component 205a, and the virtual representation of that component 218. At the same time, or subsequent to the generation of the augmented reality representation on the display, any corresponding animation, audio, and other output of sensory information can be played or otherwise delivered as output in step 306.
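Determining where on the display a virtual element should be drawn amounts to projecting its virtual-space position through the camera model. A sketch using OpenCV, assuming the marker pose (rvec, tvec) and the camera calibration are already available; this is one conventional way to do it, not necessarily the one used in the invention.

```python
import cv2
import numpy as np

def virtual_point_to_pixel(point_marker_frame, rvec, tvec,
                           camera_matrix, dist_coeffs):
    """Map a position expressed relative to the reference marker into pixel
    coordinates on the display image."""
    pts = np.asarray(point_marker_frame, dtype=float).reshape(1, 1, 3)
    pixels, _ = cv2.projectPoints(pts, rvec, tvec, camera_matrix, dist_coeffs)
    return pixels.reshape(2)
```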
[0071] Based on the guidance this provides, the operator 103 may now perform the assembly step. As soon as the step has been performed, the operator may provide user input by again touching user input control element 210. This input is received in step 307, and it may then be determined in step 308 whether this completes the assembly operation, or if additional steps remain. If there are remaining steps, the assembly step that has just been performed may be logged in step 309 and the process may return to step 304 where virtual elements related to the next step are positioned in virtual space relative to the positions of the markers 104, as described above. If, on the other hand, it is determined in step 308 that the final assembly step has been finished and the assembly has been completed, the process proceeds to step 310 where the last step is logged along with a log of the complete assembly operation. The log may include such information as article or product number, serial number, the operator ID of the operator that is currently logged in to the workstation, one or more time stamps (e.g. for start, finish of each step, and finish of the entire assembly operation), and the revision number of the assembly instructions that were used during the assembly. In some embodiments additional information may also be logged, such as video captured by the camera 107, and a log of any interaction with instructors, mentors, or remote systems.
[0072] As mentioned above, an AR workstation according to the invention may be configured for use to create AR assembly instructions. FIG.4 is a flowchart illustrating a process of creating such instructions. In this situation the operator 103 is not a person performing an assembly operation with guidance from the AR workstation 100, but a person entering appropriate information into the AR workstation and designing the final set of instructions that are stored in one or several AR instruction files.
[0073] It should be noted that the flowcharts illustrated herein represent possible embodiments and are intended to illustrate conceptually how the invention may be worked. The flowcharts, and their accompanying descriptions, should therefore not be interpreted as limitations but rather as illustrations. It may, for example, be possible to perform certain steps in a different order, to perform some steps simultaneously, or in other ways modify the flow of information, decisions and activities.
[0074] For the following example it is assumed that the markers 104 are already correctly positioned on the surface of the work area 102, that the correct component bins 109 are placed on the table 101, and that parts lists, bill of work, and data files with graphical representations of the various components 105, 205 are already present on or accessible from the computer 108. The computer should also include a library of tools that can be listed as required in an assembly operation and represented in the side bar 215. Making sure these things are in order may be part of the preparations the operator 103 has to make prior to the actual design of the assembly instructions. These preparations may be seen as an integral part of the entire process, but in order to simplify the drawing and focus on the steps that directly contribute to the construction of the AR assembly instructions, they are not included in the drawing.
[0075] In the embodiment illustrated in FIG.4 the process starts in step 401 when the AR workstation 100 is activated and starts to receive a video feed from the camera 107. The received video images will include the work area 102 and the component bins 109, and therefore also the markers 104, 204. The markers 104 on the work area 102 are interpreted by a 2D image recognition module in the computer 108 and their respective positions in the video image are used as reference points when the 3D representation of the work area in the computer is created. The markers themselves have an orientation, so it is sufficient for the system to identify only one marker to determine position and orientation. The 2D image recognition module may also be configured to determine the size of the markers 104 and any proportional distortion in the markers 104 resulting from the viewing angle, and thus determine the camera's height above the work area 102 and the camera's viewing angle towards the work area 102. The markers 104 can thus be used to define the plane of the work area 102 surface in the virtual 3D space. Everything in the virtual 3D space is positioned relative to these markers and as a height above this surface.
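The marker interpretation described here maps naturally onto standard fiducial-marker libraries. The sketch below uses the classic OpenCV ArUco API (from opencv-contrib-python; newer OpenCV releases expose the same functionality through cv2.aruco.ArucoDetector) and assumes a marker size, dictionary and camera calibration, none of which are specified by the patent.

```python
import cv2

MARKER_SIZE_M = 0.05
DICTIONARY = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def marker_poses(frame, camera_matrix, dist_coeffs):
    """Detect work-area markers and return {id: (rvec, tvec)} per marker.

    tvec gives the marker position in camera coordinates (its z component is
    roughly the camera height when looking straight down at the table), and
    rvec encodes the relative orientation, i.e. the viewing angle.
    """
    corners, ids, _ = cv2.aruco.detectMarkers(frame, DICTIONARY)
    if ids is None:
        return {}
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_SIZE_M, camera_matrix, dist_coeffs)
    return {int(i): (r, t) for i, r, t in zip(ids.flatten(), rvecs, tvecs)}
```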
[0076] It will be understood that the virtual 3D space may already exist as a representation of a space in the computer 108. In this context, creation of the 3D representation of the work area is intended to refer to the specific association of a 3D space representation in a computer with a real physical space and the calibration of that representation through positioning of certain reference points in real space with coordinates in virtual space.
[0077] The markers 204 are also interpreted by the 2D image recognition module. In some embodiments the markers 204 are permanently assigned to specific components such that when the 2D recognition module has recognized a marker the information extracted from that marker can be used to perform a look up in a parts list and retrieve information about the component from that list. In other embodiments the markers 204 are only representative of a position of a bin and the operator 103 will have to associate each marker with a specific part number.
[0078] In a next step 402, the operator 103 positions the main structural component 105 in the jig 111. If the jig 111 is not already a part of the work area 102 the operator 103 will first have to position the jig in the correct position relative to the markers 104. This step means that the computer will start receiving video images with the main structural component 105 in the correct position. In some embodiments the computer is configured to detect the presence of the main structural component based on image recognition. In some embodiments the main structural component includes its own marker (not shown in the example in the drawings). This marker can be used to identify the component, and in some embodiments also to detect the component’s position relative to the markers 104, and thereby also to position it as a representation in virtual 3D space. A marker that is attached to the main structural component 105 may be a marker similar to the other markers 104, 204. However, it is consistent with the principles of the invention to use some other feature with sufficient contrast and shape to be utilized by the image recognition software to fully identify the 3D position and orientation of the product, in which case no additional markers explicitly provided for this purpose will be required.
[0079] In a next step 403 a 3D representation of the main structural component is loaded by the computer 108. This may be performed based on user input from the operator 103. In embodiments where the main structural component 105 is recognized by identification of a 2D marker attached to it or, for example, by shape recognition, the representation may be automatically loaded as soon as the component has been recognized.
[0080] 3D representations and file formats will be discussed in further detail below.
[0081] After the 3D representation of the main structural component 105 has been loaded it is associated with a position and an orientation in the virtual 3D space and represented on the display in that position and with that orientation. The process then continues to step 404 where the operator 103 adjusts the position and orientation of the main structural component in virtual 3D space until it has been aligned with the video image of the physical main structural component 105.
[0082] In a next step 405 the operator loads a data file describing the next component 205 to be added to the assembly. The file describing the next component 205 may be a 3D representation such as a STEP file. However, in some embodiments other file formats may be acceptable, including 2D image file formats such as JPG, PNG, or GIF. Some embodiments may also allow the operator 103 to capture an image of a specific part using the camera 107 and store that image as the data file describing the next component. Some embodiments of the invention may include a 3D scanner that allows the operator 103 to scan a component in order to create the 3D representation. Some embodiments may also be configured to create a 3D approximation based on several 2D images obtained from different angles.
[0083] The representation of the next component 205 may be associated with a position in the virtual 3D space, for example a default position from where it may be moved by the operator 103. This applies to both 2D and 3D representations. 2D representations will be shown as 2D images on the display 106 and it will not be possible to provide the same degree of realism in the presentation of how and where the component should be mounted or attached to the main structural component 105.
[0084] The process of loading this data file may include loading metadata describing the component, including a unique identification in the form of an article number or serial number. Based on this information the computer may be able to determine in step 406 whether that part type has already been associated with a component bin 109. If it has, the process may jump ahead to step 408, which is described below. Otherwise, the process will move to step 407, where the AR workstation will request and receive user input associating the component with an appropriate bin 109. When the computer 108 receives the association with the appropriate bin 109, whether this is done by user input or as a result of metadata, the virtual representation of the component in virtual 3D space may in some embodiments be automatically updated to position the component in or above the appropriate bin 109.
[0085] The process then moves to step 408, where user input is received representing the final position and orientation of the component 205. Based on this the system may, in some embodiments, generate an animation showing a path from the component’s position in (or superimposed on) the appropriate bin 109 and to the component’s final position and orientation when it has been attached or mounted to the main structural component.
[0086] In a next step 409, the operator 103 can input additional metadata. This may be done by loading text files, loading audio files, adding text using a keyboard or by some other text input means (e.g. using the touch screen of display 106 or a digital pen or stylus), adding graphical elements such as the arrow 217 shown in FIG.2, drawing circles or other symbols or figures using the touchscreen of the display 106 or a graphics tablet connected to computer 108. This step may also include adding additional sensory output information such as haptic information or audible information that is not vocal (e.g. the sound of a tool being used).
[0087] It should be noted that since the computer 108 includes a complete 3D description of the work area 102, the jig 111, the main structural component 105 and the next component 205 to be added to the assembly, some embodiments of the invention may replace the entire video feed with a 3D representation, for example at the request of the operator 103. It may then be possible for the operator to view the assembly operation from different angles, at different zoom levels, and at different levels of detail. The operator 103 may add sequences of such representations as a part of the instructions, or request different views during assembly.
[0088] When all information has been added to the current step, the step is saved in step 410 and it is determined in step 411 whether this was the final step and the assembly operation has been completed. This is typically determined as a result of user input received from the operator 103. If the assembly is not completed the process returns to step 405 where data describing the next component to be added is loaded using one of the alternatives described above and the subsequent steps are repeated for that part. When it is finally determined in step 411 that the assembly operation has been completed the process moves to step 412 where the instruction file is saved. This file can now be used on this or another AR workstation 100 to guide assembly operations.
[0089] The file may be one single file holding all relevant information in some convenient file structure. However, the file may also be a collection of several files that may reference each other or information stored externally from the file or files themselves.
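For the single-file variant, the content could be organised along the following lines, shown here as a Python literal purely for illustration; the patent leaves the concrete structure and file format open, and all names and values are invented.

```python
example_instruction_file = {
    "article_number": "EXAMPLE-001",
    "revision": 3,
    "reference_marker": {"id": 7, "position": [0.0, 0.0, 0.0]},
    "first_component": {"model": "models/base.stp",
                        "position": [0.10, 0.05, 0.0]},
    "steps": [
        {
            "component_id": "bracket-left",
            "model": "models/bracket.stp",
            "bin_marker": 12,
            "start_position": [0.40, 0.10, 0.02],
            "final_position": [0.12, 0.05, 0.03],
            "tool": "pliers",
            "audio": "audio/step1.wav",
        },
    ],
}
```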
[0090] It should be noted that the several alternatives described above are to a large extent independent of each other in the sense that they may be freely added to or excluded from different embodiments of the invention irrespective of which other alternatives are included. As such, an embodiment may include all or any combination of input means for component representations, including a 3D file, a 2D image file, a 2D image obtained from the camera 107, a 3D representation from a 3D scanner and a 3D approximation generated from several 2D images. Any combination of these may be freely combined with one or more of the different user input means such as a touch screen, a keyboard, a mouse, a digital pen, a graphics tablet, and gesture input. The extent to which an alternative requires additional hardware or software modules does not limit this possibility of combining various features in many ways, since none of the required hardware or software modules or modifications to such modules would be mutually exclusive.
[0091] It should also be noted that whether a particular part of information should be considered metadata or not is not essential. The AR representation will include a combination of information that is directly obtained from the camera 107 (video of the physical workspace) and information that is augmenting the information obtained from the camera 107.
Conceptually, all this information can be thought of as metadata, or as virtual representations of additional information. The information will in some sense relate to the various components 105, 205 or the assembly operation itself, whether the information is text describing a component, text describing an activity, spoken instructions, sound effects that enhance the understanding of how a step is to be performed, haptic information (e.g. vibrations transmitted through a tool or a chair), symbols shown on the display, or anything else that can be added to the presentation on the display 106.
[0092] An alternative to the overall flow described with reference to FIG.4 is one where a complete assembly is imported in step 403. Instead of importing additional components in step 405, the various components may be distributed to the various bins 209 either by manual user input from the operator 103, for example by drag and drop operations, or they can be distributed automatically based on metadata associated with each component and corresponding metadata associated with each bin 209. Yet another alternative is that all components are imported separately, but prior to any further manipulation or AR creation, and distributed to the bins 209 in accordance with one of the alternatives described above. It will be realized that conceptually these alternatives simply represent alternative sequences of performance of the steps illustrated in FIG.4, and that various embodiments of the invention may implement any one of them or all of them (i.e. components may be imported at any time during AR construction, either prior to or during construction of the AR instructions). A combination of these alternatives is also possible, for example one where a substantially complete assembly is imported, but some additional parts are imported separately; for example, an assembly where all major parts are included, but standard components such as screws, washers and o-rings are not. These standard parts may then be imported separately.
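By way of a non-limiting illustration of the automatic distribution mentioned above, the following sketch shows how imported components could be assigned to bins 209 by matching component metadata against bin metadata. The data structures and metadata keys ("part_class", "bin_id") are assumptions made for this example only and do not reflect a particular file format of the invention.

```python
# Illustrative sketch: distribute components to bins 209 by metadata matching.
def distribute_to_bins(components, bins):
    """Assign each component to the first bin whose metadata matches its part class."""
    assignments = {}
    for comp in components:
        part_class = comp.get("metadata", {}).get("part_class")
        target = next(
            (b for b in bins if b.get("metadata", {}).get("part_class") == part_class),
            None,
        )
        if target is not None:
            assignments[comp["id"]] = target["bin_id"]
        # Components without a matching bin are left for manual drag-and-drop placement.
    return assignments


if __name__ == "__main__":
    comps = [{"id": "screw-M4", "metadata": {"part_class": "fastener"}}]
    bins_ = [{"bin_id": 1, "metadata": {"part_class": "fastener"}}]
    print(distribute_to_bins(comps, bins_))  # {'screw-M4': 1}
```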
[0093] A description of an exemplary embodiment of the various software modules and some hardware components that may be included in an AR workstation 100 according to the invention will now be given with reference to FIG.5. This presentation is intended to facilitate the understanding of the invention by those with skill in the art, so no detailed description of generic computer components will be given.
[0094] FIG.5 gives an overview of modules that may be included in an AR workstation 100 according to the invention. Two main modules are included in the form of an AR Creator module 501 and an AR Workstation module 502. These modules represent the main functionality associated with creation of AR instructions and control of the progress of the AR instructions during an assembly operation. In some embodiments of the invention an AR workstation may include only the modules required to perform assembly operations, i.e. only the AR workstation module 502 and the modules that may be controlled by it, while other embodiments may also include the AR creator module 501 and the additional module or modules that are controlled by the AR creator module 501, but not by the AR workstation module 502. In some embodiments the AR creator module 501 may be present but deactivated. Yet another possibility is that access to the AR creator module 501 depends on the access rights of the operator 103 that is currently logged in.
[0095] Some embodiments of the invention may not have the architecture shown in FIG.5, and in particular, the AR creator module 501 and the AR workstation module 502 do not have to be two separate modules, but may instead be one module that constitutes the main body of software and hardware components configured to receive and handle user input and administer and utilize the remaining modules in accordance with the tasks to be performed. An embodiment may include additional modules, or some of the modules illustrated in FIG.5 may not be present. The following description will be based on an exemplary embodiment where the AR creator module 501 and the AR workstation module 502 are separate modules. For the purposes of generality, the set of functions related to control of the augmented reality environment may be considered an augmented reality management module, which may be implemented as one or more modules. As such, the term augmented reality management module is intended to cover any combination of one or more modules that implement specified functionality.
[0096] The 3D placement and visualization module 503 handles the positioning, orientation and movement of virtual 3D components and objects in the virtual 3D space. This module may receive user input from the AR creator module 501 or the AR workstation module 502 representing movement of a particular virtual object and change the position and orientation of that object accordingly.
[0097] The user management module 504 controls login and determines the access rights of the user.
[0098] The display manager module 505 controls the display 106 and delivers the combined AR video stream to the display 106.
[0099] The camera manager module 506 controls the camera 107. In addition to receiving and forwarding video signals, this module may also control zoom level, panning or tilting if the camera has that capability, or select between several cameras if several cameras are connected.
[00100] The 3D file importer module 507 receives 3D files describing components. This module may implement all file conversion capabilities of the workstation. In order for a system operating in accordance with the invention to be capable of allowing users to create their own AR assembly instructions based on 3D component descriptions created in an arbitrary CAD system, the 3D file importer should be able to import as many common CAD file formats as possible, extract the 3D descriptions from those files and create a 3D representation according to a format used internally by the system. This module may then deliver the 3D representation to the 3D placement and visualization module 503.
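The following non-limiting sketch illustrates one way such an importer could be realized, assuming the open-source Python library "trimesh" is available; the choice of GLB as the internal format and the file names are assumptions for this example. Native CAD formats such as STEP would require an additional conversion step that is not shown here.

```python
# Illustrative sketch of a 3D file importer that converts loaded meshes
# to a single internal format handled by the placement module.
import trimesh

def import_component(path, internal_path):
    """Load a component mesh in any format trimesh supports and re-export it
    to the assumed internal 3D representation (here: GLB)."""
    loaded = trimesh.load(path)
    # Files may contain a scene with several parts; merge them into one mesh.
    if isinstance(loaded, trimesh.Scene):
        loaded = trimesh.util.concatenate(loaded.dump())
    loaded.export(internal_path)
    return loaded

# usage (hypothetical file names):
# mesh = import_component("bracket.stl", "bracket.glb")
```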
[00101] The 1D marker recognizer module 508 is a module that is capable of interpreting 1D codes, for example bar codes. The 1D marker recognizer module 508 may receive its input from an external bar code reader, or it may receive the video feed from the camera 107 and perform feature extraction processing on the video images in order to extract and interpret the barcode.
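A non-limiting sketch of such feature extraction from a single video frame is given below, assuming the OpenCV and "pyzbar" libraries; these library choices are assumptions and not part of the claimed subject matter.

```python
# Illustrative sketch: extract 1D code payloads from one camera frame.
import cv2
from pyzbar.pyzbar import decode

def read_barcodes(frame_bgr):
    """Return the decoded payloads of any bar codes found in a video frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    results = decode(gray)
    return [r.data.decode("utf-8") for r in results]

# usage (hypothetical camera index):
# cap = cv2.VideoCapture(0)
# ok, frame = cap.read()
# if ok:
#     print(read_barcodes(frame))
```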
[00102] The 2D marker recognizer module 509 may be a module that is configured to detect, extract and interpret 2D matrix codes, for example QR codes, from the video stream received from the camera 107. This module can be used to recognize the markers 104, 204, and as such the module may also be capable of determining distance and orientation as described above.
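By way of non-limiting illustration, the sketch below shows how a QR marker could be decoded and its distance and orientation relative to the camera estimated from its corner points, assuming OpenCV; the marker size and camera intrinsics are assumed values that would in practice come from calibration.

```python
# Illustrative sketch: decode a QR marker and estimate its pose.
import cv2
import numpy as np

MARKER_SIZE = 0.04  # marker edge length in metres (assumption)
CAMERA_MATRIX = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
DIST_COEFFS = np.zeros(5)

def detect_marker_pose(frame_bgr):
    """Return the marker payload, rotation/translation vectors and distance, or None."""
    detector = cv2.QRCodeDetector()
    payload, corners, _ = detector.detectAndDecode(frame_bgr)
    if corners is None or not payload:
        return None
    image_points = corners.reshape(-1, 2).astype(np.float32)  # four marker corners
    half = MARKER_SIZE / 2.0
    object_points = np.array(
        [[-half, half, 0], [half, half, 0], [half, -half, 0], [-half, -half, 0]],
        dtype=np.float32,
    )
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, CAMERA_MATRIX, DIST_COEFFS)
    if not ok:
        return None
    distance = float(np.linalg.norm(tvec))
    return payload, rvec, tvec, distance
```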
[00103] The 3D base generator module 510 may be configured to create a negative shape from the 3D representation of the main structural component, from the completed assembly, or from an intermediate stage of the assembly process. This negative 3D representation can be used to create a file usable for 3D printing of a fixture or jig 111 as described above. As such, this module may only be part of the AR creator functionality. However, it is consistent with the principles of the invention to include this functionality in the AR workstation module 502 as well, if it is desirable to allow an operator 103 that only has access to assembly operation and not AR creation, to make a replacement jig 111 if one is broken or for other reasons.
[00104] A manufacturing execution system (MES) may be connected to the AR workstation module 502, but in most embodiments this system will not be part of the workstation as such, and it may be located remotely, for example on a server.
[0105] Reference is now made to FIG.6A, which illustrates the flow of information between the various modules during AR creation. As already mentioned, the various modules may be configured differently, for example with different distribution of functionality between modules, or with inclusion of additional modules or exclusion of some of the illustrated modules. The illustration is therefore intended to be explanatory for the concepts of the invention, but not limiting on the scope of the invention.
[0106] Furthermore, the illustration in FIG.6A is conceptual in the sense that it illustrates the most important information flow between modules, but it does not include all possible communication between modules. As such, information may flow directly between modules that are not shown as being directly interconnected in the drawing and information that is not discussed or described herein may be communicated between the various modules. In some embodiments of the invention functionality may be distributed differently between the modules, some modules may be combined into one, or some modules may be subdivided into a plurality of distinct modules. Functionality may also be distributed between several computer systems. The respective modules may be combinations of hardware and software, or substantially only hardware or only software. The modules or parts that correspond to modules or parts that have been described with reference to earlier drawings are given the same reference number. The AR creator module 501 and AR workstation module 502 are not shown in this drawing since their functionality to a large extent is administrative or general. Thus, one of them will be involved in most of the information processing and information flow illustrated. Some of the modules that are introduced in FIG.6 may in some embodiments be part of one or both of those modules. Some of them may also be part of the operating system of the computer system 108.
[0107] When an operator 103 first activates the AR workstation 100 he or she will have to use a user input device 601 to log in or register. The system will verify the user's identity with the user management module 504, which may be installed locally or which may be a remote authentication server. User management is well known in the art and will not be discussed further. Suffice it to say that the functionality available to the operator 103 may depend on user rights that are granted according to a user profile that is part of the user management module 504. For example, some users may only be authorized to use the AR workstation 100 to perform assembly operations under control of the AR workstation module 502, while other users may be authorized to operate the workstation with access to the AR creator module 501, i.e. to create assembly instructions. Embodiments that do not implement user management and access rights are, of course, within the scope of the invention.
[0108] The user input device 601 may be one or more of a keyboard, a mouse, a touch screen, a digital pen, a graphics tablet and gesture recognition. If the user input device 601 includes a touch screen it may be the screen of the display 106, and if it includes gesture recognition the gestures may be captured by the camera 107. Other user input devices known in the art may also be used.
[0109] When operating the AR workstation 100 in AR creation mode the workstation may now receive user input representing instructions to load a particular 3D description of a main structural component 105, i.e. the first component of the assembly process. Subsequent components may be components that will be attached to the main structural component, either directly or by being attached to a subsequent component that has already been attached to the main structural component 105.
[0110] If the 3D description of the main structural component 105 is already present in the AR workstation it may be loaded from local storage 603. If it is not, it may be accessed over a communication interface 604 from a remote database 610, or from some other device, for example a USB stick (not shown). In general, files may be loaded and stored by the file input/output module 605.
[0111] In some embodiments of the invention the AR workstation 100 is capable of importing a wide variety of 3D files, particularly files that are in one of the more common Computer-aided Design (CAD) file formats. The AR workstation 100 may therefore include a 3D file importer that extracts the required 3D information and converts it to the format handled internally by the AR workstation 100. The internal file format, or 3D graphics format, may be any one of the many existing 3D formats that are known in 3D modelling (e.g. from CAD or 3D gaming), a modified version of such a 3D format, or a proprietary format developed particularly for the AR workstation 100.
[0112] In some embodiments of the invention the AR workstation 100 includes a 3D base model generator 510. This module may be configured to generate a negative shape of the main structural component 105, or more specifically a negative shape of one side of the main structural component 105, extended by sides going away from the negative shape and ending with a plane surface. This negative shape can be output as a file in a file format suitable for 3D printing, for example an STL file or a VRML file, or it may be sent directly to a 3D printer connected to or otherwise in communication with the AR workstation 100. The resulting component can serve as a cradle, fixture or jig 111 in which the main structural component can be positioned on the work area 102 during assembly. The printing process may add markers 104 directly to the jig 111 or provide distinct areas where such markers can be attached.
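A simplified, non-limiting sketch of such a negative-shape generation is given below, assuming the "trimesh" library with a boolean engine (e.g. Blender or manifold3d) installed. It subtracts the component from a surrounding block rather than extending only one side to a plane, so it is a simplification of the generator described above; dimensions and file names are assumptions.

```python
# Illustrative sketch of a 3D base model generator producing a printable cradle.
import trimesh

def generate_jig(component_mesh, wall=0.01):
    """Create a negative shape of a component inside an enlarged block and export it as STL."""
    extents = component_mesh.extents + 2 * wall        # bounding box plus solid walls
    block = trimesh.creation.box(extents=extents)
    block.apply_translation(component_mesh.bounds.mean(axis=0))  # centre block on component
    jig = block.difference(component_mesh)              # boolean subtraction -> negative shape
    jig.export("jig.stl")                               # STL is a common 3D-printing format
    return jig

# usage (hypothetical file name):
# comp = trimesh.load("main_structural_component.stl")
# generate_jig(comp)
```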
[0113] The 3D description extracted from the imported 3D file may then be forwarded to the 3D placement and visualization module 503. This module maintains a representation of a virtual 3D space and stores information associating each imported 3D representation with a position and an orientation in 3D space. The 3D placement and visualization module 503 is also capable of receiving user input from the user input module 601 specifying an adjustment of the position in virtual 3D space associated with a virtual component or object. This can be used by an operator to move the virtual representation of the main structural component 105 to a first position in the virtual 3D space corresponding to the position of the main structural component 105 in the video feed when the main structural component 105 is positioned on the work area 102 where it will be positioned during assembly. In embodiments/processes that include the jig 111 this position will be in the jig 111. The operator 103 may also adjust the position of additional components in order to define a starting position, i.e. the position where the additional component should appear in the virtual 3D space and hence on the screen before it is attached to the main structural component 105, and in order to define the position in 3D space after the additional component has been attached. This module may also handle insertion and adjustment of additional virtual objects such as arrows 217, metadata, and user control elements.
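The following minimal sketch, with assumed data structures, illustrates how such a module could track positions and orientations of virtual objects as 4x4 homogeneous transforms and apply user-requested adjustments; it is not a definitive implementation of module 503.

```python
# Illustrative sketch of a placement module tracking object poses in virtual 3D space.
import numpy as np

class PlacementModule:
    def __init__(self):
        self.poses = {}  # object id -> 4x4 pose in virtual 3D space

    def add_object(self, obj_id, pose=None):
        self.poses[obj_id] = np.eye(4) if pose is None else np.asarray(pose, dtype=float)

    def translate(self, obj_id, dx, dy, dz):
        """Apply a user-requested adjustment of an object's position."""
        offset = np.eye(4)
        offset[:3, 3] = [dx, dy, dz]
        self.poses[obj_id] = offset @ self.poses[obj_id]

    def pose_of(self, obj_id):
        return self.poses[obj_id]

# usage:
# pm = PlacementModule()
# pm.add_object("main_structural_component")
# pm.translate("main_structural_component", 0.10, 0.0, 0.0)
```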
[0114] The graphic representations of the virtual elements controlled by the 3D placement and visualization module 503 are forwarded to the display manager 505 where they are combined with the video feed from the camera 107.
[0115] The camera 107 captures video of the work area 102 and feeds that video to a camera manager 506. The camera manager 506 receives the video feed and forwards it to a 1D/2D marker recognition module 508/509. The camera manager 506 may also be configured to control the camera 107, for example by controlling zoom, pan and tilt, or by selecting between several cameras if the AR workstation 100 is provided with more than one camera 107.
[0116] The 1D/2D marker recognition module 508/509 is configured to interpret 1D or 2D code in the video image, for example 1D bar codes and 2D matrix codes. The 1D code recognition may be used to identify user identities from ID cards, product or serial numbers for assemblies, part numbers for individual components etc. As such the 1D marker recognition functionality may be used during a login process, in order to initiate or conclude an assembly process or an AR creation process, or to identify parts or represent the transition from one step to the next in the assembly process. One or more of these functions may also be provided by a separate bar code reader, or by user input of text or selection from a menu. Those with skill in the art will realize that the bar codes may be replaced by QR codes or some other form of 2D code or even RFID chips, and the 1D code recognition module may be replaced by the 2D code recognition module or an RFID reader.
[0117] The 2D code recognition may be used to interpret the markers 104, 204 in the video and determine the position and orientation of the work area 102 as well as those of the component bins 109. This information may then be forwarded to the 3D placement and visualization module. The 3D placement and visualization module keeps track of the position of the recognized markers in the video image and uses this to establish a correspondence between positions on the actual work area 102 and positions in virtual 3D space.
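As a non-limiting illustration of this correspondence, the sketch below composes homogeneous transforms to map a point given in work-area (marker) coordinates into camera coordinates, assuming the marker pose (rvec, tvec) has already been estimated, for example as in the pose-estimation sketch given earlier. Frame conventions and values are assumptions.

```python
# Illustrative sketch: map work-area coordinates into the camera/virtual frame
# using the pose of a detected marker 104.
import cv2
import numpy as np

def marker_pose_to_transform(rvec, tvec):
    """Build the 4x4 transform from the marker (work area) frame to the camera frame."""
    rotation, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = np.asarray(tvec).ravel()
    return T

def work_area_to_camera(point_on_work_area, rvec, tvec):
    """Map a 3D point given in work-area coordinates into camera coordinates."""
    T = marker_pose_to_transform(rvec, tvec)
    p = np.append(np.asarray(point_on_work_area, dtype=float), 1.0)
    return (T @ p)[:3]
```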
[0118] The video feed is also forwarded to the display manager 505 where it is combined with the virtual graphical elements from the 3D placement and visualization module 503 to create the combined AR image which can then be displayed on the display 106.
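A minimal sketch of such a combination is shown below, assuming the rendered layer of virtual elements is delivered as a BGRA image of the same resolution as the video feed; this is an assumption about the renderer output, not a statement of how the display manager 505 is implemented.

```python
# Illustrative sketch: alpha-blend a rendered layer of virtual elements onto a camera frame.
import numpy as np

def composite(frame_bgr, virtual_bgra):
    """Return the combined AR frame as an 8-bit BGR image."""
    alpha = virtual_bgra[..., 3:4].astype(np.float32) / 255.0
    virtual_bgr = virtual_bgra[..., :3].astype(np.float32)
    blended = alpha * virtual_bgr + (1.0 - alpha) * frame_bgr.astype(np.float32)
    return blended.astype(np.uint8)
```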
[0119] The various positions for virtual components and other virtual objects that are positioned in virtual 3D space may also be forwarded from the 3D placement and visualization module 503 to an AR file creator 606 at well-defined points during the assembly process, for example at the beginning and/or the end of each assembly step. This file creator tracks and registers all user input that is intended for inclusion in the final AR instruction file and stores associated information in one or more files. In addition to 3D descriptions of virtual objects and their positions and orientation at well-defined points during the assembly process, the file creator may register metadata and other information, for example audio, that is not associated with a position in 3D space. In addition the file creator 606 may register 3D descriptions and positions of additional virtual objects such as user controls, arrows and other indicators, and representations of tools. The file creator 606 does not have to register 3D descriptions of shapes and user controls that are described and already present in the AR workstation as a standard component. The same may be the case for positions that are determined by the system, for example for standard controls 210, 211, 212, 213, 214 and information that is positioned in the sidebar 215, not in the virtual 3D space.
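By way of non-limiting illustration, one possible on-disk structure for such an instruction file is sketched below. The field names, the JSON format and the pose encoding are assumptions for this example; as noted above, the actual file may be a single file or a collection of referencing files in any convenient structure.

```python
# Illustrative sketch of a possible AR instruction file written by the file creator 606.
import json

instruction = {
    "assembly_id": "EXAMPLE-001",                                   # hypothetical identifier
    "reference_marker": {"id": "104", "pose": [0, 0, 0, 0, 0, 0]},
    "main_component": {"mesh": "main.glb", "pose": [0.0, 0.0, 0.0, 0, 0, 0]},
    "steps": [
        {
            "component": {"mesh": "bracket.glb", "part_number": "A-123"},
            "start_pose": [0.30, 0.10, 0.0, 0, 0, 0],
            "final_pose": [0.05, 0.02, 0.01, 0, 0, 90],
            "metadata": {"text": "Attach bracket with two M4 screws", "audio": "step1.wav"},
        }
    ],
}

with open("instructions.json", "w") as f:
    json.dump(instruction, f, indent=2)
```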
[0120] The completed AR instruction file may be stored in local storage 603 or sent to be stored in a remote database 610.
[0121] In some embodiments of the invention, if a 3D description of a particular component is missing, the operator 103 may input a 2D representation in the form of an image. This may be from an image file that is already available, or the camera 107 may be used to capture an image of the particular component. Some embodiments may include certain capabilities for generating 3D estimates, for example based on a plurality of images captured from different angles or based on additional user input. In other embodiments the 2D image may be used as it is. The 2D representation may still be associated with a position and an orientation in 3D space, and it may be thought of as a completely flat 3D object. When this description and the appended claims refer to 3D objects this is intended to include 2D images represented with a position in 3D space.
[0122] The MES system 511 and the remote instructor 612 are illustrated in FIG.6, but they are primarily of relevance during assembly operations and will be described further below.
[0123] FIG.6B corresponds to FIG.6A but illustrates the AR workstation operating in assembly mode, i.e. under control of the AR workstation module 502.
[0124] Again, a user may log in and receive access rights according to what is defined in the user management module 504. The user will then proceed to load an AR instruction file which has been created as described above. The appropriate file may be identified based on user input in the form of text or selection from a menu. However, some embodiments require reading of a 1D or 2D code identifying the assembly operation and selection of the appropriate file based on this. For this purpose the camera 107, camera manager 506 and 1D and/or 2D marker recognition module 508/509 may be used.
[0125] The identified file may then be loaded from local storage 603, from a remote database 610, or in any other manner known in the art. After the file has been loaded the 3D file importer 507 may extract the 3D descriptions of components and other virtual objects and forward them to the 3D placement and visualization module 503. In the 3D placement and visualization module 503 the various virtual objects, whether they are representations of assembly components or virtual objects representing metadata, are associated with respective initial positions.
[0126] The initial positions of the various virtual objects may be determined in several ways. If the initial position is defined in the AR instruction file, this position will take precedence. If that is not the case, a virtual component may be positioned, for example, according to one of the following principles: a default position for the type of component, or a position determined based on the position of a marker 204 in the video feed image. Other possibilities, for example in the sidebar 215, are available as a design choice.
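The precedence described above can be summarized by the following non-limiting sketch; the argument names and the sidebar default are assumptions for illustration only.

```python
# Illustrative sketch: choose an initial position with file-defined > marker-based > default.
def initial_position(file_pose, marker_pose, default_pose):
    if file_pose is not None:       # position defined in the AR instruction file wins
        return file_pose
    if marker_pose is not None:     # otherwise derive it from a detected marker 204
        return marker_pose
    return default_pose             # otherwise fall back to a per-type default

# usage (hypothetical values):
# pose = initial_position(step.get("start_pose"), detected_bin_pose, SIDEBAR_POSE)
```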
[0127] After the process has been initialized the 3D placement and visualization module 503 will forward the description, position and orientation of at least the main structural component 105 to the display manager 505 where it is combined with the video feed and forwarded to the display 106. The system may now receive user input to proceed to a first assembly step. At this point a next component will become visible on the display, or if it is already visible it will be pointed out by a marker such as an arrow 217. In some embodiments a virtual representation of the component 218 is shown while an arrow 217 points out the component's position, for example in a parts bin 109.
[0128] The 3D placement and visualization module 503 may then change the position of the component 218 to a position where it is mounted in its appropriate position on the main structural component 105. This change in position may be gradual such that the component 218 is shown as moving gradually from the place where it is stored to the place where it is to be mounted. This animation may follow a path that is automatically generated from the initial position to the final position, or the path may have been explicitly defined by the operator during creation of the AR instruction file, for example by dragging the component on a touch screen display.
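A minimal sketch of such a gradual movement is shown below: the translation is linearly interpolated over a fixed number of animation frames. This is an illustrative assumption; a real animation might follow an operator-defined path and interpolate orientation as well (e.g. by spherical linear interpolation), which is omitted here.

```python
# Illustrative sketch: interpolate a component's position from start to final pose.
import numpy as np

def animate_positions(start_pos, final_pos, frames=60):
    """Yield intermediate positions along a straight path between two positions."""
    start = np.asarray(start_pos, dtype=float)
    final = np.asarray(final_pos, dtype=float)
    for i in range(frames + 1):
        t = i / frames
        yield (1.0 - t) * start + t * final

# Each yielded position would be handed to the placement module 503 before the
# display manager 505 composites the next output frame.
```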
[0129] In some embodiments the operator will have to identify a selected component by presenting a 1D or 2D product code associated with the component to the camera 107 for recognition by the 1D/2D marker recognition module 508/509. This may for example be done as a confirmation by the operator 103 that the step has been completed. A log file creator 614 may receive this information from the 3D placement and visualization module along with time stamps for when the step started and was completed, and any other information, for example the video feed as captured during the operator’s performance of the assembly step.
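By way of non-limiting illustration, the sketch below shows the kind of record the log file creator 614 could append for each completed step; the field names and the JSON-lines format are assumptions for this example.

```python
# Illustrative sketch of a log file creator recording one completed assembly step.
import json
import time

def log_step(log_path, step_index, operator_id, part_code, started_at):
    entry = {
        "step": step_index,
        "operator": operator_id,
        "part_code": part_code,            # e.g. the scanned 1D/2D code confirming the step
        "started_at": started_at,
        "completed_at": time.time(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# usage (hypothetical values):
# t0 = time.time()
# ... operator performs the step ...
# log_step("assembly_log.jsonl", 1, "operator-103", "A-123", t0)
```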
[0130] When all steps in an assembly operation have been performed the log file may be stored in local storage 603 and/or forwarded to a remote MES system 511 or some other database 610.
[0131] In some embodiments of the invention the AR workstation 100 has a communication interface 604 which not only enables communication with the remote database or the MES system, but also enables communication with a person located remotely from the workstation. This may for example be an instructor with his or her own computer system 612 which is able to receive the combined video feed from the display manager 505 and send user input to the AR workstation. This user input may include the ability to transmit audio and transmit user input representing lines being drawn on the display 106.
[0132] In the above description each assembly step has been described as including one component, sometimes referred to as a next component or an additional component. This should be interpreted as at least one next component. In some cases, for example if a lid is to be attached to a main structural component with 20 screws, the AR instruction file will not include 20 assembly steps with one step for each screw. Instead the step may include all 20 screws. Since any step that includes several next components by necessity includes one next component, the description does not lose generality by reciting one next component when several next components are included.
[0133] It should be noted that in the above description various components or objects may have several representations. For example, the main structural component 105 exists in physical space, positioned on the work area 102. In addition it is captured by the camera 107 and is as such represented as a video image on the display 106. The same component is described as a 3D description associated with a position in virtual 3D space, and this virtual component may also be shown on the display 106. The various representations of the same object have not been given different reference numbers. By way of example, this means that the phrase “the 3D description of the main structural component 105” refers to the virtual representation inside the computer and not to the physical component on the work area 102.
[0134] Among the various aspects of the invention are
[0135] An augmented reality workstation for providing augmented reality guidance in a manufacturing environment, the augmented reality workstation comprising:
a camera 107 configured to capture a work area 102 of the workstation 100;
a computer system 108 configured to receive a video feed from the camera 107 and to combine the video feed with virtual graphical elements 210, 211, 212, 213, 214, 217, 218 to generate an augmented reality video signal;
a display system 106 configured to receive the augmented reality video signal from the computer system 108 and display an augmented reality view of the work area 102;
wherein the computer system 108 is further configured to:
detect at least one characteristic feature 104 in the video feed received from the camera 107;
maintain a representation of a virtual 3D space in memory;
assign a position in the virtual 3D space for the at least one characteristic feature and position at least one of the virtual graphical elements 210, 211, 212, 213, 214, 217, 218 in the virtual 3D space relative to the position of the characteristic feature;
associate a 3D representation of a first component with a position and an orientation in the virtual 3D space;
associate a 3D representation of an additional component with a second position and a second orientation in the virtual 3D space;
illustrate an assembly step by:
displaying on the display system 106 a video image of the work area 102 with the first component and the additional component superimposed in positions determined by their positions in the virtual 3D space;
performing an animation representing a movement of the additional component from the second position to a final position in the virtual 3D space, wherein the final position in the virtual 3D space is a position where the additional component is attached to the first component;
receiving user input confirming that an assembly operation step has been completed;
perform additional illustrations of assembly steps until an assembly operation has been completed.

Claims (19)

1. An augmented reality workstation for creating augmented reality guidance for an assembly operation in a manufacturing environment, the augmented reality workstation comprising:
a camera (107) configured to capture a work area (102) of the workstation (100);
a computer system (108) configured to receive a video feed from said camera (107) and to combine said video feed with virtual graphical elements (210, 211, 212, 213, 214, 217, 218) to generate an augmented reality video signal;
a display (106) configured to receive said augmented reality video signal from said computer system (108) and display an augmented reality view of said work area (102); and
a user interface (601);
wherein said computer system (108) includes:
an augmented reality management module configured to
detect a characteristic feature in said video feed, said characteristic feature being associated with a known position in said work area (102) and a reference position in said virtual 3D space;
receive 3D descriptions of assembly components, associate said 3D descriptions of assembly components with respective positions in said virtual 3D space, change said position in said virtual 3D space based on parameters received from said user interface; and
generate said virtual graphical elements (210, 211, 212, 213, 214, 217, 218) based on said 3D representation and position them in said video signal based on their respective positions in said virtual 3D space; and
an augmented reality instruction file creator (606) configured to save said 3D descriptions of assembly components, positions in said virtual 3D space associated with said 3D descriptions at defined points of an assembly operation, and a position in said virtual 3D space associated with said characteristic feature.
2. An augmented reality workstation according to claim 1, wherein said characteristic feature (104) is a 2D marker.
3. An augmented reality workstation according to claim 1, wherein said characteristic feature (104) is a contrasting color or shape that is part of said first component.
4. An augmented reality workstation according to one of the previous claims, wherein said user interface (601) is configured to receive user input identifying a 3D description of a specific assembly component, user input specifying a change in said position in said virtual 3D space with which said 3D description of a specific assembly component is associated, and user input indicative of a completion of an assembly step.
5. An augmented reality workstation according to one of the previous claims, further comprising a 3D model generator configured to generate a negative representation of at least a part of a 3D description and output said negative representation in a format that is usable by a 3D printer.
6. An augmented reality workstation according to one of the previous claims, further comprising an image recognition module (508, 509) configured to detect and identify said characteristic feature (104) and to calculate at least one of a distance and a viewing angle from said camera (107) to said characteristic feature (104) based on at least one of the size and the proportions of said characteristic feature (104) in said video feed.
7. An augmented reality workstation according to one of the previous claims, wherein said augmented reality management module is further configured to receive 2D descriptions of assembly components, associate said 2D descriptions with respective positions in said virtual 3D space and change said position in virtual 3D space based on parameters received from said user interface (601).
8. An augmented reality workstation according to claim 7, wherein said augmented reality management module is further configured to generate an estimated 3D description based on said 2D description.
9. An augmented reality workstation for providing augmented reality guidance in a manufacturing environment, the augmented reality workstation comprising:
a camera (107) configured to capture a work area (102) of the workstation (100);
a computer system (108) configured to receive a video feed from said camera (107) and to combine said video feed with virtual graphical elements (210, 211, 212, 213, 214, 217, 218) to generate an augmented reality video signal;
a display (106) configured to receive said augmented reality video signal from said computer system (108) and display an augmented reality view of said work area (102); and
a user interface (106);
wherein said computer system (108) includes:
an augmented reality management module configured to
detect a characteristic feature in said video feed, said characteristic feature being associated with a known position in said work area and a reference position in said virtual 3D space;
receive 3D descriptions of assembly components, associate said 3D descriptions of assembly components with a position in said virtual 3D space, and change said position in said virtual 3D space based on parameters received from said user interface (601); and
generate video output constituting a combination of said video feed and graphical representations of said 3D descriptions of assembly components as said augmented reality video signal; and
an augmented reality instruction file interpreter (608) configured to extract information from an augmented reality instruction file and provide said extracted information to said augmented reality management module, said extracted information including 3D descriptions of assembly components, positions in said virtual 3D space associated with said 3D descriptions at defined points of an assembly operation, and a position in said virtual 3D space associated with said characteristic feature.
10. The augmented reality workstation according to claim 9, further comprising a user interface (601) configured to receive user input indicating the completion of a current assembly step and the transition to a next assembly step.
11. The augmented reality workstation according to claim 9 or 10, further comprising a log file creator module (614) configured to store at least one of: a time stamp associated with a beginning of an assembly step, a time stamp associated with a completion of an assembly step, a video segment of the work area (102) during performance of an assembly step, a user identity of an operator (103) logged into the system during performance of an assembly step, a product code associated with an assembly component added to an assembly during the performance of an assembly step, a serial number associated with the completed assembly, and a version number associated with an augmented reality instruction file.
12. A method in a computer system for creating augmented reality guidance for an assembly process in a manufacturing environment, the method comprising:
providing a video feed of a work area (102) including a characteristic feature (104) and a first component (105);
maintaining a representation of a virtual 3D space in memory;
detecting said characteristic feature (104) in said video feed;
associating said characteristic feature (104) with a reference position in said virtual 3D space;
loading a 3D description of said first component (105);
associating said 3D description of said first component (105) with a first position in virtual 3D space;
creating a combined video signal from said video feed and said 3D description of said first component (105), wherein the position of said 3D description of said first component (105) in said combined video signal is determined from its position relative to said reference position in said virtual 3D space;
receiving input from a user interface (601) representing an adjustment of the position of said 3D description of said first component (105) to a second position in said virtual 3D space corresponding to the position of the first component (105) in the video feed, and updating said video signal based on said adjustment;
for a next component:
receiving input from said user interface (601) representing an identification of said next component;
loading a 3D description of said next component;
associating said 3D description of said next component with a starting position in said virtual 3D space;
receiving user input defining a final position in said virtual space, wherein the final position in said virtual 3D space is a position where the additional component is attached to the first component;
receiving input from a user interface (601) indicating the completion of an assembly step;
saving a description of the assembly step including at least said identification of said next component, said 3D description of said next component, said starting position, and said final position;
repeating the steps for a next component for an additional next component until receiving input from a user interface (601) indicating the completion of an assembly operation;
saving a description of the assembly operation including at least saved descriptions of assembly steps, said 3D description of a first component, said first position, and the reference position associated with said characteristic feature (104) in said virtual 3D space.
13. The method according to claim 12, wherein said step of loading a 3D representation of a next component includes loading a 2D representation of said component, and converting said 2D representation to a 3D representation.
14. The method according to claim 13, wherein said converting said 2D representation to a 3D representation includes generating a flat 3D representation with a position and an orientation in virtual 3D space.
15. The method according to claim 13, wherein said converting said 2D representation to a 3D representation includes generating a 3D estimate based on said 2D representation.
16. The method according to one of the claims 12 to 15, wherein said characteristic feature (104) is one of a 2D marker and a contrasting color or shape that is part of said first component.
17. A method in a computer system for providing augmented reality guidance for an assembly process in a manufacturing environment, the method comprising:
providing a video feed of a work area (102) including a characteristic feature (104) and a first component (105);
maintaining a representation of a virtual 3D space in memory;
detecting said characteristic feature (104) in said video feed;
associating said characteristic feature (104) with a reference position in said virtual 3D space;
loading an augmented reality instruction file;
extracting from said augmented reality instruction file a predefined reference position in said virtual 3D space and associating said characteristic feature (104) with said reference position;
extracting a 3D description of a first component (105) and a position relative to the reference position from said augmented reality instruction file and associating said 3D description of said first component (105) with a first position in said virtual 3D space based on said position relative to said characteristic feature (104);
creating a combined video signal constituting a combination of said video feed and said 3D description of said first component (105) wherein the position of said 3D description of said first component (105) in said combined video signal is determined from its position relative to said reference position in said virtual 3D space;
for a next component:
extracting a 3D description of said next component from said augmented reality instruction file and positioning said 3D description in a starting position in said virtual 3D space;
adding said 3D description of said next component to said combined video signal;
extracting from said augmented reality instruction file a predefined final position relative to said reference position in said virtual 3D space;
generating an animation illustrating the movement of said next component from said starting position to said final position in said combined video signal by gradually updating the position of said next component in said virtual 3D space;
receiving input from a user interface (601) representing the completion of an assembly step;
saving information relating to the completion of the assembly step in a log file;
repeating the steps for a next component until receiving input from a user interface (601) indicating the completion of an assembly operation; and
saving information relating to the completion of the assembly operation in said log file.
18. A method according to claim 17, wherein said input from said user interface (601) representing the completion of an assembly step includes the recognition of a 1D or a 2D code.
19. A method according to claim 17 or 18, wherein said saving information relating to the completion of the assembly step includes storing at least one of: a time stamp associated with a beginning of an assembly step, a time stamp associated with a completion of an assembly step, a video segment of the work area (102) during performance of an assembly step, a user identity of an operator (103) logged into the system during performance of an assembly step, a product code associated with an assembly component added to an assembly during the performance of an assembly step, a serial number associated with the completed assembly, and a version number associated with an augmented reality instruction file.
NO20180179A 2018-02-02 2018-02-02 Method and system for augmented reality assembly guidance NO343601B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
NO20180179A NO343601B1 (en) 2018-02-02 2018-02-02 Method and system for augmented reality assembly guidance
PCT/NO2019/050032 WO2019151877A1 (en) 2018-02-02 2019-02-04 Method and system for augmented reality assembly guidance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
NO20180179A NO343601B1 (en) 2018-02-02 2018-02-02 Method and system for augmented reality assembly guidance

Publications (2)

Publication Number Publication Date
NO20180179A1 NO20180179A1 (en) 2019-04-08
NO343601B1 (en) 2019-04-08

Family

ID=65635777

Family Applications (1)

Application Number Title Priority Date Filing Date
NO20180179A NO343601B1 (en) 2018-02-02 2018-02-02 Method and system for augmented reality assembly guidance

Country Status (2)

Country Link
NO (1) NO343601B1 (en)
WO (1) WO2019151877A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110716501A (en) * 2019-11-18 2020-01-21 康美包(苏州)有限公司 Data transmission method, equipment and device and computer storage medium
CN113126575B (en) * 2019-12-31 2022-07-26 捷普电子(无锡)有限公司 Guiding method and guiding system for assembly operation process
CN111381679A (en) * 2020-03-19 2020-07-07 三一筑工科技有限公司 AR-based assembly type building construction training method and device and computing equipment
US11762367B2 (en) 2021-05-07 2023-09-19 Rockwell Collins, Inc. Benchtop visual prototyping and assembly system
CN113673894B (en) * 2021-08-27 2024-02-02 东华大学 Multi-person cooperation AR assembly method and system based on digital twinning
CN115009398A (en) * 2022-07-08 2022-09-06 江西工业工程职业技术学院 Automobile assembling system and assembling method thereof

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SE525826C2 (en) * 2004-06-18 2005-05-10 Totalfoersvarets Forskningsins Interactive information display method for mixed reality system, monitors visual focal point indicated field or object in image obtained by mixing virtual and actual images
US9508146B2 (en) * 2012-10-31 2016-11-29 The Boeing Company Automated frame of reference calibration for augmented reality

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2642451A2 (en) * 2012-03-21 2013-09-25 Sony Computer Entertainment Europe Limited Apparatus and method of augmented reality interaction
US8817047B1 (en) * 2013-09-02 2014-08-26 Lg Electronics Inc. Portable device and method of controlling therefor
EP2945374A2 (en) * 2014-05-13 2015-11-18 Canon Kabushiki Kaisha Positioning of projected augmented reality content
EP3239933A1 (en) * 2016-04-28 2017-11-01 Fujitsu Limited Authoring device and authoring method
US20170316610A1 (en) * 2016-04-28 2017-11-02 National Chiao Tung University Assembly instruction system and assembly instruction method

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11302085B2 (en) 2020-09-15 2022-04-12 Facebook Technologies, Llc Artificial reality collaborative working environments
WO2022060813A3 (en) * 2020-09-15 2022-04-28 Facebook Technologies, Llc Artificial reality collaborative working environments
US11582245B2 (en) 2020-09-15 2023-02-14 Meta Platforms Technologies, Llc Artificial reality collaborative working environments
US11606364B2 (en) 2020-09-15 2023-03-14 Meta Platforms Technologies, Llc Artificial reality collaborative working environments
US11770384B2 (en) 2020-09-15 2023-09-26 Meta Platforms Technologies, Llc Artificial reality collaborative working environments
US11902288B2 (en) 2020-09-15 2024-02-13 Meta Platforms Technologies, Llc Artificial reality collaborative working environments
US11854230B2 (en) 2020-12-01 2023-12-26 Meta Platforms Technologies, Llc Physical keyboard tracking

Also Published As

Publication number Publication date
WO2019151877A1 (en) 2019-08-08
NO20180179A1 (en) 2019-04-08

Similar Documents

Publication Publication Date Title
WO2019151877A1 (en) Method and system for augmented reality assembly guidance
US11467709B2 (en) Mixed-reality guide data collection and presentation
US10157502B2 (en) Method and apparatus for sharing augmented reality applications to multiple clients
DK2996015T3 (en) PROCEDURE TO USE IMPROVED REALITY AS HMI VIEW
CN104461318B (en) Reading method based on augmented reality and system
US20190180506A1 (en) Systems and methods for adding annotations to virtual objects in a virtual environment
KR100930370B1 (en) Augmented reality authoring method and system and computer readable recording medium recording the program
CN101262586B (en) Information sharing support system, information processing device and computer controlling method
EP2549428A2 (en) Method and system for generating behavioral studies of an individual
JP6144364B2 (en) Work support data creation program
US10762706B2 (en) Image management device, image management method, image management program, and presentation system
US20130022947A1 (en) Method and system for generating behavioral studies of an individual
WO2014137337A1 (en) Methods and apparatus for using optical character recognition to provide augmented reality
MXPA05000208A (en) Graphical representation, storage and dissemination of displayed thinking.
CN102388362A (en) Editing of 2d software consumables within a complex 3d spatial application
Ferrise et al. Multimodal training and tele-assistance systems for the maintenance of industrial products: This paper presents a multimodal and remote training system for improvement of maintenance quality in the case study of washing machine
Pick et al. Design and evaluation of data annotation workflows for cave-like virtual environments
CN107851335A (en) For making the visual augmented reality equipment of luminaire light fixture
JP2020098568A (en) Information management device, information management system, information management method, and information management program
CN115482322A (en) Computer-implemented method and system for generating a synthetic training data set
GB2499024A (en) 3D integrated development environment(IDE) display
JP7381556B2 (en) Media content planning system
US11833761B1 (en) Optimizing interaction with of tangible tools with tangible objects via registration of virtual objects to tangible tools
GB2577611A (en) Methods and systems of providing augmented reality
Gimeno et al. An occlusion-aware AR authoring tool for assembly and repair tasks

Legal Events

Date Code Title Description
MM1K Lapsed by not paying the annual fees