WO2018175675A1 - Collaborative and interactive augmented reality training - Google Patents

Collaborative and interactive augmented reality training

Info

Publication number
WO2018175675A1
WO2018175675A1 (PCT/US2018/023687)
Authority
WO
WIPO (PCT)
Prior art keywords
trainee
trainer
training
virtual
wearable device
Prior art date
Application number
PCT/US2018/023687
Other languages
English (en)
Inventor
Mareike KRITZLER
Iori MIZUTANI
Original Assignee
Siemens Aktiengesellschaft
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Aktiengesellschaft filed Critical Siemens Aktiengesellschaft
Publication of WO2018175675A1

Links

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00 Electrically-operated educational appliances
    • G09B 5/08 Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G09B 5/14 Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations with provision for individual teacher-student communication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Definitions

  • the present disclosure is directed, in general, to the use of augmented reality technologies for training applications and more specifically, to a wearable augmented reality training system for remote and group training.
  • Augmented Reality is a technology that displays digital information in the field of view of a user, mostly using head-mounted devices such as Microsoft's HoloLens® or Meta Company's Meta 2®. AR tracks the environment of a user, and in this way, a user is enabled to place virtual objects and scenes at fixed 3D positions in any given real-world environment.
  • a skilled workforce is essential for a successful and productive industrial operation. In order to ensure a skilled and educated workforce, training of new workers is necessary.
  • a method of training a first trainee includes positioning a first wearable device on the first trainee, positioning the first trainee in a first training space, positioning a second wearable device on a trainer, positioning the trainer in a second training space, and operating a computer to generate elements within a virtual space.
  • the method also includes communicating the virtual space and the elements to each of the first wearable device and the second wearable device, integrating the virtual space and the elements, the first training space, and the second training space, and utilizing the elements to facilitate the completion of a training task by the trainee under the guidance of the trainer.
  • a system that facilitates a trainer training a trainee to perform a task includes a first wearable device operable to project 3D holographic virtual elements that make up a virtual space, the first wearable device worn by the trainer and a second wearable device operable to project 3D holographic virtual elements that correspond to the virtual space, the second wearable device worn by the trainee.
  • a physical object is positioned in a field of view of one of the trainer and the trainee, and a computer is operable to integrate the physical object into the virtual space by generating a representation of the physical object.
  • the computer integrates the first wearable device and the second wearable device so that the first wearable device and the second wearable device display the virtual space, the 3D holographic virtual elements, and one of the physical object and the representation of the physical object from the perspective of the trainee and the trainer.
  • Fig. 1 is a schematic illustration of an interactive and collaborative augmented reality training system.
  • Fig. 2 is a trainer's view of a trainee using a virtual tool to perform a repair step on a virtual robot.
  • Fig. 3 is the trainee's view of the virtual tools and the virtual robot of Fig. 2.
  • Fig. 4 is a trainer and a trainee view of the same virtual space and virtual tools from their respective perspectives.
  • Fig. 5 is a trainer's view of a trainee performing a task when the trainer is remotely located.
  • The terms "associated with" and "associated therewith," as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like.
  • Although the terms "first", "second", "third", and so forth may be used herein to refer to various elements, information, functions, or acts, these elements, information, functions, or acts should not be limited by these terms. Rather, these numeral adjectives are used to distinguish different elements, information, functions, or acts from each other. For example, a first element, information, function, or act could be termed a second element, information, function, or act, and, similarly, a second element, information, function, or act could be termed a first element, information, function, or act, without departing from the scope of the present disclosure.
  • The term "adjacent to" may mean: that an element is relatively near to but not in contact with a further element; or that the element is in contact with the further element, unless the context clearly indicates otherwise.
  • The phrase "based on" is intended to mean "based, at least in part, on" unless explicitly stated otherwise.
  • an AR (Augmented Reality) system 10 (sometimes referred to as Mixed Reality or MR) can be used to enhance collaborative and interactive training between a trainer 15 and one or more trainees 20.
  • the system 10 and method described herein allow trainers 15 to train trainees 20 with 3D virtual representations of real-world objects as well as with real-world objects.
  • Fig. 1 schematically illustrates a trainer 15 and a trainee 20 positioned in the same physical training space 25 (shown in Fig. 2) and wearing head-mounted devices 30, 35 such as Microsoft's HoloLens® or Meta Company's Meta 2®.
  • the head-mounted devices 30, 35 operate to project 3D holographic elements or objects (objects and elements are used interchangeably) that define a virtual training space into a user's field of view.
  • the head-mounted devices 30, 35 also allow the wearer or user to see the actual physical environment around them such that the 3D objects or elements are projected onto or into the actual physical environment.
  • the head-mounted devices 30, 35 can also project 2D objects to one or both users.
  • Training is generally centered on fixing or building (assembly, disassembly, operation, maintenance) an object 40.
  • the object 40 is a robot or robot arm.
  • the object 40 is either a physical object 40a (i.e., it is a real object positioned in the room 25) or it is a virtual object 40b that is generated by the head-mounted devices 30, 35.
  • a computer 45 that is separate from the head-mounted devices 30, 35 generates one or more virtual objects 40b and/or facilitates communication between head- mounted devices 30, 35.
  • typical head-mounted devices 30, 35 include a computer that is operable to run programs or apps that facilitate their operation.
  • the computer 45, if utilized, acts as a server between the head-mounted devices 30, 35 and/or facilitates communication between them.
  • the term computer includes any computer or combinations of computers that operate to perform a desired task or function.
  • the computer is part of one of the head-mounted devices 30, 35 or is made up of computers in both head-mounted devices 30, 35 working together. Additionally, computers external to the head-mounted devices 30, 35 can work alone, or with computers in the head-mounted devices 30, 35 to perform the desired tasks or functions.
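The flexible division of computing just described (the virtual space may be maintained by a standalone computer 45, by one head-mounted device, or by several working together) can be sketched as a minimal state-synchronization server. The disclosure does not specify any protocol; every name below is an illustrative assumption.

```python
import json


class VirtualSpaceServer:
    """Illustrative sketch only: a server keeping the authoritative state
    of the virtual space and relaying updates to connected head-mounted
    devices. Not the disclosed implementation."""

    def __init__(self):
        self.elements = {}   # element id -> properties (pose, type, ...)
        self.devices = {}    # device id -> queue of pending JSON messages

    def connect(self, device_id):
        # A newly connected device receives the full current state.
        self.devices[device_id] = [
            json.dumps({"op": "sync", "elements": self.elements})
        ]

    def update_element(self, sender_id, element_id, properties):
        # Apply the change, then relay it to every other device.
        self.elements[element_id] = properties
        msg = json.dumps({"op": "update", "id": element_id, "props": properties})
        for device_id, queue in self.devices.items():
            if device_id != sender_id:
                queue.append(msg)

    def poll(self, device_id):
        # Drain and return the device's pending messages.
        pending, self.devices[device_id] = self.devices[device_id], []
        return pending
```

For example, when the trainer's device places the virtual robot arm, the trainee's device picks up the update on its next poll while the trainer's own queue stays unchanged.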
  • the head-mounted display 30, 35 integrates the object 40a into the virtual space.
  • the integration links the virtual space to the actual environment with the physical object 40a defining a perspective orientation with respect to both environments.
  • the object 40a defines the perspective of each user 15, 20 and allows the head-mounted displays 30, 35 to display both actual objects 40a and virtual objects 40b more accurately.
  • the head-mounted displays 30, 35 build and position the object 40b in the virtual environment. In some situations, only one of the trainer 15 or the trainee 20 is in the room 25 with the physical object 40a, with the other being remotely located. In this situation, the physical object 40a is rendered as a virtual object 40b for the remote user.
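The anchoring idea above, where the physical object 40a defines a common reference frame so that each head-mounted device places the same virtual element consistently in its own room, reduces to a coordinate transform. The following is a minimal 2D sketch under invented names (planar position plus heading only; a real system would use full 3D poses):

```python
import math


def world_to_device(element_xy, anchor_xy, anchor_yaw):
    """Express an element position, given in the shared anchor frame,
    in a device's local frame, where that device has located the anchor
    at `anchor_xy` with heading `anchor_yaw` (radians).

    Because both devices agree on where the anchor (e.g. the physical
    object 40a) sits, the same virtual element lands at a consistent
    place for every user. 2D illustration only."""
    x, y = element_xy
    c, s = math.cos(anchor_yaw), math.sin(anchor_yaw)
    # Rotate by the anchor's heading, then translate to its position.
    return (anchor_xy[0] + c * x - s * y,
            anchor_xy[1] + s * x + c * y)
```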
  • Virtual elements 50 (sometimes referred to as virtual tools or tools) are generated by the head-mounted displays 30, 35 for use in the training.
  • Virtual tools 50 could include virtual screwdrivers, pliers, wrenches, hammers, etc.
  • physical tools could also be used with a physical object 40a or a virtual object 40b if desired.
  • the trainer 15 establishes a task to be completed.
  • the task is displayed in the trainee's field of view as a 2D note 55. It is preferable that the 2D note 55 be pinned to a fixed position in the trainee's field of view so that it moves with the trainee 20.
  • the task could be pinned to the object 40 or could be hidden by the trainee 20.
  • the trainer 15 can provide one or more tools as virtual elements 50 for the trainee 20 to use. In the illustrated construction, four virtual tools 50 are provided.
  • the training can be tailored to the level of the trainee 20.
  • the task could be broad (e.g., fix the robot) or could be a narrower step-by-step process that guides the trainee 20.
  • the virtual tools 50 could be provided as a full tool kit including virtual tools 50 that are needed as well as virtual tools 50 that are not, or could be provided individually for each step to aid the trainee 20.
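A training task structured this way, broad or step-by-step, with either a full kit or per-step tools, might be modeled as simple data. The classes and field names below are illustrative assumptions, not taken from the disclosure:

```python
from dataclasses import dataclass, field


@dataclass
class Step:
    instruction: str
    required_tools: list


@dataclass
class TrainingTask:
    title: str
    steps: list
    full_kit: list = field(default_factory=list)

    def tools_for(self, step_index, guided=True):
        # Guided mode hands the trainee only the tools needed for the
        # current step; otherwise the whole kit is offered, needed and
        # decoy tools alike, matching the tailoring described above.
        if guided:
            return self.steps[step_index].required_tools
        return self.full_kit


# Hypothetical example task: fixing the virtual robot arm.
task = TrainingTask(
    title="Fix the robot arm",
    steps=[Step("Loosen the joint cover", ["screwdriver"]),
           Step("Extract the worn gear", ["pliers"])],
    full_kit=["screwdriver", "pliers", "wrench", "hammer"],
)
```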
  • the trainee 20 can ask the trainer 15 questions through the head-mounted devices 30, 35 and the trainer 15 can answer the questions through the head- mounted devices 30, 35.
  • the trainer 15 can place virtual notes 55 or indicators 60 on or near the object 40 to further guide the trainee 20.
  • the use of text questioning within the virtual space is advantageous because it is possible to record the training session on a recording device 65, thereby capturing the text questions as well. The trainee can also see the trainer's body language.
  • Fig. 2 illustrates the trainer's view 70 of a trainee 20 performing a task with both trainer 15 and trainee 20 in the same training space 25 or room.
  • the object 40 is a physical object 40a but a virtual object 40b is also provided. While this would not be typical, it does illustrate both ways of displaying the object 40.
  • the virtual tools 50 are visible and appear to be floating in the environment.
  • the trainee 20 is grasping a pair of pliers while the trainer 15 has placed indicators 60, in the form of arrows, on the object 40a, 40b to show the trainee 20 where to use the tool 50.
  • the trainee's view 75 of the arrangement of Fig. 2 is shown in Fig. 3 with the trainee 20 focused on the virtual object 40b rather than the physical object 40a.
  • the available tools 50 are visible and again appear to be floating in front of the object 40b.
  • the indicator 60 placed by the trainer 15 is clearly visible as a large arrow pointing at the area where the trainer 15 wants the trainee 20 to focus.
  • Fig. 4 illustrates the view through the head-mounted devices 30, 35 when the trainer 15 and the trainee 20 are in training spaces 25 that are separated from one another and there is no physical object 40a.
  • the head-mounted devices 30, 35 generate a virtual object 40b (the robot arm) and a virtual tool 50 in the form of a screwdriver.
  • Both the trainer 15 and the trainee 20 see the virtual objects 50 but the perspectives are slightly different, as if the trainer 15 is watching over the shoulder of the trainee 20. Of course, the identical perspective could be provided to both the trainer 15 and the trainee 20 as well.
  • the trainer 15 has placed an indicator 60 near the screwdriver to aid the trainee 20. Because the trainer 15 and the trainee 20 are in different locations, the background (i.e., the physical environment) will be different.
  • Fig. 5 illustrates the situation of Fig. 4 after the trainer 15 has changed perspective.
  • the screwdriver is present as a virtual tool 50 that appears to be floating near the virtual object 40b.
  • the trainer 15 has placed a different indicator 60 on the environment to aid the trainee 20.
  • the trainee 20 is represented as a virtual person 80 reaching for the virtual tool 50.
  • the virtual image 80 of the trainee 20 is extrapolated from the image of the trainee's hand in the trainee's head-mounted device 35.
  • the trainee 20 would have a virtual view of the trainer 15 in some constructions.
  • the system 10 is flexible enough to allow multiple trainers 15 and/or multiple trainees 20 to simultaneously participate in a training session.
  • the most common situation would be a single trainer 15 and multiple trainees 20.
  • the trainer 15 would have the ability to communicate with each of the trainees 20 via the head-mounted devices 30, 35 either simultaneously as a group or individually.
  • the trainer 15 could communicate the task to the entire class of trainees 20 and then provide notes 55, virtual elements 50, or indicators 60 to individual trainees 20 as required to assist them, or to the entire class of trainees 20.
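The one-trainer, many-trainees communication pattern described above (a task broadcast to the whole class, or a targeted note 55 or indicator 60 for one trainee) can be sketched as simple message routing. All identifiers are invented for illustration:

```python
class TrainerChannel:
    """Illustrative sketch of one-trainer / many-trainees messaging:
    the trainer can address the entire class or a single trainee."""

    def __init__(self, trainee_ids):
        self.inboxes = {tid: [] for tid in trainee_ids}

    def send(self, content, kind="note", to=None):
        # `to=None` broadcasts to every trainee; a trainee id targets
        # one person (e.g. an indicator for whoever is stuck).
        targets = self.inboxes if to is None else [to]
        for tid in targets:
            self.inboxes[tid].append((kind, content))
```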
  • the recording device 65 allows each training session to be recorded from the perspective of any party involved. Thus, trainees 20 could record and review their sessions at their leisure, since they do not need to hold an extra device in their hands.
  • trainers 15 and trainees 20 can see the same virtual representation of the object 40b as well as their real-world partner (trainer 15 / trainee 20).
  • Feedback given verbally or through gestures by the trainer 15 during a task is received by the trainee 20 in real time.
  • the ability to interact naturally through gestures and voice with a virtual object 40b and the trainer 15 or trainee 20 makes the training very immersive and intuitive. Since the training typically uses a virtual object 40b and/or virtual tools 50, real-world objects 40a cannot be damaged and tasks can be repeated as often as necessary. Also, if a real object is not available or is expensive, training can still be performed.
  • a trainee 20 can also switch their vision to the point of view of the trainer 15. Training sessions can be recorded as a sight-tracking video through the head-mounted device 30, 35 and the trainee 20 can replay the recording multiple times and learn from the corrections made by the trainer 15.
  • the trainer 15 can also leave virtual notes 55 that give correction or support at crucial steps.
  • the notes 55 will be stored in the individual session for the trainee 20 and the task can be revisited by the trainee 20 before repeating the same task.
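Storing the trainer's notes 55 with the individual session, so the trainee 20 can revisit them before repeating the task, could look like the following sketch (all identifiers hypothetical):

```python
class TrainingSession:
    """Illustrative per-session note store: notes left by the trainer
    stay with the session so the trainee can review them before
    repeating the same task."""

    def __init__(self, trainee_id, task_title):
        self.trainee_id = trainee_id
        self.task_title = task_title
        self.notes = []

    def add_note(self, step, text):
        # Attach a trainer note to a specific step of the task.
        self.notes.append({"step": step, "text": text})

    def notes_for(self, step):
        # Retrieve all notes left for one step, for later review.
        return [n["text"] for n in self.notes if n["step"] == step]
```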
  • two or more trainees 20 can learn collaboratively and work on exercises together. Scenarios include but are not limited to: maintenance, repair, replacement, replenishment, and exchange of parts.
  • the trainer 15 places the robotic arm with voice command and/or gestures in the training room 25 as an object 40b and starts a training session for the trainee 20.
  • Either the trainer 15 describes the task of the selected scenario verbally to the trainee 20 or the trainee 20 has access to a written task description in a note 55 that is visible in the head-mounted device 35 of the trainee 20.
  • Both trainer 15 and trainee 20 share the same view of the virtual object 40b and can walk around the object 40b, which allows them to see the object 40b or problem from various angles.
  • the trainer 15 can see in his or her head-mounted device 30 exactly what the trainee 20 is doing and how he/she is approaching the task, as well as which tools 50 the trainee 20 has selected. Different levels of support or guidance can be offered to the trainee 20, depending on his or her progress with the task or his or her level of experience or training.
  • Tools 50 can be either selected from a menu or inserted by voice command.
  • the trainer 15 can give feedback at any time during the task. If the trainee 20 is stuck, the trainer 15 can directly show the trainee 20 what to do or which tool 50 to use.
  • the trainee 20 can experience the performance of the task through the eyes of the trainer 15 by directly streaming the head-mounted device 30 of the trainer 15 to the trainee 20 while the trainer 15 performs the task. Both trainer 15 and trainee 20 can see the virtual representation of the robotic arm 40b, the inserted tools 50, and the real person including any actions and movements, such as pointing or rotation.
  • the trainee 20 can record the session with the head-mounted device 35 and watch the recording as part of the learning process.
  • the system 10 described herein enables two or more users to conduct collaborative and interactive training sessions for industrial use cases such as repair, replacement or maintenance.
  • Augmented Reality is utilized to display virtual objects 40b, machines 40a or industrial factory floors as well as tools 50 and instructions for virtual training sessions.
  • the terms "element" or "object" are sometimes used interchangeably and should be interpreted to refer to any virtual object generated by the computer 45 or the head-mounted devices 30, 35.
  • Elements or objects can include 2D, 3D, and holographic objects or elements that are placed in the training space or in view of the head-mounted devices 30, 35.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method of training a first trainee includes positioning a first wearable device on the first trainee, positioning the first trainee in a first training space, positioning a second wearable device on a trainer, positioning the trainer in a second training space, and operating a computer to generate elements within a virtual space. The method also includes communicating the virtual space and the elements to each of the first wearable device and the second wearable device, integrating the virtual space and the elements, the first training space, and the second training space, and utilizing the elements to facilitate the completion of a training task by the trainee under the guidance of the trainer.
PCT/US2018/023687 2017-03-23 2018-03-22 Collaborative and interactive augmented reality training WO2018175675A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762475279P 2017-03-23 2017-03-23
US62/475,279 2017-03-23

Publications (1)

Publication Number Publication Date
WO2018175675A1 (fr) 2018-09-27

Family

ID=61913611

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/023687 WO2018175675A1 (fr) 2017-03-23 2018-03-22 Collaborative and interactive augmented reality training

Country Status (1)

Country Link
WO (1) WO2018175675A1 (fr)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090187389A1 (en) * 2008-01-18 2009-07-23 Lockheed Martin Corporation Immersive Collaborative Environment Using Motion Capture, Head Mounted Display, and Cave
US20090213114A1 (en) * 2008-01-18 2009-08-27 Lockheed Martin Corporation Portable Immersive Environment Using Motion Capture and Head Mounted Display
EP2600331A1 (fr) * 2011-11-30 2013-06-05 Microsoft Corporation Training and education using reality-visualization headsets

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110215373A (zh) * 2019-06-04 2019-09-10 北京虚实空间科技有限公司 Immersive-vision-based training system and method
CN110264818A (zh) * 2019-06-18 2019-09-20 国家电网有限公司 Augmented-reality-based training method for dismounting and assembling a unit inlet valve
CN110264818B (zh) * 2019-06-18 2021-08-24 国家电网有限公司 Augmented-reality-based training method for dismounting and assembling a unit inlet valve
CN111369846A (zh) * 2020-04-29 2020-07-03 厦门奇翼科技有限公司 5D immersive interactive teaching space
US20210383716A1 (en) * 2020-06-09 2021-12-09 Silverback Consulting Group LLC Virtual certificate training and evaluation system using hands-on exercises

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18716786

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18716786

Country of ref document: EP

Kind code of ref document: A1