WO2018226452A1 - Apparatus and method for assessing and tracking user competency in augmented/virtual reality-based training in industrial automation systems and other systems - Google Patents
- Publication number
- WO2018226452A1 (PCT/US2018/034887)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- commands
- actions
- user actions
- space
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 54
- 230000003190 augmentative effect Effects 0.000 title claims abstract description 27
- 238000012549 training Methods 0.000 title claims description 36
- 230000009471 action Effects 0.000 claims abstract description 101
- 230000000007 visual effect Effects 0.000 claims abstract description 21
- 238000004458 analytical method Methods 0.000 claims abstract description 10
- 238000012545 processing Methods 0.000 claims description 33
- 238000010200 validation analysis Methods 0.000 claims description 13
- 238000004519 manufacturing process Methods 0.000 claims description 5
- 230000004931 aggregating effect Effects 0.000 claims description 3
- 238000007726 management method Methods 0.000 claims description 3
- 230000006870 function Effects 0.000 description 11
- 238000004891 communication Methods 0.000 description 10
- 238000003860 storage Methods 0.000 description 10
- 238000005259 measurement Methods 0.000 description 7
- 230000008569 process Effects 0.000 description 7
- 230000002452 interceptive effect Effects 0.000 description 6
- 238000005516 engineering process Methods 0.000 description 5
- 230000008901 benefit Effects 0.000 description 4
- 230000001419 dependent effect Effects 0.000 description 3
- 238000011161 development Methods 0.000 description 3
- 238000013507 mapping Methods 0.000 description 3
- 230000003287 optical effect Effects 0.000 description 3
- 230000004075 alteration Effects 0.000 description 2
- 238000004590 computer program Methods 0.000 description 2
- 230000003993 interaction Effects 0.000 description 2
- 238000012423 maintenance Methods 0.000 description 2
- 230000007246 mechanism Effects 0.000 description 2
- 230000002085 persistent effect Effects 0.000 description 2
- 230000000704 physical effect Effects 0.000 description 2
- 230000001154 acute effect Effects 0.000 description 1
- 238000003491 array Methods 0.000 description 1
- 230000003542 behavioural effect Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 230000001934 delay Effects 0.000 description 1
- 238000009826 distribution Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 230000014759 maintenance of location Effects 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 238000012552 review Methods 0.000 description 1
- 238000004088 simulation Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 238000012800 visualization Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/18—Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form
- G05B19/409—Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form characterised by using manual data input [MDI] or by using control panel, e.g. controlling functions with the panel; characterised by control panel details or by setting parameters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
- G06Q10/06393—Score-carding, benchmarking or key performance indicator [KPI] analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
- G06Q10/06398—Performance of employee with respect to a job function
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/02—Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B9/00—Simulators for teaching or training purposes
Definitions
- This disclosure generally relates to augmented reality and virtual reality systems. More specifically, this disclosure relates to an apparatus and method for assessing and tracking user competency in augmented/virtual reality-based training in industrial automation systems and other systems.
- Augmented reality and virtual reality technologies are advancing rapidly and becoming more and more common in various industries.
- Augmented reality generally refers to technology in which computer-generated content is superimposed over a real-world environment.
- Examples of augmented reality include games that superimpose objects or characters over real-world images and navigation tools that superimpose information over real-world images.
- Virtual reality generally refers to technology that creates an artificial simulation or recreation of an environment, which may or may not be a real-world environment.
- An example of virtual reality includes games that create fantasy or alien environments that can be explored by users.
- This disclosure provides an apparatus and method for assessing and tracking user competency in augmented/virtual reality-based training in industrial automation systems and other systems.
- In a first embodiment, a method includes receiving one or more records containing commands, an association of the commands with visual objects in an augmented reality/virtual reality (AR/VR) space, and an AR/VR environment setup.
- the commands correspond to user actions taken in the AR/VR space.
- the method also includes analyzing the user actions based on the one or more records and assessing the user actions based on the analysis.
- In a second embodiment, an apparatus includes at least one processing device configured to receive one or more records containing commands, an association of the commands with visual objects in an AR/VR space, and an AR/VR environment setup.
- the commands correspond to user actions taken in the AR/VR space.
- the at least one processing device is also configured to analyze the user actions based on the one or more records and assess the user actions based on the analysis.
- In a third embodiment, a method includes receiving data defining user actions associated with an AR/VR space. The method also includes translating the user actions into associated commands and identifying associations of the commands with visual objects in the AR/VR space. The method further includes aggregating the commands, the associations of the commands with the visual objects, and an AR/VR environment setup into at least one record. In addition, the method includes transmitting the at least one record for assessment of the user actions.
- an apparatus includes at least one processing device configured to perform the method of the third embodiment or any of its dependent claims.
- a non-transitory computer readable medium contains instructions that when executed cause at least one processing device to perform the method of the first embodiment or any of its dependent claims.
- a non-transitory computer readable medium contains instructions that when executed cause at least one processing device to perform the method of the third embodiment or any of its dependent claims.
- FIGURE 1 illustrates an example architecture for capturing user actions in augmented/virtual reality and assessing user competency according to this disclosure
- FIGURE 2 illustrates an example device that supports capturing user actions in augmented/virtual reality or assessing user competency according to this disclosure
- FIGURES 3 and 4 illustrate example methods for capturing user actions in augmented/virtual reality and assessing user competency according to this disclosure.
- FIGURES 1 through 4 discussed below, and the various embodiments used to describe the principles of the present invention in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the invention. Those skilled in the art will understand that the principles of the invention may be implemented in any type of suitably arranged device or system.
- trainee competency is often assessed using questionnaires, pictorial/image/Flash-based evaluations, or multiple objective questions. These standard assessment techniques no longer suffice because they do not validate a trainee's skills or performance interactively "on the job." Also, an assessment of a user's abilities is often based on end results, and it can be difficult to suggest improvement opportunities quickly.
- This disclosure provides techniques for tracking and assessing an industrial automation user's or other user's actions in an augmented/virtual environment, which overcomes challenges with respect to tracking unwanted steps, tracking impacts on underlying industrial systems or other systems, assessing intermediate steps, performing behavioral assessments, and identifying responses to panic situations or other situations.
- this disclosure describes a portable file format that captures content such as user inputs, data formats, and training setups.
- the portable file format allows for easier storage, computation, and distribution of content and addresses technical constraints with respect to space, computation, and bandwidth.
- FIGURE 1 illustrates an example architecture 100 for capturing user actions in augmented/virtual reality and assessing user competency according to this disclosure.
- the architecture 100 includes a training environment 102, which denotes a visualization layer that allows interaction with an augmented reality/virtual reality (AR/VR) space.
- the training environment 102 can include one or more end user devices, such as at least one AR/VR headset 104, at least one computing device 106, or at least one interactive AR/VR system 108.
- Each headset 104 generally denotes a device that is worn by a user and that displays an AR/VR space.
- the headset 104 in FIGURE 1 is a MICROSOFT HOLOLENS device, although any other suitable AR/VR device could be used.
- Each computing device 106 generally denotes a device that processes data to present an AR/VR space (although not necessarily in a 3D format) to a user.
- Each computing device 106 denotes any suitable computing device, such as a desktop computer, laptop computer, tablet computer, or smartphone.
- Each interactive AR/VR system 108 includes a headset and one or more user input devices, such as interactive or smart gloves. Although not shown, one or more input devices could also be used with the headset 104 or the computing device 106.
- the architecture 100 also includes at least one processor, such as in a server 110, that is used to record content.
- the server 110 generally denotes a computing device that receives content from the training environment 102 and records and processes the content.
- the server 110 includes various functions or modules to support the recording and processing of training or other interactive content. Each of these functions or modules could be implemented in any suitable manner, such as with software/firmware instructions executed by one or more processors.
- the server 110 could be positioned locally with or remote from the training environment 102.
- the server 110 includes a user input receiver 112, which receives, processes, and filters user inputs made by the user.
- the user inputs could include any suitable inputs, such as gestures made by the user, voice commands or voice annotations spoken by the user, textual messages provided by the user, or pointing actions taken by the user using a pointing device (such as a smart glove). Any other or additional user inputs could also be received.
- the user inputs can be filtered in any suitable manner and are output to an input translator 114. To support the use of the architecture 100 by a wide range of users, input variants (like voice/text in different languages) could be supported.
- the user input receiver 112 includes any suitable logic for receiving and processing user inputs.
- the input translator 114 translates the various user inputs into specific commands by referring to a standard action grammar reference 116.
- the grammar reference 116 represents an actions-to-commands mapping dictionary that associates different user input actions with different commands.
- the grammar reference 116 could associate certain spoken words, text messages, or physical actions with specific commands.
- the grammar reference 116 could support one or multiple possibilities for commands where applicable, such as when different commands may be associated with the same spoken words or text messages but different physical actions.
- the grammar reference 116 includes any suitable mapping or other association of actions and commands.
- the input translator 114 includes any suitable logic for identifying commands associated with received user inputs.
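The actions-to-commands mapping described above could be sketched as a simple lookup table. The following Python sketch is illustrative only; the key structure, command names, and fallback behavior are assumptions, not the patent's actual grammar reference format.

```python
# Hypothetical grammar reference: maps (input type, input content) pairs
# to system-understandable commands. All names are illustrative.
GRAMMAR_REFERENCE = {
    ("voice", "open valve"): "CMD_OPEN_VALVE",
    ("voice", "close valve"): "CMD_CLOSE_VALVE",
    ("gesture", "tap"): "CMD_SELECT",
    ("gesture", "swipe_left"): "CMD_PREVIOUS",
    ("text", "ack alarm"): "CMD_ACK_ALARM",
}

def translate(input_type: str, payload: str) -> str:
    """Map a filtered user input to a command via the grammar reference."""
    key = (input_type, payload.strip().lower())
    # Unmapped inputs fall through to a sentinel so they can still be recorded.
    return GRAMMAR_REFERENCE.get(key, "CMD_UNRECOGNIZED")

print(translate("voice", "Open Valve"))   # CMD_OPEN_VALVE
print(translate("gesture", "pinch"))      # CMD_UNRECOGNIZED
```

A dictionary keyed on both input type and content allows the same spoken phrase and text message to map to the same or different commands, as the grammar reference permits.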
- the input translator 114 outputs identified commands to an aggregator 118.
- the aggregator 118 associates the commands with visual objects in the AR/VR space being presented to the user into one or more records 120.
- the aggregator 118 also embeds an AR/VR environment setup into the one or more records 120.
- the AR/VR environment setup can define what visual objects are to be presented in the AR/VR space.
- the records 120 therefore associate specific commands (which were generated based on user inputs) with specific visual objects in the AR/VR space as defined by the environment setup.
- the aggregator 118 includes any suitable logic for aggregating data.
- the records 120 are created in a portable file format, which allows the records 120 to be used by various other devices.
- the data in the records 120 can be processed to assess the user's skills and identify whether additional training might be needed. This can be accomplished without requiring the transport of larger data files like video files.
- the portable file format could be defined in any suitable manner, such as by using XML or JSON.
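As one possible illustration using JSON, a record could aggregate the commands, their visual-object associations, and the environment setup into a single small document. The field names below are assumptions for illustration, not the patent's defined schema.

```python
import json
import time

def build_record(actions, environment_setup):
    """Aggregate translated commands (each tagged with its visual object)
    and the AR/VR environment setup into one portable JSON record."""
    record = {
        "version": "1.0",                      # hypothetical schema version
        "timestamp": time.time(),              # when the record was prepared
        "environment_setup": environment_setup,
        "actions": actions,
    }
    return json.dumps(record)

record = build_record(
    actions=[{"command": "CMD_OPEN_VALVE", "visual_object": "valve_01"}],
    environment_setup={"scene": "compressor_station", "objects": ["valve_01"]},
)
```

Because the record carries only commands and setup metadata rather than video, it stays small enough to store and transmit cheaply, which is the stated motivation for the portable format.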
- the records 120 could be used in various ways.
- the records 120 are provided (such as via a local intranet or a public network like the Internet) to a cloud computing environment 122, which implements various functions to support analysis of the records 120 and assessment of the user.
- the analysis and assessment functions could be implemented in other ways and need not be performed by a cloud computing environment.
- the analysis and assessment functions could be implemented using the server 110.
- an assessment service application programming interface (API) 124 is used to receive incoming records 120.
- the API 124 denotes a web interface that allows uploading of records 120.
- the records 120 received through the API 124 can be stored in a database 126 for analysis.
- Records 120 from the API 124 or the database 126 can be provided to an action validator 128, which has access to one or more sets of validation rules 130. Different sets of validation rules 130 could be provided, such as for different types of users, different types of equipment, or different types of operational scenarios.
- the validation rules 130 can therefore be configurable in order to provide the desired functionality based on the user actions being evaluated.
- the action validator 128 processes one or more records 120 based on the appropriate set of validation rules 130.
- the action validator 128 can also receive and use feedback from system software 132, which generally denotes software used to control one or more industrial processes (such as EXPERION software from HONEYWELL INTERNATIONAL INC. or safety system software) or other processes.
- the feedback can be used to verify whether an expected or desired outcome was achieved by the user. Based on this information, the action validator 128 determines a result for each action or group of actions taken by the user and identified in the record(s) 120. Example results could include correct, partially correct, wrong, invalid, or damaging.
- the action validator 128 includes any suitable logic for evaluating user actions.
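A minimal sketch of such a validator might apply a configurable rule set to each recorded action and label it with one of the result categories named above (correct, wrong, invalid, damaging). The rule structure and object names here are assumptions for illustration only.

```python
# Hypothetical per-object validation rules: which commands are expected
# and which would damage equipment in the simulated scenario.
VALIDATION_RULES = {
    "valve_01": {
        "expected": ["CMD_CLOSE_VALVE"],
        "damaging": ["CMD_OPEN_VALVE"],  # e.g. opening during maintenance
    },
}

def validate(action):
    """Label one recorded action with a result category."""
    rules = VALIDATION_RULES.get(action["visual_object"])
    if rules is None:
        return "invalid"        # action targets an object with no rules
    if action["command"] in rules["expected"]:
        return "correct"
    if action["command"] in rules.get("damaging", []):
        return "damaging"
    return "wrong"
```

Different rule sets could be swapped in per user type, equipment type, or scenario, matching the configurability described above; feedback from system software could further refine the result, which this sketch omits.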
- An assessment engine 134 uses the results from the action validator 128 to generate an assessment for the user.
- The assessment could take any suitable form, such as a pass/fail score for each action or collection of actions, reward points, or any other measurement for each action or collection of actions.
- the assessment engine 134 includes any suitable logic for assessing a user's competencies.
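One possible way to turn per-action validation results into the pass/fail scores and reward points mentioned above is a simple weighted tally. The point weights and pass threshold below are illustrative assumptions, not values from the patent.

```python
# Hypothetical point weights per validation result category.
POINTS = {"correct": 10, "partially correct": 5, "wrong": 0,
          "invalid": 0, "damaging": -10}

def assess(results, pass_threshold=0.8):
    """Convert a list of per-action results into a score, reward points,
    and a pass/fail decision."""
    earned = sum(POINTS[r] for r in results)
    possible = POINTS["correct"] * len(results)
    score = earned / possible if possible else 0.0
    return {"score": score, "points": earned,
            "passed": score >= pass_threshold}
```

Negative points for damaging actions reflect that the architecture explicitly tracks unwanted steps and impacts on the underlying system, not just end results.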
- the measurements from the assessment engine 134 can be provided to a learning management system (LMS) 136.
- the user can be enrolled in the LMS 136 for competency development, and the LMS 136 can use the measurements to identify areas where the user is competent and areas where the user may require further training.
- An analytics engine 138 could use the measurements from the assessment engine 134, along with past historical performance of the user over a period of time, to gain insights into the user's competencies. The analytics engine 138 could then recommend training courses to help improve the user's skills.
- The LMS 136 includes any suitable logic for interacting with and providing information to users for training or other purposes.
- the analytics engine 138 includes any suitable logic for analyzing user information and identifying training information or other information to be provided to the user.
- a user initiates a training assessment module or other function in an AR/VR application, on a mobile device, or on any other suitable device.
- the application begins recording and sends the user input action details (such as gestures, voice, and textual messages) to the user input receiver 112.
- the user input receiver 112 detects and tracks the user input actions (such as gestures, voice, textual messages, and pointing device actions), filters the actions as needed, and passes the selected/filtered actions to the input translator 114.
- the input translator 114 converts the user actions into system-understandable commands by referring to the grammar reference 116, and the input translator 114 passes these commands to the aggregator 118.
- the aggregator 118 associates the system-understandable commands with visual objects, embeds the AR/VR environment setup, and prepares one or more records 120 in a portable file format, which identifies the user actions against a task being assessed.
- the records 120 are transmitted for training assessment.
- the API 124 stores incoming records 120 in the database 126 for later review or reassessment.
- the records 120 are also passed from the API 124 or the database 126 to the action validator 128 for validation.
- the action validator 128 uses the validation rules 130 to validate each action or group of actions taken by the user.
- the action validator 128 can optionally use feedback from the system software 132.
- the action validator 128 determines a result for each step or collection of steps taken by the user.
- the results from the action validator 128 are provided to the assessment engine 134, which determines pass/fail scores, rewards points, or other measurements.
- the user is informed of the measurements through the LMS 136.
- the measurements can be used by the analytics engine 138 to gain insights into the user's competencies by analyzing his or her past performance over a period of time and to recommend any relevant training courses to "upskill" the user.
- the architecture 100 can be used to capture and store users' actions in AR/VR environments.
- data associated with the AR/VR environments can be easily captured, stored, and distributed in the records 120.
- Other devices and systems can use the records 120 to analyze the users' actions and possibly recommend training for the users.
- the records 120 can occupy significantly less space in memory and require significantly less bandwidth for transmission, reception, storage, and analysis compared to alternatives such as video/image recording.
- control and safety systems and related instrumentation used in industrial plants are often very complex in nature. It may take a lengthy period of time (such as more than five years) to train new system maintenance personnel to become proficient in managing plant and system upsets independently. Combining such long training times with a growing number of experienced personnel retiring in the coming years means that industries are facing acute skill shortages and increased plant upsets due to the lack of experience and skill.
- Simulating control and safety system hardware in the AR/VR space, building dynamics of real hardware modules in virtual objects, and interfacing the AR/VR space with real supervisory systems can provide various benefits. For example, it can reduce or eliminate any dependency on real hardware for competency management. It can also "gamify" the learning of complex and mundane control and safety system concepts, which can help to keep trainees engaged. It can further decrease the time needed to become proficient in control and safety system maintenance through more hands-on practice sessions and higher retention of the training being imparted.
- FIGURE 1 illustrates one example of an architecture 100 for capturing user actions in augmented/virtual reality and assessing user competency
- the architecture 100 could support any number of training environments 102, headsets 104, computing devices 106, AR/VR systems 108, servers 110, or other components.
- the records 120 could be used in any other suitable manner.
- While the architecture 100 is described as being used with or including a training environment 102 and generating records 120, the architecture 100 could be used with or include any suitable environment 102 and be used to generate any suitable records 120 containing interactive content (whether or not used for training purposes).
- FIGURE 2 illustrates an example device 200 that supports capturing user actions in augmented/virtual reality or assessing user competency according to this disclosure.
- the device 200 could, for example, represent a device that implements the functionality of the server 110 in FIGURE 1 and/or the functionality of the cloud computing environment 122 or any of its components in FIGURE 1.
- the device 200 includes at least one processing device 202, at least one storage device 204, at least one communications unit 206, and at least one input/output (I/O) unit 208.
- the processing device 202 executes instructions that may be loaded into a memory 210, such as instructions that (when executed by the processing device 202) implement the functions of the server 110 and/or the cloud computing environment 122 or any of its components.
- the processing device 202 includes any suitable number(s) and type(s) of processors or other devices in any suitable arrangement.
- Example types of processing devices 202 include microprocessors, microcontrollers, digital signal processors, field programmable gate arrays, application specific integrated circuits, and discrete circuitry.
- the memory 210 and a persistent storage 212 are examples of storage devices 204, which represent any structure(s) capable of storing and facilitating retrieval of information (such as data, program code, and/or other suitable information on a temporary or permanent basis).
- the memory 210 may represent a random access memory or any other suitable volatile or non-volatile storage device(s).
- the persistent storage 212 may contain one or more components or devices supporting longer-term storage of data, such as a read only memory, hard drive, Flash memory, or optical disc.
- the communications unit 206 supports communications with other systems or devices.
- the communications unit 206 could include a network interface card or a wireless transceiver facilitating communications over a wired or wireless network (such as a local intranet or a public network like the Internet).
- the communications unit 206 may support communications through any suitable physical or wireless communication link(s).
- the I/O unit 208 allows for input and output of data.
- the I/O unit 208 may provide a connection for user input through a keyboard, mouse, keypad, touchscreen, or other suitable input device.
- the I/O unit 208 may also send output to a display, printer, or other suitable output device.
- FIGURE 2 illustrates one example of a device 200 that supports capturing user actions in augmented/virtual reality or assessing user competency
- various changes may be made to FIGURE 2.
- computing devices come in a wide variety of configurations, and FIGURE 2 does not limit this disclosure to any particular computing device.
- FIGURES 3 and 4 illustrate example methods for capturing user actions in augmented/virtual reality and assessing user competency according to this disclosure.
- FIGURE 3 illustrates an example method 300 for capturing user actions in augmented/virtual reality
- FIGURE 4 illustrates an example method 400 for assessing user competency based on captured user actions in augmented/virtual reality.
- the methods 300 and 400 are described as being performed using the device 200 operating as the server 110 in FIGURE 1 (method 300) or as the cloud computing environment 122 or any of its components in FIGURE 1 (method 400).
- the methods 300 and 400 could be used with any suitable devices and in any suitable systems.
- a recording of user actions related to an AR/VR space is initiated at step 302.
- the user could be engaged in an AR/VR training session designed to identify the user's competency at performing one or more tasks or how the user responds to one or more situations.
- the user, a manager, or other personnel could initiate the recording before or after the user has initiated the AR/VR training session.
- Information defining an AR/VR environment setup is received at step 304.
- Information defining user actions associated with the AR/VR environment is received at step 306.
- This information is used to detect, track, and filter the user actions at step 308.
- This could include, for example, the processing device 202 of the server 110 processing the received information to identify distinct gestures, voice commands, voice annotations, or textual messages that occur.
- the user actions are translated into commands at step 310.
- Specific commands are associated with specific visual objects presented in the AR/VR space at step 312.
- At least one file is generated that contains the commands, the associations of the commands with the visual objects, and the AR/VR environment setup at step 314.
- the at least one file is output, stored, or used in some manner at step 316.
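The capture steps of method 300 (steps 306 through 316) can be sketched end to end as one small pipeline: filter raw inputs, translate them to commands, associate each command with its visual object, and emit one portable record. The event fields, grammar structure, and record layout below are illustrative assumptions.

```python
import json

def capture_session(raw_inputs, environment_setup, grammar):
    """Hypothetical end-to-end capture: inputs -> commands -> record."""
    actions = []
    for event in raw_inputs:
        # Translation step: look up the command for this input type/content.
        command = grammar.get((event["type"], event["payload"]))
        if command is None:
            continue  # filtering step: drop unrecognized inputs
        # Association step: tag the command with the targeted visual object.
        actions.append({"command": command,
                        "visual_object": event["target"]})
    # Aggregation step: bundle commands and environment setup into one record.
    return json.dumps({"environment_setup": environment_setup,
                       "actions": actions})
```

The resulting string is the "at least one file" that step 316 outputs, stores, or transmits for assessment.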
- At least one file associated with a user's actions in an AR/VR space is received at step 402.
- the record 120 could have been generated using the method 300 shown in FIGURE 3 and described above.
- Applicable validation rules are obtained at step 404. This could include, for example, the processing device 202 implementing the action validator 128 obtaining one or more sets of validation rules 130. The validation rules 130 could be selected in any suitable manner. Example selection criteria could include the type of activity being performed by the user in the AR/VR space, the type of user being evaluated, the type of equipment being simulated in the AR/VR space, or the type of operational scenario being simulated in the AR/VR space.
- One or more actions or group of actions identified by the received file are analyzed using the selected validation rules at step 406, and results assessing the user's actions are determined at step 408.
- the action validator 128 can use feedback, such as from one or more devices used for industrial process control, to determine whether the user's actions would have resulted in the desired outcome or result.
- the user can be informed of the results at step 410.
- the results can also be analyzed to determine whether the user might require or benefit from additional training at step 412, and the user can be informed of any additional training opportunities at step 414.
- FIGURES 3 and 4 illustrate examples of methods for capturing user actions in augmented/virtual reality and assessing user competency
- various changes may be made to FIGURES 3 and 4.
- steps in each figure could overlap, occur in parallel, occur in a different order or occur any number of times.
- Various functions described in this patent document are implemented or supported by a computer program that is formed from computer readable program code and that is embodied in a computer readable medium.
- The phrase "computer readable program code" includes any type of computer code, including source code, object code, and executable code.
- The phrase "computer readable medium" includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive, a compact disc (CD), a digital video disc (DVD), or any other type of memory.
- A “non-transitory” computer readable medium excludes wired, wireless, optical, or other communication links that transport transitory electrical or other signals.
- A non-transitory computer readable medium includes media where data can be permanently stored and media where data can be stored and later overwritten, such as a rewritable optical disc or an erasable storage device.
- The terms "application" and "program" refer to one or more computer programs, software components, sets of instructions, procedures, functions, objects, classes, instances, related data, or a portion thereof adapted for implementation in a suitable computer code (including source code, object code, or executable code).
- The term "communicate," as well as derivatives thereof, encompasses both direct and indirect communication.
- The term "or" is inclusive, meaning and/or.
- The phrase "associated with," as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, have a relationship to or with, or the like.
- The phrases "at least one of" and "one or more of," when used with a list of items, mean that different combinations of one or more of the listed items may be used, and only one item in the list may be needed. For example, "at least one of: A, B, and C" includes any of the following combinations: A, B, C, A and B, A and C, B and C, and A and B and C.
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Human Resources & Organizations (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Educational Administration (AREA)
- Strategic Management (AREA)
- Entrepreneurship & Innovation (AREA)
- Economics (AREA)
- Development Economics (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Quality & Reliability (AREA)
- General Business, Economics & Management (AREA)
- Game Theory and Decision Science (AREA)
- Marketing (AREA)
- Operations Research (AREA)
- Tourism & Hospitality (AREA)
- Automation & Control Theory (AREA)
- Manufacturing & Machinery (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Educational Technology (AREA)
- Processing Or Creating Images (AREA)
Abstract
A method includes receiving (402) one or more records (120) containing commands, an association of the commands with visual objects in an augmented reality/virtual reality (AR/VR) space, and an AR/VR environment setup. The commands correspond to user actions taken in the AR/VR space. The method also includes analyzing (406) the user actions based on the one or more records and assessing (408) the user actions based on the analysis. The one or more records could have a portable file format. The commands could correspond to one or more gestures made by a user, one or more voice commands or voice annotations spoken by the user, one or more textual messages provided by the user, and/or one or more pointing actions taken by the user using at least one pointing device.
Description
APPARATUS AND METHOD FOR ASSESSING AND TRACKING USER COMPETENCY IN AUGMENTED/VIRTUAL REALITY-BASED TRAINING IN INDUSTRIAL AUTOMATION SYSTEMS AND OTHER SYSTEMS
CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY CLAIM
[0001] This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application No. 62/517,006, U.S. Provisional Patent Application No. 62/517,015, and U.S. Provisional Patent Application No. 62/517,037, all filed on June 8, 2017. These provisional applications are hereby incorporated by reference in their entirety.
TECHNICAL FIELD
[0002] This disclosure generally relates to augmented reality and virtual reality systems. More specifically, this disclosure relates to an apparatus and method for assessing and tracking user competency in augmented/virtual reality-based training in industrial automation systems and other systems.
BACKGROUND
[0003] Augmented reality and virtual reality technologies are advancing rapidly and becoming more and more common in various industries. Augmented reality generally refers to technology in which computer-generated content is superimposed over a real-world environment. Examples of augmented reality include games that superimpose objects or characters over real-world images and navigation tools that superimpose information over real-world images. Virtual reality generally refers to technology that creates an artificial simulation or recreation of an environment, which may or may not be a real-world environment. An example of virtual reality includes games that create fantasy or alien environments that can be explored by users.
SUMMARY
[0004] This disclosure provides an apparatus and method for assessing and tracking user competency in augmented/virtual reality-based training in industrial automation systems and other systems.
[0005] In a first embodiment, a method includes receiving one or more records containing commands, an association of the commands with visual objects in an augmented reality/virtual reality (AR/VR) space, and an AR/VR environment setup. The commands correspond to user actions taken in the AR/VR space. The method also includes analyzing the user actions based on the one or more records and assessing the user actions based on the analysis.
[0006] In a second embodiment, an apparatus includes at least one processing device configured to receive one or more records containing commands, an association of the commands with visual objects in an AR/VR space, and an AR/VR environment setup. The commands correspond to user actions taken in the AR/VR space. The at least one processing device is also configured to analyze the user actions based on the one or more records and assess the user actions based on the analysis.
[0007] In a third embodiment, a method includes receiving data defining user actions associated with an AR/VR space. The method also includes translating the user actions into associated commands and identifying associations of the commands with visual objects in the AR/VR space. The method further includes aggregating the commands, the associations of the commands with the visual objects, and an AR/VR environment setup into at least one record. In addition, the method includes transmitting the at least one record for assessment of the user actions.
[0008] In a fourth embodiment, an apparatus includes at least one processing device configured to perform the method of the third embodiment or any of its dependent claims. In a fifth embodiment, a non-transitory computer readable medium contains instructions that when executed cause at least one processing device to perform the method of the first embodiment or any of its dependent claims. In a sixth embodiment, a non-transitory computer readable medium contains instructions that when executed cause at least one processing device to perform the method of the third embodiment or any of its dependent claims.
[0009] Other technical features may be readily apparent to one skilled in the art from the following figures, descriptions, and claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] For a more complete understanding of this disclosure, reference is now made to the following description, taken in conjunction with the accompanying drawings, in which:
[0011] FIGURE 1 illustrates an example architecture for capturing user actions in augmented/virtual reality and assessing user competency according to this disclosure;
[0012] FIGURE 2 illustrates an example device that supports capturing user actions in augmented/virtual reality or assessing user competency according to this disclosure; and
[0013] FIGURES 3 and 4 illustrate example methods for capturing user actions in augmented/virtual reality and assessing user competency according to this disclosure.
DETAILED DESCRIPTION
[0014] FIGURES 1 through 4, discussed below, and the various embodiments used to describe the principles of the present invention in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the invention. Those skilled in the art will understand that the principles of the invention may be implemented in any type of suitably arranged device or system.
[0015] In conventional training and skill development environments, trainee competency is often assessed using questionnaires, pictorial/image/Flash-based evaluations, or multiple objective questions. These standard assessment techniques no longer suffice because they do not interactively validate a trainee's skills or performance "on the job." Also, an assessment of a user's abilities is often based on end results, and it can be difficult to suggest improvement opportunities quickly.
[0016] With growing augmented/virtual reality solutions for skill development and training, in the absence of any external monitoring mechanism, it is typically difficult to monitor and assess the progress of a trainee and the impact of training in augmented/virtual space. Ideally, a system could validate a user's skills by tracking each user action and thereby assess the user's competency and real-world problem solving skills.
[0017] This disclosure provides techniques for tracking and assessing an industrial automation user or other user's actions in an augmented/virtual environment, which overcomes challenges with respect to tracking unwanted steps, tracking impacts on underlying industrial systems or other systems, assessing intermediate steps, performing behavioral assessments, and identifying responses to panic situations or other situations. Among other things, this disclosure describes a portable file format that captures content such as user inputs, data formats, and training setups. The portable file format allows for easier storage, computation, and distribution of content and addresses technical constraints with respect to space, computation, and bandwidth.
[0018] FIGURE 1 illustrates an example architecture 100 for capturing user actions in augmented/virtual reality and assessing user competency according to this disclosure. As shown in FIGURE 1, the architecture 100 includes a training
environment 102, which denotes a visualization layer that allows interaction with an augmented reality/virtual reality (AR/VR) space. In this example, the training environment 102 can include one or more end user devices, such as at least one AR/VR headset 104, at least one computing device 106, or at least one interactive AR/VR system 108. Each headset 104 generally denotes a device that is worn by a user and that displays an AR/VR space. The headset 104 in FIGURE 1 is a MICROSOFT HOLOLENS device, although any other suitable AR/VR device could be used. Each computing device 106 generally denotes a device that processes data to present an AR/VR space (although not necessarily in a 3D format) to a user. Each computing device 106 denotes any suitable computing device, such as a desktop computer, laptop computer, tablet computer, or smartphone. Each interactive AR/VR system 108 includes a headset and one or more user input devices, such as interactive or smart gloves. Although not shown, one or more input devices could also be used with the headset 104 or the computing device 106.
[0019] The architecture 100 also includes at least one processor, such as in a server 110, that is used to record content. The server 110 generally denotes a computing device that receives content from the training environment 102 and records and processes the content. The server 110 includes various functions or modules to support the recording and processing of training or other interactive content. Each of these functions or modules could be implemented in any suitable manner, such as with software/firmware instructions executed by one or more processors. The server 110 could be positioned locally with or remote from the training environment 102.
[0020] Functionally, the server 110 includes a user input receiver 112, which receives, processes, and filters user inputs made by the user. The user inputs could include any suitable inputs, such as gestures made by the user, voice commands or voice annotations spoken by the user, textual messages provided by the user, or pointing actions taken by the user using a pointing device (such as a smart glove). Any other or additional user inputs could also be received. The user inputs can be filtered in any suitable manner and are output to an input translator 114. To support the use of the architecture 100 by a wide range of users, input variants (like voice/text in different languages) could be supported. The user input receiver 112 includes any suitable logic for receiving and processing user inputs.
[0021] The input translator 114 translates the various user inputs into specific commands by referring to a standard action grammar reference 116. The grammar reference 116 represents an actions-to-commands mapping dictionary that associates different user input actions with different commands. For example, the grammar reference 116 could associate certain spoken words, text messages, or physical actions with specific commands. The grammar reference 116 could support one or multiple possibilities for commands where applicable, such as when different commands may be associated with the same spoken words or text messages but different physical actions. The grammar reference 116 includes any suitable mapping or other association of actions and commands. The input translator 114 includes any suitable logic for identifying commands associated with received user inputs.
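The actions-to-commands mapping described above can be sketched as a simple lookup table. This is a minimal illustration only; the input types, action phrases, and command names below are assumptions for the example and are not defined in this disclosure.

```python
# Hypothetical sketch of a grammar reference: an actions-to-commands
# mapping dictionary keyed by (input type, normalized action).
ACTION_GRAMMAR = {
    ("voice", "open valve"): "CMD_OPEN_VALVE",
    ("voice", "close valve"): "CMD_CLOSE_VALVE",
    ("gesture", "tap"): "CMD_SELECT",
    ("gesture", "pinch"): "CMD_GRAB",
    ("text", "ack alarm"): "CMD_ACK_ALARM",
}

def translate(input_type: str, action: str) -> str:
    """Translate a raw user input into a system-understandable command."""
    # Normalize the action text so "Open Valve" and "open valve" match.
    return ACTION_GRAMMAR.get((input_type, action.lower()), "CMD_UNKNOWN")
```

A real grammar reference could map several input variants (for example, the same phrase in different languages) to one command, or one phrase to multiple candidate commands disambiguated by an accompanying physical action.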
[0022] The input translator 114 outputs identified commands to an aggregator 118. The aggregator 118 associates the commands with visual objects in the AR/VR space being presented to the user into one or more records 120. The aggregator 118 also embeds an AR/VR environment setup into the one or more records 120. The AR/VR environment setup can define what visual objects are to be presented in the AR/VR space. The records 120 therefore associate specific commands (which were generated based on user inputs) with specific visual objects in the AR/VR space as defined by the environment setup. The aggregator 118 includes any suitable logic for aggregating data.
[0023] The records 120 are created in a portable file format, which allows the records 120 to be used by various other devices. For example, the data in the records 120 can be processed to assess the user's skills and identify whether additional training might be needed. This can be accomplished without requiring the transport of larger data files like video files. The portable file format could be defined in any suitable manner, such as by using XML or JSON.
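As one possible JSON realization, a record could bundle the three elements the disclosure names: the environment setup, the commands, and the command-to-object associations. The field names and values below are illustrative assumptions, not a format defined in this disclosure.

```python
import json

# Hypothetical shape of a portable record: environment setup plus a
# time-ordered list of commands, each associated with a visual object.
record = {
    "environment_setup": {
        "scene": "control_room",
        "objects": [
            {"id": "valve_07", "type": "valve"},
            {"id": "panel_02", "type": "control_panel"},
        ],
    },
    "commands": [
        {"seq": 1, "command": "CMD_SELECT", "target": "panel_02", "t": 3.2},
        {"seq": 2, "command": "CMD_OPEN_VALVE", "target": "valve_07", "t": 7.9},
    ],
}

# A compact text serialization, far smaller than video/image capture.
serialized = json.dumps(record)
```

Because the record stores only commands and references to objects (not rendered frames), it stays small enough to upload, store, and re-analyze cheaply.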
[0024] The records 120 could be used in various ways. In this example, the records 120 are provided (such as via a local intranet or a public network like the Internet) to a cloud computing environment 122, which implements various functions to support analysis of the records 120 and assessment of the user. Note, however, that the analysis and assessment functions could be implemented in other ways and need not be performed by a cloud computing environment. For instance, the analysis and assessment functions could be implemented using the server 110.
[0025] As shown in FIGURE 1, an assessment service application programming interface (API) 124 is used to receive incoming records 120. The API 124 denotes a web interface that allows uploading of records 120. The records 120 received through the API 124 can be stored in a database 126 for analysis.
[0026] Records 120 from the API 124 or the database 126 can be provided to an action validator 128, which has access to one or more sets of validation rules 130. Different sets of validation rules 130 could be provided, such as for different types of users, different types of equipment, or different types of operational scenarios. The validation rules 130 can therefore be configurable in order to provide the desired functionality based on the user actions being evaluated. The action validator 128 processes one or more records 120 based on the appropriate set of validation rules 130. The action validator 128 can also receive and use feedback from system software 132, which generally denotes software used to control one or more industrial processes (such as EXPERION software from HONEYWELL INTERNATIONAL INC. or safety system software) or other processes. The feedback can be used to verify whether an expected or desired outcome was achieved by the user. Based on this information, the action validator 128 determines a result for each action or group of actions taken by the user and identified in the record(s) 120. Example results could include correct, partially correct, wrong, invalid, or damaging. The action validator 128 includes any suitable logic for evaluating user actions.
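Rule-based validation of a command sequence can be sketched as below. The rule structure and the specific commands are assumptions made for illustration; only the result labels (correct, wrong, damaging) come from the example results named above.

```python
# Hypothetical validation rules: a command may require a prior command
# (e.g., select a device before operating it), or may always be flagged.
RULES = {
    "CMD_OPEN_VALVE": {"requires_prior": "CMD_SELECT",
                       "result_if_missing": "wrong"},
    "CMD_BYPASS_SAFETY": {"always": "damaging"},
}

def validate(commands):
    """Return one result label per command in the record."""
    results, seen = [], set()
    for cmd in commands:
        rule = RULES.get(cmd)
        if rule is None:
            results.append("correct")          # no rule constrains this command
        elif rule.get("always"):
            results.append(rule["always"])     # unconditionally flagged action
        elif rule["requires_prior"] in seen:
            results.append("correct")          # prerequisite was satisfied
        else:
            results.append(rule["result_if_missing"])
        seen.add(cmd)
    return results
```

A production validator would also fold in feedback from the controlled system to confirm whether the sequence actually produced the expected outcome.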
[0027] An assessment engine 134 uses the results from the action validator 128 to generate an assessment for the user. The assessment could take any suitable form, such as a pass/fail score for each action or collection of actions, reward points, or any other measurement for each action or collection of actions. The assessment engine 134 includes any suitable logic for assessing a user's competencies.
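One way such an engine could turn per-action results into reward points and a pass/fail score is sketched below. The point weights and the 80% pass threshold are illustrative assumptions, not values given in this disclosure.

```python
# Hypothetical scoring table over the result labels produced by validation.
POINTS = {"correct": 10, "partially correct": 5, "wrong": 0,
          "invalid": 0, "damaging": -10}

def assess(results, pass_ratio=0.8):
    """Aggregate per-action results into reward points and a pass/fail flag."""
    points = sum(POINTS[r] for r in results)
    correct = sum(1 for r in results if r == "correct")
    passed = len(results) > 0 and correct / len(results) >= pass_ratio
    return {"points": points, "passed": passed}
```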
[0028] The measurements from the assessment engine 134 can be provided to a learning management system (LMS) 136. The user can be enrolled in the LMS 136 for competency development, and the LMS 136 can use the measurements to identify areas where the user is competent and areas where the user may require further training. An analytics engine 138 could use the measurements from the assessment engine 134, along with past historical performance of the user over a period of time,
to gain insights into the user's competencies. The analytics engine 138 could then recommend training courses to help improve the user's skills. The LMS 136 includes any suitable logic for interacting with and providing information to users for training or other purposes. The analytics engine 138 includes any suitable logic for analyzing user information and identifying training information or other information to be provided to the user.
[0029] Based on this, the following process could be performed using the various components of the server 110 in FIGURE 1. A user initiates a training assessment module or other function in an AR/VR application, on a mobile device, or on any other suitable device. The application begins recording and sends the user input action details (such as gestures, voice, and textual messages) to the user input receiver 112. The user input receiver 112 detects and tracks the user input actions (such as gestures, voice, textual messages, and pointing device actions), filters the actions as needed, and passes the selected/filtered actions to the input translator 114. The input translator 114 converts the user actions into system-understandable commands by referring to the grammar reference 116, and the input translator 114 passes these commands to the aggregator 118. The aggregator 118 associates the system-understandable commands to visual objects, embeds the AR/VR environment setup, and prepares one or more records 120 in a portable file format, which identifies the user actions against a task being assessed. The records 120 are transmitted for training assessment.
[0030] Moreover, based on this, the following process could be performed using the various components of the cloud computing environment 122 in FIGURE 1. The API 124 stores incoming records 120 in the database 126 for later review or reassessment. The records 120 are also passed from the API 124 or the database 126 to the action validator 128 for validation. The action validator 128 uses the validation rules 130 to validate each action or group of actions taken by the user. The action validator 128 can optionally use feedback from the system software 132. The action validator 128 determines a result for each step or collection of steps taken by the user. The results from the action validator 128 are provided to the assessment engine 134, which determines pass/fail scores, rewards points, or other measurements. The user is informed of the measurements through the LMS 136. The measurements can be used
by the analytics engine 138 to gain insights into the user's competencies by analyzing his or her past performance over a period of time and to recommend any relevant training courses to "upskill" the user.
[0031 ] In this way, the architecture 100 can be used to capture and store users' actions in AR/VR environments. As a result, data associated with the AR/VR environments can be easily captured, stored, and distributed in the records 120. Other devices and systems can use the records 120 to analyze the users' actions and possibly recommend training for the users. The records 120 can occupy significantly less space in memory and require significantly less bandwidth for transmission, reception, storage, and analysis compared to alternatives such as video/image recording. These features can provide significant technical advantages, such as in systems that collect and analyze large amounts of interactive data related to a number of AR/VR environments.
[0032] This technology can find use in a number of ways in industrial automation settings or other settings. For example, control and safety systems and related instrumentations used in industrial plants (such as refinery, petrochemical, and pharmaceutical plants) are often very complex in nature. It may take a lengthy period of time (such as more than five years) to train new system maintenance personnel to become proficient in managing plant and system upsets independently. Combining such long delays with a growing number of experienced personnel retiring in the coming years means that industries are facing acute skill shortages and increased plant upsets due to the lack of experience and skill.
[0033] Traditional classroom training, whether face-to-face or online, often requires personnel to be away from the field for an extended time (such as 20 to 40 hours). In many cases, this is not practical, particularly for plants that are already facing resource and funding challenges due to overtime, travel, or other issues. Also, few sites have powered-on and functioning control hardware for training. Due to the fast rate of change for technology, it may no longer be cost-effective to procure and maintain live training systems.
[0034] Simulating control and safety system hardware in the AR/VR space, building dynamics of real hardware modules in virtual objects, and interfacing the AR/VR space with real supervisory systems (such as engineering and operator
stations) can provide various benefits. For example, it can reduce or eliminate any dependency on real hardware for competency management. It can also "gamify" the learning of complex and mundane control and safety system concepts, which can help to keep trainees engaged. It can further decrease the time needed to become proficient in control and safety system maintenance through more hands-on practice sessions and higher retention of the training being imparted.
[0035] This represents example ways in which the devices and techniques described above could be used. However, these examples are non-limiting, and the devices and techniques described above could be used in any other suitable manner. In general, the devices and techniques described in this patent document could be applicable whenever one or more user actions in an AR/VR space are to be recorded, stored, and analyzed (for whatever purpose).
[0036] Although FIGURE 1 illustrates one example of an architecture 100 for capturing user actions in augmented/virtual reality and assessing user competency, various changes may be made to FIGURE 1. For example, the architecture 100 could support any number of training environments 102, headsets 104, computing devices 106, AR/VR systems 108, servers 110, or other components. Also, the records 120 could be used in any other suitable manner. In addition, while described as being used with or including a training environment 102 and generating records 120, the architecture 100 could be used with or include any suitable environment 102 and be used to generate any suitable records 120 containing interactive content (whether or not used for training purposes).
[0037] FIGURE 2 illustrates an example device 200 that supports capturing user actions in augmented/virtual reality or assessing user competency according to this disclosure. The device 200 could, for example, represent a device that implements the functionality of the server 110 in FIGURE 1 and/or the functionality of the cloud computing environment 122 or any of its components in FIGURE 1.
[0038] As shown in FIGURE 2, the device 200 includes at least one processing device 202, at least one storage device 204, at least one communications unit 206, and at least one input/output (I/O) unit 208. The processing device 202 executes instructions that may be loaded into a memory 210, such as instructions that (when executed by the processing device 202) implement the functions of the server 110 and/or the cloud computing environment 122 or any of its components. The processing device 202 includes any suitable number(s) and type(s) of processors or other devices in any suitable arrangement. Example types of processing devices 202 include microprocessors, microcontrollers, digital signal processors, field programmable gate arrays, application specific integrated circuits, and discrete circuitry.
[0039] The memory 210 and a persistent storage 212 are examples of storage devices 204, which represent any structure(s) capable of storing and facilitating retrieval of information (such as data, program code, and/or other suitable information on a temporary or permanent basis). The memory 210 may represent a random access memory or any other suitable volatile or non-volatile storage device(s). The persistent storage 212 may contain one or more components or devices supporting longer-term storage of data, such as a read only memory, hard drive, Flash memory, or optical disc.
[0040] The communications unit 206 supports communications with other systems or devices. For example, the communications unit 206 could include a network interface card or a wireless transceiver facilitating communications over a wired or wireless network (such as a local intranet or a public network like the Internet). The communications unit 206 may support communications through any suitable physical or wireless communication link(s).
[0041] The I/O unit 208 allows for input and output of data. For example, the I/O unit 208 may provide a connection for user input through a keyboard, mouse, keypad, touchscreen, or other suitable input device. The I/O unit 208 may also send output to a display, printer, or other suitable output device.
[0042] Although FIGURE 2 illustrates one example of a device 200 that supports capturing user actions in augmented/virtual reality or assessing user competency, various changes may be made to FIGURE 2. For example, computing devices come in a wide variety of configurations, and FIGURE 2 does not limit this disclosure to any particular computing device.
[0043] FIGURES 3 and 4 illustrate example methods for capturing user actions in augmented/virtual reality and assessing user competency according to this disclosure. In particular, FIGURE 3 illustrates an example method 300 for capturing
user actions in augmented/virtual reality, and FIGURE 4 illustrates an example method 400 for assessing user competency based on captured user actions in augmented/virtual reality. For ease of explanation, the methods 300 and 400 are described as being performed using the device 200 operating as the server 110 in FIGURE 1 (method 300) or as the cloud computing environment 122 or any of its components in FIGURE 1 (method 400). However, the methods 300 and 400 could be used with any suitable devices and in any suitable systems.
[0044] As shown in FIGURE 3, a recording of user actions related to an AR/VR space is initiated at step 302. This could include, for example, the processing device 202 of the server 110 receiving an indication from a user device 104-108 that a user wishes to initiate the recording. As a particular example, the user could be engaged in an AR/VR training session designed to identify the user's competency at performing one or more tasks or how the user responds to one or more situations. The user, a manager, or other personnel could initiate the recording before or after the user has initiated the AR/VR training session.
[0045] Information defining an AR/VR environment setup is received at step 304. This could include, for example, the processing device 202 of the server 110 receiving information identifying the overall visual environment of the AR/VR space being presented to the user by the user device 104-108 and information identifying visual objects in the AR/VR space being presented to the user by the user device 104-108.
[0046] Information defining user actions associated with the AR/VR environment is received at step 306. This could include, for example, the processing device 202 of the server 110 receiving information identifying how the user is interacting with one or more of the visual objects presented in the AR/VR space by the user device 104-108. The interactions could take on various forms, such as the user making physical gestures, speaking voice commands, speaking voice annotations, or providing textual messages. This information is used to detect, track, and filter the user actions at step 308. This could include, for example, the processing device 202 of the server 110 processing the received information to identify distinct gestures, voice commands, voice annotations, or textual messages that occur. This could also include the processing device 202 of the server 110 processing the received
information to identify visual objects presented in the AR/VR space that are associated with those user actions.
[0047] The user actions are translated into commands at step 310. This could include, for example, the processing device 202 of the server 110 using the standard action grammar reference 116 and its actions-to-commands mapping dictionary to associate different user actions with different commands. Specific commands are associated with specific visual objects presented in the AR/VR space at step 312. This could include, for example, the processing device 202 of the server 110 associating specific ones of the identified commands with specific ones of the visual objects presented in the AR/VR space. This allows the server 110 to identify which visual objects are associated with the identified commands.
[0048] At least one file is generated that contains the commands, the associations of the commands with the visual objects, and the AR/VR environment setup at step 314. This could include, for example, the processing device 302 of the server 110 generating a record 120 containing this information. The at least one file is output, stored, or used in some manner at step 316. This could include, for example, the processing device 302 of the server 110 providing the record 120 to the API 124 for storage in the database 126 or analysis by the action validator 128.
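The aggregation in step 314 can be thought of as serializing the commands, their object associations, and the environment setup into one portable record. The sketch below uses JSON purely as a stand-in; the actual format of a record 120 is not specified here, and the field names are assumptions.

```python
import json

def build_record(commands, environment_setup):
    """Aggregate commands (already bound to objects) and the AR/VR
    environment setup into a single serialized record."""
    record = {
        "environment": environment_setup,
        "commands": commands,  # each entry pairs a command with its object_id
    }
    return json.dumps(record)

serialized = build_record(
    [{"command": "OPEN_VALVE", "object_id": "valve_3"}],
    {"scene": "boiler_room", "objects": ["valve_3", "gauge_1"]},
)
```

A self-describing, portable format of this kind is what lets the record be stored in a database or handed to a separate validator, as step 316 describes.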
[0049] As shown in FIGURE 4, at least one file associated with a user's actions in an AR/VR space is received at step 402. This could include, for example, the processing device 302 implementing the API 124 receiving a record 120 identifying commands, an association of the commands with visual objects in the user's AR/VR space, and an AR/VR environment setup for the user's AR/VR space. The record 120 could have been generated using the method 300 shown in FIGURE 3 and described above. This could also include the processing device 302 implementing the API 124 storing the record 120 in the database 126 and/or passing the record to the action validator 128.
[0050] Applicable validation rules are obtained at step 404. This could include, for example, the processing device 302 implementing the action validator 128 obtaining one or more sets of validation rules 130. The validation rules 130 could be selected in any suitable manner. Example selection criteria could include the type of activity being performed by the user in the AR/VR space, the type of user being evaluated, the type of equipment being simulated in the AR/VR space, or the type of operational scenario being simulated in the AR/VR space.
[0051] One or more actions or groups of actions identified by the received file are analyzed using the selected validation rules at step 406, and results assessing the user's actions are determined at step 408. This could include, for example, the processing device 302 implementing the action validator 128 using the validation rules to determine whether the user performed correct or incorrect actions within the user's AR/VR space. This could also include the processing device 302 implementing the action validator 128 determining whether the desired outcome or result was obtained by the user as a result of the user actions within the user's AR/VR space. In some cases, the action validator 128 can use feedback, such as from one or more devices used for industrial process control, to determine whether the user's actions would have resulted in the desired outcome or result.
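One simple family of validation rules compares the user's command sequence against an expected sequence and scores each step. This is only an illustration of steps 406-408 under an assumed rule shape; real rules 130 could also weigh partial correctness, mark actions as invalid or damaging, or use process feedback, as the text notes.

```python
# Hypothetical sequence-matching validation rule: each command is compared
# against the command expected at that position in the procedure.
def assess_actions(commands, expected_sequence):
    """Return per-command verdicts and an overall score in [0, 1]."""
    results = []
    for i, cmd in enumerate(commands):
        expected = expected_sequence[i] if i < len(expected_sequence) else None
        results.append({
            "command": cmd["command"],
            "verdict": "correct" if cmd["command"] == expected else "wrong",
        })
    score = sum(r["verdict"] == "correct" for r in results) / max(len(results), 1)
    return results, score
```

A validator built this way yields both a step-by-step assessment (suitable for feedback to the user) and an aggregate score (suitable for downstream analytics).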
[0052] The user can be informed of the results at step 410. This could include, for example, the action validator 128 providing the results to the LMS 136 for delivery to the user. The results can also be analyzed to determine whether the user might require or benefit from additional training at step 412, and the user can be informed of any additional training opportunities at step 414. This could include, for example, the processing device 302 implementing the analytics engine 138 analyzing the user's current results and possibly the user's prior results in order to recommend relevant training courses that might benefit the user. This could also include the analytics engine 138 providing the results to the LMS 136 for delivery to the user.
[0053] Although FIGURES 3 and 4 illustrate examples of methods for capturing user actions in augmented/virtual reality and assessing user competency, various changes may be made to FIGURES 3 and 4. For example, while each figure illustrates a series of steps, various steps in each figure could overlap, occur in parallel, occur in a different order, or occur any number of times.
[0054] In some embodiments, various functions described in this patent document are implemented or supported by a computer program that is formed from computer readable program code and that is embodied in a computer readable medium. The phrase "computer readable program code" includes any type of computer code, including source code, object code, and executable code. The phrase "computer readable medium" includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive, a compact disc (CD), a digital video disc (DVD), or any other type of memory. A "non-transitory" computer readable medium excludes wired, wireless, optical, or other communication links that transport transitory electrical or other signals. A non-transitory computer readable medium includes media where data can be permanently stored and media where data can be stored and later overwritten, such as a rewritable optical disc or an erasable storage device.
[0055] It may be advantageous to set forth definitions of certain words and phrases used throughout this patent document. The terms "application" and "program" refer to one or more computer programs, software components, sets of instructions, procedures, functions, objects, classes, instances, related data, or a portion thereof adapted for implementation in a suitable computer code (including source code, object code, or executable code). The term "communicate," as well as derivatives thereof, encompasses both direct and indirect communication. The terms "include" and "comprise," as well as derivatives thereof, mean inclusion without limitation. The term "or" is inclusive, meaning and/or. The phrase "associated with," as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, have a relationship to or with, or the like. The phrases "at least one of" and "one or more of," when used with a list of items, mean that different combinations of one or more of the listed items may be used, and only one item in the list may be needed. For example, "at least one of: A, B, and C" includes any of the following combinations: A, B, C, A and B, A and C, B and C, and A and B and C.
[0056] The description in the present application should not be read as implying that any particular element, step, or function is an essential or critical element that must be included in the claim scope. The scope of patented subject matter is defined only by the allowed claims. Moreover, none of the claims invokes 35 U.S.C. § 112(f) with respect to any of the appended claims or claim elements unless the exact words "means for" or "step for" are explicitly used in the particular
claim, followed by a participle phrase identifying a function. Use of terms such as (but not limited to) "mechanism," "module," "device," "unit," "component," "element," "member," "apparatus," "machine," "system," "processor," or "controller" within a claim is understood and intended to refer to structures known to those skilled in the relevant art, as further modified or enhanced by the features of the claims themselves, and is not intended to invoke 35 U.S.C. § 112(f).
[0057] While this disclosure has described certain embodiments and generally associated methods, alterations and permutations of these embodiments and methods will be apparent to those skilled in the art. Accordingly, the above description of example embodiments does not define or constrain this disclosure. Other changes, substitutions, and alterations are also possible without departing from the spirit and scope of this disclosure, as defined by the following claims.
Claims
1. A method comprising:
receiving (402) one or more records (120) containing commands, an association of the commands with visual objects in an augmented reality/virtual reality (AR/VR) space, and an AR/VR environment setup, wherein the commands correspond to user actions taken in the AR/VR space;
analyzing (406) the user actions based on the one or more records; and assessing (408) the user actions based on the analysis.
2. The method of Claim 1, wherein the one or more records have a portable file format.
3. The method of Claim 1, wherein the commands correspond to at least one of:
one or more gestures made by a user;
one or more voice commands or voice annotations spoken by the user;
one or more textual messages provided by the user; and
one or more pointing actions taken by the user using at least one pointing device.
4. The method of Claim 1, wherein assessing the user actions comprises determining whether each user action or group of user actions was correct, partially correct, wrong, invalid, or damaging.
5. The method of Claim 1, wherein assessing the user actions comprises using a set of validation rules (130), different sets of validation rules (130) used to validate different user actions or groups of user actions.
6. The method of Claim 1, wherein assessing the user actions comprises using feedback from system software (132) configured to manage an industrial process, the feedback used to verify whether an expected or desired outcome was achieved by a user.
7. The method of Claim 1, further comprising:
outputting (412) an assessment of the user actions to an analytics engine (138); analyzing (412) the assessment and past historical performance of a user with the analytics engine to identify recommended training for the user; and
outputting (414) an identification of the recommended training to a learning management system (136).
8. An apparatus comprising:
at least one processing device (202) configured to perform the method of any of Claims 1-7.
9. A non-transitory computer readable medium (204, 210, 212) containing instructions that when executed cause at least one processing device (202) to perform the method of any of Claims 1-7.
10. A method comprising:
receiving (306) data defining user actions associated with an augmented reality/ virtual reality (AR/VR) space;
translating (310) the user actions into associated commands;
identifying (312) associations of the commands with visual objects in the AR/VR space;
aggregating (314) the commands, the associations of the commands with the visual objects, and an AR/VR environment setup into at least one record (120); and transmitting (316) the at least one record for assessment of the user actions.
11. The method of Claim 10, wherein the data defining the user actions comprises one or more of:
data defining one or more gestures made by a user;
data defining one or more voice commands or voice annotations spoken by the user;
data defining one or more textual messages provided by the user; and data defining one or more pointing actions taken by the user using at least one
pointing device.
12. The method of Claim 10, wherein translating the user actions into the associated commands comprises using a grammar reference (116) that associates different user input actions with different commands.
13. The method of Claim 10, wherein:
the AR/VR space supports dynamics of hardware modules associated with control or safety system hardware used for industrial process control; and
the AR/VR space interfaces with at least one supervisory system used for industrial process control.
14. An apparatus comprising:
at least one processing device (202) configured to perform the method of any of Claims 10-13.
15. A non-transitory computer readable medium (204, 210, 212) containing instructions that when executed cause at least one processing device (202) to perform the method of any of Claims 10-13.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762517015P | 2017-06-08 | 2017-06-08 | |
US62/517,015 | 2017-06-08 | ||
US15/941,545 | 2018-03-30 | ||
US15/941,545 US20180357922A1 (en) | 2017-06-08 | 2018-03-30 | Apparatus and method for assessing and tracking user competency in augmented/virtual reality-based training in industrial automation systems and other systems |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018226452A1 true WO2018226452A1 (en) | 2018-12-13 |
Family
ID=64567329
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2018/034887 WO2018226452A1 (en) | 2017-06-08 | 2018-05-29 | Apparatus and method for assessing and tracking user competency in augmented/virtual reality-based training in industrial automation systems and other systems |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2018226452A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111260990A (en) * | 2020-03-21 | 2020-06-09 | 国网电力科学研究院武汉南瑞有限责任公司 | Power equipment training system and method for enhancing virtual reality |
WO2023278926A1 (en) * | 2021-07-01 | 2023-01-05 | Tencent America LLC | Qualification test in subject scoring |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130203026A1 (en) * | 2012-02-08 | 2013-08-08 | Jpmorgan Chase Bank, Na | System and Method for Virtual Training Environment |
WO2014052158A2 (en) * | 2012-09-27 | 2014-04-03 | Immersive Touch, Inc. | Haptic augmented and virtual reality system for simulation of surgical procedures |
US20160049094A1 (en) * | 2014-08-13 | 2016-02-18 | Pitchvantage Llc | Public Speaking Trainer With 3-D Simulation and Real-Time Feedback |
US20160077547A1 (en) * | 2014-09-11 | 2016-03-17 | Interaxon Inc. | System and method for enhanced training using a virtual reality environment and bio-signal data |
US20160292925A1 (en) * | 2015-04-06 | 2016-10-06 | Scope Technologies Us Inc. | Method and appartus for sharing augmented reality applications to multiple clients |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20180357922A1 (en) | Apparatus and method for assessing and tracking user competency in augmented/virtual reality-based training in industrial automation systems and other systems | |
Langley et al. | Establishing the usability of a virtual training system for assembly operations within the automotive industry | |
US10684676B2 (en) | Simulating and evaluating safe behaviors using virtual reality and augmented reality | |
Stevens et al. | Taking participatory citizen science to extremes | |
US20190087831A1 (en) | Generating digital credentials based on sensor feedback data | |
Lawrence | Data readiness levels | |
Martinez-Maldonado et al. | LATUX: An iterative workflow for designing, validating, and deploying learning analytics visualizations | |
CN101595454B (en) | System and method for effecting control of remote computers | |
US10803766B1 (en) | Modular training of network-based training exercises | |
Toyoda et al. | VR-based health and safety training in various high-risk engineering industries: a literature review | |
KR20150081172A (en) | Method for providing learning guideline, server for providing learning guideline and user device | |
Zapata-Rivera et al. | Assessing science inquiry skills in an immersive, conversation-based scenario | |
Fontão et al. | Supporting governance of mobile application developers from mining and analyzing technical questions in stack overflow | |
WO2018226452A1 (en) | Apparatus and method for assessing and tracking user competency in augmented/virtual reality-based training in industrial automation systems and other systems | |
Zapata-Rivera et al. | xAPI-based model for tracking on-line laboratory applications | |
Kant Shankar et al. | A data value chain to model the processing of multimodal evidence in authentic learning scenarios | |
Mota et al. | Learning analytics in mobile applications based on multimodal interaction | |
EP3111389A1 (en) | Method for generating a support system for performance, decision, and learning, documentation and social networking management, contextualized for business control processes, and system utilizing said method | |
CN110678827B (en) | Apparatus and method for recording and playback of interactive content with augmented/virtual reality in industrial automation systems and other systems | |
Tsui et al. | Methodologies for measuring influence | |
EP3635521A1 (en) | Apparatus and method for visual-assisted training, collaboration, and monitoring in augmented/virtual reality in industrial automation systems and other systems | |
Pfeiffer et al. | Fostering Lab-Based Learning with Learning Analytics-a literature review | |
CN110869889B (en) | Method and apparatus for augmented reality and virtual reality | |
KR101245824B1 (en) | Method, system and computer-readable recording medium for providing study information | |
Wild et al. | Towards data exchange formats for learning experiences in manufacturing workplaces |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18814041 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 18814041 Country of ref document: EP Kind code of ref document: A1 |