EP3114671A1 - Adaptive training system, method and apparatus - Google Patents
Adaptive training system, method and apparatus
- Publication number
- EP3114671A1 (Application EP14884541.5A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- data
- student
- training
- rules
- learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
- 238000012549 training Methods 0.000 title claims abstract description 249
- 238000000034 method Methods 0.000 title claims abstract description 45
- 230000003044 adaptive effect Effects 0.000 title description 12
- 230000009471 action Effects 0.000 claims abstract description 73
- 238000004088 simulation Methods 0.000 claims abstract description 65
- 238000013500 data storage Methods 0.000 claims abstract description 43
- 230000003993 interaction Effects 0.000 claims abstract description 8
- 238000007726 management method Methods 0.000 claims description 30
- 230000000246 remedial effect Effects 0.000 claims description 20
- 230000002452 interceptive effect Effects 0.000 claims description 12
- 238000004891 communication Methods 0.000 claims description 11
- 238000012360 testing method Methods 0.000 claims description 11
- 208000037656 Respiratory Sounds Diseases 0.000 claims description 9
- 206010037833 rales Diseases 0.000 claims description 9
- 238000006243 chemical reaction Methods 0.000 claims description 8
- 238000013499 data model Methods 0.000 claims description 8
- 241000288140 Gruiformes Species 0.000 claims description 7
- 230000000977 initiatory effect Effects 0.000 claims description 5
- 230000000007 visual effect Effects 0.000 claims description 4
- 238000012544 monitoring process Methods 0.000 claims description 3
- 230000003213 activating effect Effects 0.000 claims description 2
- 210000004556 brain Anatomy 0.000 claims description 2
- 238000012545 processing Methods 0.000 claims description 2
- 230000029058 respiratory gaseous exchange Effects 0.000 claims 1
- 230000015654 memory Effects 0.000 abstract description 17
- 230000008569 process Effects 0.000 description 16
- 238000010586 diagram Methods 0.000 description 14
- 238000011161 development Methods 0.000 description 9
- 230000008520 organization Effects 0.000 description 8
- 230000004044 response Effects 0.000 description 7
- 230000008901 benefit Effects 0.000 description 6
- 239000000463 material Substances 0.000 description 5
- 238000012986 modification Methods 0.000 description 5
- 230000004048 modification Effects 0.000 description 5
- 230000006978 adaptation Effects 0.000 description 4
- 238000013461 design Methods 0.000 description 4
- 230000000694 effects Effects 0.000 description 4
- 230000006870 function Effects 0.000 description 4
- 230000014759 maintenance of location Effects 0.000 description 4
- 238000013507 mapping Methods 0.000 description 4
- 230000000875 corresponding effect Effects 0.000 description 3
- 230000007812 deficiency Effects 0.000 description 3
- 238000003780 insertion Methods 0.000 description 3
- 230000037431 insertion Effects 0.000 description 3
- 238000007689 inspection Methods 0.000 description 3
- 238000013459 approach Methods 0.000 description 2
- 230000001276 controlling effect Effects 0.000 description 2
- 238000001514 detection method Methods 0.000 description 2
- 230000007613 environmental effect Effects 0.000 description 2
- 230000000670 limiting effect Effects 0.000 description 2
- 238000002156 mixing Methods 0.000 description 2
- 239000000203 mixture Substances 0.000 description 2
- 238000012552 review Methods 0.000 description 2
- 230000008685 targeting Effects 0.000 description 2
- 206010048909 Boredom Diseases 0.000 description 1
- 230000004913 activation Effects 0.000 description 1
- 230000000712 assembly Effects 0.000 description 1
- 238000000429 assembly Methods 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 230000004397 blinking Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 230000001149 cognitive effect Effects 0.000 description 1
- 230000036992 cognitive tasks Effects 0.000 description 1
- 230000000052 comparative effect Effects 0.000 description 1
- 238000005094 computer simulation Methods 0.000 description 1
- 230000002844 continuous effect Effects 0.000 description 1
- 230000002596 correlated effect Effects 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 239000000446 fuel Substances 0.000 description 1
- 239000011521 glass Substances 0.000 description 1
- 238000009434 installation Methods 0.000 description 1
- 230000010354 integration Effects 0.000 description 1
- 238000012423 maintenance Methods 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 230000006855 networking Effects 0.000 description 1
- 230000001537 neural effect Effects 0.000 description 1
- 230000000926 neurological effect Effects 0.000 description 1
- 230000036961 partial effect Effects 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
- 230000001766 physiological effect Effects 0.000 description 1
- 210000004258 portal system Anatomy 0.000 description 1
- 238000011867 re-evaluation Methods 0.000 description 1
- 230000002829 reductive effect Effects 0.000 description 1
- 238000009877 rendering Methods 0.000 description 1
- 230000002441 reversible effect Effects 0.000 description 1
- 230000003319 supportive effect Effects 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
- 230000001755 vocal effect Effects 0.000 description 1
- 239000002699 waste material Substances 0.000 description 1
- 230000003936 working memory Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B9/00—Simulators for teaching or training purposes
- G09B9/02—Simulators for teaching or training purposes for teaching control of vehicles or other craft
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B7/00—Electrically-operated teaching apparatus or devices working with questions and answers
- G09B7/02—Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student
- G09B7/04—Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student characterised by modifying the teaching programme in response to a wrong answer, e.g. repeating the question, supplying a further explanation
Definitions
- This invention relates to computerized training systems, and more particularly computerized training systems where the computer administers the training.
- the preferable environment is a computerized system with associated devices that immerse students in emotionally engaging and functional operational environments throughout the learning experience, such as those relying on simulation for the training, e.g., for flight simulation or another vehicle simulator.
- Known training systems provide the trainee with classroom lessons and computer based training (CBT) delivered by computer or by a human instructor, followed by an after-action review that is given to the trainee from which the effectiveness of the training on the trainee can be determined. If the assessment is not positive for the trainee having been effectively trained by the course of instruction, the computer system either repeats the instruction process for the trainee, or initiates a remedial process to bring the trainee up to an effective level. This rigid sequential process is repeated for all trainees who follow the identical sequence of instruction until the assessment indicates adequate effectiveness of the training.
- CBT computer based training
- This process can result in wasteful or inefficient and costly use of the training resources, e.g., the simulator, because of the varying skill levels of the trainees and the varying effectiveness of the course of instruction on each trainee.
- the most advanced student or trainee may be exposed to steps of training for less difficult aspects of the training, making that trainee bored, and also wasting the training time by trying to teach things that the trainee already knows.
- a less expert, moderately-skilled individual may be given additional instruction that is not necessary while at the same time being given less instruction in certain areas where he requires additional instruction and training, resulting in more repeat work.
- there is the very low-skilled trainee that needs to learn virtually everything, and has difficulties with addressing some of the more difficult aspects of the training, possibly missing basics, and therefore being unable to benefit from the remainder of the more advanced segment of the instruction set.
- a computerized learning system such as a computerized simulation system
- a trainee is efficiently provided with instructions that are appropriate to his skill level and his personal learning parameters as they are determined by the assessment of the ongoing instruction or by prior identified learning preferences of the trainee.
- the system supports self-paced learner-driven discovery while continuously targeting the learner's KSA (Knowledge, Skill, Ability) gap.
- KSA Knowledge, Skill, Ability
- The system may rely on full simulation, which may be real (i.e., using a real device in the training for its use), simulated (as with touch screen I/O devices that emulate the device being trained for) or based on a model (or dummy copy) of the device or devices, the use of which is being trained.
- a system for training a student comprises a simulation station configured to interact with the student and a computer system.
- the simulation system displays output to the student via at least one output device and receives input via at least one input device.
- the computer system has a rules engine operative on it and computer accessible data storage operatively associated with it and storing (i) learning object data including a plurality of learning objects each configured to provide interaction with the student at the simulation system, and (ii) rule data defining a plurality of rules accessed by the rules engine.
- the rules data includes, for each rule, respective (a) if-portion data defining a condition of data and (b) then-portion data defining an action to be performed at the simulation station.
- the respective action comprises output of a respective one of the learning objects so as to interact with the student.
- the rules engine causes the computer system to perform the action when the condition of data is present in the data storage.
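- By way of illustration only, the rule data described above (an if-portion defining a condition of data and a then-portion defining an action that presents a learning object) might be represented as in the following minimal Python sketch; the field names, threshold value, and learning object are illustrative assumptions, not elements of the claimed system.

```python
# Minimal sketch of rule data: an if-portion (a condition over the shared
# data store) and a then-portion (an action that presents a learning object).
# Field names, the threshold, and the learning object are illustrative.

data_store = {
    "student.altimeter_moe": 0.4,     # assessment measure stored for the student
    "platform.current_lo": "intro",   # what the simulation station is currently presenting
}

learning_objects = {
    "altimeter_review": lambda: print("Presenting altimeter review learning object"),
}

rules = [
    {
        # if-portion: a condition of data in the data storage
        "if": lambda d: d["student.altimeter_moe"] < 0.6,
        # then-portion: an action performed at the simulation station
        "then": lambda d: learning_objects["altimeter_review"](),
    },
]

# The rules engine performs the action whenever the condition of data is present.
for rule in rules:
    if rule["if"](data_store):
        rule["then"](data_store)
```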
- a method for providing computerized training to a student comprises providing a simulation station connected with a computer system with computer-accessible data storage supporting a rules engine thereon. Lesson data is stored in the data storage so as to be accessed by the rules engine.
- This lesson data comprises
- learning object data defining a number of learning objects that each, when activated by the rules engine, cause the simulation station to output visual imagery, audio or other output, and
- rules data defining a plurality of rules on which the rules engine operates so as to administer the computerized training.
- the rules each have a data condition part and an action part.
- the data condition part defines a state of data in the data storage that, when present, causes the rules engine to direct the computerized system to take a predetermined action. At least some of the actions comprise activating at least some of the learning objects to interact with the student at the simulation station.
- Student state data is also stored in the data storage. The student state data includes data defining an assessment measure of training of the student.
- the computerized training is provided to the student at the simulation station with the rules engine administering the training according to the rules stored in the data storage.
- the assessment measure for the student is determined repeatedly or continually based on input received from the student at the simulation station, and the determined assessment measure is stored in the student state data.
- the rules data defines at least one rule that initiates the action thereof when a data condition that the student state data in the data storage defines an assessment measure below a predetermined value is present, and the action includes initiating operation on the simulation station of one of the stored learning objects.
- objects of the invention are accomplished using a computerized training interface system having input and output capabilities, and a computerized system connected with it that preferably operates using an inference engine or a rules engine.
- the rules engine is programmed with a set of rules as will be described herein that allow it or enable it to administer flexibly the training of a trainee in an immersive training station.
- An Intelligent Decision Making Engine is a data-driven computer system, preferably a rule-based inference engine implemented using rules software, such as the Drools Expert software package from JBoss, a subsidiary of Red Hat, or a CLIPS software package, which is available as open-source public domain software, that implements actions or procedures responsive to specified qualities of data being stored.
- the rules are continuously active once loaded, and are configured to allow for continuous adaptive modification of any instruction and other interactions with the trainee of the training station in real time, an interactive adaptive learning system, as will be described herein.
- the CLIPS software and its operation are described, inter alia, in the Third Conference on CLIPS Proceedings (Electronic Version), a NASA Conference publication available publicly at http://clipsrules.sourceforge.net.
- the adaptive process of the invention is especially efficient at delivering training. It may be said that the rules engine system provides for a higher-resolution or finer-grain adaptive learning than is available in the prior art due to the immediacy of the reaction of the rules-based system.
- the organization of rules is prepared by the training staff, and generally provides for at least one of
- the continuous performance assessment targets the individual learner, adapting the lesson to the state of the learner.
- the complexity and pace of the lesson are adapted to regulate learner engagement and maximize learning and retention.
- a training station and a computer system with the rules engine are connected by a network operating pursuant to communications software that controls the communication on the network such that computers on the network publish data that is transmitted only to other computers on the network that have subscribed to receive data from the publishing computer.
- the rules engine computer system subscribes to receive data published by the training system, and stores data received from it in the computer accessible data storage, so that rules of the rules engine computer system have if-portions based on the received data.
- FIG. 1 is a schematic view of the overall simulation system according to the invention.
- FIG. 2 is a schematic view of the internal operation of a training system according to the invention.
- FIG. 3 is a more detailed schematic view of the operation of the computerized simulation system of the invention.
- FIG. 4 shows an example of an immersive platform station for use as the training station for the invention, together with a schematic illustration of the peripheral devices attached thereto and the associated software support from the computer controlling system.
- FIG. 5 is a perspective view of an exemplary simulation system using the present invention.
- FIG. 6 is an exemplary display showing an avatar, and some training field of view and equipment presented to a trainee as an example.
- FIG. 7 is an illustrated diagram of the operation of a rules engine to administer a training program for the HUD (Head-Up Display) and CCU (Cockpit Control Unit) of a vehicle.
- HUD Head-Up Display
- CCU Cockpit Control Unit
- FIG. 8 shows a diagram of a timeline of training of an ideal student using the training system of the present invention.
- FIG. 9 is a timeline diagram of a student requiring corrective or remedial actions being trained in this same material as in FIG. 8.
- FIG. 10 is an illustration of the development of the learning object database used for the present training method.
- FIG. 11 is a diagram of a data model by which the data is stored in a computer accessible memory device.
- FIG. 12 is a diagram illustrating the relative efficiencies of training for a number of different students at different skill levels.
- FIG. 13 is a diagram of an example showing lesson flow for an exemplary rules implementation.
- FIG. 14 is a diagram illustrating trainee insertion in the adaptive learning system of the present invention.
- FIG. 15 is a diagram illustrating trainee re-evaluation in the adaptive learning system of the present invention.
- FIG. 16 shows a story board style illustration of the training process for a UH60 attack helicopter simulation, illustrating various rules-implemented processes possible according to the present invention.
- FIG. 17 is a diagram illustrating the structure of a multi-processor embodiment of the invention.
- FIG. 18 is a diagram of a portion of an embodiment of networked system according to the invention with engineer tools for creating and editing the rules and the System of Record (SOR) system database.
- SOR System of Record
- FIG. 19 is a diagram showing another portion of the networked system of FIG. 18 with a simulator and a relative geometry coprocessor on the network.
- FIG. 1 shows a diagram of an embodiment of the system architecture of the computer system controlling the operation of a LinkPod™ training system, which may be used for a variety of training purposes, especially for training in operation of vehicles, such as a flight simulator. It will be understood that many different systems and types of computer hardware and software with varied designs and components may be used advantageously in the invention as training systems.
- the system is implemented in a computer system, which may comprise one computer or a plurality of computers linked by a network or local connection over which system operation is distributed.
- the computer system or systems may run operating systems such as LINUX or Windows, and their operations are controlled by software in the form of computer-executable instructions stored in computer-accessible data memory or data storage devices, e.g., disk drives.
- the computers also include typical computer hardware, i.e., a central processor, co-processor or multi-processor, memory connected to the processor(s), and connected devices for input and output, including data storage devices that can be accessed by the associated computer to obtain stored data thereon, as well as the usual human operator interfaces, i.e., a user-viewable display monitor, a keyboard, a mouse, etc.
- the databases described herein are stored in the computer-accessible data storage devices or memory on which data may be stored, or from which data may be retrieved.
- the databases described herein may all be combined in a single database stored on a single device accessible by all modules of the system, or the database may be in several parts and stored in a distributed manner as discrete databases stored separate from one another, where each separate database is accessible to those components or modules of the system that require access to operate according to the method or system described herein.
- the overall LinkPod™ system comprises an immersive station 3, which is an adaptable training station with a number of input and/or output devices accessible by the user.
- the immersive station 3 in the preferred embodiment comprises a seat 4 for a user and displays, including a larger 3D HDTV resolution display 6 and two or more touch sensitive I/O screens 8 supported for adjusting movement.
- the touch screens can be used to display a cockpit of any vehicle or the specific device the training is for, so the station 3 can be used for a variety of possible training courses for a variety of different vehicles or aircraft.
- the immersive station 3 also has an eye tracker that detects the direction that the trainee is looking in and generates a data signal carrying that information. All the displays 6 and 8 are connected with and controlled by a local computer system that supports the immersive station 3 as a platform.
- the base of the station 3 is a frame supported on casters, which allow for easy movement of the station 3 as desired.
- the immersive platform station computer system 10 runs an immersive platform manager software module 14, which operates a selected configuration of the trainee station 3.
- the platform support includes support of the main display 6, the interactive displays 8, the eye tracker or gaze detector, a haptic system, a brain sensor system, which can detect certain neurological parameters of the trainee relevant to the training, sensors that can detect the trainee's posture, any other biometric sensors that may be desirable to monitor the physical condition of the trainee, and also a 3D sound system, a microphone, and any other hardware that is desired for the trainee station.
- the various components of the system return electrical signal data that is processed by platform manager 10 and transmitted to other modules of the system.
- a plurality of immersive stations 3 can be supported in parallel by a system according to the invention.
- the immersive station 3 is electronically connected by a network data link or local connection with a computerized learning management system (LMS) 5.
- LMS computerized learning management system
- the LMS 5 is supported on a separate computer system via a network, and it may be connected to a number of training stations 3 locally or remote from its location.
- the LMS stores data, including videos and other information and media files used in the lessons, as well as data defining the students that use the system and data relating to administration of training with the various training stations 3 connected therewith via one or more networks or the Internet.
- the LMS is similar to training management systems known to those of skill in the art, in that it communicates with the immersive station 3 so as to display a prompt and it receives student log-in identification data, typically comprising an ID and a password, from the immersive station 3 entered by the trainee through an interactive screen 8 at the immersive station 3.
- the LMS then lists the possible courses of instruction available to the trainee, and receives a responsive communication through the interactive device 8 that selects a course.
- the LMS then loads the respective training station 3 with the necessary training data resources, media, software that supports hardware needed for the specific training selected, and other data as will be described herein, and also initiates the system of the training station to present the course to the trainee.
- LMS 5 is connected with and accesses stored curriculum and records database 6.
- This database contains the data needed to administer training in the system, including history of the student or students. Selection of a course of training responsive to trainee log-in and other selection input, causes the LMS to load the requisite lessons, rules, and other data into the appropriate data storage or memory so as to be accessible by the components of the system that are involved in delivery of training at the immersive station 3.
- the system further includes an intelligent decision making engine (IDME) indicated at 7.
- the IDME 7 in the preferred embodiment is a rules-based inference engine supported on a computer system in the training station 3.
- the IDME rules run via an API of CLIPS rules-engine software running on the host computer.
- the IDME 7 has computer accessible memory that is loaded by the LMS 5 with the rules needed for the specific selected training operation.
- the IDME has access to a database shared with other components of the system that contains training data, as will be described herein.
- the IDME rules engine operates according to a set of rules that are loaded into the associated storage or memory so as to be accessible by the IDME.
- Each rule specifies a condition of data in the associated database, e.g., whether the data value of a current measure of effectiveness for the current trainee is below a predetermined threshold value.
- the rule also specifies an action that is to be taken whenever that condition of data is satisfied, such as, e.g., to display a question to the trainee and wait for a response.
- the rules engine is a data-driven system, in that the state of the data in the associated database immediately triggers prescribed actions when it satisfies the condition of the rule.
- the set of rules loaded in the IDME all operate continuously and simultaneously based on the state of data accessible to the IDME, and the rules trigger actions that will be taken in the training process at the immersive station 3 at whatever point in time the associated data condition of the rule is met.
- the IDME 7 passes, sends or otherwise transfers data to a content adaption module 9 that corresponds to actions, i.e., commands to perform integrated lesson actions.
- Content adaption module 9 is also implemented using a software system running on a computer, and the IDME and the content adaption module 9 may be both supported on the same computer.
- Content adaption module 9 also has access to a data storage device 11 storing data containing training content, e.g., materials, recorded instruction and various other software and data that is used in providing simulation or training to the user at station 3, and it controls the operation of the instruction and/or simulation conducted at immersive station 3.
- the content adaption module 9 causes the immersive station displays and sound system to output multimedia training content, such as avatars delivering audible content, voice instruction, and other actions.
- Those other actions include interfacing with an external simulation or live device running a computerized simulation of the vehicle of the training by displaying the correct controls on the interactive screens and with an appropriate display on the main display 6 created by a computerized image generator, not shown, that renders real-time video based on virtual scene data, as is well known in the art of flight or other vehicle simulation.
- Content adaption module 9 uses training content 11 to provide to immersive station 3 the necessary training events.
- the various trainee sensors and input devices generally indicated at 13, e.g., eye-tracking, gaze or blink detection, neural detectors,
- touchscreens or other touch-based simulated control panel or cockpit input/output devices, a microphone listening for speech, and optionally detectors from which body position or posture may be detected, detect actions or conditions of the trainee and transmit data therefrom to continuous assessment module 15.
- the continuous assessment module 15 is also implemented using a software system running on a computer.
- the IDME and the continuous assessment module 15 are both supported on the same computer located geographically at the simulation station 3.
- the assessment module 15 may be independent of the IDME, or, more preferably, the assessment module 15 may be constituted as a set of Assessment Rules (see FIG. 10) incorporated into the rules data as a subset of the total rules data on which the IDME operates.
- the assessment activities may be seamlessly interwoven with the activation of learning objects, which transmit output that triggers input of the trainee that may be used to assess a measure of performance (MOP) of the student, or a measure of effectiveness (MOE) of the training as it is given.
- MOP measure of performance
- MOE measure of effectiveness
- Continuous assessment module 15 provides continuous assessment of the trainee such as by analysis of responses or activities of the trainee at the immersive station 3.
- the continuous assessment module 15 generally produces data that is an assessment of the knowledge, skill and ability (KSA) of the trainee.
- KSA knowledge, skill and ability
- Knowledge is the retention by the trainee of certain information necessary to operate the vehicle, e.g., the location of the switch for the landing gear on an aircraft.
- Skill is the use of knowledge to take some action, e.g., to operate the landing gear properly in simulation.
- Ability is the application of knowledge and/or skill to operate properly in a more complex mission scenario, such as in a simulation using the knowledge and skill.
- the assessment module 15 can assess the trainee based on frequency of errors and correct actions in a simulation exercise, with corresponding weighting or scoring factors from severe errors at -5 to perfect operation at +5. Assessment can also be based on the trainee's visual scan pattern using techniques such as Hidden Markov Model (HMM) to assess the trainee's skill level while executing tasks. Interactive quizzes or pop-up questions may also be employed, where the response is either a verbal response picked up by a microphone or selection of a multiple choice question response through some other input device such as a touchscreen. Some biometrics may be used as well.
- HMM Hidden Markov Model
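- As a rough illustration of the weighted scoring described above, the following sketch averages event scores on the -5 (severe error) to +5 (perfect operation) scale into a single measure of performance; the event names, weights, and the normalization to a 0..1 range are assumptions made for the example.

```python
# Sketch of a measure-of-performance (MOP) computed from weighted event
# scores in the range -5 (severe error) to +5 (perfect operation).
# The event list and the averaging scheme are illustrative assumptions.

events = [
    ("gear_up_below_safe_altitude", -5),   # severe error
    ("correct_hud_declutter", +4),
    ("late_checklist_item", -2),
    ("smooth_level_off", +5),
]

def measure_of_performance(scored_events):
    """Average the weighted scores and map them onto a 0..1 MOP."""
    if not scored_events:
        return None
    mean = sum(score for _, score in scored_events) / len(scored_events)
    return (mean + 5) / 10  # -5..+5 mapped to 0..1

print(measure_of_performance(events))  # 0.55 for the events above
```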
- the KSA assessments made by the continuous assessment module 15 are stored as data in a student state data area in a database accessible to both the continuous assessment module 9 and the IDME 7. It will be understood that the student state data may be numerical values linked to identify the associated area of knowledge, skill or ability, and may be a flag of 1 or 0 indicative of the presence or absence in the student of the knowledge, skill or ability, or a numerical variable in a range that is indicative of the degree of presence of the KSA.
- quality e.g., a score from a test on a scale of 0 to 100
- quality may be a string of characters that is indicative of some level of KSA or expertise of the student, e.g., with respect to successful completion of some aspect of training, a "YES" or "NO", or a detailed definition of a familiarity with an instructional area, any character string, e.g., "BEGINNER", "EXPERT", or "BASIC", etc.
- platform state data that defines the current state of the platform, and is indicative of what training is being displayed or the status of the delivery of training to the trainee on the immersive station 3.
- This data may also be numerical or character strings.
- the rules of the IDME define conditions for action that are based on the student state data or the platform data.
- the rules cause the system to react to the data produced by the continuous assessment so that the immediate decision making of the system improves the efficacy and efficiency of the use of the simulation device or immersive station 3.
- immersive station 3 is occupied by a student that interacts with the immersive station 3.
- Student actions at the immersive station 3 are processed by continuously- running assessment program 9.
- the assessment program continuously or continually develops an assessment of the knowledge, skill and ability (KSA) of the student from the student actions, and also from the stored LMS model of the student, which has already been obtained or supplied to the system or developed over time, and which defines certain training attributes of the trainee, such as whether the trainee is better trained by visual or auditory instruction.
- KSA knowledge, skill and ability
- the continuous assessment determines the student KSA 17.
- the student KSA is compared to a desired or required level of KSA appropriate to the level of instruction or simulation that the student is receiving.
- the difference between the desired KSA value and the actual student KSA may be referred to as a KSA gap 19, this being either a quantified value or a value that can be derived from the determined student KSA and compared with the specific expectations of the student as pre-determined by data in the system.
- the student KSA is part of the student state data that is available to the IDME 7, and as such the rules are preferably written so as to take instructional actions targeting the current KSA gap of the trainee.
- the IDME rules operate continuously, and they take instructional actions immediately based on the data in reaction to the KSA gap or KSA values, providing optimal training directed at the areas where the trainee requires instruction.
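- A minimal sketch of how a KSA gap might be computed from the student state data, assuming numeric KSA values and an illustrative desired profile and threshold (none of which are specified values from the patent):

```python
# Sketch of a KSA gap: the difference between the desired KSA levels for the
# current lesson and the student's assessed KSA. Keys, values, and the 0.1
# threshold are illustrative assumptions.

desired_ksa = {"knowledge": 0.8, "skill": 0.7, "ability": 0.6}
student_ksa = {"knowledge": 0.9, "skill": 0.5, "ability": 0.4}

ksa_gap = {k: round(max(0.0, desired_ksa[k] - student_ksa[k]), 2) for k in desired_ksa}
# -> {'knowledge': 0.0, 'skill': 0.2, 'ability': 0.2}

# A rule can then target instruction at the largest remaining gap.
focus_area = max(ksa_gap, key=ksa_gap.get)
if ksa_gap[focus_area] > 0.1:
    print(f"Insert remedial learning object targeting {focus_area}")
```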
- the instructional actions are sent from the IDME 7 to the learning content adaptation module 5,
- the learning content adaptation module 5 accesses training content data stored on a computer accessible data storage device 21 and this material is transmitted to the immersive station 3, adjusting the training of the trainee.
- a rule is composed of an if portion and a then portion.
- the then portion of a rule is the set of actions to be executed when the rule is applicable, i.e., when the if portion of the rule is present in the database.
- the inference engine or IDME 7 automatically matches data against predetermined patterns and determines which rules are applicable.
- the if portion of a rule is actually a whenever portion of a rule, because pattern matching occurs whenever changes are made to the data associated with the IDME.
- the inference engine selects a rule, and if the data conditions of the if portion are present in the data, then the actions of the then portion of the selected rule are executed.
- the inference engine selects another rule and executes its actions. This process continues until no applicable rules remain.
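- The match/select/execute cycle described above can be sketched as follows; the selection strategy and the simple refraction bookkeeping (so that a fired rule does not re-fire on unchanged data) are simplifying assumptions rather than the behavior of any particular inference engine:

```python
# Sketch of the match -> select -> execute cycle: applicable rules are
# collected and fired until no applicable rules remain. The 'fired' set is
# a simplified stand-in for refraction and is an assumption.

def run_engine(rules, data):
    fired = set()
    while True:
        applicable = [i for i, (cond, _) in enumerate(rules)
                      if i not in fired and cond(data)]
        if not applicable:
            break                      # no applicable rules remain
        chosen = applicable[0]         # selection strategy is illustrative
        rules[chosen][1](data)         # execute the then-portion actions
        fired.add(chosen)

rules = [
    (lambda d: d["intro_complete"] and not d["hud_presented"],
     lambda d: (d.update(hud_presented=True), print("Present HUD learning object"))),
    (lambda d: not d["intro_complete"],
     lambda d: (d.update(intro_complete=True), print("Play intro content"))),
]

run_engine(rules, {"intro_complete": False, "hud_presented": False})
# prints "Play intro content" and then "Present HUD learning object"
```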
- the if portion, or the contingent data precondition portion, of each of the rules may be based on any aspect of the student state data or the platform state.
- the then portion of the rule includes the action to be taken in response to satisfaction of the conditional requirement for the student or platform data, and may be any action that can be performed by the immersive station 3.
- the IDME may be programmed with a rule that if the student KSA determined during a simulated aircraft training exercise indicates a poor understanding (either by a flag or a scale of effectiveness that is below a predetermined threshold) of an aspect of the operation of an instrument panel, e.g., an altimeter, then a special avatar is to be displayed and an instructional statement made via the sound system of the immersive system 3.
- the instruction is transmitted to the learning content adaption 5 directing display of the avatar and playing of the audio.
- FIG. 6 shows a main display screen view, wherein a human-appearing avatar is giving audio instruction regarding an aspect of flight training.
- the avatar may be displayed as part of the rendered imagery shown to the trainee, e.g., as a person standing in the environment displayed and speaking to the trainee.
- the rules- based system can make the avatar interactive with the trainee, responding to the trainee's reactions to the avatar's statements or commands.
- the IDME may have a rule that if the eye tracker data indicates that the trainee has not blinked for thirty seconds, then the LCA is to schedule a break or discontinue the process and request assistance from the human trainer.
- the then portion or action specified by the rules to a KSA deficiency relative to an acceptable KSA level may be as simple as repeating a previous course of instruction when a trainee shows a lack of proficiency in one particular area.
- the action may involve an immediate modification of the training presently being given to the trainee so as to enhance certain aspects of the training so as to offset a shortfall in training that is detected.
- Another possible rule is one wherein the if portion of the rule is that the data indicates that the trainee is doing extremely well, has a very high performance assessment and a low or zero KSA gap, possibly coupled with biometric data having an indication of physiological effects of low stress or disinterest, such as blinking longer than usual; then additional complexity or difficulty is introduced into the ongoing training.
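- The three example rules just described (poor altimeter understanding, prolonged failure to blink, and high performance with signs of disinterest) could be expressed as condition/action pairs over the shared state, as in this sketch; the field names and threshold values are illustrative assumptions:

```python
# Example rules expressed as condition/action pairs over the student and
# platform state. Field names and thresholds are illustrative assumptions.

state = {
    "altimeter_moe": 0.3,        # below threshold -> poor understanding
    "seconds_since_blink": 12,
    "ksa_gap": 0.0,
    "stress_index": 0.1,
}

example_rules = [
    # Poor understanding of the altimeter -> avatar with audio instruction.
    (lambda s: s["altimeter_moe"] < 0.5,
     "display instructor avatar and play altimeter instruction"),
    # No blink for 30 seconds -> offer a break / call the human trainer.
    (lambda s: s["seconds_since_blink"] >= 30,
     "schedule a break and request trainer assistance"),
    # Doing very well and under-stimulated -> raise difficulty.
    (lambda s: s["ksa_gap"] == 0.0 and s["stress_index"] < 0.2,
     "introduce additional complexity into the ongoing training"),
]

for condition, action in example_rules:
    if condition(state):
        print("ACTION:", action)
```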
- the internal software-based computer architecture of an embodiment of the system is illustrated in the diagram of FIG. 1.
- the host computer system generally indicated at 23 supports the operation of the training station 3, and preferably is connected via a network, e.g., the Internet, with the computer system that supports the LMS 5, allowing for the individual trainee to sign in, be recognized by the system, and to have his personal data, if on file, restored to the local system(s) of the training station 3 to assist in his training.
- a network e.g., the Internet
- the host interface 25 also provides interface of the training station 3 to external simulation state data, and allows training station 3 interactions to be applied to an external simulation, i.e., a simulation program running on a connected computer system. For example, when a student turns on power to a virtual HUD by touching one of the touch screens of training station 3, this action generates an input signal that is communicated to the connected simulation. Responsive to the input, the simulation changes the switch position in the HUD and the data defining the switch state in the simulation database, and the power lamp changes color.
- the new state is communicated through host interface 25 to the virtual learning object (VLO), meaning the display in the training station 3, e.g., one of the touch displays, that is configured by the lesson data to look like a HUD control.
- VLO changes the displayed appearance of the virtual device, e.g., the HUD, to match the host state data for the simulation of the device.
- One or more processors in the training station administer the operation of the training platform, which is initiated with all programs and data needed for the selected course of instruction.
- the platform state data 33 is initialized and made available to the IDME 7, which accesses both the platform state data and the student state model data.
- the platform state 29 indicates the state of operation of the simulator attached to the system, and the student state model 35 reflects just data that has been stored based on the student's conduct and prior history as a trainee. Together these two groups of data are treated as "facts", the data to which the rules of the CLIPS inference engine 31 are applied.
- the output of the IDME 7 is actions 39 that are transmitted to the LCA, the learning content adaptation service.
- These actions 39 are usually data that is transmitted to the learning content adaptation system 9, which in turn accesses the lesson database 41 accessible to the LinkPod™ core computer so that it can automatically obtain data stored therein.
- the LCA 9 transmits to the immersive platform service tasks that are to be executed by the simulator platform system, including avatar tasks, and other platform tasks for display or interaction with the trainee. This includes directing rendering of 3D imagery by an image generator computer system based on a database of virtual environment data, including models of vehicles and other objects, textures, and other aspects of display or presentation such as fonts and VOF. Data is returned from the simulation platform in a raw form, and that data is then processed to be converted into student state data or platform state data and stored in the relevant data areas for access by the IDME 7.
- FIG. 11 shows a diagram of the data model according to which data for the learning management system is preferably stored and utilized within the system of the invention. All of the elements and objects shown herein constitute data stored electronically on data storage devices that are accessible by a computer.
- the data model illustrates the organization of the stored data in the database, and is reflected in the database by stored database organizational data, e.g., pointers pointing to the location of data corresponding to records, which is used by software accessing the database to retrieve or store data therein on the data storage device or devices containing the database, as is well known in the art.
- the LMS 5 identifies each course of instruction as a lesson record.
- the lesson record contains pointers or lists that include
- the objectives are each stored as a record 53 with a list of steps to be performed by the trainee in the process of the lesson. These are each a discrete action, such as "identify landing gear control", and they can be satisfied by a test question given to the trainee. In addition to the identification of the steps, there is a set of measurements of effectiveness of completion of the steps by the trainee, either a flag set to 1 (completed) or 0 (not completed), or a range of effectiveness of the step completion.
- the learning objects are each stored as a record 55 that defines a series of actions to be taken, i.e., displays of imagery or avatars or administration of tests, generally all outputs to the trainee through the immersive system.
- the virtual objects are records 57 that define virtual controls, such as cockpit controls that are displayed in interactive viewing displays 8 so as to appear similar to the controls of the real vehicle that is being simulated.
- the resources are identified as a data record 59 that lists the hardware of the immersive station that is to be used in the lesson, e.g., whether the microphone and voice recognition is to be employed, whether the eye tracking system or other biometric system is to be employed, etc.
- the simulation environment record 61 identifies a specific database of scene data defining a virtual world that is used for the given lesson. There may be a large number of virtual environments defined in the system, such as mountains, desert, or oceans, each of which may be selected by a lesson for use as the training mission environment.
- the rules record 63 contains the set of rules for the lesson 51, written in the CLIPS language. These rules are loaded into the IDME when the lesson is started. Individual learning object records may also reference rules records 55 as well, which are loaded when the learning object is loaded, and deleted from the IDME when the learning object is completed.
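- The lesson data model described above might be sketched with simple data classes as follows; the field names and example values are illustrative and do not reproduce the actual record layout of FIG. 11:

```python
# Sketch of the lesson data model of FIG. 11: a lesson record points to
# objectives, learning objects, virtual objects, resources, a simulation
# environment, and a rules record. Field names are illustrative.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Objective:
    steps: List[str]                    # discrete actions, e.g. "identify landing gear control"
    effectiveness: List[float] = field(default_factory=list)

@dataclass
class LearningObject:
    actions: List[str]                  # outputs to the trainee (imagery, avatars, tests)
    rules: List[str] = field(default_factory=list)   # LO-scoped rules, loaded/unloaded with the LO

@dataclass
class Lesson:
    objectives: List[Objective]
    learning_objects: List[LearningObject]
    virtual_objects: List[str]          # e.g. cockpit controls shown on the touch displays
    resources: List[str]                # hardware used, e.g. microphone, eye tracker
    environment: str                    # virtual world database, e.g. "desert"
    rules: List[str]                    # lesson-level rules, loaded at lesson start

hud_lesson = Lesson(
    objectives=[Objective(steps=["identify HUD power switch", "adjust HUD brightness"])],
    learning_objects=[LearningObject(actions=["play HUD intro", "HUD brightness task"])],
    virtual_objects=["HUD control panel"],
    resources=["eye tracker", "microphone"],
    environment="mountains",
    rules=["(defrule hud-intro ...)"],
)
```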
- FIG. 7 illustrates a simple rule based process of training in which a lesson involving training in learning objects 71 having to do with operation of the CCU of an aircraft and learning objects having to do with HUD operation of the aircraft are combined.
- Learning objects for the training are selected, step 75, based on student state data at startup, i.e., the level of training or skill of the student according to the LMS records.
- the general rules are loaded, and the set of learning objects are loaded.
- the rules control the presentation of the learning objects to the student so that a student will not be given a more advanced lesson until the student has completed the necessary prerequisites.
- the order of completing those prerequisites may vary from student to student, but the rule will not permit the display of the advanced learning object until the student state data indicates that the prerequisite learning objects have been completed.
- an agenda of learning objects is selected for the student, and the rules cause them to be presented to the student (step 77), and once the material has been presented to the student, the student state model data is updated to reflect the fact (step 78).
- a rule 79 is applied to the extant student state data: "IF (1) student has proven knowledge of X, and (2) student has proven knowledge of Y, and (3) student has not yet been presented module Z (another learning object), THEN present module Z" as reflected by values stored in the student state data.
- This rule is active, but its IF-part is not satisfied until the student state data indicates that the student has knowledge of X and Y.
- the rule causes Z to be presented.
- the student model or student state data is updated to reflect that Z has been presented, as by, e.g., setting data as a flag corresponding to completion of the Z module.
- the student model or data indicates that Z has been presented, and the IF-part of the rule, which includes the determination "(3) student has not yet been presented module Z", is no longer satisfied, so the rule does not cause any action from then on.
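- The prerequisite rule of FIG. 7 and its self-deactivation through the student state flag can be sketched as follows, with illustrative flag names:

```python
# Sketch of the prerequisite rule of FIG. 7: present module Z only once the
# student state shows knowledge of X and Y, and only once. Flag names are
# illustrative.

student_state = {"knows_X": True, "knows_Y": True, "Z_presented": False}

def rule_present_z(state):
    # IF knowledge of X and Y is proven AND Z has not yet been presented
    if state["knows_X"] and state["knows_Y"] and not state["Z_presented"]:
        print("Present module Z")
        state["Z_presented"] = True   # updating the student model deactivates the rule

rule_present_z(student_state)   # fires: presents Z and sets the flag
rule_present_z(student_state)   # IF-part no longer satisfied; nothing happens
```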
- FIG. 8 shows a timeline flow for a lesson as applied to a student that is an ideal student, meaning that the student completes the objectives of each learning object without creating conditions in the student state data that cause the IDME rules to trigger remedial actions.
- the lesson is loaded, and this includes loading of the lesson rules.
- the lesson starts, and the first rule to activate is the Intro Rules 102, which trigger the action of Intro Content Playback 103.
- this rule is no longer satisfied once a flag for content complete for the intro learning object ("LO") is set at 105.
- the HUD LO Description Rules 108 then are satisfied and become active, the action being to load the HUD content and play the HUD playback 109.
- the HUD rules direct an adjustment task for the student to perform at 111. This task is successfully completed, and the HUD rules then direct playback of a "good job" message (113) to the student.
- FIG. 9 shows a different outcome based on the same rules, all of which are loaded at point 201.
- the rules include eye-tracker-data-based rules that react to data indicative of the student not watching the display, and of microphone pickup of chatter indicative of distraction.
- the Intro LO is loaded, and the intro content playback proceeds.
- the distraction detection rule is running as well.
- the distraction rule triggers a break-offer action 207.
- the break is conducted according to Break Rules 209, which involve playback 211 offering a break, listening (213) for an acceptance, and then resuming on return of the student (215).
- the intro completion flag is then set at point 216.
- the HUD LO then starts according to the HUD description rules 217. There is the HUD content playback 219, followed by a test of HUD brightness adjustment 221. The student here does not properly change the HUD brightness (223), and the rules cause playback of the system itself doing the brightness adjustment (225). A negative effectiveness data value is then stored in the student state data (227).
- the HUD rules actions are completed at 229, and the HUD rules become inactive.
- the rules then load the Flight LO at point 231 with the Flight LO rules 233.
- the flight content is then run, but there is an active rule that has its if portion satisfied: IF (1) the student has a negative HUD score, and (2) the student data indicates distraction during the intro playback, THEN an action is directed such that a HUD brightness training event insertion (235) is made in the flight LO content 237. Once that is completed, the lesson continues as before.
- the remedial action taken in this way using the rules engine avoids failure of the entire lesson effectiveness, because corrective action is taken during the lesson to correct for the distraction and knowledge deficiency detected in the student.
- the result is more efficient use of the simulation system.
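- The remedial-insertion rule of FIG. 9 can be sketched as follows; the score value, flag names, and flight LO content are illustrative assumptions:

```python
# Sketch of the remedial-insertion rule of FIG. 9: a negative HUD score plus
# detected distraction during the intro causes a HUD brightness training
# event to be inserted into the flight learning object. Names are illustrative.

student_state = {"hud_brightness_score": -1, "distracted_during_intro": True}
flight_lo_content = ["takeoff", "climb", "cruise"]

if student_state["hud_brightness_score"] < 0 and student_state["distracted_during_intro"]:
    # insert the remedial event early in the flight LO content
    flight_lo_content.insert(1, "HUD brightness training event")

print(flight_lo_content)
# ['takeoff', 'HUD brightness training event', 'climb', 'cruise']
```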
- Efficiency of the rules-based approach is also illustrated in the comparative timelines of FIG. 12.
- a proficient student timeline is seen at 301. The proficient student completes four lessons, and his proficiency is detected by rules-based assessment. He then completes two missions 1 and 4 appropriate to his KSA level, completes a test flight and then graduates, freeing the system for the next trainee.
- the timeline 303 for student 2 shows the same four lessons, with additional training content inserted throughout, resulting in a test flight and graduation in slightly longer time than required for the proficient student, but not equivalent to repetition of the course.
- the timeline 305 for an expert student is greatly accelerated, because the training is intensified as the rules detect a high level of KSA, resulting in a mission and a test flight after only one lesson, and immediate graduation. This frees the system for an appreciable amount of time, and does not waste the trainee's time in unnecessary training either.
- FIG. 13 also illustrates flow of a lesson.
- the trainee in this scenario gives the wrong answer at assessment point 401.
- the student data is modified to have a flag indicative of the wrong answer.
- the question is re-asked at point 403, and the right answer is given.
- a running rule tests this question again at point 405, and when the wrong answer is given, new content 407 is inserted and displayed to the trainee.
- the right answer is then given at 409.
- This adaptive learning approach is described in FIGS. 14 and 15.
- the adaptive learning allows for both insertion and reevaluation.
- various missions are run to evaluate the grasp of the content by the student.
- Failed content is inserted into the missions to augment memorization of the content by the student.
- where questions are used to determine the retention of the information, the questions will be repeated to enhance memorization.
- the system preserves in the student state data the number of times the information has been presented to the student before the student answers the question correctly.
- the failure to answer a question correctly can trigger a rule that an ad hoc evaluation of the content may be presented during a mission.
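- The re-evaluation bookkeeping described above (counting presentations and scheduling an ad hoc re-check of missed content) might look like the following sketch, with an assumed data layout:

```python
# Sketch of re-evaluation bookkeeping: the student state keeps the number of
# times an item was presented before it was answered correctly, and a missed
# question schedules an ad hoc re-check during a later mission.
# The data layout is an illustrative assumption.

student_state = {"presentations": {}, "pending_recheck": set()}

def record_answer(item, correct):
    student_state["presentations"][item] = student_state["presentations"].get(item, 0) + 1
    if correct:
        student_state["pending_recheck"].discard(item)
    else:
        student_state["pending_recheck"].add(item)   # re-ask / insert into a mission later

record_answer("HUD declutter procedure", correct=False)
record_answer("HUD declutter procedure", correct=True)
print(student_state["presentations"])    # {'HUD declutter procedure': 2}
print(student_state["pending_recheck"])  # set()
```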
- the rules engine architecture allows for this type of flexible training method. To obtain maximum efficiency, the rules must be developed and written in a way that identifies KSA parameters that are to be satisfied, and breaks the lessons up into workably discrete components that can be addressed independently to determine when the student has developed the requisite level of training KSA, and when he has not, to take remedial action immediately so as not to allow a partial deficiency to delay the entire training process.
- FIG. 10 illustrates the process of creation of the rules for a lesson.
- An existing linear curriculum 81 is broken down by cognitive analysis (step 82) into instructional storyboards (83).
- the cognitive task analysis 82 and the instructional storyboards 83 are used to develop the expert knowledge rules, and also object modeling for the development of the learning object database for presenting the lesson to a trainee in a rules-based system.
- the rules of the learning object include lesson rules, which govern the content presented and its order of presentation.
- Assessment rules identify specific ways of determining KSA of the student, as well as other aspects of the student's state, such as distraction or boredom.
- the resulting rules are loaded into the IDME when the training is conducted.
- a KSA storyboard example is shown in FIG. 16.
- the trainee logs in and starts the training via the LMS (step 501).
- the learning content manager (which includes the IDME, not shown) constantly assesses the student skill levels.
- the student is first given the knowledge of the lesson, in this case a HUD training exercise, by a virtual coach that performs the HUD usage and then directs the trainee through a declutter operation (stage 502).
- skill is developed by reducing coaching in stage 503. If too slow or too prone to errors, the trainee is sent back to stage 501 for more knowledge training (step 504). If not, the trainee moves to stage 505 for ability and retention training. In this stage 505, a more complex mission using the knowledge and skill is presented to the trainee. If the trainee is not able to perform, the trainee is returned to stage 503 for further skill development. If the trainee is able to perform, further training on points of detected weakness can be given in stage 507.
- the operation of the training method of FIG. 16 is based on rules that are continuously active.
- a rule is constantly in effect such that, if the determined level of skill falls below a predetermined threshold, the training action is changed to a knowledge-type coach training as in stage 502.
- a rule responsive to an assessment of ability falling below a predetermined threshold causes the training action of changing to skill-level training.
- the changes of training to different stages are immediate due to the constant applicability of the rules via the IDME. The result is efficient development of knowledge, skill and ability for the trainee.
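- The staged transitions of FIG. 16 driven by threshold rules can be sketched as follows; the threshold values and stage names are illustrative assumptions:

```python
# Sketch of the staged transitions of FIG. 16: fall back to knowledge coaching
# when skill drops below its threshold, and back to skill development when
# ability drops below its threshold. Thresholds and stage names are illustrative.

SKILL_THRESHOLD = 0.6
ABILITY_THRESHOLD = 0.6

def next_stage(current_stage, ksa):
    if current_stage == "skill" and ksa["skill"] < SKILL_THRESHOLD:
        return "knowledge"      # back to coached knowledge training (stage 502)
    if current_stage == "ability" and ksa["ability"] < ABILITY_THRESHOLD:
        return "skill"          # back to skill development (stage 503)
    # otherwise advance: knowledge -> skill -> ability -> targeted weak-point training
    return {"knowledge": "skill",
            "skill": "ability",
            "ability": "weak_point_training"}[current_stage]

print(next_stage("skill",   {"skill": 0.4, "ability": 0.0}))  # knowledge
print(next_stage("ability", {"skill": 0.9, "ability": 0.8}))  # weak_point_training
```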
- FIG. 17 illustrates the architecture of a preferred embodiment of a multiprocessor system supporting a LinkPod immersive training station 3.
- the station 3 includes the set 131 of I/O devices that interact with the trainee, including a 3D immersive main display 133.
- the I/O devices also include flight controls 137, which may be a joystick or a more elaborate cockpit control system that can emulate real vehicle controls, and left and right touch screens 8 that allow trainee input to the system and display appropriate media or VLOs to the trainee.
- the I/O devices may also include biometric sensors such as an eye tracker 139 of the sort well known in the art of simulation and military aircraft, a microphone 141 that receives audio input from the trainee, and an audio system 143 that generates sound as required by the training process.
- a computer lesson processor #1 (145), with access to a local data storage device and also access to a network 147, is connected directly with and transmits data and/or media to one touch display 8 and the audio system 143. It is also connected with video switch 149, which switches between video supplied from two possible sources, as will be described below. Lesson processor #1 supports execution of the software that supports the IDME and the LCA functions of the station 3.
- the touch screen service, a service that displays an avatar instructor for the trainee to view,
- a spatial audio service that can output specific sounds via audio system 143 as part of the training, playback of video or audio when appropriate, and support for a keyboard of the system, and
- resource management and training plan services that operate as described above with respect to the IDME/LCA operation, obtaining, locally or via network 147 from the LMS, and implementing the various media or data needed for the training selected.
- lesson processor #1 is initiated by lesson host processor 151, which is connected therewith by the network.
- Lesson host processor 151 supports the eye tracker 139, but also administers the immersive platform and maintains the data of the platform state, which is accessible to the IDME of lesson processor #1 locally or via the network.
- This host processor 151 assists the trainee in initially logging in and accesses the ICS system over the network 147.
- Lesson processor #1 communicates via network 147 with lesson processor #2 (153), which receives from processor #1 data directing what it should display on the associated touch display 8. Lesson processor #2 also receives data from speech recognition of input via microphone 141, which is incorporated into the platform state data accessible to the IDME.
- simulation host processor 155 provides the vehicle simulation, i.e., it determines, using a computer model and scene data as well as data of the platform state or student state, how the vehicle is moving or operating. Data including the trainee ownship location in a virtual environment and other simulation data is output over the network to synthetic environment processors 157.
- the synthetic environment processors 157 are essentially a multiprocessor image generator that renders an out-the-window view to be displayed to the trainee. This view includes distinct 3D imagery for the left and right eyes of the trainee, which is sent through a video combiner 159 and displayed in 3D to the trainee on immersive display 133.
- Lesson processor #1 accesses video switch 149 and selectively displays either the OTW imagery being rendered in real time by processors 157 or recorded video transmitted from lesson processor #3 (161). Lesson processor #3 outputs recorded video when the training session does not provide for trainee changes in the video portion displayed, e.g., a flight taking place where the trainee is a passenger or supporting technician in the simulation, working on different aspects of the vehicle operation. Time-stamped recorded video or live video may also be supplied and displayed in this way via lesson processor #3.
- the network 147 links all the processors so that the IDME can implement its actions through those processors, and the entire environment acts as a stand-alone training module. Additional training materials and data may be accessed at the LMS system via the network at all times.
- the IDME shown is supported on lesson processor 1. It has access to virtually all the data of the training station 3, including the data stored at the other processors, and rules implemented by the IDME may be based on the state of any of this data. Also, because the rules are in continuous effect, the IDME engine may be divided into distinct sets of rules each supported on a respective processor acting as a decision engine that has access to the platform and student data.
- the training station may also be readily adapted to the training of two or more trainees at once.
- the rules of the IDME simply need to be configured to support this functionality. Separate assessments of KSA for each student, based on the different inputs from, e.g., different touch screens, can also be made and rules-based actions taken in response to those KSA values.
- FIG. 18 shows an aspect of a networked system with some of the interactive tools by which system designers or engineers may access a system of the particularly preferred embodiment.
- the network linking the computers of the system is shown as global data exchange 251.
- the system includes a small or large number of computers (not shown), each of which is connected with this network so as to be able to communicate with the other computers on the network by sending data packets to them.
- the middleware system controls network traffic by a system of publishing and subscribing, where each computer transmits or publishes data on the network only to other computers that are subscribing to the data of the publishing computer.
- the middleware system usually includes a module of executable code on each computer that controls communications between the local computer and the network. Data being published is routed to a middleware hub memory from which it is transmitted directly to the subscribing computer systems on the network, where it is received by the module and transmitted to the associated computer.
- the result is that applications running on computers on the network all connect to the middleware instead of each other, and therefore do not need to know about each other.
- the data sent may be of a variety of formats.
- the outgoing data is initially converted at the publishing computer to a data format usable by the middleware, e.g., as data packets.
- Each data packet or "topic" includes a name field identifying the topic and one or more data fields appended to the name.
- the middleware receives the packets and transmits them to the middleware modules at the subscribing computer systems, or more specifically, the computer systems pull from the middleware data packets or topics with names to which they subscribe.
- the middleware is connected with the subscribing computer systems by network adapters that convert data from the middleware communication format to a format of the computer system, which may be, e.g., C++, Java, a web format (such as http) or some other format or organization of the data.
- the network communication is "agnostic" as to the type of simulators or computers connected with it. If a new system with different hardware or software architecture is provided, it may be incorporated into the network system by simply providing network adapters that convert the data packets into the new format, and convert the new format data into usable data packets.
- Subscription of one computer system to published data of other computer systems preferably is limited to identified data packets, i.e., topics, with name data fields indicative of their relevance.
- the publishing system may publish topics, i.e., data packet messages, which include name data tags identifying them as “startup”, "shutdown” and "current speed”.
- the subscribing system subscribes to only “startup” and “shutdown” data packets.
- the middleware will transmit "startup” and “shutdown” data packets to the subscribing system, but will not send any other published data packets, e.g., the "current speed” data packets.
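- As a rough illustration only, the name-based filtering just described might be sketched as follows in plain Java; this is a stand-in for the behavior of a real DDS middleware, not its API, and the Topic class and topic names are taken from the example above.

```java
// Sketch of topic-name-based publish/subscribe: a published packet is
// delivered only to subscribers of that topic name; all other packets
// ("current speed" here) are never sent to the subscribing system.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class TopicHubSketch {

    static class Topic {
        final String name;                 // name field identifying the topic
        final Map<String, Object> fields;  // data fields appended to the name
        Topic(String name, Map<String, Object> fields) {
            this.name = name;
            this.fields = fields;
        }
    }

    // topic name -> handlers of the computer systems subscribing to it
    private final Map<String, List<Consumer<Topic>>> subscribers = new HashMap<>();

    void subscribe(String topicName, Consumer<Topic> handler) {
        subscribers.computeIfAbsent(topicName, k -> new ArrayList<>()).add(handler);
    }

    void publish(Topic topic) {
        subscribers.getOrDefault(topic.name, List.of()).forEach(h -> h.accept(topic));
    }

    public static void main(String[] args) {
        TopicHubSketch hub = new TopicHubSketch();
        hub.subscribe("startup", t -> System.out.println("subscriber got: startup"));
        hub.subscribe("shutdown", t -> System.out.println("subscriber got: shutdown"));

        hub.publish(new Topic("startup", Map.of()));
        hub.publish(new Topic("current speed", Map.of("mph", 42)));  // no subscriber, so dropped
        hub.publish(new Topic("shutdown", Map.of()));
    }
}
```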
- the network adapters 253 also supply data and receive data from the rules engine system, here indicated as the Standard Link Rules Processor 255, in real time. Rules Processor 255 stores data as data objects on which the rules stored therein operate continually, reacting when the if-portion of any rule is satisfied.
- the function of the network adaptor in delivering the data packet to the Rules Processor system is a mapping function wherein an incoming topic or data packet is identified by the data in its name data field and any other identifying data, and the data field or fields of the topic are stored in the proper data area or areas in the memory of the Rules Processor 255 to be accessed by its rules engine.
- any data transmission produced by the rules engine is converted by the network adapter from data in the rules engine memory data format to a data packet or topic that is transmitted through the middleware, i.e., data from a specific field in the rules processor memory being output over the network is mapped to a topic name that corresponds to the data area in the rules memory, which is placed in the name field of the data packet transmitted to the DDS middleware.
- the middleware then transmits the data package to any computer or computers on the network subscribing to data packages or topics having that name.
- When received by the middleware module at the subscribing computer(s), the data package is converted by the local network adapters into data of a format usable in the subscribing system.
- This mapping provides for particularly flexible and efficient use of a rules engine in conjunction with a virtual network, and results in a system with the speed of real-time networking and the flexibility of systems that connect to databases.
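- A minimal sketch of this two-way mapping, assuming hypothetical topic names and a simple map-based working memory (not the actual Rules Processor data structures), is given below.

```java
// Sketch of the adapter mapping: an inbound topic's fields are stored in the
// working-memory data area that matches the topic name; an outbound data area
// is wrapped back into a topic carrying the same name.
import java.util.HashMap;
import java.util.Map;

public class AdapterMappingSketch {

    // Working memory of the rules engine: one data area per topic name (assumption).
    static final Map<String, Map<String, Object>> workingMemory = new HashMap<>();

    // Inbound: topic received from the middleware -> facts for the rules engine.
    static void onTopicReceived(String topicName, Map<String, Object> fields) {
        workingMemory.computeIfAbsent(topicName, k -> new HashMap<>()).putAll(fields);
    }

    // Outbound: data area in rules memory -> topic (name + copied fields) to publish.
    static Map.Entry<String, Map<String, Object>> toTopic(String dataArea) {
        return Map.entry(dataArea, Map.copyOf(workingMemory.getOrDefault(dataArea, Map.of())));
    }

    public static void main(String[] args) {
        onTopicReceived("ownship_state", Map.of("speed", 42.0, "heading", 270.0));
        System.out.println("facts: " + workingMemory.get("ownship_state"));
        System.out.println("republished: " + toTopic("ownship_state"));
    }
}
```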
- a central component of the training system is a graph database that stores effectively all data for the system, except for the actual learning objects.
- Graph database editor portal system 257 gives a systems engineer access to create, enter data for, and modify the graph database stored on computer accessible memory 259 that serves as the system of record, with the data stored thereon being organized as a Not Only SQL (NoSQL) graph database, allowing for easy modification and addition of additional entries.
- the graph database contains data defining all the necessary components of the system.
- the graph database is configured using a Neo4J system, and its infrastructure integrates with the Neo4J graph database engine via a NeoL3 interface.
- the NeoL3 interface implements a computer-accessible stored data structure according to a model-based infrastructure that identifies the systems at their respective nodes on the network by node types, node properties, node labels and the relationships between the nodes.
- the defined internal constructs in the graph database are of known structure, which allows tools to be built for the structure and then reused.
- the graph database contains data referred to here as metadata, which includes, among other things, data defining all topics, i.e., data packets, transmitted over the network.
- the graph database contains authoritative data for the entire system and is stored so that its contents can be sent to any and all systems on the network.
- the graph database of the invention supports REST API, Web Services and .NET Framework assemblies to promote universal access from all run-time and development systems.
- the graph database is created using only predetermined specified models or templates for the nodes and the relationships, and their properties and labels.
- the templates are limiting, and they do not allow complete freedom of the designer to create any structures whatsoever. The result is that the graph database has a structural organization that can be used to identify data very quickly, taking full advantage of the speed of access of a graph database.
- the graph database editor stores a set of model or template data structures that can be used to create a node or a relationship in the graph database.
- the templates available are:
- ModelNodes: data defining the types of nodes that can be created in the graph database;
- ModelRelationships: data defining the limited ways in which ModelNodes can be connected to each other;
- ModelProperties: data defining the permissible properties for nodes, e.g., the names and data types of the properties a node may have;
- ModelLabels: data defining the labels that can be given to nodes.
- a ModelNode might exist for a data record for a Trainee.
- the ModelNode "Trainee” would define the properties of the node as the ModelProperties Name and Date-of- birth, one being a character string and the other a numerical date of birth.
- the permissible relationships could be identified as the ModelRelationships Student-Teacher or Classmates.
- the permissible label would be defined as a ModelLabel Location, a character string identifying, e.g., a place of training selectable from limited options.
- the graph database incorporating this node would link a trainee only to the trainee's classmates and teacher.
- the node data would contain only the name and date of birth of the trainee.
- a label on the node might contain the place of training.
- No Trainee node could have a relationship inconsistent with the trainee status, such as Instructor-Employer, or a relationship appropriate only for a machine, e.g., Fuel Needed, or To Be Inspected By. Similarly, a node defining an inanimate equipment resource could not have a relationship of Classmate to any other node.
- nodes may contain hundreds of data records and have many different types of relationships, properties or labels.
- the data structures formed by the templates restrict the otherwise free-form organization of the graph database, which provides a significant benefit. Due to its graph-data-model organization, the graph database can be easily modified or expanded to add more simulator or other systems, but it can also be searched easily using the structures selected to create the nodes. For instance, referring to the Trainee ModelNode above, a search of all trainees that were classmates could be run very efficiently by identifying all relationships based on the ModelRelationship "Classmates".
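- As a rough illustration of template-constrained relationships and a search keyed on a ModelRelationship, consider the plain-Java sketch below; it is not the NeoL3/Neo4j implementation, and the small template subset shown is an assumption for the Trainee example only.

```java
// Sketch: relationships may only be created if the template (model) for the
// node type permits them, and a search can then walk only relationships of a
// given modeled type (here "Classmates").
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class GraphTemplateSketch {

    record Node(String modelNode, Map<String, String> properties) {}
    record Relationship(String modelRelationship, Node a, Node b) {}

    // ModelRelationships permitted for each ModelNode type (illustrative subset).
    static final Map<String, Set<String>> ALLOWED =
            Map.of("Trainee", Set.of("Student-Teacher", "Classmates"));

    static final List<Relationship> graph = new ArrayList<>();

    static void relate(String type, Node a, Node b) {
        if (!ALLOWED.getOrDefault(a.modelNode(), Set.of()).contains(type)) {
            throw new IllegalArgumentException(type + " is not allowed for " + a.modelNode());
        }
        graph.add(new Relationship(type, a, b));
    }

    public static void main(String[] args) {
        Node t1 = new Node("Trainee", Map.of("Name", "A. Smith", "Date-of-birth", "19900101"));
        Node t2 = new Node("Trainee", Map.of("Name", "B. Jones", "Date-of-birth", "19910202"));
        relate("Classmates", t1, t2);       // permitted by the Trainee template
        // relate("Fuel Needed", t1, t2);   // would be rejected: not in the Trainee model

        // Structural query: only "Classmates" relationships are examined.
        graph.stream()
             .filter(r -> r.modelRelationship().equals("Classmates"))
             .forEach(r -> System.out.println(
                     r.a().properties().get("Name") + " <-> " + r.b().properties().get("Name")));
    }
}
```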
- an engineer can construct or modify the communications of the network 251, adding new systems and also configuring the network adapter layer as needed to seamlessly communicate with the various systems, including the network adapter that connects the Rules Processor 255 to the network by mapping topics to data objects of the rules engine and the reverse.
- the IDL 261 can usually auto-generate a network adapter for new systems that are added, but user input may be provided to structure the network adapter functionality. All adapters that have ever been used are stored in a software repository. Only those adapters relevant to the current system configuration are stored in the network adapters layer 253.
- Rules are developed, written, edited, added or deleted via the Rules Editor user interface computer 267, which accesses the graph database 259 and computer accessible data storage storing the current rules database and available rules 269.
- the rules that are so edited or created incorporate data from the current graph database at the time of editing, and ordinarily not at run-time of the system.
- the rules editor then stores a current rules package 271 incorporating the most current data from the graph database of all system data to a computer accessible storage area 271.
- the completed rule set is stored as a rules package for runtime execution in data storage 271, and the rule package is then loaded into the memory of the rules processor 255, after which the rules engine of the rules processor uses the rules to process the system data also stored in computer accessible memory at the Rules Processor 255.
- Rules packages are normally loaded at system start-up, or to provide updates if the graph database is amended or the rules are modified.
- Rules Processor 255 utilizes the Drools Expert rules engine and software from JBoss, a subsidiary of Red Hat.
- the memory accessed by the rules engine is locally situated, i.e., not at a different network location, and is populated with data received from the network adapter.
- the rules loaded in the Rule Processor memory create, retract and update facts, i.e., data fields, in the working memory of the rules engine, and also communicate by transmitting data to all systems on the network.
- as an example, performance data may be stored on another system for a student receiving truck driving training. In an earlier training component, e.g., a pre-drive inspection of a truck, the performance data indicates a failure of the trainee to detect a problem with a tire. This omission is recorded in the data stored at a Learning Management System (LMS) on the network and made available to the rules engine, such that the student is able to experience the consequences of their oversight in the pre-trip inspection as a vehicle fault in a simulation.
- the rules engine would then cause the tire with the problem to have a blow-out, a consequent development that is based on rules having an if-portion based on the data object of the performance data from the earlier course that was stored on another system (the LMS) on the network.
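- A minimal sketch of such a consequence rule is given below; the class and field names (LmsRecord, missedTireDefectInPreTrip, injectFault) are illustrative assumptions, not the system's actual data objects or simulator interface.

```java
// Sketch: the if-portion tests a fact originating from LMS performance data;
// the then-portion injects a corresponding fault into the running simulation.
import java.util.ArrayList;
import java.util.List;

public class ConsequenceRuleSketch {

    static class LmsRecord {
        boolean missedTireDefectInPreTrip;  // fact published by the LMS over the network
    }

    static class Simulation {
        final List<String> activeFaults = new ArrayList<>();
        void injectFault(String fault) { activeFaults.add(fault); }
    }

    static void tireBlowoutRule(LmsRecord lms, Simulation sim) {
        if (lms.missedTireDefectInPreTrip && !sim.activeFaults.contains("TIRE_BLOWOUT")) {
            sim.injectFault("TIRE_BLOWOUT");  // the missed defect becomes a blow-out in simulation
        }
    }

    public static void main(String[] args) {
        LmsRecord lms = new LmsRecord();
        lms.missedTireDefectInPreTrip = true;
        Simulation sim = new Simulation();
        tireBlowoutRule(lms, sim);
        System.out.println("active faults: " + sim.activeFaults);  // [TIRE_BLOWOUT]
    }
}
```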
- the rules engine can enroll or waive additional lessons for a student based on the student's performance in a current lesson.
- the mechanism for this is that an instructor creates rules to publish enrollment recommendations to the network. Subscribers to these recommendations may include an instructor interface (so an instructor can review the recommendations).
- a rules engine may subscribe to past-performance information for a student published from an LMS or other learning records store. For example, a student may do a pre-trip inspection of an unsafe vehicle ahead of operating it in a simulation and miss a problem.
- Rules can be created to adapt / blend training in a single lesson based on the student's performance in that lesson.
- a student who makes a serious error in a simulator may, for example, receive a video presentation to show what the consequences of their error might have been and what they could have done differently.
- the difficulty of a training exercise may be reduced for a student who is struggling or increased for a proficient student.
- Another advantageous aspect of the invention is shown in FIG. 19, where the DDS global data exchange network 251 is connected, through network adapters (not shown) as has been described previously, with Rules Processor 255, which has been loaded with rules 271 as discussed above.
- Another system on the network 251 is a simulator computer system 273, running a three-dimensional simulation application, as is well known in the art.
- the virtual world in which the trainee is operating is defined by application content data 275 that is stored remotely and published over the network to the simulator, where it is stored locally at simulator 273 as computer accessible scene data content 277 and used to formulate a virtual environment in which the trainee moves around or operates a vehicle, etc.
- the simulation application publishes some of the data of the simulation, including state data (location of the ownship), events, simulated time, environmental settings (e.g., weather, fog, etc.), and some trainee actions.
- the Relative Geometry Processor 279 also receives and stores the published application content data that defines the virtual environment, at least in part, in local storage 281, and subscribes to and receives the state data and other data published by the simulator 273.
- the RGP 279 determines the location of the trainee ownship in the simulation virtual environment and the ownship's proximity to sensors in the virtual environment. The RGP 279 then determines, e.g., when the proximity of the ownship to a sensor falls below a predetermined permissible distance, tracks the path of the student through the simulated environment to assess the trainee's adherence to a predetermined ideal route, and also applies environmental rules such as limits on encroachment and collision avoidance.
- the results of the RGP 279 determinations, e.g., proximity sensor data, trigger events, path coordinates and clearances, zone triggers, relative distances, etc., are published over the network and subscribed to by the Rules Processor 255, which receives all data from the simulator 273 and the RGP 279.
- the data is placed in the rules engine memory, and the rules-based Adaptive Learning Engine uses it for monitoring, assessment and adaptivity, e.g., by reacting as appropriate to address any KSA gaps or other shortfalls indicated by the trainee's performance.
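- The proximity determination and the resulting trigger publication might look roughly like the sketch below; the coordinates, trigger radius and topic name are made-up illustrative values, not the RGP's actual interfaces.

```java
// Sketch: when the ownship comes within a sensor's permissible distance, a
// zone-trigger "topic" would be published for the rules processor to act on.
public class ProximitySketch {

    record Point(double x, double y, double z) {}

    static double distance(Point a, Point b) {
        double dx = a.x() - b.x(), dy = a.y() - b.y(), dz = a.z() - b.z();
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }

    public static void main(String[] args) {
        Point ownship = new Point(100.0, 250.0, 10.0);  // from simulator state data
        Point sensor  = new Point(110.0, 245.0, 10.0);  // placed via the instructor portal
        double permissibleDistance = 15.0;

        if (distance(ownship, sensor) < permissibleDistance) {
            // In the real system this would be published over the network;
            // here it is simply printed.
            System.out.println("publish topic: zone_trigger (distance=" + distance(ownship, sensor) + ")");
        }
    }
}
```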
- the RGP 279 also has a portal computer system 282 for a user, i.e., an instructor, through which one can view a rendered image of the virtual environment and correlated data used in the application scene data.
- the portal 282 also provides a user interface that allows the instructor to place sensors in the virtual environment.
- the portal exports data for use in the computations of the RGP 279.
- the system preferably has a learning management system (LMS) 283 thereon.
- the LMS is connected with one or more dashboards, which are user interface systems that allow administrators to interact with the system and trainees.
- the LMS subscribes to all systems on the network, and communicates with them using http protocols and a browser or bridge 287 that accesses the DDS network using web protocol language and data.
- the LMS typically is accessed by a trainee in a training station, and provides the trainee with a log-in webpage to first ascertain who the trainee is.
- the LMS typically retains historical data of all training for the system and for the individual trainees. When the trainee has logged in, or the trainee and the training needed have otherwise been identified, the LMS publishes student data to the Rules Processor 255 that causes the Rules Processor to initiate training of the individual.
- the system uses an Integrated Content Environment (ICE) identified at 289 in FIG. 18 to streamline design and efficiency.
- the ICE system includes the graph database as a central feature, and its structure aids in the integration of the ICE.
- because nodes, relationships, properties and labels are defined using a predetermined limited set of model or template structures, the structure of the stored data is always known, even when the specific data values are not.
- All of the database tools and interfaces are designed to efficiently interact with the data-model structure of the graph database. This allows all the developed tools and interfaces to operate on any data "domain" defined within the graph.
- Access to all development and run-time systems is preferably provided from a single location.
- Common toolsets are promoted through the Integrated Content Environment 289, minimizing tool version issues.
- Rollout of new tool versions is a single installation.
- Virtual machines providing specialized services can be spun-up on demand and the virtual network (DDS) between systems outperforms physical networks.
- Immersive emulation and testing environments can be constructed virtually, drastically reducing hardware configuration costs. Development collaboration is encouraged, because all developers use a common set of resources. Notwithstanding this, remote users have the same access as local users.
- ICE maintains various types of data (training data, source data, results data, etc.), and promotes access to and distribution of that data.
- ICEAM, a data tracking/catalog system, is used to manage digital assets.
- a storage cloud is used to efficiently house the assets.
- MD5 checksums are used to identify unique assets and implement deduplication. Best fit algorithms attempt to fill volumes and reduce the number of volumes used in storage searches.
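- A minimal sketch of checksum-based deduplication of this kind, using the standard Java MessageDigest API, is given below; the asset contents and storage map are illustrative assumptions, and best-fit volume packing is omitted for brevity.

```java
// Sketch: assets with identical MD5 digests are recognized as the same unique
// asset and stored only once.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DedupSketch {

    static String md5(byte[] content) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("MD5").digest(content);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b & 0xff));  // unsigned hex per byte
        }
        return hex.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        Map<String, byte[]> store = new HashMap<>();  // checksum -> stored asset
        byte[] assetA = "lesson video v1".getBytes(StandardCharsets.UTF_8);
        byte[] assetB = "lesson video v1".getBytes(StandardCharsets.UTF_8);  // duplicate upload

        for (byte[] asset : List.of(assetA, assetB)) {
            store.putIfAbsent(md5(asset), asset);  // the second, identical asset is not stored again
        }
        System.out.println("unique assets stored: " + store.size());  // prints 1
    }
}
```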
- a relational database is used to house asset metadata.
- the system is inherently distributed by using .NET User Controls within Internet Explorer. Collection support allows groups of assets to be related. Automated processing is configured to operate on collections.
- Interrogation Plug-Ins can be added to the system by users to automate metadata extraction from user-provided asset types. Users can define their own metadata attributes. Users can also define 'personalities', sets of attributes which should automatically be applied to certain data sets or types.
- although a networked system with a single rules engine has been shown here, it is possible to have a system with a number of rules engines, each having rules for a specific function in the system.
- a system might have a separate rules engine for each simulator on the system.
- the multiple rules engines may be supported on separate computer systems, or on a single system, as e.g., virtual machines on a hypervisor running on a computer system connected with the network.
- the rules engines described herein have been given names such as the IDME or the SLRP, which are descriptive of the rules that may be loaded in them, and are not intended in any way to limit the flexibility of usage of the rules engines.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Business, Economics & Management (AREA)
- Physics & Mathematics (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- General Physics & Mathematics (AREA)
- Aviation & Aerospace Engineering (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Electrically Operated Instructional Devices (AREA)
- Feedback Control In General (AREA)
- Cable Transmission Systems, Equalization Of Radio And Reduction Of Echo (AREA)
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2014/022161 WO2015134044A1 (en) | 2014-03-07 | 2014-03-07 | Adaptive training system, method and apparatus |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3114671A1 true EP3114671A1 (en) | 2017-01-11 |
EP3114671A4 EP3114671A4 (en) | 2017-09-06 |
Family
ID=54055699
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP14884541.5A Ceased EP3114671A4 (en) | 2014-03-07 | 2014-03-07 | Adaptive training system, method and apparatus |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP3114671A4 (en) |
AU (3) | AU2014385281A1 (en) |
CA (2) | CA3212748A1 (en) |
WO (1) | WO2015134044A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107609835B (en) * | 2017-07-28 | 2023-04-18 | 国网辽宁省电力有限公司 | Power grid manpower configuration application system and method |
CN114511100B (en) * | 2022-04-15 | 2023-01-13 | 支付宝(杭州)信息技术有限公司 | Graph model task implementation method and system supporting multi-engine framework |
CN115100238B (en) * | 2022-05-24 | 2024-09-20 | 北京理工大学 | Training method of light single-target tracker based on knowledge distillation |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5287489A (en) * | 1990-10-30 | 1994-02-15 | Hughes Training, Inc. | Method and system for authoring, editing and testing instructional materials for use in simulated trailing systems |
US7016888B2 (en) * | 2002-06-18 | 2006-03-21 | Bellsouth Intellectual Property Corporation | Learning device interaction rules |
US7516052B2 (en) * | 2004-05-27 | 2009-04-07 | Robert Allen Hatcherson | Container-based architecture for simulation of entities in a time domain |
RU48661U1 (en) * | 2004-12-22 | 2005-10-27 | Закрытое Акционерное Общество "Транзас" | INTEGRATED AIRCRAFT SIMULATOR |
EP1957929A2 (en) * | 2005-11-28 | 2008-08-20 | L3 Communications Corp | Distributed physics based training system and methods |
US9076342B2 (en) * | 2008-02-19 | 2015-07-07 | Architecture Technology Corporation | Automated execution and evaluation of network-based training exercises |
US8170976B2 (en) * | 2008-10-17 | 2012-05-01 | The Boeing Company | Assessing student performance and providing instructional mentoring |
CA2751382A1 (en) * | 2009-01-21 | 2010-07-29 | Musiah Ltd | Music education system |
CA2847234C (en) * | 2011-09-01 | 2020-02-25 | L-3 Communications Corporation | Adaptive training system, method and apparatus |
-
2014
- 2014-03-07 CA CA3212748A patent/CA3212748A1/en active Pending
- 2014-03-07 AU AU2014385281A patent/AU2014385281A1/en not_active Abandoned
- 2014-03-07 WO PCT/US2014/022161 patent/WO2015134044A1/en active Application Filing
- 2014-03-07 CA CA2945617A patent/CA2945617C/en active Active
- 2014-03-07 EP EP14884541.5A patent/EP3114671A4/en not_active Ceased
-
2019
- 2019-11-29 AU AU2019272048A patent/AU2019272048A1/en not_active Abandoned
-
2022
- 2022-01-11 AU AU2022200139A patent/AU2022200139A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
CA2945617C (en) | 2023-10-31 |
EP3114671A4 (en) | 2017-09-06 |
WO2015134044A1 (en) | 2015-09-11 |
AU2019272048A1 (en) | 2019-12-19 |
AU2014385281A1 (en) | 2016-10-27 |
AU2022200139A1 (en) | 2022-02-10 |
CA3212748A1 (en) | 2015-09-11 |
CA2945617A1 (en) | 2015-09-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11948475B2 (en) | Adaptive training system, method and apparatus | |
US10311742B2 (en) | Adaptive training system, method, and apparatus | |
Alkhatlan et al. | Intelligent tutoring systems: A comprehensive historical survey with recent developments | |
Badiee et al. | Design evaluation of a simulation for teacher education | |
Gibbons et al. | Computer-based instruction: Design and development | |
AU2022200139A1 (en) | Adaptive training system, method and apparatus | |
KR100695563B1 (en) | Multi-agent collaborative architecture for problem solving and tutoring | |
US9230221B2 (en) | Instruction system with eyetracking-based adaptive scaffolding | |
Delamarre et al. | The interactive virtual training for teachers (IVT-T) to practice classroom behavior management | |
JP2021526234A (en) | Student-centric learning system with student and teacher dashboards | |
Remolina et al. | Intelligent simulation-based tutor for flight training | |
WO2021216718A1 (en) | Systems and methods for accessible computer-user scenarios | |
Dagnino et al. | An integrated platform supporting intangible cultural heritage learning and transmission: Definition of requirements and evaluation criteria | |
Westerfield | Intelligent augmented reality training for assembly and maintenance | |
Farinazzo Martins et al. | Star Life Cycle and games development projects for conducting the human–computer interaction course: A practical experience | |
Hirumi et al. | Advancing virtual patient simulations and experiential learning with InterPLAY: examining how theory informs design and design informs theory | |
Sidhu | Advanced technology-assisted problem solving in engineering education: emerging research and opportunities: emerging research and opportunities | |
Gabriska et al. | Issues of adaptive interfaces and their use in educational systems | |
Buck et al. | Adaptive learning capability: User-centered learning at the next level | |
Lunce et al. | Situational Awareness and Online Instruction: A Perspective for Instructional Designers. | |
Kennedy | The impact of robot tutor social behaviour on children | |
Jensen | Adaptive Training Systems for Human-Robot Interaction | |
Englisch et al. | A LEARNING SYSTEM SUPPORTING ACTIVE LEARNING FOR CONTINUING EDUCATION IN SOFTWARE ENGINEERING | |
Loing | MEMORIZING MARKETING DEFINITIONS THROUGH REAL-TIME FEEDBACK IN AN INTELLIGENT TUTORING SYSTEM | |
Sottilare | DESIGN FOR PROFESSIONAL DEVELOPMENT |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20161010 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
DAX | Request for extension of the european patent (deleted) | ||
A4 | Supplementary search report drawn up and despatched |
Effective date: 20170808 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G09B 19/00 20060101AFI20170802BHEP |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20190708 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: CAE USA INC. |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R003 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED |
|
18R | Application refused |
Effective date: 20221018 |