CA2809696A1 - Computer assisted training system for interview-based information gathering and assessment - Google Patents


Info

Publication number
CA2809696A1
Authority
CA
Canada
Prior art keywords
student
evaluation
predetermined
scenario
feedback
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA2809696A
Other languages
French (fr)
Inventor
Ming HOU
Simon Banbury
Michael Lepard
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Minister of National Defence of Canada
Original Assignee
Minister of National Defence of Canada
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Minister of National Defence of Canada filed Critical Minister of National Defence of Canada
Priority to CA2809696A priority Critical patent/CA2809696A1/en
Publication of CA2809696A1 publication Critical patent/CA2809696A1/en
Abandoned legal-status Critical Current

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A computer-assisted training system for interview-based information gathering and assessment. A GUI displays information pertaining to a training scenario and generates event messages based on student input. An Evaluation Engine compares event messages to rules embodying predetermined instructional content and generates evaluation comments. An Adaptation Engine processes the evaluation comments to produce student feedback that is presented to the student via the GUI. The training scenario includes a scene defining a physical context of the scenario; one or more witnesses who may be interviewed by the student; and the predetermined instructional content. The instructional content includes any of: a predetermined line of questions to be posed by the student to elicit clues relevant to a particular subject of the training scenario, preferred questioning techniques to be employed by the student; and a predetermined line of reasoning to be employed by the student to deduce characteristics of the particular subject.

Description

COMPUTER ASSISTED TRAINING SYSTEM FOR INTERVIEW-BASED
INFORMATION GATHERING AND ASSESSMENT
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This is the first application filed in respect of the present invention.
FIELD OF THE INVENTION
[0002] The present application relates generally to computer-assisted training systems, and more specifically to a computer-assisted training system for developing interview-based information gathering and assessment skills.
BACKGROUND
[0003] Computer-assisted training systems are known in the art, for providing trainees with enhanced opportunities to develop their skills in a specific area. Software of this type is increasingly being used to provide specialized training for law-enforcement and military personnel.
[0004] Hays, et al.; Assessing Learning from a Mixed-Media, Mobile Counter-IED Trainer; Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC) 2011, paper 11058, describes a computer-assisted counter-Improvised Explosive Device (IED) training system referred to as ExCITE, intended to teach military personnel to counter the threat of IEDs.
Some of the training modules introduce the trainee to physical clues in an environment and/or behavioral clues of persons that may indicate the presence of an IED.
[0005] Pettitt, et al.; Recognition of Combatants-Improvised Explosive Devices (ROC-IED) Training Effectiveness Evaluation; Aberdeen Research Laboratory (March 2009), describes a computer-assisted training system intended to teach military personnel to recognise behavioral clues that may indicate a covert enemy combatant and/or an IED.
[0006] Both of the above systems teach the trainee to identify physical clues in the environment, and behavioral clues of various persons, to detect various threats. However, neither system provides training in interview techniques. In particular, neither system provides training in how to conduct an interview of a person to glean clues regarding IEDs or other threats.
[0007] US Patent No. 5597312 (Bloom et al.) describes a computer-assisted training system for teaching Customer Service Representatives (CSRs) to handle customer calls regarding a particular service or product, and initiate appropriate work orders. A component of the training involves teaching the CSR to obtain relevant information from a customer so as to categorize the call and select an appropriate response from among a set of predetermined responses. However, in the context of customer calls, it can be assumed that the customer wants to provide relevant information to the CSR. In this case, the CSR's task is simply a matter of recognising what the customer wants to accomplish, and selecting an appropriate response.
[0008] In many situations, it may be necessary to gather information about a particular subject by interviewing a witness. For example, military personnel are frequently faced with the challenge of interviewing people in order to identify, recognize, and formulate an accurate threat assessment of a suspected IED or other threat. The effective questioning of such witnesses by military personnel to determine key information elements (or clues) about a threat such as an IED is considered to be both one of the most critical aspects of formulating an accurate threat assessment, and one of the most difficult skills to train.
[0009] Similar situations are encountered in other industries. For example, medical professionals frequently must attempt to determine important information about a patient's medical condition by questioning the patient and/or family members. Similarly, police officers are frequently required to interview witnesses and/or suspects in an effort to obtain information relevant to a criminal investigation.
[0010] What is needed is a computer-assisted training system for interview-based information gathering that enables an interviewer to identify, recognize, and formulate an accurate assessment of a particular subject.
SUMMARY
[0011] An aspect of the present invention provides a computer-assisted training system for interview-based information gathering and assessment. A Graphical User Interface (GUI)
displays information pertaining to a training scenario and generates event messages based on student input. An Evaluation Engine compares event messages to a rule set embodying predetermined instructional content and generates evaluation comments. An Adaptation Engine processes the evaluation comments to produce student feedback that is presented to the student via the GUI. The training scenario includes a scene defining a physical context of the scenario; a set of one or more witnesses who may be interviewed by the student; and the predetermined instructional content. The instructional content includes any of: a predetermined line of questions to be posed by the student to elicit clues relevant to a particular subject of the training scenario, preferred questioning techniques to be employed by the student; and a predetermined line of reasoning to be employed by the student to deduce characteristics of the particular subject.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] Further features and advantages of the present invention will become apparent from the following detailed description, taken in combination with the appended drawings, in which:
[0013] Fig. 1 is a block diagram schematically illustrating elements and operation of a system in accordance with a representative embodiment;
[0014] Fig. 2 schematically illustrates a display screen of an example GUI usable in the system of Fig. 1;
[0015] Fig. 3 shows an example student feed-back window;
[0016] Fig. 4 shows an example Clue Classification Feedback window;
[0017] Fig. 5 shows an example Overall Threat Assessment Feedback window;
and
[0018] Fig. 6 shows a table of representative evaluation criteria and instructional interventions.
[0019] It will be noted that throughout the appended drawings, like features are identified by like reference numerals.
DETAILED DESCRIPTION
[0020] Disclosed is a computer assisted training system for interview-based information gathering that enables an interviewer to identify, recognize, and formulate an accurate assessment of a particular subject. The particular subject can comprise a threat, for example an explosive device.
In the present description, aspects of the present invention are illustrated by way of example embodiments in which the particular subject is a suspected Improvised Explosive Device (IED), and the goal of the interviewer is to identify, recognise and formulate an accurate threat assessment of that suspected IED. However, it will be recognised that such embodiments are not limitative of the present invention. Indeed, techniques and systems in accordance with the present invention may be used in any industry or context where it is desired to train personnel to interview one or more witnesses in order to identify, recognize, and formulate an accurate assessment of a particular subject, independently of what that particular subject happens to be.
[0021] In general, the present invention provides a computer assisted training system in which interview-based information gathering and assessment skills are taught to the student by means of one or more training scenarios. Preferably, a scenario comprises: a scene defining the physical context of the scenario; a set of one or more witnesses who may be interviewed to obtain clues relevant to the particular subject of the scenario; and instructional content.
[0022] In general, the scene sets out the physical context of the scenario, and anything within that context that may be relevant to the scenario. For example, the scene may comprise an office suite in a building, in which an IED may be present. In some embodiments, the scene may be presented to the student by means of one or more images, videos, a virtual reality environment, or any other suitable technique. In some embodiments, the scene may also include "physical" clues which the student may be required to interpret. For example, an office scene may include graffiti on a wall, or a damaged access door. In some embodiments, the student may be able to move around within the scene, or view different parts of the scene in response to input via a keyboard, mouse, or other pointer device, for example.
[0023] In some embodiments, witnesses may be presented to the student by means of one or more images, videos, avatars in a virtual reality environment, or any other suitable technique.
In some embodiments, a witness may appear as a character within a visual representation of the scene. In some embodiments, one or more witnesses may be controlled by means of an artificial intelligence or the like, in accordance with the parameters of the scenario.
In some embodiments, one or more witnesses may be controlled by a human such as another student or a tutor.
[0024] In general terms, the instructional content defines the subject matter that the student is expected to review and/or learn in the course of working through the scenario. In some embodiments, the instructional content defines at least one line of questioning that has been previously designed to elicit useful information about the particular subject of the scenario. In some embodiments, the instructional content defines at least one line of reasoning for interpreting clues and arriving at appropriate deductions regarding the particular subject of the scenario. For example, the instructional content may define a line of reasoning by which the student may deduce the most likely type of IED based on both physical clues visible in the scene and clues provided by witnesses. In some embodiments, the instructional content may also define one or more constraints under which the student must operate. For example, the student may be required to complete the training scenario within a predetermined period of time.
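The scenario structure described above (a scene, a set of witnesses, and instructional content comprising lines of questioning, lines of reasoning, and optional constraints) can be sketched as a simple data model. All field and class names below are illustrative assumptions, not part of the disclosed system:

```python
from dataclasses import dataclass, field

@dataclass
class InstructionalContent:
    """Hypothetical container for a scenario's instructional content."""
    question_lines: list = field(default_factory=list)   # predetermined lines of questioning
    reasoning_lines: list = field(default_factory=list)  # predetermined lines of reasoning
    time_limit_s: int = 0                                # optional completion constraint (0 = none)

@dataclass
class Scenario:
    scene: str          # identifier of the scene presentation (images, video, VR, ...)
    witnesses: list     # witnesses available for interview
    content: InstructionalContent

# Example: an office-suite scene with one witness and a 20-minute time limit.
scenario = Scenario(
    scene="office_suite",
    witnesses=["receptionist"],
    content=InstructionalContent(
        question_lines=[["Who had access to the room?", "When was the package delivered?"]],
        time_limit_s=1200,
    ),
)
```

A scenario author would populate such a structure once; the engines described below then evaluate student input against it.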
[0025] It is contemplated that a student may work their way through a training scenario by posing questions to each witness, observing the scene, and using the clues so obtained to deduce the most likely type of IED and assess the threat posed by it. The student may be provided with real-time feedback regarding the questions they have posed to each witness and their evolving assessment of the suspected IED and the threat. In some embodiments, Intelligent Tutoring System (ITS) technology known in the art may be used to facilitate real-time evaluation of student performance and feedback, including provision of tutor's comments and hints to assist the student. By comparing student performance (based, for example, on current and past question selection) against a predetermined rule set of preferred questioning techniques, an ITS tutor may generate evaluation comments as real-time feedback on the student's question selection and clue classification to improve student questioning efficiency and overall training effectiveness.
[0026] Fig 1 schematically illustrates representative elements of a system implementing the present technique to generate student feedback during execution of a training scenario. In the embodiment of Fig. 1, the system comprises a Graphical User Interface (GUI) 2, an Evaluation Engine 4 and an Adaptation Engine 6.
[0027] The GUI 2 may be provided as any suitable combination of hardware and software and is configured to display information pertaining to the training scenario and receive input from the student. Student input 8 may take any suitable form including (but not limited to) mouse or pointer clicks, responses to Feedback tips or queries, and questions to be posed to witnesses within the scene. Each student input, of any form, may trigger a corresponding Event Message 10 which is supplied to the Evaluation Engine 4. The Evaluation Engine 4 may compare Event Messages to a predetermined rule set embodying the instructional content of the training scenario and output Evaluation Comments 12 to the Adaptation Engine.
The Evaluation Comments 12 reflect the real-time performance of the student. The Adaptation Engine 6 then processes the Evaluation Comments to produce student feedback 14 that is presented to the student via the GUI 2.
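The event-message flow of Fig. 1 can be sketched as a minimal pipeline: the GUI emits an event message, the Evaluation Engine maps it to an evaluation comment via a rule set, and the Adaptation Engine maps the comment to feedback text. The rule predicates, comment labels, and feedback strings below are illustrative assumptions only:

```python
def evaluation_engine(event, rules):
    """Compare one event message to the rule set; return the first matching comment."""
    for predicate, comment in rules:
        if predicate(event):
            return comment
    return None

def adaptation_engine(comment, feedback_db):
    """Map an evaluation comment to student feedback text for display in the GUI."""
    return feedback_db.get(comment, "")

# Hypothetical rule set: prefer open-ended questions.
rules = [
    (lambda e: e.get("type") == "question" and e.get("open_ended"), "good_question"),
    (lambda e: e.get("type") == "question", "closed_question"),
]
feedback_db = {
    "good_question": "Good: open-ended questions encourage detailed answers.",
    "closed_question": "Tip: try rephrasing as an open-ended question.",
}

# A student input triggers an event message, which flows through both engines.
event = {"type": "question", "open_ended": False}
comment = evaluation_engine(event, rules)
feedback = adaptation_engine(comment, feedback_db)
```

In the disclosed system each stage may of course be arbitrarily sophisticated; the point is the one-way message flow from GUI input to displayed feedback.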
[0028] Fig. 2 is a schematic illustration of a representative screen display of a GUI that may be used in embodiments of the present invention. In the embodiment of Fig. 2, the screen display is divided into a Scene View 16, a Dialogue Window 18, and a Question Area 20. The Scene View 16 provides a visual representation of the scene defined in the training scenario. In some embodiments, the Scene View 16 also enables the student to interact with the scenario, for example by selecting a witness to question, navigating to one or more areas within the scene, and investigating a suspected IED to reveal visual clues. As noted above, any suitable visualization technique may be used, including, but not limited to: still images, videos, virtual reality, etc. If desired, the Scene View 16 may also include means enabling the student to select different images or points of view, for example by moving around within a virtual reality space. The Dialogue Window 18 provides a record of the trainee's interviews with each witness, the trainee's assessment of the clues obtained during the course of the training scenario, and their deductions regarding the IED. In some embodiments, the Dialogue Window 18 may display a history of communication between the student and the intelligent tutor, an image 22 identifying a current witness, a current answer 24, as well as past answers and instructional feedback. The Dialogue Window may also provide a means for the student to communicate with an instructor or tutor, analyse clues, and assess the particular subject of the training scenario. In some embodiments, the Dialogue Window 18 may be divided into two or more sections, each of which may be accessed by selecting a respective tab 26. In the illustrated embodiment, two tabs are shown, but more or fewer tabs may be provided as required by the training scenario.
A first tab may provide a Dialogue History, which may be used to display all questions and answers as well as instructional feedback provided by the intelligent tutor. A second tab may provide a "Threat Assessment" area. When this tab is selected, all clues identified to that time, and how the student classified them, are displayed. The student can then compare his/her assessment with the correct assessment provided by the tutor. In some embodiments, the Dialogue Window 18 may also provide the student with some means for requesting feedback, hints or tips, and more details from the instructor. In the illustrated embodiment, this function is provided by an "Ask More Details" button 28, although any other suitable technique may be used if desired. The detailed information can be provided in any suitable format, including verbal and visual (text, photo, or video) formats. The Question Area 20 enables the student to select questions to ask a witness and may be divided into multiple columns. In the illustrated embodiment, five columns are shown, although more or fewer columns may be provided as desired. A question type column 30 (on the left of Fig. 2) shows five interrogative question types: who, what, where, when, and why. When the trainee selects a question type, a set of questions of that type can be displayed in one or more follow-up question columns 32-40. When the student selects a question, it is displayed in the Dialogue Window 18 as the current question, and a set of follow-up questions may be displayed in one or more of the columns 32-40 to the right. When a question is selected by the trainee and asked of a witness, the Dialogue Window 18 may be updated to reflect the question asked and its associated answer from the witness, which will appear in both the Current Answer area and the Dialogue Window. An interview session can be ended by selection of "Goodbye" in the question type column 30.
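The Question Area's type-then-follow-up structure can be represented as a small tree keyed on the five interrogative types. The specific questions below are invented for illustration; only the who/what/where/when/why taxonomy comes from the description:

```python
# Sketch of the Question Area: the type column selects a branch,
# a selected question then reveals its follow-up questions.
question_area = {
    "who":   {"Who reported the device?": ["Who else saw it?"]},
    "what":  {"What does the device look like?": ["What wires are visible?"]},
    "where": {"Where exactly was it found?": []},
    "when":  {"When was it first noticed?": []},
    "why":   {"Why would this location be targeted?": []},
}

def follow_ups(qtype, question):
    """Return the follow-up questions displayed after a question is selected."""
    return question_area.get(qtype, {}).get(question, [])

def questions_of_type(qtype):
    """Return the questions shown when a question type is selected."""
    return list(question_area.get(qtype, {}))
```

A scenario with more columns would simply nest the follow-up lists one level deeper per column.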
[0029] In general, a training scenario may comprise any desired number of witnesses. The GUI must provide means by which the student can pose questions to each witness, and receive their answers. In the illustrated embodiment, this is accomplished by selection of a witness in the Scene View. An image of the selected witness may then appear in the Dialogue Window. The student can engage in a text chat session with the respective witness by selecting question types in the left column and the follow-on questions in the Question Area. This arrangement is convenient, in that it enables the student to engage in multiple different interview sessions by selecting different types of questions towards efficiently achieving the goal of situation assessment. However, this is not essential. Any suitable means of interviewing each witness, and organizing the content of each interview, may be used.
Preferably, the GUI provides a means by which the student can identify each witness, and associate that witness with their respective question set. In the illustrated embodiment, this is accomplished by means of image tiles, each of which may contain an image (or other identifier) of a respective one of the witnesses. An image tile 22 of the Current Witness may be positioned on the GUI in an area provided for that purpose, as shown in Fig. 2.
[0030] The Evaluation Engine 4 may be provided as any suitable combination of hardware and software and is configured to compare event messages to a predetermined rule set embodying the instructional content of the training scenario and generate evaluation comments that reflect the real-time performance of the student. As noted above, the rule set may be based on predetermined lines of questions to be posed to witnesses, preferred questioning techniques to be employed by the student, and lines of reasoning to be employed by the student to deduce the type of IED and assess the threat posed by the IED. As the student works their way through the training scenario, a corresponding stream of event messages representative of the student's input is received and processed by the Evaluation Engine, which builds an historical record of both student input and evaluation comments. Newly received messages and the historical record can be compared to the rule set, and logical inference used to generate new Evaluation Comments that reflect both the current performance of the student and their progress in learning the instructional content of the training scenario.
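The history-aware evaluation described in paragraph [0030] can be sketched as follows. Rule predicates receive both the new event and the accumulated history, so a comment can reflect progress rather than just the current input. The specific "repeated question" rule is an invented example of such a history-dependent rule:

```python
class EvaluationEngine:
    """Minimal sketch: accumulates an (event, comment) history and applies
    rules whose predicates may inspect that history."""

    def __init__(self, rules):
        self.rules = rules      # list of (predicate(event, history), comment) pairs
        self.history = []       # chronological record of (event, comment)

    def evaluate(self, event):
        comment = None
        for predicate, template in self.rules:
            if predicate(event, self.history):
                comment = template
                break
        self.history.append((event, comment))
        return comment

# Example rule set: flag a question the student has already asked.
rules = [
    (lambda e, h: any(e == past for past, _ in h), "repeated_question"),
    (lambda e, h: True, "ok"),
]
engine = EvaluationEngine(rules)
first = engine.evaluate("Who saw it?")    # no history yet
second = engine.evaluate("Who saw it?")   # same question again
```

A production rule set would encode the scenario's preferred lines of questioning and reasoning in the same predicate-plus-comment form.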
[0031] The Adaptation Engine 6 may be provided as any suitable combination of hardware and software and is configured to process the evaluation comments from the Evaluation Engine to produce student feedback that is presented to the student via the GUI. In some embodiments, the Adaptation Engine may access a database of predetermined feedback content using the received evaluation comment, in order to identify a set of applicable feedback items. From these items, the Adaptation Engine may select one or more of the identified feedback items, for presentation to the student, based on the student's learning style and past performance history.
By this means, the student may be presented with feedback that is tailored to their needs, which tends to maximize their opportunity to learn the instructional content of the training scenario.
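The database lookup and tailoring step described in paragraph [0031] can be sketched as a filter over candidate feedback items: first by the student's learning style, then by prior exposure. The item schema and style labels are assumptions for illustration:

```python
def select_feedback(comment, feedback_db, learning_style, seen):
    """Pick feedback items for an evaluation comment, preferring the student's
    learning style and skipping items already shown (in `seen`)."""
    candidates = feedback_db.get(comment, [])
    preferred = [f for f in candidates
                 if f["style"] == learning_style and f["id"] not in seen]
    # Fall back to any unseen item if none match the preferred style.
    return preferred or [f for f in candidates if f["id"] not in seen]

# Hypothetical feedback database keyed by evaluation comment.
feedback_db = {
    "closed_question": [
        {"id": 1, "style": "visual", "text": "See the diagram of open vs. closed questions."},
        {"id": 2, "style": "verbal", "text": "Try 'What did you see?' instead of 'Did you see it?'"},
    ],
}

items = select_feedback("closed_question", feedback_db, "verbal", seen=set())
```

Past performance history could enter the same function as an additional filter or ranking key without changing the overall shape.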
[0032] The following description illustrates an example training scenario utilizing the system of Figs. 1 and 2. The illustrated training scenario is designed to train a student's questioning techniques and interview skills for use when they are at a scene and under temporal pressure to assess the situation and identify clues for different types of IEDs. The scenario simulates a domestic IED threat, and requires the student to question a number of witnesses in order to reveal and identify clues that support or refute a deduction that the IED type is time-initiated, remotely-detonated/command, or victim-operated. The questions are designed to determine the "who, what, when, where, and why" about the IED and are based on predetermined lines of questioning. Known Intelligent Tutoring System (ITS) technology is used to provide helpful real-time feedback on the student's questioning technique in the form of short tips highlighting instances of good or poor questioning techniques. The students are assessed based on their ability to ask good questions and deduce the correct device type from the revealed clues. The main software components used in the training scenario include a graphical user interface, an evaluation engine, and an adaptation engine, as illustrated in Figure 1. The evaluation engine compares student performance (based on current and past question selection) against a rule set and generates evaluation comments. Then, the adaptation engine matches the evaluation comments to instructional content which appears on-screen as real-time feedback from the embedded intelligent tutor.
[0033] Feedback to the student can be presented in four ways, as described below.
[0034] Individual Question Feedback. Based on the type of question posed by the student, the tutor may provide immediate feedback on whether the question was good or poor. In some cases, this feedback may also include the specific question (and witness answer) that triggered the tutor's response. An example Individual Question Feedback window is shown in Fig. 3.
[0035] Individual Clue Classification Feedback. As the student questions a witness, each clue that has been revealed as a result of the dialogue must be classified as either supporting or refuting a Timed (T), Command (C), or Victim-operated (V) device, or none of the above (Not Applicable - N/A). Based on this threat assessment, the tutor provides feedback on whether the threat assessment was correct or not, together with a rationale for the correct response specific to each clue. An example Clue Classification Feedback window, which may be presented once the student has completed an interview with a witness and assessed the clues obtained, is illustrated in Fig. 4.
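The clue classification check of paragraph [0035] amounts to comparing the student's T/C/V/N-A labels against a scenario answer key. The clue strings and key below are invented examples; only the label scheme comes from the description:

```python
def score_classification(student, answer_key):
    """Compare the student's clue classifications against the scenario's
    answer key. Returns, per clue, (was_correct, correct_label)."""
    results = {}
    for clue, correct in answer_key.items():
        chosen = student.get(clue, "N/A")   # unclassified clues default to N/A
        results[clue] = (chosen == correct, correct)
    return results

# Hypothetical answer key: T = Timed, C = Command, V = Victim-operated.
answer_key = {"timer parts found nearby": "T", "spotter seen on the roof": "C"}
student    = {"timer parts found nearby": "T", "spotter seen on the roof": "V"}

results = score_classification(student, answer_key)
```

The tutor's per-clue rationale would be attached to each `correct_label` in a fuller implementation.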
[0036] Overall Threat Assessment Feedback. Immediately after the student has completed the final assessment of the device (which effectively finishes the training scenario), a scenario debrief is presented. The debriefing comprises a summary of the scenario's back-story, target, device type, and the critical clues that contributed to that assessment. An example Overall Threat Assessment Feedback window, which may be presented once the student has completed the training scenario, is illustrated in Fig. 5.
[0037] Overall Questioning Technique Feedback. After the tutor provides feedback on the student's final threat assessment, a series of training modules pertaining to instructional interventions by the tutor during the training scenario are presented. Fig. 6 is a table showing representative evaluation criteria and instructional interventions. Any instance of tutor feedback during the game will trigger that specific module to be presented on the game's completion.
Therefore, the presentation of modules is determined by the questioning performance of the student. Finally, each training module also includes the question (and answer) that triggered the tutor's response.
[0038] The embodiments of the invention described above are intended to be illustrative only. The scope of the invention is therefore intended to be limited solely by the scope of the appended claims.
40296637.8

Claims (11)

WE CLAIM:
1. A system for computer-assisted training of interview-based information gathering and assessment, the system comprising:
a Graphical User Interface (GUI) configured to display information pertaining to a predetermined training scenario and generate event messages based on input received from a student;
an Evaluation Engine configured to compare event messages to a rule set embodying predetermined instructional content of a training scenario and to generate evaluation comments that reflect a real-time performance of the student; and an Adaptation Engine configured to process the evaluation comments from the Evaluation Engine to produce student feedback that is presented to the student via the GUI;
wherein the training scenario comprises: a scene defining a physical context of the scenario; a set of one or more witnesses who may be interviewed by the student to obtain clues as to a particular subject of the scenario; and the predetermined instructional content; and wherein the predetermined instructional content comprises any one or more of:
a predetermined line of questions to be posed by the student to elicit clues relevant to the particular subject of the training scenario, preferred questioning techniques to be employed by the student; and a predetermined line of reasoning to be employed by the student to deduce characteristics of the particular subject.
2. The system of claim 1, wherein the scene comprises at least one visible clue, and wherein the predetermined line of reasoning includes recognition and assessment of each visible clue.
3. The system of claim 1 wherein the Evaluation Engine is configured to generate a current evaluation message by comparing a current event message and an historical record of past event messages and evaluation comments to the predetermined rule set.
4. The system of claim 1 wherein the Adaptation Engine is configured to:
access a database of predetermined feedback content, using an evaluation comment, to identify a set of applicable feedback items; and select one or more of the identified feedback items for presentation to the student, based on the student's learning style and past performance history.
5. The system of claim 1 wherein the particular subject comprises an improvised explosive device.
6. A method of computer-assisted training of interview-based information gathering and assessment, the method comprising:
defining a training scenario including a scene defining a physical context of the scenario; a set of one or more witnesses who may be interviewed to obtain clues as to a particular subject of the training scenario; and instructional content defining subject matter to be learned by the student;
presenting information of the training scenario to a student using a Graphical User Interface (GUI);
processing student input using an evaluation engine to generate evaluation comments;
processing the evaluation comments to generate student feedback; and presenting the student feedback to the student via the GUI;
wherein the predetermined instructional content comprises any one or more of:
a predetermined line of questions to be posed by the student to elicit clues relevant to the particular subject of the training scenario, preferred questioning techniques to be employed by the student; and a predetermined line of reasoning to be employed by the student to deduce characteristics of the particular subject.
7. The method of claim 6, wherein the scene comprises at least one visible clue, and wherein the predetermined line of reasoning includes recognition and assessment of each visible clue.
8. The method of claim 6 wherein the Evaluation Engine is configured to generate a current evaluation message by comparing a current event message and an historical record of past event messages and evaluation comments to the predetermined rule set.
9. The method of claim 6 wherein processing the evaluation comments comprises:
accessing a database of predetermined feedback content, using an evaluation comment, to identify a set of applicable feedback items; and selecting one or more of the identified feedback items for presentation to the student, based on the student's learning style and past performance history.
10. The method of claim 6 wherein the particular subject comprises an improvised explosive device.
11. A non-transitory computer readable storage medium storing software instructions for execution by a processor of a computer, the software instructions implementing a method of computer-assisted training of interview-based information gathering and assessment, the method comprising:
defining a training scenario including a scene defining a physical context of the scenario; a set of one or more witnesses who may be interviewed to obtain clues as to a particular subject of the training scenario; and instructional content defining subject matter to be learned by the student;

presenting information of the training scenario to a student using a Graphical User Interface (GUI);
processing student input using an evaluation engine to generate evaluation comments;
processing the evaluation comments to generate student feedback; and presenting the student feedback to the student via the GUI;
wherein the predetermined instructional content comprises any one or more of:
a predetermined line of questions to be posed by the student to elicit clues relevant to the particular subject of the training scenario; preferred questioning techniques to be employed by the student; and a predetermined line of reasoning to be employed by the student to deduce characteristics of the particular subject.
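The claimed training loop (scenario definition, presentation via a GUI, evaluation of student input, and feedback) can be sketched end to end. The data structures and the simple substring check below are illustrative assumptions; the patent does not specify how student input is matched against the instructional content.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    scene: str
    witnesses: list[str]
    instructional_content: list[str]  # e.g. a predetermined line of questions

def run_training_step(scenario: Scenario, student_input: str) -> str:
    # Present scenario information (stand-in for the GUI).
    print(f"Scene: {scenario.scene}; witnesses: {', '.join(scenario.witnesses)}")
    # Evaluate the student's input against the predetermined content.
    on_track = any(line in student_input
                   for line in scenario.instructional_content)
    # Turn the evaluation into student feedback.
    return ("On the predetermined line of questioning." if on_track
            else "Consider the suggested questioning techniques.")
```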
CA2809696A 2013-03-14 2013-03-14 Computer assisted training system for interview-based information gathering and assessment Abandoned CA2809696A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA2809696A CA2809696A1 (en) 2013-03-14 2013-03-14 Computer assisted training system for interview-based information gathering and assessment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CA2809696A CA2809696A1 (en) 2013-03-14 2013-03-14 Computer assisted training system for interview-based information gathering and assessment

Publications (1)

Publication Number Publication Date
CA2809696A1 true CA2809696A1 (en) 2014-09-14

Family

ID=51565086

Family Applications (1)

Application Number Title Priority Date Filing Date
CA2809696A Abandoned CA2809696A1 (en) 2013-03-14 2013-03-14 Computer assisted training system for interview-based information gathering and assessment

Country Status (1)

Country Link
CA (1) CA2809696A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11810202B1 (en) 2018-10-17 2023-11-07 State Farm Mutual Automobile Insurance Company Method and system for identifying conditions of features represented in a virtual model
US11758090B1 (en) 2019-01-08 2023-09-12 State Farm Mutual Automobile Insurance Company Virtual environment generation for collaborative building assessment
US11645622B1 (en) * 2019-04-26 2023-05-09 State Farm Mutual Automobile Insurance Company Asynchronous virtual collaboration environments
US11875309B2 (en) 2019-04-26 2024-01-16 State Farm Mutual Automobile Insurance Company Asynchronous virtual collaboration environments
US11757947B2 (en) 2019-04-29 2023-09-12 State Farm Mutual Automobile Insurance Company Asymmetric collaborative virtual environments

Similar Documents

Publication Publication Date Title
Morrison et al. Foundations of the after action review process
Baldwin et al. Transfer of training: A review and directions for future research
Hou et al. A generic framework of intelligent adaptive learning systems: from learning effectiveness to training transfer
Perla et al. Gaming and shared situation awareness
Coovert et al. Serious Games are a Serious Tool for Team Research.
Jenkins et al. An evidence-based approach to critical incident scenario development
Cannon-Bowers et al. Improving tactical decision making under stress: Research directions and applied implications
Jenkins et al. A formative approach to developing synthetic environment fidelity requirements for decision-making training
Freeman et al. Intelligent tutoring for team training: Lessons learned from US military research
Tobey A vignette-based method for improving cybersecurity talent management through cyber defense competition design
CA2809696A1 (en) Computer assisted training system for interview-based information gathering and assessment
Huhta et al. Deriving expert knowledge of situational awareness in policing: A mixed-methods study
Cotterill et al. Coaching research: A critical review
Leins et al. Observers’ real-time sensitivity to deception in naturalistic interviews
Herz et al. Human factors issues in combat identification
US20140272804A1 (en) Computer assisted training system for interview-based information gathering and assessment
US20210390878A1 (en) Systems and methods for career selection and adaptive learning techniques in the field of cybersecurity
Carroll et al. Training effectiveness of eye tracking-based feedback at improving visual search skills
Klein et al. An empirical evaluation of the ShadowBox training method
Bryant et al. Retention and fading of military skills: Literature review
Oswald et al. Enhancing immediate retention with clickers through individual response identification
Simpson et al. Evaluating large-scale training simulations
Rajendran et al. Multi-level user modeling in GIFT to support complex learning tasks
Folsom-Kovarik et al. Developing a pattern recognition structure to tailor mid-lesson feedback
Milham et al. Adaptive instructor operating systems: design to support instructor assessment of team performance

Legal Events

Date Code Title Description
FZDE Dead

Effective date: 20170314