MXPA01005215A - Apparatus and method for training using a human interaction simulator - Google Patents
Apparatus and method for training using a human interaction simulator
- Publication number
- MXPA01005215A MXPA/A/2001/005215A
- Authority
- MX
- Mexico
- Prior art keywords
- user
- statements
- video
- declarations
- audio
- Prior art date
Abstract
A computer-based training tool and method that emulates human behavior using a computer-simulated person in a realistic scenario. It provides an interactive experience in detecting deception during interviews and in assessing the acceptance of statements during interpersonal conversations. The simulated person provides verbal responses in combination with an animated video display reflecting the body language of the simulated person in response to questions asked, and during and after responses to the questions. The questions and responses are pre-programmed, and interrelated groups of questions and responses are maintained in dynamic tables which are constantly adjusted as a function of the questions asked and the responses generated. The system provides a critique and a numerical score for each training session.
Description
APPARATUS AND METHOD FOR TRAINING USING A HUMAN INTERACTION SIMULATOR
BACKGROUND OF THE INVENTION FIELD OF THE INVENTION This invention relates to a training process for perfecting interviewing techniques and other interpersonal skills using a computer-simulated person and a PC-based or other computer-based training tool that emulates human behavior through the computer-simulated person in a realistic scenario. DESCRIPTION OF RELATED ART For years, law enforcement agents have used verbal and nonverbal cues to detect deception. Much of the original research demonstrating the validity of the technique was done by Reid and Associates. It is most readily available as part of the course work provided by their company, which includes a course-oriented text entitled "The Reid Technique of Interviewing and Interrogation", John E. Reid and Associates, Chicago, 1991. In addition, Paul Ekman, in "Telling Lies: Clues to Deceit in the Marketplace, Politics and Marriage", published by W. W. Norton and Co. Inc., New York, 1985, and "Who Can Catch A Liar?", published in American Psychologist, 46, 913-920, 1991, and Stan Walters, in "Principles of Kinesic Interview and Interrogation", published by CRC Press, Boca Raton, Florida, 1995, have made contributions to this body of knowledge. The skills required to detect deception based on verbal and non-verbal cues are difficult to acquire. Consequently, simulators are needed to train people to deal with social and behavioral problems and situations. To maximize their effectiveness, simulators must provide an engaging environment in which the student can experience several realistic situations and provide different answers. To be beneficial, simulators must incorporate recent developments in design and simulation, sociology, psychology and other fields. To be effective, training requires practice across a wide range of skills. Government and industry have designed and developed sophisticated simulators so that trainees can obtain extensive practice and gain experience without risking lives or expensive equipment. Pilots practice in flight simulators before flying an airplane; military personnel use war-game simulators to practice the execution of missions; medical staff use computer simulations to practice the determination of priorities as part of their training. Simulation training technology, as a result of the development of such sophisticated simulators, has progressed to the point where it can be used successfully to help develop a variety of interpersonal skills, such as interviewing suspects in criminal investigations. SUMMARY OF THE INVENTION A principal objective of the present invention is to provide a means and apparatus to assist individuals in the development of a variety of interpersonal skills, such as interviewing suspects in criminal investigations. Another objective of the present invention is to create a computer-simulated person incorporating a simulated brain and an ability to remember the nature of questions or statements made by the user and their suitability with respect to current interactions.
Still a further objective of the present invention is to provide responses based on typical patterns of behavior and history of a simulated conversation.
Still another objective of the present invention is to provide a system for training students in interviewing for the purpose of detecting signs of deception in a suspect in a criminal case. A further objective of the present invention is to provide an interactive system for training in conversation skills. Still a further objective of the present invention is to provide a simulator for interpersonal training comprising logical and emotional components implemented by means of a computer simulation. Still another objective of the present invention is to provide an interactive system in which the responses are affected by the state of a simulated subject. Another objective of the present invention is to provide an interactive system in which responses are affected by the emotional state of a simulated person. Still a further objective of the present invention is to provide a visual image of the subject wherein the expression, posture, and position of the head, arms, hands, fingers, legs and feet change in response to questions asked, and during and after the responses to them. These objectives are achieved by providing an interactive system for conversation training that includes an interface and a simulated brain. The interface allows the user to easily navigate through many possible questions and to observe and hear the response of the simulated subject. Possible questions are pre-programmed and classified into sub-lists so that they can be found by (1) selecting a question category and searching through the questions in that category window, (2) examining a follow-up question window to find questions suggested by the system, or (3) asking the system for questions that include specific words such as "promotion." The interface also adds new questions and comments to the sub-lists as the information becomes available and eliminates questions that have lost their relevance. The simulated brain includes a logical component and an emotional component.
The logical component selects one of a series of answers to the questions. The selection is based on the probability of each of the answers given the current circumstances. Circumstances are affected by the state of the simulated subject (for example, guilt or innocence in a criminal investigation) and by the emotional state of the subject. The logical component keeps track of the information provided in the responses and tries to maintain consistency in the responses. The emotional component is critical in the random selection of the response. The emotional state of the simulated subject is determined by (1) the subject's status at the beginning of the interview, (2) the full history of the discussion (questions and statements), (3) the most recent questions and statements, (4) the last question or statement, (5) the state (guilty or innocent) of the subject, and (6) chance. A random model determines the ebb and flow of the subject's emotional state as the questions are asked and the answers are given. The parameters can be adjusted to affect the simulated personality. For example, the simulated subject may give poorly verbalized statements or may be disturbed by the wording of the questions and respond slowly. In one embodiment, the system trains students to interview for the purpose of detecting signs of deception in a potential suspect in a criminal case. In this way, it teaches student investigators to detect deception, develop listening skills, formulate good questions, build affinity, and develop topics for questioning. The student tries to determine whether the simulated subject is trying to deceive by observing both verbal and non-verbal behavior. As part of the exercise, the student builds affinity by creating an environment where the simulated subject is comfortable and will provide complete and informative answers. Each question or statement of the student is classified according to how it contributes to the student-to-subject affinity.
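As a sketch, the weighted random selection described above might look like the following Python fragment; the sample question, the candidate answers, their weights, and the hostile-affinity adjustment are all invented for illustration and are not taken from the patent.

```python
import random

# Hypothetical sketch of the logical component's weighted response
# selection. Each candidate answer carries a probability weight that
# depends on the subject's guilt state; a poor affinity state shifts
# weight toward the evasive (non-first) answers.
RESPONSES = {
    "Where were you on Friday night?": [
        # (answer, weight if guilty, weight if innocent)
        ("I was home all evening.", 0.6, 0.3),
        ("Why does that matter?", 0.3, 0.1),
        ("At the gym, then home around nine.", 0.1, 0.6),
    ],
}

def select_response(question, guilty, affinity, rng=random):
    """Pick one answer at random, weighted by guilt state and affinity."""
    candidates = RESPONSES[question]
    weights = [g if guilty else i for _, g, i in candidates]
    if affinity <= 1:  # hostile subject: boost the evasive answers
        weights = [w * (2.0 if n > 0 else 0.5) for n, w in enumerate(weights)]
    total = sum(weights)
    r = rng.uniform(0, total)
    for (answer, _, _), w in zip(candidates, weights):
        r -= w
        if r <= 0:
            return answer
    return candidates[-1][0]
```

Because the draw is random, repeated calls with the same question yield different answers, which is how the subject behaves differently each time the system is exercised.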
Some statements or questions available to the student make the simulated subject feel comfortable or make the investigator seem less threatening; these help to build affinity and have positive affinity classifications. Other statements or questions make the simulated subject feel defensive or in some way offended and contribute to a negative affinity. Some statements or questions can have a negative effect on affinity but help determine the guilt state (the truthfulness) of the subject. To interact successfully with the simulated subject, the student makes the subject feel that he or she is contributing to the investigation by answering difficult questions. The questions are also evaluated according to their diagnostic value, and both the affinity and the diagnostic classification contribute to the total evaluation of the user-student. These classifications depend on the affinity or emotional state of the simulated subject. A hostile subject can interpret the purpose of the question, "Who do you think took the money?", in a different way than a subject who is trying to help. The system rates the quality of the interview considering several factors including, but not limited to, the affinity and diagnostic value of the statements and questions. A correct determination of the veracity (truthful or deceptive) of the subject and the detection of cues contribute to the interviewer's score. When the subject makes a mistake and reveals behavioral information that indicates deception (verbal or non-verbal), or provides an answer that would be uncommon for a deceptive subject, that is, one that indicates that he or she is truthful, the student so indicates by choosing an appropriate button, identifying what was just provided as evidence of deception or non-deception. Proper identification of these cues increases the interviewer's score. The system has several user-selectable options. For example, either a male or female voice can be used to ask the questions or make the statements.
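A minimal scoring sketch along these lines is shown below; all weights, bonuses, and field names are invented assumptions, since the patent does not disclose the actual scoring formula.

```python
def score_interview(questions, correct_verdict, cues_found, cues_total):
    """Hypothetical scoring sketch: each asked question contributes its
    affinity and diagnostic ratings; a correct verdict on the subject's
    veracity and each properly identified cue add bonuses, while missed
    cues cost a small penalty. All weights here are invented."""
    base = sum(q["affinity"] + q["diagnostic"] for q in questions)
    verdict_bonus = 25 if correct_verdict else 0
    cue_bonus = 10 * cues_found - 2 * (cues_total - cues_found)
    return base + verdict_bonus + cue_bonus
```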
The interview can be conducted at a beginner, intermediate, advanced or professional level, with fewer cues provided at the more challenging levels. Many of the questions can be asked in any number of ways, some that damage affinity and some that help affinity. The simulated subject can in turn ask the student-investigator a question. If so, the student may choose to ignore it or to respond using a response found in the follow-up window. The system remembers the sequence of questions and answers of the interview so that the complete interview can be replayed. During the replay, the system identifies any unusual behavior in writing. The interactive system can be used to train in many different areas. It can, for example, be used to help train children in the "Just Say No" campaign, to train medical students, or in any other area where interactive human-interaction training of any kind is required. Other objects, features and advantages of this invention will become apparent from the following drawings, specification and claims. BRIEF DESCRIPTION OF THE DRAWINGS Figure 1 is a stylized image of the computer-generated image of the simulated subject according to the present invention; Figure 2 is a simplified logical flow diagram showing the means for modeling the behavior of a subject according to the present invention; Figure 3 is a detailed flow diagram showing how the present invention operates; Figure 4 is a computer image showing a "Basic Option" screen used in the present invention;
Figure 5 is a computer image showing the "User Input" screen according to the present invention; Figure 6 is a flow diagram showing how affinity is modeled according to the present invention; Figure 7 is an exemplary image of the control panel screen for understanding the state of the affinity according to the present invention; and Figure 8 is an exemplary image of the control panel screen used to input the personality parameters of the subject into the fundamental program of the present invention. DESCRIPTION OF THE PREFERRED EMBODIMENTS Figure 1 is an example of a typical image created by the subject invention to provide visual cues related to the emotional state of the simulated subject. The expression of the simulated subject, as best seen in the enlarged image 10, changes in response to questions asked and during and after the answers elicited by the questions. Simultaneously with the changes of expression seen in the enlarged image, the image of the seated subject 20 changes. The changes that occur in the image of the seated subject include, in addition to the changes of expression, shifts in posture and changes in the position of the head, arms, hands, fingers, legs and feet. The changes are produced by a series of video vignettes that represent the body language of the subject. They can be produced graphically by time-honored traditional means, including artists drawing sequential images, or through the use of contemporary tools such as computer animation. However, in the preferred embodiment, the vignettes are created by capturing the images of live actors who respond to questions in specific ways under the supervision of a director who follows a script for a specific type of training.
Specific types of training refer to interpersonal experiences: interviewing people related to a crime, that is, suspects and witnesses, interviewing potential employees, interacting with sales personnel, discussing critical problems with our children, and instructing them on a peer-to-peer level are only a few of the potential uses of the invention. The invention is made by creating a plurality of video vignettes that simulate a person responding to statements prepared by a user of the system, or to the absence of a statement. In one embodiment, prepared statements are presented as a list of options, one of which is selected by the user using standard highlighting techniques with the computer's cursor. In an alternate embodiment, the user verbalizes the statements. This, of course, necessitates that the invention be executed by a computer incorporating speech-recognition software. To allow flexibility when system users structure a question or statement, prepared questions are recognized as combinations of keywords. Therefore, a prepared question is identified as such even when it is phrased in a variety of different ways, the recognition criterion being the inclusion of the keywords in the sentence or statement. In the embodiments that eliminate the speech-recognition requirement of the computer system supporting the invention, prepared statements are selected by more conventional means such as typing a number or a combination of letters on the computer keyboard or, as suggested previously, highlighting the statement with a cursor. The invention helps develop interpersonal skills. This is done by combining a plurality of audio responses and video vignettes that simulate a person responding to prepared statements selected by the user. By appropriately choreographing the audio responses, video vignettes and prepared statements, the system can be used to train individuals for a variety of interpersonal situations.
For purposes of explanation, the invention is presented as a training aid for techniques of interviewing suspects and witnesses by law enforcement personnel. In a typical training scenario using the invention, the student interviews a simulated subject that is projected as illustrated in Figure 1. (Throughout this presentation, the simulated subject will be referred to as Mike. This is to simplify and personalize Mike in the same way that the simulation is personalized for the user of the invention in the preferred reduction to practice.) The interview relates, as an example in this case, to a crime. The purpose of the interview is to determine Mike's involvement. He may or may not have committed the crime. The student conducts the interview by selecting from an extensive written list of questions that reside in the system program and observing Mike's verbal and non-verbal responses. A probabilistic model of Mike's personality selects the answers to the student's questions based on the logical and emotional factors associated with human behavior. Mike's behavior and responses are determined by a computer model of Mike's behavior (considered the "brain"). The computer model sequences the visual and audible responses that are presented as video characterizations as illustrated in Figure 1. The video sequences are created by the software using an actor. This provides a realistic, two-way conversational interview. In the embodiment of the present invention described as an example, a plurality of audio responses for articulation by the simulated person is created in response to the prepared questions that are selected by the cursor, the keyboard or the vocalization of the user of the system, or by the absence of a selection within a predetermined period of time. The video vignettes and audio responses are then interrelated by a logical means that is created to reflect the personality of the simulated person.
This emulation of the personality profile controls the video and audio responses according to an interrelated network, entered upon recognition of the selected prepared questions, by the computer executing the programs of the invention. The emulation of the personality profile (personality emulator) is created as a logical means in the form of networks that interrelate each of the audio responses with the video vignettes and the statements selected by the user, as illustrated in a general way in Figure 2. As shown in Figure 2, the disposition, stage and available information are initialized in step 30. The available questions are then identified in step 40 and displayed in step 42. A question is selected in step 44 and it is evaluated by the student interviewer in step 46. The software then updates the disposition of the interviewee (Mike) in step 48 according to the selected question. The software then determines a response in step 50 and displays it in step 52. The available information is then updated in step 54. The student can then choose to continue or stop the interview in step 56. The software stops in step 58 if so directed, or continues to step 40 if the student decides to continue. Figure 3 is a detailed flow diagram showing how the present invention operates. It should be noted that many variations are available, and this flow chart is given as an example of one way to achieve the present invention. A "Basic Option" screen appears at 102, and then the user can select any of the five Basic Options on the home screen shown in Figure 4. These options include the selection of Instructions 104, Manual 106, Background of Case 108, Exit 110, or Interview Mr. Simmens 112. The selection of Instructions 104, Manual 106, or Background of Case 108 uses the system's quick-review feature to display a text document. When the review is closed, the Basic Option screen reappears (Figure 4).
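The Figure 2 loop (steps 30 through 58) can be sketched as follows; the `brain` and `ui` objects and their method names are hypothetical stand-ins for the software components the patent describes.

```python
def run_interview(brain, ui):
    """Skeleton of the Figure 2 loop: initialize (step 30), identify and
    display available questions (40, 42), let the student select one (44),
    evaluate the question (46), update the subject's disposition (48),
    determine and display the response (50, 52), update the available
    information (54), and continue or stop (56, 58)."""
    brain.initialize()                             # step 30
    while True:
        questions = brain.available_questions()    # step 40
        ui.display_questions(questions)            # step 42
        question = ui.select_question(questions)   # step 44
        value = brain.evaluate(question)           # step 46
        brain.update_disposition(value)            # step 48
        response = brain.determine_response(question)  # step 50
        ui.display_response(response)              # step 52
        brain.update_information(question, response)   # step 54
        if not ui.wants_to_continue():             # step 56
            break                                  # step 58
```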
The quick-review feature can be used to scroll through documents or search for specific text. The Exit 110 option terminates the software. If the option Interview Mr. Simmens 112 is selected, the interview begins by initializing the state of Mike Simmens' guilt and the initial affinity level. The User Input screen 114 shown in Figure 5 then appears. This screen (Figure 5) is the main control screen for the rest of the program. Before starting the interview by selecting the option "Start Interview" 116, the user can choose to update some of the Basic Options 102 listed at the top of the screen or to "End the Interview" 110. The "User Input" screen is shown in Figure 5. When the user begins the interview, a window appears asking for the student's name. The "File" menu at the top of the User Input screen allows the user to select from the Basic Options, which include, but are not limited to, the sex of the investigator's voice, the level of difficulty (fewer cues are presented at higher levels of difficulty), the dimensions of the screen used to display the subject (in this example, Mike), and the option to inhibit audio and video. The user can also ask for help in the "Help" menu, view the Instructions 104, Manual 106 or Background of Case 108 using the quick-review feature, or read the copyright notice under the Copyright pull-down menu. Once the interview is in progress, the user can also choose to replay the last question and answer (Replay pull-down menu), or to replay the complete interview up to the last response (Playback pull-down menu). Before exercising any of the other options, the user must start the interview.
After providing a username 118 or using a default name, the user can select from a list of Basic Options in the menu at the top of the screen: Question 120, to search for a certain question using a keyword; display of the questions in a Question Category 122, such as "ATM" or "Jones"; Finish Interview 124; or asking that any question be displayed, either in the Follow-up Question window on the screen or in the Possible Question window on the screen, by double-clicking on it. If the user selects "Start Interview" 116, he or she will be required to enter a name or to accept the default name. After doing this, the entered name will be automatically associated with the score. The user can then put any keyword in the Question window and select the Question button. When this is done, all questions with that word will be displayed in the Possible Question window on the screen. The user can then select any Question Category 122 at any time, and all questions available in that category will be displayed. The user can choose to end the interview
(step 124) at any time. If this is done, a window appears asking the user to decide whether the subject, Mike in this case, has committed the crime (step 126). The student's decision, along with the quality of the interview, is used to calculate and display an interview score. Once a question is selected (step 128), a number of steps follow. Cue-indicator state fields are used to judge the student's assessment of the truthfulness (step 126) of a previous response (if any) at the time the evaluation records are updated. The information associated with the selected question is used to update the affinity (step 130) and the evaluation of the interview. If the affinity is too low, the subject ends the interview (step 132), one of a number of videos is played (step 134), and the program proceeds as if the interview had been completed by the student. Otherwise, a response is selected by constructing a probability model and using a random number to select an answer (step 136) from the list of available answers. The full list of available questions and available answers is updated (step 138) to avoid redundancies and inconsistencies and to allow new questions and relevant answers to be added to the list of possibilities. The student's score is then updated (step 140). The question and the answer are then reproduced (step 142). Finally, the revised question list is displayed (step 144). The software executes each of the student's commands and then waits for the next command. To use the invention, the student listens carefully and observes Mike's answers. It should be noted that each time the system is exercised, the subject
(in this example, Mike) behaves differently.
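The keyword lookup performed by the Question button 120, which displays every question containing a given word in the Possible Question window, can be sketched as a simple case-insensitive filter; the sample questions below are invented.

```python
def find_questions(keyword, questions):
    """Return all pre-programmed questions containing the keyword,
    case-insensitively, as the Question button does when it fills
    the Possible Question window."""
    kw = keyword.lower()
    return [q for q in questions if kw in q.lower()]

# Illustrative stand-ins for entries in the question database.
QUESTIONS = [
    "Did you expect a promotion this year?",
    "Where were you when the ATM was robbed?",
    "How long have you worked with Mr. Jones?",
]
```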
This is due to the fact that the subject's behavior is driven by random numbers. The student plans a line of questions based on his or her interpretation of the answers and judges their content as truthful, misleading, or non-informative. The response and behavior of the simulated subject depend on the student's input. Since most of the questions have several possible answers and the simulated Mike may or may not be guilty in a given simulation, the interview proceeds differently each time it is conducted, and the subject behaves realistically and unpredictably. Like a real interview, the simulated interview can last as little or as long as required; an interview conducted properly will take more than an hour. The model of Mike's behavior includes specific attributes that help the user develop interviewing skills. Mike "remembers" the nature of the interviewer's questions and statements and responds based on typical patterns of behavior related to his guilt or innocence and the content of the interview. The logical component within the system tracks the responses and keeps them reasonable and consistent. The system's operational program selects one of a series of likely responses considering the question and the circumstances, which are affected by Mike's status (e.g., guilt or innocence) and by his emotional state. Mike's emotional component is critical in selecting the answer to a question. His emotional state is determined primarily by the student's questions. A mathematical model determines the flow of Mike's emotions as the questions are asked and the answers are given. The model parameters can be adjusted to affect Mike's personality. The emotional model can be tuned so that Mike is forgiving of a poorly phrased question, or so that he is easily disturbed and slow to forgive blunders made by the questioner. The logical component of Mike's brain is stored in a database.
It contains all the available questions and all possible answers to those questions. Different questions can share answers, and each question can have many answers, so data fields written in the database are used to link questions and answers and to generate sub-lists of potential questions that can be asked at a given point in the progress of the interview. In a preferred embodiment, there are more than 500 possible questions that can be asked in almost any order. To reduce the search for the desired questions, those that have already been asked are removed from the list. Similar questions are also deleted. As the answers are provided, new information is revealed and new questions become relevant and available. The fields in the database (logical component) are used to identify which questions and answers are opened and closed as a result of each question and Mike's response. The program that implements the invention ensures that Mike's answers are consistent. For example, suppose the user asks Mike whether he likes his supervisor and he replies, "She's okay." If the next question is "Have you socialized with her?", the answer "No, I can't stand her" should be discarded from the set of possible answers. In other situations, a repeated answer would be inconsistent and must be avoided. For example, if the user asks Mike, "What do you like to do in your free time?", he can answer that he plays golf when the possible answers are that he plays golf, reads, or skis. If he is then asked, "What else do you like to do?", the answer "golf", of course, is an inconsistent response and should be discarded. In a preferred embodiment, Mike will be in any of at least five emotional states. The states represent five levels of affinity: worst, poor, average, good, and excellent. As the affinity deteriorates, Mike provides short, uninformative replies, while with better affinity, Mike provides more complete and revealing answers.
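The open/close database fields described above could be modeled as follows; the dictionary layout and the example effects, built from the supervisor and golf examples in the text, are illustrative assumptions rather than the patent's actual schema.

```python
# Hypothetical sketch of the database fields that keep Mike's answers
# consistent: each answer may close other answers (now contradictory or
# redundant) and open follow-up questions (newly relevant).
ANSWER_EFFECTS = {
    "She's okay.": {
        "closes_answers": ["No, I can't stand her."],
        "opens_questions": ["Have you socialized with her?"],
    },
    "I play golf.": {
        "closes_answers": ["I play golf."],  # the same answer cannot repeat
        "opens_questions": [],
    },
}

def apply_answer(answer, open_questions, open_answers):
    """Update the sets of available questions and answers after a reply."""
    effects = ANSWER_EFFECTS.get(answer, {})
    for a in effects.get("closes_answers", []):
        open_answers.discard(a)
    for q in effects.get("opens_questions", []):
        open_questions.add(q)
    return open_questions, open_answers
```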
For another application, the states could be anger, denial, depression, bargaining and acceptance, and Mike would respond accordingly. Mike moves through his emotional states as a result of interactions with the student. Each question is coded according to its effect on Mike's emotional state. The code depends on the affinity and is defined in Table 1. Table 1. Definitions of the Affinity State
When the interview is started, quantitative emotional values are assigned to each of the five affinity states, and these values are constrained to sum to 1. The questions act as a stimulus to drive the flow of these emotional values from one state to another. The emotional flow model is complicated, yet it can be easily modified to accommodate changing requirements. For Mike, the model modifies itself as the interview progresses. For example, each time Mike gets irritated (the affinity state deteriorates), he becomes irritated progressively more easily. Mike's status is the affinity state with the greatest emotional value. The emotional model performs two fundamental functions: (1) it determines the direction of the emotion flow, and (2) it determines the magnitude of the flow. As the questions are asked, the model determines how emotions flow toward a target affinity from all other affinity states. The flow continues until either the emotion limit for that state is reached or the sign of the emotional stimulus, Sr, changes. If good questions are asked, Sr is positive, and when the flow reaches the limit, the next affinity state is selected using a matrix of transition probabilities to select the next target state. This matrix is called the transition advance matrix. If Sr is negative, then a different transition matrix, called the transition backspace matrix, is used to select the next target state. For the preferred embodiment, the emotion flows toward an improved affinity are positive and those toward a weaker affinity are negative. The sign of Sr determines the direction of the flow. If the sign changes while the questions are being asked, the flow direction changes immediately and a new target affinity is selected. The target affinity selected is commonly the next highest or lowest state depending on the sign of Sr, but this is not a requirement.
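A minimal sketch of these flow mechanics is shown below, assuming invented transition matrices and an invented flow amount; the patent does not disclose the actual values.

```python
import random

# Sketch of the emotional-flow model: five affinity states, indexed
# 0 (worst) through 4 (excellent), whose emotional values sum to 1.
# The matrices below are illustrative: the advance matrix steps to the
# next higher state, while the backspace matrix may skip a state, so
# affinity can deteriorate faster than it builds.
ADVANCE = {0: [1], 1: [2], 2: [3], 3: [4], 4: [4]}
BACKSPACE = {0: [0], 1: [0], 2: [1, 0], 3: [2, 1], 4: [3, 2]}

def flow(values, target, amount):
    """Move `amount` of emotional value into `target` from all other
    states, proportionally, keeping the total equal to 1."""
    others = sum(v for i, v in enumerate(values) if i != target)
    if others == 0:
        return list(values)
    moved = min(amount, others)
    new = [v - moved * (v / others) if i != target else v
           for i, v in enumerate(values)]
    new[target] += moved
    return new

def pick_target(mood, sr, rng=random):
    """Choose the next target state with the advance or backspace
    matrix, depending on the sign of the stimulus Sr."""
    options = ADVANCE[mood] if sr > 0 else BACKSPACE[mood]
    return rng.choice(options)
```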
For this embodiment, the transition advance matrix commonly selects the next highest level of affinity. However, the transition backspace matrix can skip the next lower state. In this way, poor questions commonly cause the affinity with Mike to deteriorate faster and can cause the interview to suddenly turn sour. The stimulus Sr is computed using affinity values associated with all the previous questions, which provide the input to this model. An affinity value between 0 and 9 is associated with each question, with 0 associated with very poor questions and 9 with the best possible question. The affinity value is first converted to an affinity value between -4.5 and +4.5. Negative values represent the selection of a poor question. After each question, the stimulus value Sr is computed using the average affinity value of all the previous questions and the current question's affinity value, Sq, as follows:

Sr = 0.8 [memory * Sr-1 + (1 - memory) * Sq] + 0.2 * (average affinity value)    (1)

In this equation, Sr-1 is the stimulus before the last question. The memory quantity in the equation is a parameter that can be adjusted to change Mike's behavior and is nominally set to 0.45 in a preferred embodiment. This parameter controls the influence of the last question on the stimulus. The stimulus is influenced mainly by the value of the last question, but it is also influenced by the recent history and by the complete history. The memory constant described is an example of the several parameters that allow Mike's personality to be adjusted. The limit of emotion in a state before another target affinity state is selected is determined separately for each state. These five state-related parameters are used to make it difficult to move away from some states and easy to slide out of others. Another set of state-related parameters affects the emotion flow rate. These parameters reflect a "stickiness" factor and make the flow of emotions out of individual states difficult or easy.
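Equation (1) can be written directly in code, together with the conversion of a question's 0-to-9 affinity rating to the -4.5-to-+4.5 range; the default memory of 0.45 is the value the text gives for the preferred embodiment.

```python
def stimulus(sr_prev, sq, avg_affinity, memory=0.45):
    """Equation (1): blend the previous stimulus Sr-1 with the current
    question's affinity value Sq (inner bracket), then mix in the
    average affinity value of all the previous questions."""
    return 0.8 * (memory * sr_prev + (1 - memory) * sq) + 0.2 * avg_affinity

def to_affinity_value(rating):
    """Convert a question's 0..9 affinity rating to -4.5..+4.5."""
    return rating - 4.5
```

A run of poor questions (negative Sq values) drags Sr negative, switching the model to the backspace matrix; the memory term keeps a single good question from instantly reversing that trend.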
This parameter determines the portion of an affinity state's emotion that is allowed to flow from that source state to a target state. A sticky state yields less toward the target state; that is, the more "sticky" the state, the less likely the affinity state is to change. Two parameters that affect all the states are the forward- and backward-speed parameters. These speed factors give the developer a direction-dependent way to regulate the speed of the emotion flow that affects all states equally. For the preferred embodiment, affinity is slow to build and can be quickly destroyed by a few poor questions: the forward speed (improving affinity) is much slower than the backward speed (deteriorating affinity). These two parameters, like most of the parameters that make up Mike's "brain", can be varied to change Mike's personality. Mike's emotional "brain" is altered by varying parameters such as "stickiness", which varies Mike's personality. This is accomplished through a specialized input control screen shown in Figure 8. The software for modeling and tracking a subject's affinity, in this case Mike's, is unique. It includes initializing the affinity routine when the interview begins and updating the affinity (step 130 in Figure 3). The affinity update routine is run after each question. The initialization routine in Figure 6 initializes Mike's personality parameters (step 200), including the affinity state transition probabilities, the threshold before transition to another state, the memory, and the state "stickiness". In addition, a pseudo-random number generator is used to establish Mike's initial affinity by assigning weights to each state. Figure 7 shows an example of affinity states and related information. Most of the initial weighting is assigned to affinity states 2 and 3 in this example. The affinity state with the highest weighting, state 3 in Figure 7, is defined as Mike's mood (step 202 in Figure 6) or affinity state.
Each time there is a stimulus from a question, the Affinity Update routine in Figure 6 is called to cause the weights to flow toward a target state. Most often this routine will simply take weighting from the other states and place it in the target state. Sometimes the routine will select a new target state. Step 300 initializes and checks the variables and parameters. The affinity value associated with the selected question is normalized to provide a value between -0.5 and +0.5. This value, the previous stimulus value Sr-1, and the average question value for the complete interview are used to compute the new stimulus value Sr (step 302). The new stimulus value Sr is checked to see if its sign has changed (step 304). If the sign of the new stimulus has not changed and the "limit for change" of the weight in the target state has not been reached (step 306), the target state does not change. However, if either of these conditions occurs, the target state changes (step 308) using either the forward or the backward transition matrix.
If the sign changes from positive to negative, or if the limit for the target state has been reached and the stimulus is negative, the backward transition matrix is used to select the next target state. In the transition matrices of Figure 7, forward transitions always move to the next higher state, but a backward transition can move either to the next lower state or skip two states down. When the state limit (1.0) is reached and the stimulus is negative, the interview ends. Once the target state is determined, weighting from each of the other states is moved to the target state. To determine the amount of weighting taken, one minus the "stickiness" factor is multiplied by the normalized affinity score of the question and by the weighting in the state, creating a change factor. This factor is multiplied by the appropriate speed factor (forward or backward) to determine the total weight that will move toward the target state. Once the weights have changed (step 310), the update is complete and the program returns to its question state. Figure 8 shows an exemplary control panel screen used to input the subject's personality parameters into the fundamental program of the invention. The present invention allows the developer to adjust the emotional model so that the subject complies with the requirements of the training system being developed. In addition, the software can adjust these model parameters itself. For example, Mike's personality can be modified as part of the initialization of the program. For the preferred embodiment, each time Mike enters affinity state 1 or 2, the parameters are adjusted so that he is more easily upset. The characteristic of allowing the emotional model to modify itself adds significant richness to the model and makes Mike's behavior more like a real person's.
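The weight-flow step (step 310) might be sketched as follows, assuming per-state stickiness factors and a single speed factor chosen by flow direction; all names and numbers here are illustrative, not the patent's actual values:

```python
def flow_weights(weights, target, stickiness, question_score, speed):
    """Move a portion of each non-target state's weight into the target state.

    For each source state, (1 - stickiness) * |question_score| * weight,
    scaled by the forward or backward speed factor, flows to the target.
    question_score is the normalized affinity score of the question.
    """
    moved = 0.0
    for state in list(weights):
        if state == target:
            continue
        change = (1 - stickiness[state]) * abs(question_score) * weights[state] * speed
        change = min(change, weights[state])  # never move more weight than is there
        weights[state] -= change
        moved += change
    weights[target] += moved
    return weights

# Most weight sits in states 2 and 3; a good question pulls it toward state 4.
weights = {1: 0.0, 2: 0.4, 3: 0.6, 4: 0.0, 5: 0.0}
stickiness = {s: 0.5 for s in weights}
flow_weights(weights, target=4, stickiness=stickiness, question_score=0.4, speed=1.0)
```

Note that total weight is conserved: whatever leaves the sticky source states lands in the target state, so the distribution always sums to the same amount.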
When the interview begins, Mike is assigned a number of initial conditions at random, and these initial values are used to select responses from the database according to the "brain" imperatives. He can be guilty or innocent. If he is guilty, he may be motivated by need or by hatred. His initial affinity with the interrogator is assigned randomly, with most of the emotion assigned to the bad and neutral states. The selection of his guilt state affects his behavior throughout the interview. The student must establish and improve the affinity while determining Mike's guilt. Mike's responses are selected from the logical portion of his brain using his affinity state, his guilt state, and a pseudo-random number generator. For each guilt state and each affinity state, the interface fields in the logical database provide the probabilities of each response. These probabilities are numbered between 0 and 9 and are used to develop the probabilities associated with the available responses. When a question is asked, the probability codes of all responses available for Mike's current guilt state and affinity state are added together. Each code is divided by the sum to produce a set of probabilities for the available responses. These probabilities and a pseudo-random number generator are used to select Mike's response. Different probabilities are assigned for each of the different guilt states and the different affinity states. The most significant part of the training simulation provided by the invention focuses on the experience with a single case, where the student works through the stages of an interview. At each stage, the student is given an opportunity to make mistakes. Every time the system is used, the simulated subject, Mike, provides different answers, sometimes subtly indicating truthful behavior and other times indicating deception, motivated either by revenge or by financial need.
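The response-selection step described above — summing the 0-9 probability codes of the responses available in Mike's current guilt and affinity state, normalizing, and drawing with a pseudo-random number — might be sketched as follows (response IDs are illustrative):

```python
import random

def select_response(candidates, rng=random.Random()):
    """candidates maps response IDs to their 0-9 probability codes for the
    current guilt state and affinity state; the codes are normalized into
    probabilities and one response is drawn with a pseudo-random number."""
    total = sum(candidates.values())
    if total == 0:
        return None  # no response is available in this state
    r = rng.random()
    cumulative = 0.0
    for response_id, code in candidates.items():
        cumulative += code / total
        if r <= cumulative:
            return response_id
    return response_id  # guard against floating-point round-off

# A common response (code 7) is drawn far more often than rare ones.
choice = select_response({"R101": 7, "R102": 2, "R103": 1}, random.Random(0))
```

Because the codes are renormalized over whatever responses remain available, removing an inconsistent response automatically redistributes its probability among the rest.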
The goal of the invention is to simulate a real interview by interweaving a plurality of video vignettes of a personality with questions and answers. In this embodiment, given present technology, the possible questions and answers are limited to those planned in the script incorporated into the software. However, there is a set of standard questions that provide important diagnostic information, and one goal is to familiarize the student with these questions. These questions are included in the script, which offers a reasonably rich variety of questions, giving the student the opportunity to practice asking them. Even though the questions may seem limited, there are still hundreds or thousands available, making it possible to provide paths that represent many real interview scenarios. Finally, the answers depend on how well the student has laid the foundations for the questions, making the development of affinity an essential part of a successful simulated interview. In the present example, some spontaneity and realism is lost in the simulated interview: the student must enter or select the desired question while Mike waits. (The video presentation is frozen at the end of Mike's answer to each question.) However, the delay gives the trainee time to think and to develop better habits. Of course, as the technology advances, so will the interactions, so that they can appear to occur in "real time". While the interview proceeds, the student learns to determine when it is appropriate to ask certain questions while observing and listening for indicative answers. If and when the student feels that Mike is the probable perpetrator of the crime or will not provide new information, the student may end the interview or suggest ending it. At that time, the student is asked to fill out a short on-screen questionnaire that forces the user to make a decision on the question of Mike's honesty.
As noted above, when the executable software of the invention is started, the user can select one of a number of options. The first is an online manual (106), which can be consulted and read or printed. Mike exhibits many of the different behaviors described in this manual. The user can also select a "User's Guide" (104) to learn how to use the software and receive instructions for a better interview. He or she can select "Background" (108) to learn the basic information of the case. Finally, the user can select "Mr. Simmen Interview" (112) to start the interview. The interview option allows the user to ask questions by selecting them in any of a number of ways. The student can enter a keyword, which presents a list of questions that contain the word. The user can also search through a list in at least 14 different categories. As the exchange between the user and Mike progresses, the system provides a list of follow-up questions and statements in the Monitoring window. These are obvious follow-ups consisting of both good and bad questions or statements, provided to spare the user from searching through a long list of questions to find the obvious next ones. Some of the follow-up questions make sense only at that point in the interview and will disappear after the next question is asked. These questions are marked with an asterisk (*). Sometimes Mike asks the student a question, in which case the student can choose to ignore the question or respond using one of the answers in the tracking window. The student can also choose to make supportive statements, helping to build the affinity. These statements are made available throughout the list of questions at appropriate times. A critical part of the system is making bad questions readily available. The student's poor performance ratings and restricted information are the result of using the bad questions.
Many students will select poor questions to see how Mike responds and observe how his answers differ. This experience also adds to the training. In the scenario used herein to demonstrate a preferred embodiment of the invention, the student investigates the theft of $43,000 from an automated teller machine
(ATM) in a bank. The subject of the interview, Mike, is a male loan officer who had the opportunity to take the money. The sample interview begins as shown in the logic flow diagram of Figure 2, which includes the main question loop 40, which is repeated for each question. Mike is initially assigned one of three guilt states before the first question loop begins (step 30). The guilt states are: Truthful (innocent), guilty and motivated by Revenge, or guilty and motivated by Financial pressure (states T, R, and F, respectively). The student researcher has several possibilities to explore, including revenge for having been passed over for a promotion, or financial pressure caused by a series of events, problems with alcohol, problems with drugs, gambling problems or problems with a girlfriend. The innocent subject also has all those same motives, but demonstrates a different pattern of responses to critical questions. The system remembers the sequence of questions and answers of the interview so that when the interview ends, the user can replay it. During replay, the system stops at the end of each response. Unusual behaviors, if any, are identified in the form of on-screen text during playback but not during the original interview. For example, a change in voice or an unusual movement may be noted. By design, some of the behaviors are subtle and a few are misleading. The steps involved in producing the specific parts of the scenario of the invention include the development of the master list of questions and a corresponding list of potential responses. Next, a storyboard is created to demonstrate the screen formats and the integration of the question-and-answer model. A speech synthesizer is used so that an audio version of the system can be extensively tested. Recording the voice-over of the questions follows this. Finally, the actor is used to record the multitude of video segments.
Once recorded, the segments are carefully edited before being digitized. The hours of video and audio data are then compressed for the available medium. For each usable response, the key start and stop frames in the video stream are identified and integrated with the audio questions so that the answers are seamless and synchronized. The goal of the script for the invention is to make Mike behave realistically and unpredictably, while allowing the student to ask a wide variety of questions. Even if the student detects significant clues early, it may be useful to continue the interview to identify topics for questioning. The interview script, which individualizes the basic fundamental program of the invention to make it applicable to an end application, consists of all available questions and answers. Both verbal and nonverbal behaviors are described for each of the more than 1200 responses in preferred embodiments. When an interview is started, a large number of questions become available to the student (41). All of these will make sense. Some of the other planned questions are not revealed because the specific information has not yet been developed or because they would imply information that would not be available in reality. These questions are not made accessible to the user until the appropriate information is revealed. When certain information comes to light, some questions may no longer make sense or their available answers may not make sense; these questions are removed. After each question is asked, it and similar questions are removed from the list of questions. The logical part of Mike's "brain" consists of the questions/statements and associated codes and the list of responses and associated codes. A very important feature of the system is these associated model codes contained in the script.
For each question, there is a list of keywords, a question code, an affinity value and a diagnostic value, incorporated as part of the question script as shown in Table 2. Table 2. Example of a Script Question.
The Question/Statement column in Table 2 provides the text of the question for the student researcher. This text can be a basic question, a statement made in response to Mike's response, or an answer to Mike's question. The question code is used by the software to identify the text and to match it to the possible answers. That is, the questions are assigned a number and the answers are assigned a matching number. The answers given to each question depend on the affinity between the subject and the student. Affinity is determined by the emotion model, which uses the affinity value associated with each question to determine the flow of emotion. Accordingly, each question is classified according to its affinity value as shown in Table 1. Certain questions or comments improve affinity even when they are of little value in providing information or in improving diagnostic information. The Mike-student affinity depends on the history of the interaction, the affinity before the question, and the last question asked. In this way, the sequence in which statements are selected is important. The value of a question can depend on the affinity, so that a question can be a good choice when the affinity is good or excellent and a bad choice when the affinity is poor or at its worst. Therefore, codes are provided in Table 3 to allow the script to specify different values for the questions depending on the affinity. Four codes are provided (P, A, G, B). Their names indicate the states in which the question should be asked. A "P" indicates that the question is excellent when the affinity is POOR and needs to be built, but is a waste when the affinity is good. An "A" question is valuable with AVERAGE affinity, but is not particularly useful with very poor or very good affinity. A "G" question is productive when the affinity is GOOD, and a "B" question is most valuable when the affinity is at its BEST, that is, when the affinity is in state 4 or 5.
The system developer can provide a factor to scale these codes. For example, the developer can enter (0.5)P to use half of all the affinity values, or (1.5)P to increase all values to 150%, up to the limit of 9.
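Assuming the value functions are stored as rows of per-state values (as in Table 3), the developer's scale factor might be applied as a simple clamped multiplication; the function name and rounding are assumptions for illustration:

```python
def scale_question_values(values, factor):
    """Scale each per-state question value by `factor`, clamping at the
    maximum value of 9 (e.g. factor 0.5 halves the values; 1.5 raises
    them to 150% of their value, up to the limit of 9)."""
    return [min(9, round(v * factor, 1)) for v in values]

poor_row = [8, 8, 4, 3, 3]                      # the POOR ("P") function from Table 3
half = scale_question_values(poor_row, 0.5)     # the (0.5)P case
boosted = scale_question_values(poor_row, 1.5)  # the (1.5)P case, clamped at 9
```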
Table 3. Question Value Functions Dependent on the Affinity State.
Function   State 1   State 2   State 3   State 4   State 5
POOR       8         8         4         3         3
AVERAGE
GOOD
BEST
When affinity is poor, Mike will provide short answers with much less information. When it is high, Mike is more likely to provide more valuable information. The affinity score is used to determine the affinity state. When writing the script, special care is taken to periodically provide the student with evidence of the affinity level. The student needs to be able to identify a deteriorated affinity and to have dialogue available that improves affinity. Some questions may hurt affinity or irritate Mike but will provide very useful information and may assist in detecting deception or identifying the motive. Therefore, each question is classified according to its diagnostic value, which is the main factor in determining the question value. The diagnostic value is influenced by the affinity value. At the end of the interview, these diagnostic question values are combined to produce a diagnostic score used to evaluate the interview. Once again, the sequence in which statements are selected helps establish a performance score for the user/student. If a sequence of statements with the appropriate affinity is selected, the diagnostic (performance) score will be increased. If the affinity state is poor, the same questions will result in a lower score. Table 4 describes how these question diagnostic values are determined.
If the student consistently asks good questions, that information produces a correspondingly higher evaluation of the student's technique. The student must also be able to recognize the clues and determine whether Mike is deceptive or truthful to obtain the best evaluation. Sometimes when Mike answers a question, that question and others are answered as well. Those questions are closed to reduce the number of questions the user must search through to find the desired ones. The "Close Question" column in Table 2 identifies the questions that are closed as a result of asking that question. To facilitate the user's selection of questions, most questions are associated with one or more of 14 categories. When the user selects a category, all questions associated with that category are displayed in the question window. The final column in Table 2 contains the keywords that are used to identify the categories associated with the questions.
Some questions are associated with several categories. For example, the question "How much money did your wife earn?" will be in the Personal, Financial and Family categories. Other questions are not associated with any category, so no keyword is provided. These are opened by one of the answers. For example, when Mike asks, "How are you?", the answer "Well, thank you" is not associated with any category and is only available immediately after Mike's question. For each question or comment, there is a series of possible answers. Table 5 provides examples of responses with associated codes. Each response requires 8 columns of information or codes.
In Table 5, the "Response Code" column provides a unique identifier for each response. The "Available for" column identifies the questions to which the answer can be given. The "Response" column contains the text of the response. Sometimes Mike asks a question or makes a comment that introduces new information, opening up new questions. The "Open Question" column identifies the Questions/Statements/Answers that are made available to the user as a result of the response. Often these questions/statements/answers are made available in the tracking window. The "Guilty TRF" column limits the use of the response to certain guilt states using binary codes in the logical database. The order is: (1) Mike is truthful, (2) Mike has a revenge motive, or (3) Mike has a financial motive. A 1 in the table indicates that the answer can be used when Mike is in the corresponding guilt state, and a 0 indicates that it cannot be used. For example, 111 indicates that Mike can give the answer in any guilt state, while 010 indicates that the answer can be used only when Mike has the Revenge "R" guilt state and not the Truthful "T" or Financial "F" state. Some nonverbal responses can be a result of stress or habit. For example, Mike can express stress nonverbally by rubbing his chin, maintaining extended eye contact, or covering his mouth. His way of speaking may slow or his voice soften. These behavioral clues include tone of voice and clarity of speech, as well as movements of the head, eyes, hands, arms, and legs. If these behaviors occur only when certain issues are addressed, then the researcher should be aware that these issues are sensitive and may require further discussion. The "Response Indicator" column in Table 5 identifies groups of behaviors frequently associated with deception. They are used in production to show the actor how to present the answer. A group consists of several simple behaviors that occur in a short interval.
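The "Guilty TRF" filter can be sketched as a lookup against the three-digit binary code, where the digits correspond to the Truthful, Revenge and Financial states in that order (the response IDs here are illustrative):

```python
GUILT_INDEX = {"T": 0, "R": 1, "F": 2}

def allowed_responses(responses, guilt_state):
    """Keep only responses whose TRF code has a 1 in the position
    matching Mike's current guilt state (e.g. '010' = Revenge only,
    '111' = usable in any guilt state)."""
    i = GUILT_INDEX[guilt_state]
    return [rid for rid, trf in responses if trf[i] == "1"]

responses = [("R201", "111"), ("R202", "010"), ("R203", "001")]
allowed_responses(responses, "R")  # R201 (any state) and R202 (Revenge only)
```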
These are used by the student to help detect deception. Table 5 shows how simple behavior codes are combined to form the groups, and these groups are coded with a unique code (C1, C2, ...). Simple movements in the script are described with a few behavior codes, while standard groups have their own codes.
Table 6. Group Code Example
A pattern of behavior for an individual being interviewed may be normal or simply a sign of stress. For another individual, the same behavior may occur only when he/she attempts deception. As part of a good interview, the student should identify normal, or baseline, behavior for Mike. That behavior will differ from one interview to another, so the student must identify the baseline behavior for the particular interview he/she conducts. To create this changing baseline behavior, many of Mike's responses are recorded five times using five different patterns, or behavior groups. When the interview starts, two of the five patterns are selected at random as the baseline behavior. These responses will often be seen over the course of a properly conducted interview. The other three groups will be seen if and only if Mike attempts to deceive. When Mike's response contains one of these five groups, it indicates deception when it is not part of his baseline behavior and does not indicate deception when it is part of his baseline behavior. The student is instructed to carefully baseline the subject's behavior in order to avoid mistaking normal behavior for deception. In Table 5, in the "Response Indicator" column, the word "all" is used to indicate that five versions have been recorded for that response. When such a response is used, Mike demonstrates one of the five behavior patterns. If he is innocent, it will be a baseline behavior; if he is deceptive, it will be a non-baseline behavior seen only when he is lying. The last two columns of Table 5 are "Deception Affinity" and "Truthful Affinity". These columns contain probability codes used to determine how likely each response is and to compute the changing probability of the corresponding response. The probability depends on the affinity with Mike, his veracity, and the probabilities of the other answers available at the time the question is asked.
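The baseline mechanism — two of the five behavior groups chosen at random as Mike's normal manner, with the other three appearing only under deception — might be sketched as follows (names are assumptions for illustration):

```python
import random

def pick_baseline(patterns, rng=random.Random()):
    """Select two of the five recorded behavior groups as this interview's
    baseline; the remaining three are reserved for deceptive responses."""
    baseline = set(rng.sample(patterns, 2))
    deceptive_only = [p for p in patterns if p not in baseline]
    return baseline, deceptive_only

def indicates_deception(pattern, baseline):
    # A behavior group signals deception only when it is not part of
    # this interview's baseline behavior.
    return pattern not in baseline

patterns = ["C1", "C2", "C3", "C4", "C5"]
baseline, deceptive_only = pick_baseline(patterns, random.Random(1))
```

Because the baseline is redrawn each session, a group that was innocuous in one interview can be a deception cue in the next, which is exactly why the student must re-baseline every time.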
Each of the two columns contains a string of probability codes consisting of five digits between 0 and 9. The five digits correspond to the five affinity states. The first digit corresponds to the probability at the worst affinity and the last digit to the best affinity. If the affinity state is 3 for a guilty Mike, the probability is the third digit of the Deception column. If that answer is common for Mike in state 3, the third digit of the probability code may be set to 7, 8 or 9. If the answer is unusual, that digit will be set to 0, 1 or 2. Some of the possible answers may be removed because they are inappropriate for the guilt state or because they are inconsistent with another of Mike's statements. The probability codes of the possible answers are added, and then each one is divided by that total to obtain a probability for each response. Finally, these probabilities, along with a random number, are used to select the answer. Other factors are used to evaluate the interview. These include the correct determination of Mike's status (truthful or deceptive) and clue detection. Some of Mike's responses reveal attempts at deception and must be identified as indicators of deception. Other answers are unusual for a deceptive person and indicate that Mike is truthful. In a typical interview, both types of clues will be observed, but most of the clues will reveal the truth. The appropriate identification of all the clues is rewarded with points. The following is a detailed description of the scoring algorithm: 1) If the student correctly determines that
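Looking up a response's probability code from the five-digit strings might look like the following sketch; the dictionary keys follow the Deception/Truth columns of Table 5, and the worst affinity is the first digit:

```python
def probability_code(response, affinity_state, deceptive):
    """Return the 0-9 probability code for a response given the current
    affinity state (1 = worst .. 5 = best) and whether Mike is deceptive.

    `response` holds the two five-digit code strings from Table 5.
    """
    codes = response["deception"] if deceptive else response["truth"]
    return int(codes[affinity_state - 1])

# A response that a guilty Mike gives often at good affinity but a
# truthful Mike gives only at poor affinity:
resp = {"deception": "01789", "truth": "97210"}
probability_code(resp, affinity_state=3, deceptive=True)  # third digit -> 7
```

The codes looked up this way are then summed and normalized across the available responses, as described above, before the pseudo-random draw.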
Mike is truthful, he receives 30 Correct Points. If the student correctly determines that Mike is deceptive (easier, as more clues are provided), he receives 20 Correct Points. 2) Take the average question diagnostic value for the 20 best diagnostic questions and the 20 worst diagnostic questions and then calculate: Diagnostic Value Points = 4 * [(Avg_Question_Value_Top_20 - 4.5) + 2 * (Avg_Question_Value_Bottom_20 - 4.5)]. The rationale for the factor of 2 is that more points are lost for asking bad questions than are gained by asking good ones. Only the 20 most extreme questions are used, to discount the effect of routine neutral questions. 3) For each follow-up question with an affinity value and a question value greater than 5, add a point to the Tracking Points. In addition to rewarding the student for good questions, this rewards a logical stream of questions. 4) For each correctly identified clue, add two points to the Clue Points; if no clue is present and the student selects truth or deception, he loses a Clue Point. If Mike provides a deception clue and the student selects truthful, he loses 2 Clue Points. If Mike provides a truth clue and the student selects deceptive, he loses 2 Clue Points. 5) Affinity Points = 8 * Average Affinity Value. 6) Total Score = Correct Points + Diagnostic Value Points + Tracking Points + Clue Points + Affinity Points. 7) If the student asked fewer than 100 questions and the Total Score is greater than (0.8 * Number of Questions), then Total Score = (0.8 * Number of Questions). This last stage prevents a high score from being achieved merely by asking a good diagnostic question and a good affinity question and getting the subject's status right. It rewards the student for taking the time to reassure the subject and develop topics for questioning. The program has four levels of difficulty: (1) beginner, (2) intermediate, (3) advanced and (4) professional.
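The scoring steps above can be sketched as follows — a minimal sketch with assumed variable names, assuming the truncated "-.5" term in the printed formula is -4.5 (symmetric with the bottom-20 term) and that the Clue Points of step 4 are included in the step-6 sum, which the printed step 6 omits:

```python
def total_score(correct, diag_top20_avg, diag_bottom20_avg,
                followup_points, clue_points, avg_affinity, num_questions):
    """Combine the scoring components described in steps 1-7."""
    # Step 2: bad questions (bottom 20) are penalized at twice the rate
    # that good questions (top 20) are rewarded.
    diagnostic = 4 * ((diag_top20_avg - 4.5) + 2 * (diag_bottom20_avg - 4.5))
    # Step 5: affinity points.
    affinity = 8 * avg_affinity
    # Step 6: sum the components.
    score = correct + diagnostic + followup_points + clue_points + affinity
    # Step 7: with fewer than 100 questions, cap the score at 0.8 per question.
    if num_questions < 100:
        score = min(score, 0.8 * num_questions)
    return score

# A short interview (80 questions) hits the step-7 cap of 0.8 * 80 = 64.
total_score(correct=30, diag_top20_avg=7.0, diag_bottom20_avg=5.0,
            followup_points=10, clue_points=6, avg_affinity=6.0,
            num_questions=80)
```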
Fewer clues are provided at the more challenging levels. This is achieved by multiplying the probability of answers that contain clues by a factor greater than one for the beginner level and by factors less than one for the advanced and professional levels. When a question is selected, it is read aloud before Mike responds, reinforcing the question and allowing the student to observe any delay in Mike's responses. Another option allows the user to have the questions read using either a male or a female voice. The system stores the sequence of questions and answers of the interview so that the complete interview can be replayed and reexamined. During the responses, the system identifies and records any unusual behavior by Mike. The invention allows the modification or adjustment of Mike's basic personality. This is achieved by varying Mike's personality parameters through a specialized entry control screen. Through this device, the model for the subject, Mike, is manipulated to meet the requirements of the user and the application. Figure 8 illustrates the control panel screen. The present invention is not limited to the previous example. The apparatus, system and method of the present invention can be used in any area where training in interviewing techniques or the building of interviewing skills is needed. Other examples include the training of doctors, nurses, etc., the training of pharmaceutical sales representatives, the "just say no" drug program, training for teachers, etc. The method can be implemented not only in a computer system using CD-ROM, but by any other electronic system, such as over the Internet, on DVD, etc.
Specifically, applications include training for raids on clandestine drug laboratories, closed-circuit testing, cultural awareness training, EEOC and affirmative action training, drug education, law enforcement sensitivity to crime victims, doctors' interviews with patients, job application interviews, employee interview training, social matters such as foster care interviews, interactive training between officers and enlisted staff, and sexual harassment training, to name a few. Of particular importance is the ability of these systems to quantitatively assess the user's ability by providing numerical results free of bias either for or against the person being evaluated. For example, a doctor may need to achieve a certain level of ability in interviewing a patient before being board certified. A law enforcement agent may need to achieve a certain level of ability in dealing with the public before being promoted. A diplomat may need to demonstrate awareness of cultural differences before being appointed to a position, or an employee may need to demonstrate an ability to deal with a problem employee prior to promotion to management. The present invention can also be applied to computer games involving the interaction of a person in the game with players, where the computer is in fact a player. It should be noted that the invention covers systems that go beyond the reproduction of a video when a question is asked. One key is that each user input affects more than the selection of a single response or video. Another key is that the videos are linked to simulate human responses. The foregoing is considered illustrative only of the principles of the invention.
In addition, since many modifications and changes will readily occur to those skilled in the art, it is not desired to limit the invention to the exact construction and applications shown and described, and accordingly, all suitable modifications and equivalents may be resorted to that fall within the scope of the invention and the appended claims and their equivalents.
Claims (48)
- CLAIMS 1. An apparatus for developing interpersonal skills, comprising: a plurality of video vignettes that simulate a person; a plurality of statements to be selected by a user of the apparatus; a plurality of audio responses for articulation by the simulated person; and a logical means for interrelating each of the statements to be selected by the user, the audio responses and the video vignettes.
- 2. An apparatus for developing interpersonal skills as defined by Claim 1, wherein the logical means to interrelate each of the audio responses, the video vignettes, and the plurality of statements to be selected by the user comprises: a personality profile emulator; a video selection network controlled by the personality profile emulator to select one of the video vignettes in response to those selected from the plurality of statements; and an audio selection network controlled by the personality profile emulator to select one of the audio responses in response to those selected from the plurality of statements.
- 3. An apparatus for developing interpersonal skills as defined by Claim 2, wherein the video selection network includes means controlled by the personality profile emulator to select one of the video vignettes in response to the failure of the user to select one of the plurality of statements within a predetermined period of time, and the audio selection network includes means controlled by the personality profile emulator to select one of the audio responses in response to the failure of the user to select one of the plurality of statements within the predetermined time period.
- 4. An apparatus for developing interpersonal skills as defined by Claim 2, wherein the personality profile emulator includes means for adjusting the interrelation functions of the logical means, further comprising: a plurality of alternate statements to be selected by the user of the apparatus, compiled from the plurality of statements; the alternate statements being selected from the plurality of statements according to a criterion established by the personality profile emulator in response to the history of the audio responses selected through the audio selection network.
- 5. An apparatus for developing interpersonal skills as defined by Claim 2, wherein the personality profile emulator includes means for adjusting the interrelation functions of the logical means, further comprising: a plurality of alternate statements to be selected by the user of the apparatus, compiled from the plurality of statements; the alternate statements being selected from the plurality of statements according to a criterion established by the personality profile emulator in response to the history of the video vignettes selected by the video selection network.
- 6. An apparatus for developing interpersonal skills as defined by Claim 2, wherein the personality profile emulator is modified in response to those selected by the user from the plurality of statements to thereby alter the interrelation functions of the logical means.
- 7. An apparatus for developing interpersonal skills as defined by Claim 2, further comprising means for establishing a performance score for the user of the apparatus as a function of a history of those selected from the plurality of statements.
- 8. An apparatus for developing interpersonal skills as defined by Claim 7, comprising means for establishing a performance score for the user of the apparatus as a function of the sequence of selection of the plurality of statements.
- 9. An apparatus for developing interpersonal skills as defined by Claim 1, comprising: a personality profile emulation means for adjusting the interrelation functions of the logical means; and a secondary list of the plurality of statements to be selected by the user of the apparatus, compiled from the plurality of statements and selected according to a criterion established by the personality profile emulation means in response to the interrelation, created by the logical means, of the audio responses and the plurality of statements to be selected by the user.
- 10. An apparatus for developing interpersonal skills as defined by Claim 9, wherein the plurality of statements comprising the secondary list is selected according to a criterion established by the personality profile emulation means in response to the interrelation, created by the logical means, of the video vignettes and the statements to be selected by the user.
- 11. An apparatus for developing interpersonal skills as defined by Claim 10, wherein the personality profile emulation means is modified in response to the statements selected by the user from the lists of the plurality of statements to thereby alter the interrelation functions of the logical means.
- 12. An apparatus for developing interpersonal skills as defined by Claim 10, comprising means for establishing a performance score for the user of the apparatus as a function of the statements selected by the user.
- 13. An apparatus for developing interpersonal skills as defined by Claim 12, comprising means for establishing a performance score for the user of the apparatus as a function of the sequence of selection of the statements selected by the user.
- 14. A method for creating a system for developing interpersonal skills, comprising the steps of: creating a plurality of video vignettes simulating a person; creating a plurality of statements to be selected by the user of the apparatus; creating a plurality of audio responses for articulation by the simulated person; and creating a logical means to interrelate each of the audio responses, the video vignettes, and the statements to be selected by the user.
- 15. A method for creating a system for developing interpersonal skills as defined by Claim 14, wherein the step of creating a logical means to interrelate each of the audio responses, the video vignettes, and the statements to be selected by the user includes the steps of: creating an interrelated network that links the video vignettes with the statements to be selected by the user according to a personality profile; creating an interrelated network that links the audio responses with the statements to be selected by the user according to the personality profile; and creating an interrelated network that links the video vignettes with the audio responses.
- 16. A method for creating a system for developing interpersonal skills as defined by Claim 15, wherein the step of creating a logical means to interrelate each of the audio responses, the video vignettes, and the statements to be selected by the user includes the steps of: creating an interrelated network that links the video vignettes with the absence of statements to be selected by the user according to the personality profile; and creating an interrelated network that links the audio responses with the absence of the statements to be selected by the user according to the personality profile.
- 17. A method to develop interpersonal skills, which includes the steps of: selecting a statement from a list of prepared statements; observing the facial expressions of a simulated person in a video presentation; observing the body language of the simulated person in the video presentation; listening to an audio response from the simulated person; and selecting a statement from the list of prepared statements in response to the observed facial expressions, body language, and audio response.
- 18. A method to develop interpersonal skills as defined by Claim 17, which includes the step of repeating the steps of Claim 17 until making a determination regarding the veracity of the simulated person.
- 19. A method to develop interpersonal skills as defined by Claim 18, which includes the step of signaling the program creating the video presentations and the audio responses of the determination regarding the veracity of the simulated person.
- 20. A method for developing interpersonal skills as defined by Claim 17, which includes the step of signaling the program creating the video presentations and audio responses if the video presentation constitutes a clue regarding the veracity of the simulated person.
- 21. A method for developing interpersonal skills as defined by Claim 17, which includes the step of signaling the program creating the video presentations and audio responses if the audio response constitutes a clue regarding the veracity of the simulated person.
- 22. An apparatus for developing interpersonal skills, comprising: a plurality of video vignettes that simulate a person; a list of a plurality of statements to be verbalized by the user of the apparatus; a plurality of audio responses for articulation by the simulated person; and a logical means to interrelate each audio response, the video vignettes, and the statements to be verbalized by the user.
- 23. An apparatus for developing interpersonal skills as defined by Claim 22, wherein the logical means to interrelate each of the audio responses, the video vignettes, and the statements to be verbalized by the user comprises: a personality profile emulation; a video network that links the video vignettes with the statements to be verbalized by the user according to the personality profile emulation; an audio network that links the audio responses with the statements to be verbalized by the user according to the personality profile emulation; and means to link the video vignettes with the audio responses according to the personality profile emulation.
- 24. An apparatus for developing interpersonal skills as defined by Claim 23, wherein the video network includes means that link the video vignettes with the absence of the statements to be verbalized by the user according to the personality profile emulation; and the audio network includes means to link the audio responses with the absence of the statements to be verbalized by the user according to the personality profile emulation.
- 25. An apparatus for developing interpersonal skills as defined by Claim 24, comprising: a personality profile emulation means for adjusting the interrelation functions of the logical means; and a secondary list of a plurality of statements to be verbalized by the user of the apparatus, compiled from the plurality of statements; the secondary list of a plurality of statements being selected from the list of a plurality of statements according to a criterion established by the personality profile emulation means in response to the interrelation, created by the logical means, of the audio responses and the statements to be verbalized by the user.
- 26. An apparatus for developing interpersonal skills as defined by Claim 25, wherein the plurality of statements comprising the secondary list is selected according to a criterion established by the personality profile emulation means in response to the interrelation, created by the logical means, of the video vignettes and the plurality of statements to be verbalized by the user.
- 27. An apparatus for developing interpersonal skills as defined by Claim 26, wherein the personality profile emulation means is modified in response to the user's verbalization of the plurality of statements from the lists to thereby alter the interrelation functions of the logical means.
- 28. An apparatus for developing interpersonal skills as defined by Claim 27, comprising means for establishing a performance score for the user of the apparatus as a function of the statements selected from the plurality of statements verbalized by the user.
- 29. An apparatus for developing interpersonal skills as defined by Claim 28, comprising means for establishing a performance score for the user of the apparatus as a function of the sequence of verbalization of the statements selected from the plurality of statements verbalized by the user.
- 30. A system for developing interpersonal skills as defined by Claim 22, comprising: a personality profile emulation means for adjusting the interrelation functions of the logical means; and a secondary list of a plurality of statements to be verbalized by the user of the apparatus, compiled from the plurality of statements and selected according to a criterion established by the personality profile emulation means in response to the interrelation, created by the logical means, of the audio responses and the statements to be verbalized by the user.
- 31. A system for developing interpersonal skills as defined by Claim 30, wherein the plurality of statements comprising the secondary list is selected according to a criterion established by the personality profile emulation means in response to the interrelation, created by the logical means, of the video vignettes and the statements to be verbalized by the user.
- 32. A system for developing interpersonal skills as defined by Claim 31, wherein the personality profile emulation means is modified in response to the user's verbalization of the statements from the lists to thereby alter the interrelation functions of the logical means.
- 33. A system for developing interpersonal skills as defined by Claim 31, comprising means for establishing a performance score for the user of the apparatus as a function of the statements selected from the plurality of statements verbalized by the user.
- 34. A system for developing interpersonal skills as defined by Claim 33, comprising means for establishing a performance score for the user of the apparatus as a function of the sequence of verbalization of the statements selected from the plurality of statements verbalized by the user.
- 35. A method for creating a system for developing interpersonal skills, comprising the steps of: creating a plurality of video vignettes emulating a person; creating a plurality of statements to be verbalized by the user of the apparatus; creating a means to recognize verbal statements from the plurality of statements; creating a plurality of audio responses for articulation by the simulated person; and creating a logical means to interrelate each of the audio responses, the video vignettes, and the plurality of statements to be verbalized by the user.
- 36. A method for creating a system for developing interpersonal skills as defined by Claim 35, wherein the step of creating a logical means to interrelate each audio response, the video vignettes, and the plurality of statements to be verbalized by the user includes the steps of: creating an interrelated network to link the video vignettes with verbal statements recognized according to a personality profile; creating an interrelated network to link the audio responses with the verbal statements recognized according to the personality profile; and creating an interrelated network to link the video vignettes with the audio responses.
- 37. A method for creating a system for developing interpersonal skills as defined by Claim 36, wherein the step of creating a logical means to interrelate each audio response, the video vignettes, and the plurality of statements to be verbalized by the user includes the step of: creating an interrelated network to link the video vignettes and audio responses with the absence of recognition of statements verbalized according to the personality profile.
- 38. A method to develop interpersonal skills, which includes the steps of: verbalizing a statement including one or more keywords selected from a list of prepared statements that include the keywords; observing the facial expression of a simulated person in a video presentation; observing the body language of the simulated person in the video presentation; listening to an audio response from the simulated person; and verbalizing a statement including one or more keywords selected from the list of prepared statements in response to the observed facial expression, body language, and audio response.
- 39. A method to develop interpersonal skills as defined by Claim 38, which includes the step of repeating the steps of Claim 38 until making a determination regarding the veracity of the simulated person.
- 40. A method for developing interpersonal skills as defined by Claim 39, which includes the step of signaling the program creating the video presentations and the audio responses of the determination regarding the veracity of the simulated person.
- 41. A method for developing interpersonal skills as defined by Claim 38, which includes the step of signaling the program creating the video presentations and audio responses if the video presentation constitutes a clue regarding the veracity of the simulated person.
- 42. A method for developing interpersonal skills as defined by Claim 38, which includes the step of signaling the program creating the video presentations and audio responses if the audio response constitutes a clue regarding the veracity of the simulated person.
- 43. A system for developing interpersonal skills, comprising: memory means that include a plurality of statements; monitor means for video presentation; and keyboard means for selecting one of the plurality of statements in response to a visual indication on the monitor means or an audio indication; the video presentation responding to the statement selected from the plurality of statements.
- 44. A method to develop interpersonal skills with a simulated subject using electronic means, the method including the steps of: initializing a program including weighted questions; assigning quantitative emotional values to affinity states, the quantitative emotional values for the affinity states summing to more than one; and affecting the flow of emotional values from one affinity state to another based on stimuli derived from the questions asked.
- 45. A method according to Claim 44, wherein the affinity states include worst, bad, neutral, good, and excellent.
- 46. A method according to Claim 45, wherein a worsening of the affinity state occurs more readily than an improvement of the affinity state.
- 47. A method according to Claim 46, wherein a stimulus value is computed based on an average of the affinity values of all the previous questions asked during the interview and the affinity value of the current question, employing the following algorithm: Sr = 0.8 [memory * Sr-1 + (1 - memory) * Sq] + 0.2 (average affinity value), where Sr is the stimulus value, Sr-1 is the stimulus before the last question, and Sq is the average affinity value of all the previous questions and the current question.
- 48. A system, including a computer, a monitor, and a keyboard, for developing interpersonal skills, including initiating an interview and qualifying emotional values that are assigned to affinity states, a stimulus value being computed based on an average of the affinity values of all the questions asked during the interview and the affinity value of the current question, the system using the following algorithm: Sr = 0.8 [memory * Sr-1 + (1 - memory) * Sq] + 0.2 (average affinity value), where Sr-1 is the stimulus before the last question and Sq is the average affinity value of all the previous questions and the current question, to compute the stimulus value Sr.
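As a rough illustration only, the stimulus update recited in claims 47 and 48 can be sketched in code. The function and variable names below are assumptions introduced for readability; the claims specify only the formula itself, not any implementation.

```python
def stimulus(memory, prev_stimulus, avg_question_affinity, avg_affinity):
    """Sketch of the stimulus update Sr = 0.8[memory*Sr-1 + (1-memory)*Sq]
    + 0.2(average affinity value), per claims 47-48.

    memory                -- weight (0..1) given to the previous stimulus
    prev_stimulus         -- Sr-1, the stimulus before the last question
    avg_question_affinity -- Sq, average affinity value of all previous
                             questions and the current question
    avg_affinity          -- running average affinity value
    """
    return 0.8 * (memory * prev_stimulus
                  + (1 - memory) * avg_question_affinity) \
           + 0.2 * avg_affinity
```

When memory is high, the new stimulus tracks the previous one closely; when memory is low, the current question's affinity dominates the bracketed term, with the running average contributing a fixed 20 percent in either case.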
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US60/109,974 | 1998-11-25 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
MXPA01005215A (en) | 2001-12-04 |