US20200163605A1 - Automated detection method for insider threat - Google Patents
Automated detection method for insider threat
- Publication number: US20200163605A1 (application US 16/691,637)
- Authority: United States
- Prior art keywords: subject, behavioral biometric, readable medium, transitory computer, input device
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A61B 5/164—Lie detection (under A61B 5/16—Devices for psychotechnics; testing reaction times; devices for evaluating the psychological state)
- G06F 21/316—User authentication by observing the pattern of computer usage, e.g., typical user behaviour
- G06F 21/32—User authentication using biometric data, e.g., fingerprints, iris scans or voiceprints
- G06F 21/40—User authentication by quorum, i.e., whereby two or more security principals are required
Definitions
- said reference input device usage characteristic is an average input device usage characteristic of a plurality of subjects for the same question of interest. Such a reference can be built from subjects who answered the question truthfully, from subjects who did not answer truthfully, or from both; in the last case, the method compares the subject's input device usage characteristic against each reference to determine which it more closely resembles.
- said input device usage characteristic comprises input device movement between the starting position of the input device and the answer selected by the subject on the display screen.
- said input device usage characteristic is based on at least one of speed, total distance traveled, initial direction of movement, total response time, change in direction on the x-axis, change in direction on the y-axis, acceleration, idle time, area under the curve, amount of deviation, reaction time, applied pressure, and changes in angle.
- FIG. 1 is a schematic representation showing combined mouse movement resulting when multiple answers catch a respondent's attention.
- FIG. 2 is an example of an insider threat question.
- FIG. 3 is an example of one particular embodiment of the ADMIT response analysis framework.
- FIG. 4 is a graph of X-location by real time for guilty participants in a simulated ADMIT test.
- FIG. 5 is a graph of X-locations by normalized time for guilty participants in a simulated ADMIT test.
- FIG. 6 is a graph of Y-locations by real time for guilty participants in a simulated ADMIT test.
- FIG. 7 is a graph of Y-locations by normalized time for guilty participants in a simulated ADMIT test.
- FIG. 8 is a graph of velocity by real time for guilty participants.
- FIG. 9 is a bar graph showing mean velocity for guilty participants on control vs. key questions.
- FIG. 10 is a graph showing angles by real time for guilty participants.
- FIG. 11 is a graph of X-locations by normalized time for key items.
- FIG. 12 is a graph of Y-locations by normalized time for key items.
- FIG. 13 is a graph of X-location by normalized time for control items.
- FIG. 14 is a graph of Y-location by normalized time for control items.
- In general, input device (e.g., mouse and/or keyboard, other input devices known to one skilled in the art, and other input devices that may be developed) usage features or characteristics are diagnostic of insider threats in sensitive questions (i.e., questions about insider threat activities or other key or security questions) administered on a computer.
- Some aspects of the invention are based on the discovery by the present inventors that when people see two or more answers to a question that catch or draw their attention (e.g., a truthful answer and a deceptive answer that the person will ultimately choose), the mind automatically starts to program motor movements toward both answers simultaneously. To eliminate one of the motor movements (e.g., eliminate the movement toward admitting to an insider threat activity), the mind begins an inhibition process so that the target movement can emerge.
- Inhibition is not immediate, however, but rather occurs over a short period of time depending on the degree to which both answers catch the respondent's attention (up to ~750 milliseconds or more). If movement begins before inhibition is complete, the movement trajectory is a product of motor programming toward both answers. See FIG. 1.
- For guilty (deceptive) respondents, the mouse trajectory is thus drawn toward the incriminating answer (measured on an x, y axis) on its way toward the non-incriminating (e.g., deceptive) answer.
- For innocent (truthful) respondents, the incriminating answer generally does not catch their attention to the same degree, and thus inhibition occurs more quickly and their mouse movements are less biased toward the opposite response.
- being deceptive normally causes an increase in arousal and stress.
- arousal and stress cause neuromotor noise that interferes with people's fine motor skills (e.g., using the hand and fingers to move a mouse or use a touch screen to answer a question).
- the precision of mouse movements decreases when people are being deceptive, ceteris paribus.
- when moving toward the intended target (e.g., a deceptive answer in the ADMIT survey), people automatically and subconsciously compensate for this decrease in precision by reducing speed and making more adjustments to their movement trajectories based on continuous perceptual input.
- in ADMIT surveys, the present inventors have found that people exhibit slower velocity, more adjustments (x and y flips), greater distance, and more hesitancy when being deceptive than when telling the truth.
- the present inventors have also found that people guilty of insider threat activities display different mouse movements on non-incriminating questions compared to innocent people. In anticipation of a question that might incriminate them, guilty people show a task-induced search bias: before answering a question, they take a fraction of a second longer to evaluate the question. After seeing that the question is not relevant, they then move more quickly to the truthful answer than innocent respondents.
- Table 1 summarizes examples of mousing features that can be used to differentiate between how guilty insiders and innocent respondents respond to ADMIT questions. In some embodiments at least four or more, typically at least eight or more, often at least ten or more, still more often at least fifteen or more, and most often at least twenty or more of these characteristics are determined and analyzed. Still in other embodiments, all of the input device usage characteristics in Table 1 are determined and analyzed.
- Table 1:
  - AUC Minimum: the minimum area under the curve (AUC)
  - Overall Distance: the total distance traveled by the mouse trajectory
  - Additional Distance: the distance a user's mouse cursor traveled on the screen minus the distance that would have been required to travel along the idealized response trajectory (i.e., straight lines between the user's mouse clicks)
  - Distance Buckets: distance traveled in each 75 ms interval
  - X Flips: the number of reversals of direction on the x-axis
  - Y Flips: the number of reversals of direction on the y-axis
  - Maximum Deviation: the largest perpendicular deviation between the actual trajectory and its idealized response trajectory (i.e., straight lines between the user's mouse clicks)
  - Speed Buckets: average speed in each 75 ms interval
  - Overall Speed: average overall speed
  - Idle Time: a change in time greater than 200 ms with no movement is counted as idle time
  - Idle Time on Same Location: a change in time without a change in location, meaning an event other than movement triggered a recording (e.g., leaving the page)
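For illustration only (not part of the patent text), a minimal Python sketch of how several Table 1 features could be computed from a trajectory of (x, y) samples with millisecond timestamps; the 200 ms idle threshold follows Table 1, while the function name and signature are assumptions:

```python
import numpy as np

def mouse_features(xs, ys, ts):
    """Compute a subset of the Table 1 mousing features from screen
    coordinates xs, ys sampled at times ts (milliseconds)."""
    xs, ys, ts = (np.asarray(a, dtype=float) for a in (xs, ys, ts))
    dx, dy, dt = np.diff(xs), np.diff(ys), np.diff(ts)
    step = np.hypot(dx, dy)                      # per-sample distance

    overall_distance = step.sum()
    overall_speed = overall_distance / (ts[-1] - ts[0])

    # Idealized response trajectory: a straight line from start to end.
    ideal = np.hypot(xs[-1] - xs[0], ys[-1] - ys[0])
    additional_distance = overall_distance - ideal

    def flips(d):
        """Number of reversals of movement direction along one axis."""
        signs = np.sign(d[d != 0])
        return int(np.sum(signs[1:] != signs[:-1]))

    # Largest perpendicular deviation from the straight start-to-end line
    # (assumes the trajectory ends away from its starting point).
    line = np.array([xs[-1] - xs[0], ys[-1] - ys[0]])
    pts = np.stack([xs - xs[0], ys - ys[0]], axis=1)
    cross = line[0] * pts[:, 1] - line[1] * pts[:, 0]  # 2-D cross product
    max_deviation = np.abs(cross).max() / np.linalg.norm(line)

    # Idle time: gaps longer than 200 ms with no movement (per Table 1).
    idle_time = dt[(dt > 200) & (step == 0)].sum()

    return {"overall_distance": overall_distance,
            "additional_distance": additional_distance,
            "overall_speed": overall_speed,
            "x_flips": flips(dx), "y_flips": flips(dy),
            "max_deviation": max_deviation, "idle_time_ms": idle_time}
```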
- ADMIT asks specially designed questions on a computer about illicit behavior and requires respondents to answer by admitting to or denying the behavior by dragging a button on the bottom of the screen to ‘yes’ or ‘no’.
- survey items are normally crafted to conform to at least two categories: benign questions that can be used to establish baseline behavioral data, and sensitive questions whose answers the organization is interested in.
- Responses are analyzed both with within-subject comparisons and by comparing responses with the aggregate responses of other employees. For instance, FIG.
- Mouse movements are captured while the respondent is answering the question and compared to an individual baseline (how the individual moves the mouse on truthful responses) and/or to a population baseline (how other people normally move the mouse on this question) to detect deception.
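The patent does not specify how this comparison is computed; as one assumed illustration, a response feature can be z-scored against both baselines and flagged when it deviates strongly from either:

```python
import numpy as np

def deception_flag(value, individual_baseline, population_baseline, cutoff=2.0):
    """Compare one response feature (e.g., overall speed on a key question)
    to the subject's own truthful responses and to other respondents'
    responses to the same question. `cutoff` is an assumed z-score threshold."""
    def z(v, ref):
        ref = np.asarray(ref, dtype=float)
        sd = ref.std(ddof=1)
        return (v - ref.mean()) / sd if sd > 0 else 0.0

    z_self = z(value, individual_baseline)   # within-subject comparison
    z_pop = z(value, population_baseline)    # between-subject comparison
    return {"z_self": z_self, "z_pop": z_pop,
            "flagged": abs(z_self) > cutoff or abs(z_pop) > cutoff}
```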
- Survey items have acceptable and non-acceptable ranges of responses. See FIG. 3. These acceptable response ranges are determined by the individual organization involved in the survey. For example, suppose the question were asked: "A crime was committed in our organization. If you committed the crime, you will know what act was performed. Did you perform any of the following crimes?" The system would then list six options (e.g., unauthorized disclosure of classified information, theft of credit card numbers, theft of hardware, etc.). For the key item (the item of interest), a threshold is set on abnormal mouse movements; any abnormal mouse movement above this threshold is deemed unacceptable (and anything below, acceptable).
- a secure management dashboard can be implemented to visualize (e.g., in real-time) the results and execute policy-driven responses to threats;
- Probabilities of deception can be calculated based on multi-tiered testing; and/or
- Different features of deception are extracted for different devices (desktop, iPad, etc.).
- The ADMIT system introduces a lightweight and easily deployable system for quickly identifying potential threats. Many agencies go to significant lengths, and at great expense, to identify potential threats. Unfortunately, current techniques used to identify potential threats are labor intensive, laden with bias, and frequently miss potential threats. For instance, polygraphs are often used for initial and ongoing employee screening, but are extremely problematic for widespread deployment. A single polygraph test requires hours of pre-planning, pre-test interviewing, testing, and post-test review, costing hours of productive time and thousands of dollars per administration. Other methods such as conducting face-to-face interviews (which must be done individually and at great expense) or traditional surveys (which are cheap to deploy but easy to subvert) are equally constrained. ADMIT can be deployed simultaneously to thousands of employees at minimal expense.
- ADMIT improves upon previous methods in at least the following ways: (i) it is easy and inexpensive to deploy to a large number of employees simultaneously; (ii) a data capture process runs in the background during survey administration, while analysis can take place on a separate and secure remote system; (iii) behavioral sensing data (e.g., keyboard and mouse usage) are gathered in an unobtrusive manner with no adverse effect on the user; (iv) users need not be aware of the data collection that is taking place; (v) unlike systems that rely on linguistic features, the system's behavioral analysis approach is language agnostic (i.e., the detection methodology will work with English, Spanish, Arabic, etc.) because it relies on system usage patterns rather than message content; (vi) survey items or questions are specifically constructed to identify behaviors of interest; i.e., ADMIT can be deployed in a broad range of
- ADMIT can be used to confirm the individuals that are operating within a given acceptable range of behavior and system usage. It also provides a powerful tool for management to proactively identify and investigate those individuals who respond abnormally to targeted questions. ADMIT provides unprecedented and innovative capabilities to identify those individuals that are potentially a threat to the effective operation of an organization.
- a mock theft study was conducted using a multi-level experiment design (2). Half of the participants were randomly assigned to commit an insider threat activity (steal sensitive information). All participants—both innocent and guilty—then completed a computer-based screening test (an ADMIT test) that asked questions about the insider threat activity (i.e., key questions) and benign questions not related to the activity (i.e., control questions).
- a task was designed to mimic a realistic insider threat scenario. Although the task was sanctioned (i.e., participants were told to perform the illicit act and appear innocent), it was designed to elicit suspicion and anxiety. Participants registered for an appointment online and were randomly assigned to an innocent or guilty condition. Upon arriving at the experiment, the participant was given a consent form to review and sign.
- Participants in the guilty condition were given a folded piece of paper and a jump drive. They were instructed to go in the elevator (on the first floor), press the button to the fourth floor, and only then read the paper.
- the instructions on the paper are shown in Table 3.
- Participants in the innocent condition were also given a folded piece of paper. They were instructed to go in the elevator (on the first floor), press the button to the fourth floor, and then read the paper. Similar to the guilty participants, the paper asked participants to go to the Management Information Systems department. However, rather than stealing information, innocent participants were asked to pick up a piece of paper (a news article) at the front desk and then go back to the room on the first floor. Upon arriving at the room, an experiment facilitator explained that a theft occurred and the participant was a suspect for that theft. The participant was then asked to take an investigation screening (ADMIT test described below).
- the ADMIT test consisted of a concealed information test (CIT) adapted to the ADMIT format and then several follow-up questions. All questioning was presented on a computer.
- the CIT is the most scientifically validated polygraph questioning technique (Ben-Shakhar and Elaad 2003; Council 2003; Fiedler et al. 2002).
- the objective of the CIT is to detect if a person has ‘inside’ or ‘concealed’ knowledge of an activity (e.g., stealing the credit card numbers) (Ben-Shakhar and Elaad 2003).
- the person being interviewed is presented a question or a stimulus about a specific target (e.g., a crime).
- the interviewer verbally asks the interviewee a question such as, “Very important information was stolen today from a computer. If you committed the theft, you will know what was stolen. Did you steal any of the following information today?”
- the interviewer then recites five to six plausible answers. For example, the interviewer might recite: 'passwords', 'credit card numbers', 'exam key', 'social security numbers', 'health records', or 'encryption codes'. Usually, the interviewee is asked to verbally repeat the possible answer and then respond 'yes' or 'no'. One of the plausible answers should relate directly to the target under investigation. This is referred to as the key item.
- Electrodermal response data was typically used in CIT polygraph testing. It was used here to compare to and validate the procedure for detecting insider threats based on mouse movements.
- ADMIT performs several transformations and computations to facilitate analysis as follows: (i) Space rescaling—All mouse trajectory data were rescaled to a standard coordinate space (a 2 × 1.5 rectangle that is compatible with the aspect ratio of the computer screen). The top left corner of the screen corresponds to (−1, 1.5), and the bottom right corner of the screen corresponds to (1, 0); thus the starting position is at (0, 0). (ii) Remapping—All data were remapped so the mouse started at position (0, 0). Although the user must click a button at the middle-bottom of the screen to see the next item, the button's size allows variations to exist (e.g., someone might actually click on the right side of the button).
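A minimal sketch of these two steps under the stated corner mapping; the function name and the assumption that pixel coordinates have a top-left origin are mine:

```python
import numpy as np

def rescale_and_remap(xs, ys, screen_w, screen_h):
    """(i) Rescale pixel coordinates into the standard 2 x 1.5 space
    (top-left -> (-1, 1.5), bottom-right -> (1, 0)), then (ii) remap the
    trajectory so its first sample sits at (0, 0)."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    # Pixel origin is assumed top-left with y increasing downward.
    x = xs / screen_w * 2.0 - 1.0        # maps [0, w] -> [-1, 1]
    y = (1.0 - ys / screen_h) * 1.5      # maps [0, h] -> [1.5, 0], flipped
    return x - x[0], y - y[0]            # start every trial at (0, 0)
```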
- Time normalization was required for analysis of spatial attraction/curvature and complexity measures such as maximum deviation (the maximum perpendicular deviation between the actual trajectory and the straight-line trajectory), area under the curve (the geometric area difference between the actual trajectory and the straight line), and x-flips and y-flips.
- The rationale for time normalization is that recorded trajectories tend to have different lengths. For example, a trial that lasts 800 ms will contain 56 x,y coordinate pairs, whereas a trial that lasts 1600 ms will contain 112 x,y coordinate pairs.
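One common way to implement this is linear interpolation onto a fixed grid of time slots; the slot count below is illustrative, since the text does not state it:

```python
import numpy as np

def time_normalize(xs, ys, ts, n_steps=101):
    """Resample a trajectory to n_steps equally spaced time points so that
    trials of different durations (e.g., 56 vs. 112 raw samples) can be
    compared and averaged slot by slot."""
    ts = np.asarray(ts, dtype=float)
    t_grid = np.linspace(ts[0], ts[-1], n_steps)
    return np.interp(t_grid, ts, xs), np.interp(t_grid, ts, ys)
```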
- electrodermal responses were also captured using two sensors on the pointer and ring fingers of the participant's non-dominant hand (the hand not used to move the mouse). Twelve seconds were allowed between each question/item for an individual's electrodermal activity to react and then level out before asking the next question (Gamer et al. 2006).
- Analyses were divided into three relevant areas to detect insider threats: (i) Area 1 examined what features differentiated how a guilty person answers a key question versus a control question (a within-subject analysis); this is a typical analysis done in polygraph administration. (ii) Area 2 examined what features differentiated between how a guilty person answers a key question versus how an innocent person answers a key question (a between-subject analysis). (iii) Area 3 examined what features differentiated between how a guilty person answers a control question versus how an innocent person answers a control question (a between-subject analysis).
- Table 5 summarizes the areas of analysis. The analysis proceeded as follows. For each area, determination was made to see if there was a difference in electrodermal response as done in traditional polygraph testing. Determination was also made to see if differences in mousing behavior also existed.
- the polygraph is based on the assumption that a guilty person will experience a heightened electrodermal response (caused by arousal and stress) when answering key questions deceptively compared to answering control questions truthfully (Krapohl et al. 2009). Results of the ADMIT experiment confirmed that this effect was present.
- a linear mixed model predicting deception (control vs. key item) was specified based on electrodermal responses nested within each participant. In other words, this experiment examined deviations from individual electrodermal baselines by examining z-scores. Thus, participants were only compared to their own electrodermal baseline to detect anomalies.
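A hedged sketch of such a model using statsmodels; the data layout, column names, and file name are assumptions for illustration:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per response, with columns
#   participant - subject identifier
#   is_key      - 1 for a key item, 0 for a control item
#   eda_z       - electrodermal response z-scored within the participant
df = pd.read_csv("admit_responses.csv")  # hypothetical file

# Linear mixed model: predict the (within-participant) electrodermal
# deviation from the item type, with participants as random groups.
model = smf.mixedlm("eda_z ~ is_key", df, groups=df["participant"])
result = model.fit()
print(result.summary())
```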
- the average difference between truthful and deceptive responses in x-location was 0.0778 (on a transformed scale between 0 and 1);
- the difference in x-location was 0.2217; honest responses had traveled nearly twice as far on the x-axis as the deceptive responses at this time interval.
- Running multiple independent tests as done in this study may cause alpha slippage—i.e., something being significant due to random chance.
- To address this, the probability of having multiple significant tests in a row was computed.
- the probability of having 16 time slots significant in a row at a p < 0.1 level due to random chance is 0.1^16 (p = 0.0000000000000001);
- the probability of having 8 time slots significant in a row at a p < 0.05 level due to random chance is 0.05^8 (p = 0.0000000000390625).
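These probabilities assume the time slots are independent; the arithmetic can be checked directly:

```python
# Chance of k consecutive independent tests all significant at level alpha
# is alpha ** k:
print(0.1 ** 16)   # 1e-16         (16 slots at alpha = 0.1)
print(0.05 ** 8)   # 3.90625e-11   (8 slots at alpha = 0.05)
```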
- FIG. 11 is a graph of the x-location by normalized time slots for guilty and innocent participants while answering the key questions. As can be seen, guilty individuals' mouse trajectories were more biased toward the opposite (i.e., the truthful) choice than those of innocent individuals. Using a series of t-tests (e.g., Duran et al.
- FIG. 12 is a graph of the y-location by normalized time slots for guilty and innocent participants while answering the key questions.
- guilty individuals' mouse trajectories were also more hesitant in moving upward toward the deceptive answer than innocent participants' trajectories were in moving upward toward the truthful answer, as shown using a series of t-tests (e.g., Duran et al. 2010).
- the polygraph assumes that no significant electrodermal difference will be found between innocent and guilty participants when answering control questions. Supporting this assumption, our analysis of electrodermal responses revealed no significant differences between innocent and guilty participants when responding to control items.
- FIG. 13 and FIG. 14 show the x- and y-locations, respectively, by normalized time slots for guilty and innocent responses to control questions. As a reminder, each participant answered 4 control questions regardless of whether they were guilty or innocent.
- a linear mixed model was conducted nesting participants' responses within each control item (e.g., finding anomalies from the baseline within each of the 4 control items through examining z-scores).
- while the innocent person started moving horizontally almost immediately to answer the question, the guilty person showed a small hesitancy before committing to the answer.
- this difference only lasted a short while, after which the guilty person had moved as far as or farther horizontally along the x-axis than the innocent person.
Abstract
The present invention provides a system and a method for eliciting information to sensitive questions and reliably detecting whether one is being deceptive, concealing information, or experiencing a heightened emotional response to the question. In particular, the system and the method of the invention are based on analyzing the user behavioral biometric of using one or more input device(s).
Description
- This application claims the priority benefit of U.S. Provisional Application No. 61/837,153, filed Jun. 19, 2013, which is incorporated herein by reference in its entirety.
- The present invention relates to a system and a method for eliciting information to sensitive questions that require use of an input device (e.g., keyboard and/or pointing device, etc.) and for reliably detecting whether one is being deceptive, concealing information, or experiencing a heightened emotional or cognitive response to the question by analyzing the input device usage characteristic. In particular, the system and the method of the invention are based on analyzing the user's behavioral biometric of using one or more input device(s).
- The threat of malicious insiders is a top concern for government and corporate agencies. Insider threats—trusted adversaries who operate within an organization's boundaries—are a significant danger to both private and public sectors, and are often cited as the greatest threat to an organization. Insider threats include disgruntled employees or ex-employees, potential employees, contractors, business partners, and auditors. The damage caused by an insider threat can take many forms, including workplace violence; the introduction of malware into corporate networks; the theft of information, corporate secrets, or money; the corruption or deletion of data; and so on. According to a recent survey, it takes on average 416 days to contain an insider attack (HP Cyber Risk Report, 2012), and insider threats have been estimated to result in “tens, if not hundreds of billions of dollars” in damages. The difficulty of identifying insider threats is heightened in very large organizations. For instance, identifying a small number of potential insider threats within an organization with thousands of employees is a literal “needle in the haystack” problem.
- Therefore, there is a need for a system and a method for determining whether a particular personnel poses an insider threat.
- Some aspects of the invention address the insider threat challenge by providing a system and a method called ADMIT (i.e., Automated Detection Method for Insider Threat). In some embodiments, ADMIT is a web-based survey tool that elicits information to sensitive questions that requires an input device usage (e.g., keyboard, and/or a pointing device, etc.) and reliably detects whether one is being deceptive, concealing information, or experiencing a heightened emotional or cognitive response to the question by analyzing the input device usage characteristic.
- Prior research on deception has established that humans guilty of acts known to be immoral, criminal, or unethical have uncontrolled physiological changes that can be detected as observable behavioral changes when responding to questions regarding such events. Similar to the way a polygraph (i.e., lie detector) detects physiological changes in the body based on uncontrolled responses when answering sensitive questions, the present inventors have discovered that such responses can be detected by monitoring a person's input device usage (e.g., mouse and keystroke behavior) when the person is guilty of actions known to be wrong. Abnormal behavior that is indicative of insider threat can then be highlighted or alerted to specified individuals in the organization for review and further investigation. ADMIT operates like well-known web-based survey tools such as SURVEYMONKEY® or QUALTRICS®, and thus can be mass deployed to an entire organization simultaneously.
- In one embodiment, the system and the method are based on a subject's behavioral biometrics. The approach consists of establishing distinctive behavioral biometrics for a subject based on characteristic(s) of the subject's input device usage. The usage characteristic comprises how and the way the user uses the input device.
- Some of the variables for how the user uses the input device include, but are not limited to, input device dynamics. For example, when the input device is a keyboard, the keyboard (i.e., input device) dynamics include, but are not limited to, the dwell time (the length of time a key is held down), transition time (the time to move from one key to another) and rollover time for keyboard actions. After these measurements are collected, the collected actions are translated and analyzed in order to determine the truthfulness of the subject's answer to a particular questionnaire. An algorithm can be used to generate a Keystroke Dynamics Signature (KDS), which is used as a reference profile for the subject using non-threatening or seemingly innocuous or harmless questions. In some embodiments, the KDS is constructed using a key oriented neural network based approach, where a neural network is trained for each keyboard key to best simulate its usage dynamics with reference to other keys.
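A minimal sketch of the dwell and transition computation; the event format is assumed, and "transition" is taken here as the gap from one key's release to the next key's press (one common convention):

```python
def keystroke_features(events):
    """events: list of (key, press_ms, release_ms) tuples ordered by press
    time. Returns per-key dwell times and between-key transition times."""
    dwell = [release - press for _, press, release in events]
    transition = [events[i + 1][1] - events[i][2]  # next press - prior release
                  for i in range(len(events) - 1)]
    return dwell, transition

# Example: dwell = [80, 95], transition = [40]
print(keystroke_features([("a", 0, 80), ("b", 120, 215)]))
```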
- When the input device is a pointing device such as a mouse, the pointing device dynamics include, but are not limited to, comparing selected pointing device actions generated by the subject as a result of the subject's answer to an on-screen question or interaction with a graphical user interface (GUI) or any other display shown on the display screen. The data obtained from these actions are then processed in order to analyze the behavior of the user. Pointing device actions include general pointing device movement, drag and drop, point and click, and silence (i.e., no movement). The behavioral analysis utilizes neural networks and statistical approaches to generate a number of factors from the captured set of actions; these factors are used to construct what is called a Pointing Device Dynamics Signature (PDDS), a unique set of values characterizing the subject's behavior both during seemingly innocuous or harmless questions and during more direct question-and-answer sessions. Some of the factors consist of calculating the speed, total distance traveled, initial direction of movement, total response time, change in direction on the x-axis, change in direction on the y-axis, idle time, area under the curve, amount of deviation, reaction time, applied pressure, changes in angle, the pattern of a user's acceleration or deceleration during a movement, the precision of movements, the click latency, click pressure, or a combination of two or more thereof.
- The detection algorithm for an input device calculates the significance of each factor with respect to the other factors; i.e., KDS, PDDS, or other input device usage characteristics are weighted, since certain actions are more prone to revealing the truthfulness of the subject.
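The patent states that factors are weighted but not how the weights are derived; as an assumed illustration, a logistic regression fit on labeled sessions yields one such weighting (file names and the factor list are hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

factor_names = ["speed", "distance", "x_flips", "y_flips"]  # assumed factors
X = np.load("factor_matrix.npy")  # hypothetical file: one column per factor
y = np.load("labels.npy")         # hypothetical file: 1 = deceptive response

clf = LogisticRegression(max_iter=1000).fit(X, y)
for name, weight in zip(factor_names, clf.coef_[0]):
    print(f"{name}: weight {weight:+.3f}")  # larger |weight| = more diagnostic
```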
- One particular aspect of the invention provides systems and methods for detecting deception in a subject. In one embodiment, this deception detection system (sometimes referred herein as ADMIT or “Automated Detection Method for Insider Threat”) is a web-based survey tool that elicits information to sensitive questions and reliably detects whether one is being deceptive, concealing information, or experiencing a heightened emotional response to the question.
- As discussed above, systems and methods of the invention can also include eliciting information from the subject on non-sensitive, benign, innocuous, or seemingly harmless questions (i.e., control questions) to establish reference input device usage characteristics of the subject. Control questions can be presented at the beginning of the session or they can be interspersed with sensitive questions to establish the reference input device usage characteristics of the subject. For example, the system and method can include randomly inserting or presenting to the subject control questions to determine the reference (or baseline) input device usage characteristic.
- Alternatively, the reference input device usage characteristics can be based on the average input device usage characteristics of a plurality of subjects for a particular question. In this manner, the subject's input device usage characteristics (i.e., behavioral biometrics) can be compared to a “baseline” or “reference input device usage characteristic” that consists of the average or range of input device usage characteristics of a plurality of individuals for the same question. Accordingly, the baseline or reference input device usage characteristics can be based on the subject's own behavioral biometrics during a non-sensitive or non-threatening questionnaire session, on the input device usage characteristics of a plurality of subjects for the same or a similar question, or a combination of both.
- In general, ADMIT is based on the discovery that humans guilty of acts known to be immoral, criminal, or unethical have uncontrolled physiological changes that can be detected as observable behavioral changes when responding to questions regarding such events. Similar to the way a polygraph (lie detector) detects physiological changes in the body based on uncontrolled responses when answering sensitive questions when a person is guilty of actions known to be wrong, the present inventors have discovered that such responses can be detected through use of an input device such as by monitoring mouse or other pointing device usage characteristics and/or keystroke usage characteristics. Abnormal behavior that is indicative of insider threat can then be highlighted or alerted to specific individuals in the organization for review and further investigation.
- One particular aspect of the invention provides a behavioral biometric-based deception analysis system and/or method for determining whether a subject is truthful or deceptive to a question of interest (i.e., a sensitive question or key question). Such systems typically include displaying the question (e.g., on a computer screen or projecting the question on a display). The subject is then allowed to select or input the subject's answer to the question presented using one or more input devices. The system includes a data interception unit that is configured to intercept input from the subject who is directed to a question presented on a display screen. The data interception unit is configured to passively collect an input device (e.g., a pointing device, such as a mouse, a touch screen, a touch pad, a stylus, a track ball, etc.) usage characteristic. The system also includes a behavior analysis unit operatively connected to said data interception unit to receive the passively collected input device usage characteristic; and a behavior comparison unit operatively connected to said behavior analysis unit. In some embodiments, the system dynamically monitors and passively collects behavioral biometric information (i.e., input device usage characteristics), translates the behavioral biometric information into representative data, stores and compares results, and outputs a result associated with truthfulness or deception to the question of interest presented on the display screen.
- In some embodiments, said behavior comparison unit is operatively connected to an application or program that presents a question on the display screen such that said behavior comparison unit influences the next question presented on the display screen by the application using a decision tree structure based on the result. Thus, for example, if the subject's behavior biometrics is ambiguous or inconclusive, a follow-up type of question can be displayed to further analyze the subject's behavior biometrics to a particular sensitive question.
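A minimal sketch of such a decision-tree question flow; the structure, question identifiers, and result labels are assumptions for illustration:

```python
# Each node maps the comparison unit's result for the current question to
# the next question to present.
FOLLOW_UPS = {
    "q_data_theft": {
        "truthful": "q_next_topic",            # move on
        "inconclusive": "q_data_theft_probe",  # ask a follow-up question
        "deceptive": "q_data_theft_direct",    # ask a more direct question
    },
}

def next_question(current_id: str, result: str) -> str:
    """Select the next question id based on the comparison result."""
    return FOLLOW_UPS.get(current_id, {}).get(result, "q_end")

print(next_question("q_data_theft", "inconclusive"))  # q_data_theft_probe
```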
- Yet in other embodiments, the input device usage characteristics comprise pointing device (e.g., mouse, joystick, stylus, trackball, etc.) usage characteristics. In some instances, the pointing device usage characteristics comprise movement of said pointing device between the starting position of said pointing device and the answer selected by the subject on the display screen, the elapsed time between presentation of the question on the display screen and the selection of the answer by the subject, the speed, total distance traveled, initial direction of movement, total response time, change in direction on the x-axis, change in direction on the y-axis, idle time, area under the curve, amount of deviation, reaction time, applied pressure, changes in angle, acceleration, the precision of movements, the click latency, click pressure, or a combination of two or more thereof.
- When the pointing device is a touch screen or a touch pad, the usage characteristic can include finger movement, precise timing, and applied pressure between the initial position of a pointer and the answer on the display screen selected by the subject. In addition or alternatively, the input device usage characteristic can include speed, total distance traveled, initial direction of movement, total response time, change in direction on the x-axis, change in direction on the y-axis, idle time, area under the curve, amount of deviation, reaction time, applied pressure, changes in angle, the pattern of a user's acceleration or deceleration during a movement, the precision of movements, the click latency, click pressure, or a combination of two or more thereof. The term “area under the curve” refers to the area formed by the actual path traveled by the user from the starting position of the pointer (x1, y1) to the answer selected by the subject (x2, y2), together with the distance between y1 and y2 (i.e., the absolute value of y1 − y2) and the distance between x1 and x2 (i.e., the absolute value of x1 − x2).
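Read together with the time-normalization passage above (which describes area under the curve as the geometric area between the actual trajectory and the straight line), this can be sketched with the shoelace formula; the function name and the polygon-closing convention are assumptions:

```python
import numpy as np

def area_under_curve(xs, ys):
    """Geometric area enclosed between the actual trajectory and the
    straight segment from the end point (x2, y2) back to the start
    (x1, y1), via the shoelace formula on the closed polygon."""
    px = np.append(np.asarray(xs, dtype=float), xs[0])
    py = np.append(np.asarray(ys, dtype=float), ys[0])
    return 0.5 * abs(np.sum(px[:-1] * py[1:] - px[1:] * py[:-1]))

# A perfectly straight movement encloses no area:
print(area_under_curve([0, 1, 2], [0, 1, 2]))  # 0.0
```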
- Still in other embodiments, said input device (e.g., pointing device such as a mouse) usage characteristic is based on the speed, total distance traveled, initial direction of movement, total response time, change in direction on the x-axis, change in direction on the y-axis, idle time, area under the curve, amount of deviation, reaction time, applied pressure, changes in angle, the pattern of a user's acceleration or deceleration during a movement, the precision of movements, the click latency, click pressure, or a combination of two or more thereof.
- Yet in other embodiments, the behavior comparison unit compares the result of the subject's behavioral biometric to a reference behavioral biometric. In some instances, the reference behavioral biometric comprises the subject's behavioral biometric to a non-interested or non-sensitive question. In other embodiments, the reference behavioral biometric comprises behavioral biometrics of a plurality of subjects based on the same non-interested or non-sensitive question. Still alternatively, the reference behavioral biometric can comprise behavioral biometrics of a plurality of subjects who answered truthfully on the same sensitive question or who answered non-truthfully on the same sensitive question. It should be noted that in this case, the reference behavioral biometric can comprise an average of the behavioral biometrics obtained from the plurality of subjects. Alternatively, the reference behavioral biometric can be based on a desired confidence limit (e.g., 95%) under the standard curve. In this latter case, the subject's behavioral biometric is analyzed to determine whether it is within the desired confidence limit range.
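As a minimal sketch of the confidence-limit comparison, assuming the reference behavioral biometric is roughly normal and summarized by the mean and standard deviation of reference samples (the 1.96 multiplier corresponds to the 95% limit mentioned above):

```python
from statistics import mean, stdev

def within_confidence_limit(value, reference_samples, z_crit=1.96):
    """Return True if `value` lies within the central 95% of the reference
    distribution (z_crit=1.96 for a 95% confidence limit)."""
    mu, sigma = mean(reference_samples), stdev(reference_samples)
    return abs(value - mu) <= z_crit * sigma
```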
- In one particular embodiment, said reference behavioral biometric comprises an average behavioral biometric to the same question presented on the display screen of a plurality of subjects.
- Yet in other embodiments, said behavioral biometric-based deception analysis system is suitably configured for real-time deception analysis.
- Still in other embodiments, said data interception unit is further configured to passively collect a keyboard usage characteristic of the subject. In some instances, said keyboard usage characteristic comprises what key was pressed, what time a key was pressed (e.g., the elapsed time between presentation of the question on the display screen and the key press), what time it was released, or a combination thereof. Moreover, said keyboard usage characteristic can be based on or include at least one of speed, transition time, dwell time, pressure of the key pressed, or a combination thereof.
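For illustration, dwell, transition, and rollover times might be derived from raw key events as follows; the (key, t_down, t_up) event format is an assumption made for this sketch:

```python
def keystroke_features(events):
    """`events` is a chronological list of (key, t_down, t_up) tuples in
    milliseconds. Dwell is how long each key is held, transition is the
    time between successive key presses, and rollover is the time between
    releasing one key and pressing the next (negative if keys overlap)."""
    dwell = [t_up - t_down for _, t_down, t_up in events]
    transition = [b[1] - a[1] for a, b in zip(events, events[1:])]
    rollover = [b[1] - a[2] for a, b in zip(events, events[1:])]
    return {"dwell": dwell, "transition": transition, "rollover": rollover}
```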
- Yet another aspect of the invention provides a method for determining whether a subject is truthful or deceptive to a question of interest, said method comprising:
-
- (a) presenting a question of interest and a plurality of answers on a display screen that requires use of an input device;
- (b) allowing a subject to select an answer using the input device (e.g., a pointing device, a keypad, a touch pad, or a touch screen);
- (c) passively collecting the subject's input device usage characteristic;
- (d) comparing the subject's input device usage characteristic with a reference input device usage characteristic to determine whether the subject is truthful or deceptive to the question of interest; and
- (e) optionally repeating steps (a)-(d) with a different question.
- The different question can be a non-sensitive question to further establish the subject's reference behavioral biometrics. It can also be another sensitive question or a follow-up question to further establish the truthfulness of the subject.
- In some embodiments, such a method can further comprise the steps of:
-
- (a) presenting a benign or control question and a plurality of answers on a display screen that requires use of the input device;
- (b) allowing the subject to select an answer using the input device;
- (c) passively collecting input device usage characteristic of the subject;
- (d) storing the passively collected input device usage characteristic of the subject as the reference input device usage characteristic (i.e., reference behavioral biometrics); and
- (e) optionally repeating steps (a)-(d) with a different question.
- Still in other embodiments, said reference input device usage characteristic is an average input device usage characteristic of a plurality of subjects for the same question of interest. It should be noted that such reference input device usage characteristics can be either those of subjects who truthfully answered the question or those of subjects who did not truthfully answer the question. Alternatively, the method can compare the subject's input device usage characteristics against both of these references to determine which one they more closely resemble, as sketched below.
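A minimal sketch of that two-reference comparison, assuming each response has been reduced to a numeric feature vector; the Euclidean distance and the absence of feature scaling are simplifications for illustration:

```python
from statistics import mean

def closer_reference(subject_vec, truthful_refs, deceptive_refs):
    """Return 'truthful' or 'deceptive' depending on which reference
    population's average feature vector the subject's vector is closer to."""
    def centroid(rows):
        return [mean(col) for col in zip(*rows)]

    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

    t_ref, d_ref = centroid(truthful_refs), centroid(deceptive_refs)
    return ("truthful" if dist(subject_vec, t_ref) <= dist(subject_vec, d_ref)
            else "deceptive")
```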
- Yet in other embodiments, said input device usage characteristic comprises input device movement between the starting position of input device and the answer selected by the subject on the display screen.
- In other embodiments, said input device usage characteristic is based on at least one of speed, total distance traveled, initial direction of movement, total response time, change in direction on the x-axis, change in direction on the y-axis, acceleration, idle time, area under the curve, amount of deviation, reaction time, applied pressure, and changes in angle.
-
FIG. 1 is a schematic representation showing combined mouse movement resulting when multiple answers catch a respondent's attention. -
FIG. 2 is an example of an insider threat question. -
FIG. 3 is an example of one particular embodiment of ADMIT response analysis framework. -
FIG. 4 is a graph of X-location by real time for guilty participants in a simulated ADMIT test. -
FIG. 5 is a graph of X-locations by normalized time for guilty participants in a simulated ADMIT test. -
FIG. 6 is a graph of Y-locations by real time for guilty participants in a simulated ADMIT test. -
FIG. 7 is a graph of Y-locations by normalized time for guilty participants in a simulated ADMIT test. -
FIG. 8 is a graph of velocity by real time for guilty participants. -
FIG. 9 is a bar graph showing mean velocity for guilty participants on control vs. key questions. -
FIG. 10 is a graph showing angles by real time for guilty participants. -
FIG. 11 is a graph of X-locations by normalized time for key items. -
FIG. 12 is a graph of Y-locations by normalized time for key items. -
FIG. 13 is a graph of X-location by normalized time for control items. -
FIG. 14 is a graph of Y-location by normalized time for control items. - The present inventors have discovered that input device (e.g., mouse and/or keyboard, other input devices known to one skilled in the art, and other input devices that may be developed) usage features or characteristics are diagnostic of insider threats in sensitive questions (i.e., questions about insider threat activities or other key or security questions) administered on a computer. Some aspects of the invention are based on the discovery by the present inventors that when people see two or more answers to a question that catch or draw their attention (e.g., a truthful answer and a deceptive answer that the person will ultimately choose), the mind automatically starts to program motor movements toward both answers simultaneously. To eliminate one of the motor movements (e.g., eliminate the movement towards confessing to an insider threat activity), the mind begins an inhibition process so that the target movement can emerge. Inhibition is not immediate, however, but rather occurs over a short period of time depending on the degree to which both answers catch the respondent's attention (up to ~750 milliseconds or more). If movement begins before inhibition is complete, the movement trajectory is a product of motor programming to both answers. See
FIG. 1 . Thus, in an ADMIT survey, when people are asked a question about an insider threat activity and the incriminating answer catches their attention, their mouse trajectory is biased toward this incriminating answer (measured on an x, y axis) on its way toward the non-incriminating (e.g., deceptive) answer. For innocent people, the incriminating answer generally does not catch their attention to the same degree, and thus inhibition occurs more quickly and their mouse movements are less biased toward the opposite response. - In addition, being deceptive normally causes an increase in arousal and stress. Such arousal and stress cause neuromotor noise that interferes with people's fine motor skills (e.g., using the hand and fingers to move a mouse or use a touch screen to answer a question). As a result, the precision of mouse movements decreases when people are being deceptive, ceteris paribus. To reach the intended target (e.g., a deceptive answer in the ADMIT survey), people automatically and subconsciously compensate for this decrease in precision by reducing speed and making more adjustments to their movement trajectories based on continuous perceptual input. Thus, in ADMIT surveys, the present inventors have found that people exhibit slower velocity, more adjustments (x and y flips), greater distance, and more hesitancy when being deceptive compared to when telling the truth.
- As another example, the present inventors have found that people guilty of insider threat activities display different mouse movements on non-incriminating questions compared to innocent people. In anticipation of a question that might incriminate them, guilty people show a task-induced search bias: before answering a question, they take a fraction of a second longer to evaluate the question. After seeing that the question is not relevant, they then move more quickly to the truthful answer than innocent respondents. Table 1 summarizes examples of mousing features that can be used to differentiate between how guilty insiders and innocent respondents respond to ADMIT questions. In some embodiments, at least four or more, typically at least eight or more, often at least ten or more, still more often at least fifteen or more, and most often at least twenty or more of these characteristics are determined and analyzed. Still in other embodiments, all of the input device usage characteristics in Table 1 are determined and analyzed.
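For illustration only, the following sketch computes a handful of the features listed in Table 1 below from raw (x, y, t) samples. It simplifies the idealized trajectory to a single straight segment from the first to the last sample and is not the inventors' implementation:

```python
import math

def mouse_features(samples, idle_ms=200):
    """`samples` is a chronological list of (x, y, t) tuples, t in ms."""
    pts = [(x, y) for x, y, _ in samples]
    ts = [t for _, _, t in samples]

    overall_distance = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    ideal_len = math.dist(pts[0], pts[-1])  # simplified idealized trajectory

    def flips(vals):
        # Number of reversals of movement direction along one axis.
        signs = [1 if b > a else -1 for a, b in zip(vals, vals[1:]) if b != a]
        return sum(s1 != s2 for s1, s2 in zip(signs, signs[1:]))

    (x0, y0), (x1, y1) = pts[0], pts[-1]

    def perp_dev(p):
        # Perpendicular distance from a sample to the idealized segment.
        return (abs((y1 - y0) * p[0] - (x1 - x0) * p[1] + x1 * y0 - y1 * x0)
                / max(ideal_len, 1e-9))

    idle_time = sum(t2 - t1
                    for (p1, t1), (p2, t2) in zip(zip(pts, ts),
                                                  zip(pts[1:], ts[1:]))
                    if t2 - t1 > idle_ms and p1 == p2)

    total_time = ts[-1] - ts[0]
    return {"overall_distance": overall_distance,
            "additional_distance": overall_distance - ideal_len,
            "x_flips": flips([x for x, _ in pts]),
            "y_flips": flips([y for _, y in pts]),
            "max_deviation": max(perp_dev(p) for p in pts),
            "idle_time": idle_time,
            "total_time": total_time,
            "overall_speed": overall_distance / max(total_time, 1e-9)}
```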
-
TABLE 1 Examples of features that distinguish an insider threat (exemplary features monitored)

| Statistic | Description |
|---|---|
| X | The X coordinates for each movement |
| Y | The Y coordinates for each movement |
| Z | The Z coordinate for each movement |
| Pressure | The pressure for each movement |
| Rescaled X | The X coordinates for the interaction normalized for screen resolution |
| Rescaled Y | The Y coordinates for the interaction normalized for screen resolution |
| X Average | The X coordinates averaged in buckets of 75 ms |
| Y Average | The Y coordinates averaged in buckets of 75 ms |
| X Norm | The X coordinates time normalized |
| Y Norm | The Y coordinates time normalized |
| Pressure | The pressure applied to the mouse for every raw recording |
| Timestamps | The timestamp for every raw recording |
| Click Direction | Whether the mouse button was pushed down (d) or released (u) each time an action occurred with the mouse button |
| Click X | The X coordinates for each mouse click event |
| Click Y | The Y coordinates for each mouse click event |
| Click Rescaled X | The X coordinates for each mouse click event normalized for screen resolution |
| Click Rescaled Y | The Y coordinates for each mouse click event normalized for screen resolution |
| Click Pressure | The pressure applied to the mouse for every raw recording |
| Click timestamps | The timestamp for every mouse click event |
| Acceleration | The average acceleration for each 75 ms |
| Angle | The average angle for each 75 ms |
| Area Under the Curve (AUC) | The geometric area between the actual mouse trajectory and the idealized response trajectory (i.e., straight lines between users' mouse clicks); it is a measure of total deviation from the idealized trajectory |
| Additional AUC | The AUC minus the minimum AUC |
| Overall Distance | The total distance traveled by the mouse trajectory |
| Additional Distance | The distance a user's mouse cursor traveled on the screen minus the distance required to travel along the idealized response trajectory (i.e., straight lines between users' mouse clicks) |
| Distance Buckets | Distance traveled for each 75 ms |
| X Flips | The number of reversals on the x axis |
| Y Flips | The number of reversals on the y axis |
| Maximum Deviation | The largest perpendicular deviation between the actual trajectory and its idealized response trajectory (i.e., straight lines between users' mouse clicks) |
| Speed Buckets | Average speed for each 75 ms |
| Overall Speed | Average overall speed |
| Idle Time | If there is a change in time greater than 200 ms but no movement, this is counted as idle time |
| Idle Time on Same Location | If there is a change in time but not a change in location, this means an event other than movement triggered a recording (e.g., leaving the page); the time in this event is summed |
| Idle Time On 100 Distance | If there is a change in distance greater than 100 between two points, this may indicate that someone left the screen and came back in another area |
| Total Time | Total response time |
| Click Mean Speed | The mean speed of users' clicks |
| Click Median Speed | The median speed of users' clicks |
| Click Mean Latency | The mean time between when a user clicks down and releases the click |
| Click Median Latency | The median time between when a user clicks down and releases the click |
| Answer Changes | The number of times an answer was selected; if over 1, the person changed answers |
| Hover Changes | The number of times an answer was hovered over; if over 1, the person hovered over answers they didn't choose |
| Hover Region | The amount of time a person hovers over a region |
| Return Sum | The number of times a person returns to a region after leaving it |
| Dwell | The measurement of how long a key is held down |
| Transition | Time between key presses |
| Rollover | The time between when one key is released and the subsequent key is pushed |

- System Implementation:
- ADMIT asks specially designed questions on a computer about illicit behavior and requires respondents to answer by admitting to or denying the behavior by dragging a button on the bottom of the screen to 'yes' or 'no'. Following polygraph techniques (the concealed information test, control question test, comparative questions test, etc.), survey items are normally crafted to conform to at least two categories: benign questions that can be used to establish baseline behavioral data, and sensitive questions whose answers the organization is interested in. Responses are analyzed both with within-subject comparisons and by comparing responses with the aggregate responses of other employees. For instance,
FIG. 2 is an example of an insider threat question—“have you stolen any classified information?” In this example, the respondent must move the mouse from the lower middle of the screen to the “No” answer to deny stealing classified information or to “Yes” to confess. Mouse movements are captured while the respondent is answering the question and compared to an individual baseline (how the individual moves the mouse on truthful responses) and/or to a population baseline (how other people normally move the mouse on this question) to detect deception. - Survey items (e.g., questions) have acceptable and non-acceptable ranges of responses. See
FIG. 3 . These acceptable response ranges will be determined by the individual organization involved in the survey. For example, suppose the question asked were: "A crime was committed in our organization. If you committed the crime, you will know what act was performed. Did you perform any of the following crimes?" The system would then list 6 options (e.g., unauthorized disclosure of classified information, theft of credit card numbers, theft of hardware, etc.). For the key item (the item of interest), a threshold is set on abnormal mouse movements; any abnormal mouse movements above this threshold are deemed unacceptable (and anything below, acceptable). By observing both the answer provided by the user and the changes in emotional response detected from mouse and keyboard behavior, one can make four potential observations about a response to a survey item using ADMIT (FIG. 3 ): (i) Lower Left Quadrant: Answer was within acceptable range with normal emotional response (i.e., no action is necessary); (ii) Upper Left Quadrant: Answer is outside acceptable range with normal emotional response (i.e., an HR problem and alert needs to be generated); (iii) Upper Right Quadrant: Answer is outside acceptable range with elevated emotional response (i.e., an HR problem and alert needs to be generated); and (iv) Lower Right Quadrant: Answer was within acceptable range, however, with an elevated emotional response (i.e., a deceptive answer; an investigation needs to be launched). - Several functionalities can be implemented in the ADMIT system to facilitate accurate and reliable analysis of mouse movements. For example, (i) Data is time-normalized (e.g., all trajectories are evenly split into 101 equal buckets) to compare trajectories between respondents for detecting deception; (ii) Data is averaged into 75 ms duration intervals to account for differences in computer speeds and mouse characteristics within subjects (a sketch of (i) and (ii) follows this list); (iii) Data is rescaled to a standard scale to account for the trajectories of respondents who used different screen resolutions; (iv) Respondents are required to start moving their mouse or finger before an answer is shown, so that a respondent's initial movements can be captured as soon as they see the answer; (v) If respondents stop moving their mouse or finger or stop dragging an answer, an error is shown; (vi) To help respondents get used to the testing format and improve the performance of the evaluation, a tutorial and practice test can be provided; (vii) All items (sensitive and control items) can be pilot tested to make sure innocent people respond as intended; (viii) A tree-like questioning framework can be implemented to ask follow-up questions when deception is detected or suspected; (ix) All input device usage characteristics (such as mousing data) can be sent to a data server via a web service to be analyzed for deception. This reduces the likelihood that data can be tampered with during the analysis; (x) A secure management dashboard can be implemented to visualize (e.g., in real-time) the results and execute policy-driven responses to threats; (xi) Probabilities of deception can be calculated based on multi-tiered testing; and/or (xii) Different features of deception are extracted for different devices (desktop, iPad, etc.).
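By way of illustration, functionalities (i) and (ii) might look like the following sketch, assuming numpy and trajectories supplied as parallel x, y, t arrays:

```python
import numpy as np

def time_normalize(xs, ys, ts, n_buckets=101):
    """Resample a trajectory at n_buckets points evenly spaced in time so
    that trajectories of different durations can be compared slot by slot."""
    grid = np.linspace(ts[0], ts[-1], n_buckets)
    return np.interp(grid, ts, xs), np.interp(grid, ts, ys)

def bucket_average(xs, ys, ts, width_ms=75):
    """Average raw samples into fixed 75 ms buckets to smooth over
    differences in computer speeds and mouse sampling rates."""
    xs, ys, ts = map(np.asarray, (xs, ys, ts))
    idx = ((ts - ts[0]) // width_ms).astype(int)
    buckets = [i for i in range(int(idx.max()) + 1) if np.any(idx == i)]
    return (np.array([xs[idx == i].mean() for i in buckets]),
            np.array([ys[idx == i].mean() for i in buckets]))
```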
- The ADMIT system introduces a lightweight and easily deployable system for quickly identifying potential threats. Many agencies go to significant lengths, and at great expense, to identify potential threats. Unfortunately, current techniques used to identify potential threats are labor intensive, laden with bias, and frequently miss potential threats. For instance, polygraphs are often used for initial and ongoing employee screening, but are extremely problematic for widespread deployment. A single polygraph test requires hours of pre-planning, pre-test interviewing, testing, and post-testing reviews, costing hours of productive time and thousands of dollars per administration. Other methods such as conducting face-to-face interviews (which must be done individually and at great expense) or traditional surveys (which are cheap to deploy but easy to subvert) are equally constrained. ADMIT can be deployed simultaneously to thousands of employees at minimal expense. Additionally, by eliminating humans and creating an objective methodology for identifying possible insider threats, ADMIT is not subject to the same biases that more conventional methods may fall victim to. Thus, ADMIT improves upon previous methods in at minimum the following ways: (i) It is easy and inexpensive to deploy to a large number of employees simultaneously; (ii) A data capture process runs in the background during survey administration, while analysis can take place on a separate and secure remote system; (iii) Behavioral sensing data (e.g., keyboard and mouse usage) is gathered in an unobtrusive manner with no adverse effect to the user; (iv) Users need not be aware of the data collection that is taking place; (v) Unlike systems that rely on linguistic features, the system's behavioral analysis approach is language agnostic (i.e., the detection methodology will work with English, Spanish, Arabic, etc.) because it relies on system usage patterns rather than message content; (vi) Survey items or questions are specifically constructed to identify behaviors of interest; i.e., ADMIT can be deployed in a broad range of contexts, e.g., employment applications, healthcare (doctor or insurance), life insurance, loan applications, ongoing employment screening, financial disclosure, etc.; (vii) The system is not easily fooled, as the heightened emotions that would trigger an anomalous event typically manifest themselves as subtle differences in typing or mouse movement behavior occurring within 20 to 100 milliseconds. Attempts to modify one's keystroke or mouse use can be flagged as abnormal, thus identifying individuals attempting to fool the system; and (viii) The system is not subject to biases that are common in face-to-face investigations.
- ADMIT can be used to confirm the individuals that are operating within a given acceptable range of behavior and system usage. It also provides a powerful tool for management to proactively identify and investigate those individuals who respond abnormally to targeted questions. ADMIT provides unprecedented and innovative capabilities to identify those individuals that are potentially a threat to the effective operation of an organization.
- Additional objects, advantages, and novel features of this invention will become apparent to those skilled in the art upon examination of the following examples thereof, which are not intended to be limiting. In the Examples, procedures that are constructively reduced to practice are described in the present tense, and procedures that have been carried out in the laboratory are set forth in the past tense.
- Below is an example study conducted to validate ADMIT. This is only an example. In other types of assessments, the same or different factors may or may not be significant.
- A mock theft study was conducted using a multi-level experiment design (Table 2). Half of the participants were randomly assigned to commit an insider threat activity (steal sensitive information). All participants—both innocent and guilty—then completed a computer-based screening test (an ADMIT test) that asked questions about the insider threat activity (i.e., key questions) and benign questions not related to the activity (i.e., control questions).
-
TABLE 2 Multi-level experiment design

| Level 1 | Committed the insider threat activity | Did not commit the insider threat activity |
|---|---|---|
| Level 2 | Key questions; Control questions | Key questions; Control questions |
- A total of 75 people were recruited for the experiment from an undergraduate business course for extra credit. Of these, 5 of the participants in the guilty treatment refused to perform the mock insider threat theft. Four others confessed to committing the theft during the screening. This resulted in useable data from 66 participants (30 in the insider threat treatment and 36 in the innocent treatment). 59% of participants were female and the average age was 21.8. The average number of years of college education was 3.1. The most represented nationalities were American (69%), Chinese (11%), and Mexican (9%). Twenty-nine percent of students were business management majors, 27% accounting, 20% marketing, 15% finance, and 9% management information systems.
- Procedures:
- A task was designed to mimic a realistic insider threat scenario. Although the task was sanctioned (i.e., participants were told to perform the illicit act and appear innocent), it was designed to elicit suspicion and anxiety. Participants registered for an appointment online and were randomly assigned to an innocent or guilty condition. Upon arriving at the experiment, the participant was given a consent form to review and sign.
- Guilty Condition:
- Participants in the guilty condition were given a folded piece of paper and a jump drive. They were instructed to go in the elevator (on the first floor), press the button to the fourth floor, and only then read the paper. The instructions on the paper are shown in Table 3. In summary, the instructions asked participants to go to the Management Information Systems department, login to a computer in the front office using a set of credentials, and steal a file containing department credit card numbers. Participants were instructed to lie if confronted about the theft.
- Guilty participants then were instructed to go back to the room on the first floor. Upon arriving at the room, an experiment facilitator explained that a theft occurred and the participant was a suspect for that theft. The participant was then asked to take an investigation screening (an ADMIT test).
- Innocent Participants:
- Participants in the innocent condition were also given a folded piece of paper. They were instructed to go in the elevator (on the first floor), press the button to the fourth floor, and then read the paper. Similar to the guilty participants, the paper asked participants to go to the Management Information Systems department. However, rather than stealing information, innocent participants were asked to pick up a piece of paper (a news article) at the front desk and then go back to the room on the first floor. Upon arriving at the room, an experiment facilitator explained that a theft occurred and the participant was a suspect for that theft. The participant was then asked to take an investigation screening (ADMIT test described below).
- ADMIT Test:
- The ADMIT test consisted of a concealed information test (CIT) adapted to the ADMIT format and then several follow-up questions. All questioning was presented on a computer. The CIT is the most scientifically validated polygraph questioning technique (Ben-Shakhar and Elaad 2003; Council 2003; Fiedler et al. 2002). The objective of the CIT is to detect if a person has 'inside' or 'concealed' knowledge of an activity (e.g., stealing the credit card numbers) (Ben-Shakhar and Elaad 2003). In a standard CIT, the person being interviewed is presented a question or a stimulus about a specific target (e.g., a crime). In a face-to-face CIT, the interviewer verbally asks the interviewee a question such as, "Very important information was stolen today from a computer. If you committed the theft, you will know what was stolen. Did you steal any of the following information today?" The interviewer then recites five to six plausible answers. For example, the interviewer might recite: 'passwords', 'credit card numbers', 'exam key', 'social security numbers', 'health records', or 'encryption codes'. Usually, the interviewee is asked to verbally repeat the possible answer and then respond 'yes' or 'no'. One of the plausible answers should relate directly to the target under investigation. This is referred to as the key item. For example, if the CIT is investigating theft of 'credit card numbers', this answer must be included in the set of answers accompanied by several other plausible yet unrelated answers (Krapohl et al. 2009). An innocent person with no 'insider knowledge' should exhibit the same amount of arousal for each answer. However, a guilty person should experience a detectable psychophysiological change—an orienting response—when presented the key item.
- In designing the CIT for ADMIT in this experiment, all of the items (key and control items) were pilot tested to make sure that an innocent person would respond similarly to each item without unintended psychophysiological responses. Next, prior to administering the CIT, each participant was familiarized with the format of the CIT through a practice test. In the practice test, the program required the respondent to move the mouse within the first second, or it displayed an error. This helps ensure that inhibition is not complete before movement occurs. It also reduces the likelihood that an orienting response would occur due to the novel format of the test and therefore confound the results (Krapohl et al. 2009). The CIT was then administered, by a computer rather than by a human facilitator, to investigate the theft of the credit card numbers. Screenshots and explanations of the CIT are shown in Table 4.
- Measures:
- Mouse and electrodermal data were collected from each subject. The electrodermal response data is typically used in CIT polygraph testing. It was used here to compare to and validate the procedure for detecting insider threats based on mouse movements.
- Mousing:
- ADMIT performs several transformations and computations to facilitate analysis as follows: (i) Space rescaling—All mouse trajectory data were rescaled to a standard coordinate space (a 2×1.5 rectangle that is compatible with the aspect ratio of the computer screen). The top left corner of the screen corresponds to (−1, 1.5), and the bottom right corner of the screen corresponds to (1, 0). Thus the starting position is at (0, 0), the bottom middle of the screen.
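A minimal sketch of this space rescaling, assuming raw pixel coordinates and the 2 x 1.5 coordinate space described above:

```python
def rescale_point(px, py, screen_w, screen_h):
    """Map pixel coordinates onto the standard 2 x 1.5 space: the top-left
    pixel maps to (-1, 1.5) and the bottom-right to (1, 0), so a start
    button at the bottom middle of the screen maps to (0, 0)."""
    x = 2.0 * px / screen_w - 1.0      # 0..screen_w  ->  -1..1
    y = 1.5 * (1.0 - py / screen_h)    # 0 (top)..screen_h (bottom) -> 1.5..0
    return x, y
```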
- Electrodermal Responses:
- Using a polygraph machine, electrodermal responses were also captured using two sensors on the pointer and ring fingers of the participant's non-dominant hand (the hand not used to move the mouse). 12 seconds were allowed between each question/item for an individual's electrodermal activity to react and then to level out before asking the next question (Gamer et al. 2006).
- Pilot Tests:
- This test builds on 7 exploratory pilot studies with approximately 1293 participants to understand the dynamics of capturing mouse movements to detect deception, to validate that the items do not inherently cause an unanticipated response for an innocent person, and to discover what features to extract and analyze to detect deception and facilitate hypothesis creation. The specific scenario used in this experiment was pilot tested with an additional 6 people to make final adjustments to the experiment protocol and tests.
- Analysis:
- Analyses were divided into three relevant areas to detect insider threats: (i)
Area 1 examined what features differentiated how a guilty person answers a key question versus a control question (a within-subject analysis). This is a typical analysis done in polygraph administration; (ii) Area 2 examined what features differentiated between how a guilty person answers a key question versus how an innocent person answers a key question (a between-subject analysis); and (iii) Area 3 examined what features differentiated between how a guilty person answers a control question versus how an innocent person answers a control question (a between-subject analysis). - Other possible areas of analysis were excluded at least in part for the following reasons: (i) Confessing to an act, whether truthfully or deceptively, will always flag the response for follow-up questioning. Hence, this eliminates the need to create a model to predict: a) when guilty people are being deceptive on a control question (falsely confessing), b) when deceptive people are being truthful on a key question (confessing), and c) when innocent people are being deceptive on either a key question or control question (falsely confessing); and (ii) Important for the validity of the CIT, innocent people should experience no systematic difference in how they respond to key and control questions. This was confirmed through pilot testing. Hence, a model differentiating between the two types of questions for innocent people was not needed.
- Table 5 summarizes the areas of analysis. The analysis proceeded as follows. For each area, it was determined whether there was a difference in electrodermal response, as done in traditional polygraph testing. It was also determined whether differences in mousing behavior existed.
-
TABLE 5 Summary of areas of analysis

| Area 1: Guilty key vs. control questions | Area 2: Key questions | Area 3: Control questions |
|---|---|---|
| Guilty control questions (truthful response) vs. guilty key questions (deceptive response) | Innocent key questions (truthful response) vs. guilty key questions (deceptive response) | Innocent control questions (truthful response) vs. guilty control questions (truthful response) |
- First, it was investigated whether differences can be detected in how guilty individuals (n=30) answer control vs. key questions in the ADMIT test. The assumption of the CIT for ADMIT was that a difference can be detected via electrodermal responses. This assumption was cross-validated, and the test was then also analyzed to see whether mouse movements can be predictive of deception.
- Electrodermal Response:
- The polygraph is based on the assumption that a guilty person will experience a heightened electrodermal response (caused by arousal and stress) when answering key questions deceptively compared to answering control questions truthfully (Krapohl et al. 2009). Results of the ADMIT experiment confirmed that this effect was present. A linear mixed model predicting deception (control vs. key item) was specified based on electrodermal responses nested within each participant. In other words, this experiment examined deviations from individual electrodermal baselines by examining z-scores. Thus, participants were only compared to their own electrodermal baseline to detect anomalies.
- It was found that the peak electrodermal response was a significant predictor of key items (p<0.05, z=1.911, n=30, one-tailed). In other words, after controlling for individual differences, people were significantly more likely to experience a higher electrodermal response on the key items than on the control item. Similarly, it was found that the minimum electrodermal responses were significant predictors of control items (p<0.05, z=−1.743, n=30, one-tailed). In other words, after controlling for individual differences, people were more likely to experience a lower electrodermal response on the control questions compared to the key questions.
- Mousing Behavior:
- Complementing the electrodermal responses, it was also found that several significant mousing differences existed in how guilty participants answered key vs. control questions. Linear mixed models were used to predict deception (key vs. control item) based on mousing behavior nested within each participant. In other words, models were constructed at each time interval to find deviations from individual mousing baselines through examining z-scores. Thus, participants were only compared to their own mousing baseline to detect anomalies. The results are described below.
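By way of illustration only, the general form of such a per-time-slot model can be expressed with the statsmodels library; the data frame layout and the column names 'participant', 'is_key', and 'x_loc' are assumptions for this sketch, not the inventors' code:

```python
import statsmodels.formula.api as smf

def test_time_slot(df):
    """Fit a linear mixed model at one 75 ms time slot. `df` holds one row
    per response with columns 'participant', 'is_key' (0 = control item,
    1 = key item), and 'x_loc' (x-location at this slot). Random intercepts
    per participant mean each subject is compared only against his or her
    own mousing baseline."""
    result = smf.mixedlm("x_loc ~ is_key", df, groups=df["participant"]).fit()
    return result.params["is_key"], result.tvalues["is_key"]
```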
- First, participants' mouse trajectories on key items (deceptive responses) showed more attraction toward the opposite answer than did their trajectories on control items (truthful responses). This was apparent in both the x-location by raw-time graph (
FIG. 4 ) and the x-location by normalized time graph (FIG. 5 ). The raw-time graph for x-locations (FIG. 4 ) shows that participants experienced an initial delay in moving horizontally on key questions (˜600 ms). After this delay, the rate at which participants moved along the x-axis when lying was slower than when telling the truth. For example, at time interval 526-600 ms, the average difference between truthful and deceptive responses in x-location was 0.0778 (on a transformed scale between 0 and 1); at time interval 1426-1500 ms, however, the difference in x-location was 0.2217; honest responses had traveled nearly twice as far on the x-axis as the deceptive responses at this time interval. - To validate these observations, a linear mixed model was specified at each time interval (˜75 ms) to identify anomalies—a total of 20 independent tests were conducted. The results showed that the individuals' trajectories for key and control questions were significantly different on the x-axis at a p<0.1 level (z>1.282, n=30) for all time slots between 301 ms-1500 ms (16 sequential time slots). Furthermore, the trajectories were significantly different at a p<0.05 level (z>1.645, n=30) for a subset of these time slots between 901 ms-1500 ms (8 sequential time slots).
- Running multiple independent tests as done in this study may cause alpha slippage—i.e., something being significant due to random chance. To determine the extent to which alpha slippage might account for the results, the probability of having multiple significant tests in a row was computed. The probability of having 16 time slots significant in a row at a p<0.1 level due to random chance is 0.1^16 (p<0.0000000000000001). The probability of having 8 time slots significant in a row at a p<0.05 level due to random chance is 0.05^8 (p<0.0000000000390625). Hence, it can be concluded that for the 20 independent tests run, the significant difference in the trajectories was likely not due to alpha slippage.
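As a quick arithmetic check (illustrative only), these run probabilities are simply the per-test significance level raised to the number of consecutive significant slots, treating the tests as independent as the text assumes:

```python
p_run_16 = 0.1 ** 16   # 1e-16: 16 consecutive slots significant at p < 0.1
p_run_8 = 0.05 ** 8    # 3.90625e-11: 8 consecutive slots significant at p < 0.05
```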
- Next, by examining x-location by normalized time, the present inventors were able to measure spatial attraction toward an opposite choice. Similar to the present inventors' previous analysis, a linear mixed model predicting deception (control vs. key item) based on x-location was specified for each standardized time slot (101 independent tests conducted). Complementing the present inventors' previous findings, it was found that the normalized trajectories were significantly different at a p<0.1 level (z>1.282, n=30) from time slots 45-48 (4 sequential time slots), 55-69 (15 sequential time slots), and again in time slots 92-94 (3 sequential time slots).
- On the y-axis, the mouse trajectories during key questions (deceptive responses) showed increased hesitancy in moving upward. This was apparent in both the y-location by raw-time graph (
FIG. 6 ) and the y-location by normalized time graph (FIG. 7 ). The raw-time graph for y-locations (FIG. 6 ) revealed that participants started moving upward at approximately the same time when deceiving as they did when telling the truth. However, the rate of upward movement was slower. For example, the difference at the 451-525 ms time slot was 0.0675, whereas the difference at the 1426-1500 ms time slot was 0.3472. - Specifying a linear mixed model for each time period (20 independent tests were conducted), it was tested whether individuals' trajectories while being deceptive were significantly different from their trajectories while being truthful. It was found that the key item (deceptive) trajectories were significantly different at a p<0.1 level (z>1.282, n=30) from 451 ms to 1500 ms (15 sequential time slots); and within this time period, the trajectories were different at a p<0.05 level (z>1.645, n=30) from 826 ms to 1500 ms (9 sequential time slots).
- Using the y-location by normalized time, vertical hesitancy toward answering a key question deceptively was measured. Specifying a linear mixed model for each time slot (101 independent test runs), it was tested whether the deceptive and truthful trajectories were significantly different. It was found that the key item trajectories were significantly different at a p<0.1 level (z>1.282, n=30) from time slots 56-74 (19 sequential time slots) and significantly different at a p<0.05 level (z>1.645, n=30) from time slots 64-70 (7 sequential time slots). The trajectories were again different at a p<0.1 level (z>1.282, n=30) on the y-axis near the end of the movement from time slots 88-92 (5 sequential time slots) and, within this, at a p<0.05 level (z>1.645, n=30) from 89-90 (2 sequential time slots).
- As the rates of movement along the x-axis and y-axis were slower for deceptive responses than for truthful responses, deceptive responses unsurprisingly also had a slower overall velocity. See
FIG. 8 . Specifying a linear mixed model for each time slot (20 independent tests), it was found that deceptive responses showed a significantly lower velocity at the peaks in FIG. 8 from 376 ms-675 ms (4 sequential time slots) and from 901 ms to 1200 ms (4 sequential time slots) at a p<0.1 level (z>1.282, n=30). When examining the mean velocity across the entire movement, guilty participants showed significantly lower velocity on key questions (p<0.01, z=−2.494, n=30). See FIG. 9 . Velocity on key questions was nearly half that on control questions. - Further supporting the finding that trajectories show attraction toward the truthful answer while moving toward the deceptive answer, data analysis showed that when guilty participants were deceptive, they actually had movement toward the truthful answer for a short period of time before moving toward the deceptive answer, as shown in
FIG. 10 . In this figure, any value over 90 degrees indicates movement along the x-axis in the opposite direction (going left toward the truthful response). As seen in the chart, deceptive responses on average move toward the truthful answer for a few hundred milliseconds before totally committing to the deceptive answer. Specifying a linear mixed model for each time period (20 independent tests), it was found that this difference is significant from 601 ms to 1050 ms at a p<0.1 level (z>1.282, n=30) (6 sequential time slots) and, within this time frame, significant from 601 ms to 900 ms at a p<0.05 level (z>1.645, n=30) (4 sequential time slots). - Guilty and Innocent Key Item Trajectories:
- In this experiment, it was tested whether differences in mouse movement can be detected between how guilty and innocent people answer key items. The first test was whether an electrodermal response was present; the next test was whether differences in mousing behavior existed.
- Electrodermal Response:
- Typically, only within-subject comparisons are made in a polygraph examination because of individual differences. Hence, a comparison of electrodermal activity in how guilty and innocent people answer key questions is not normally conducted. The present inventors cross-validated this in the experiment and found no differences in electrodermal responses between how innocent and guilty people answered key questions.
- Mousing Behavior:
- Although electrodermal responses did not reveal differences, it was found that mousing behavior did show a significant difference. Guilty individuals showed a more tentative commitment to their selected answer, being more attracted toward the opposite answer (for the guilty individual, the truthful answer), than did the innocent individuals on key items.
FIG. 11 is a graph of the x-location by normalized time slots for guilty and innocent participants while answering the key questions. As can be seen, guilty individuals' mouse trajectories were more biased toward the opposite (i.e., the truthful) choice than those of innocent individuals. Using a series of t-tests (e.g., Duran et al. 2010) for each normalized time slot (total of 101 independent t-tests), it was found that the innocent and guilty participant trajectories are significantly different at a p<0.1 level (t>1.295, df=65) from time slots 1-9 (9 sequential time slots), 25-39 (15 sequential time slots), and 72-101 (30 sequential time slots). Within these intervals, the trajectories were significantly different at a p<0.05 level (t>1.669, df=65) from time slots 1-2 (2 sequential time slots), 5-6 (2 sequential time slots), 28-36 (9 sequential time slots), and 73-101 (29 sequential time slots). -
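For illustration, this slot-by-slot between-subject comparison might be sketched as follows; the array names and the (n_participants, 101) shape are assumptions:

```python
from scipy import stats

def slotwise_ttests(guilty, innocent):
    """Independent-samples t-test at each of the 101 normalized time slots.
    `guilty` and `innocent` are numpy arrays of shape (n_participants, 101)
    holding x- or y-locations; returns one (t, p) result per slot."""
    return [stats.ttest_ind(guilty[:, i], innocent[:, i])
            for i in range(guilty.shape[1])]
```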
FIG. 12 is a graph of the y-location by normalized time slots for guilty and innocent participants while answering the key questions. As can be seen, guilty individuals' mouse trajectories also were more hesitant in moving upward toward the deceptive answer than were innocent participants moving upward toward the truthful answer. Using a series of t-tests (e.g., Duran et al. 2010) for each normalized time slot (total of 101 independent t-tests), we found that the innocent and guilty participant trajectories are significantly different at a p<0.1 level (t>1.295, df=65) from time slots 74-92 (19 sequential time slots). Within these intervals, the trajectories were significantly different at a p<0.05 level (t>1.669, df=65) from time slots 80-85 (6 sequential time slots). - In the normal administration of the polygraph, an analysis between how guilty and innocent people answer key items is not normally done; as expected, electrodermal responses were not able to differentiate between the guilty and innocent. However, mouse movements were able to significantly differentiate between the guilty and innocent.
- Guilty and Innocent Control Item:
- It was also tested whether differences exist in how guilty and innocent people answer control items. In this case, the difference is believed to be due solely to the arousal associated with committing the mock theft, and not due to being deceptive on a question.
- Electrodermal Response:
- The polygraph assumes that no significant electrodermal difference will be found between innocent and guilty participants when answering control questions. Supporting this assumption, our analysis of electrodermal responses revealed no significant differences between innocent and guilty participants when responding to control items.
- Mousing Behavior:
- Although no differences were found in electrodermal data, differences in mouse behavior were found that may be suggestive of a task-induced search bias by guilty participants (a fundamentally different cognitive response compared to being deceptive).
FIG. 13 and FIG. 14 show the x- and y-locations, respectively, by normalized time slots for guilty and innocent responses to control questions. As a reminder, each participant answered 4 control questions regardless of whether they were guilty or innocent. To test for significant differences in trajectories, a linear mixed model was conducted nesting participants' responses within each control item (e.g., finding anomalies from the baseline within each of the 4 control items through examining z-scores). - The significant difference in x-locations took place at the beginning of the mouse trajectory.
FIG. 13 . Trajectories between guilty and innocent individuals were different at a p<0.1 level (z>1.282, n=66) between time slots 12-31 (20 sequential time slots) and, within this, different at a p<0.05 level (z>1.645, n=66) between slots 15-28 (14 sequential time slots). Whereas the innocent person started moving horizontally almost immediately to answer the question, the guilty person had a small hesitancy before committing to the answer. However, this difference only lasted a short while, after which the guilty person had moved as far as or further horizontally along the x-axis than the innocent person.
- This mousing behavior is suggestive of a task-induced search bias: Anticipating a question that will incriminate them, guilty insiders take a fraction of a second longer to determine how to respond (shown on the x-axis) rather than habitually responding as innocent respondents do. Realizing that the question is irrelevant to the crime, they make a quick and efficient move toward the correct answer catching up to innocent participants on the x-axis and passing them on the y-axis.
- The foregoing discussion of the invention has been presented for purposes of illustration and description. The foregoing is not intended to limit the invention to the form or forms disclosed herein. Although the description of the invention has included description of one or more embodiments and certain variations and modifications, other variations and modifications are within the scope of the invention, e.g., as may be within the skill and knowledge of those in the art, after understanding the present disclosure. It is intended to obtain rights which include alternative embodiments to the extent permitted, including alternate, interchangeable and/or equivalent structures, functions, ranges or steps to those claimed, whether or not such alternate, interchangeable and/or equivalent structures, functions, ranges or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter. All references cited herein are incorporated by reference in their entirety.
Claims (31)
1-14. (canceled)
15. A method for analyzing behavior comprising:
receiving, from a subject, behavioral biometric information comprised of an input device usage characteristic;
translating the behavioral biometric information into representative data;
comparing the representative data against reference behavioral biometric data, wherein the reference behavioral biometric data is comprised of the subject's behavioral biometric data and the behavioral biometric data of a plurality of non-subjects; and
outputting a result associated with the subject's deceptive intent.
16. The method of claim 15 , wherein the input device usage characteristic is received from a pointing device.
17. The method of claim 16 , wherein the pointing device is a mouse, a joystick, a stylus, a trackball, a touch screen, or a touch pad.
18. The method of claim 15 , wherein the input device characteristic is finger movement, precise timing, or applied pressure between an initial position of a pointer and a second position associated with an input selected by the subject.
19. The method of claim 15 , further comprising requesting a second input from the subject based on the result associated with the subject's deceptive intent.
20. The method of claim 19 , wherein the second input is selected using a decision tree structure.
21. The method of claim 16 , wherein the input device usage characteristic is detected 20 to 100 milliseconds after a signal is generated by the pointing device.
22. The method of claim 15 , wherein the representative data is averaged into fixed time intervals.
23. The method of claim 15 , wherein the comparison of representative data against reference behavioral biometric data occurs at a remote server.
24. The method of claim 15 , wherein the result associated with the subject's deceptive intent is searchable on a management dashboard application.
25. The method of claim 15 , wherein the behavioral biometric information is a number of times the subject changes an answer to a query.
26. The method of claim 15 , wherein the behavioral biometric information is a total response time associated with the subject.
27. The method of claim 15 , wherein the behavioral biometric information is an additional distance traveled by a cursor on the screen relative to an idealized response trajectory for said cursor.
28. The method of claim 15 , wherein the behavioral biometric information is an amount of time between key presses.
29. The method of claim 16 , wherein the behavioral biometric information is an average overall speed of the pointing device.
30. A non-transitory computer-readable medium that stores a program for analyzing behavior that, when executed, causes a processor to:
receive, from a subject, behavioral biometric information comprised of an input device usage characteristic;
translate the behavioral biometric information into representative data;
compare the representative data against reference behavioral biometric data, wherein the reference behavioral biometric data is comprised of the subject's behavioral biometric data and the behavioral biometric data of a plurality of non-subjects; and
output a result associated with the subject's deceptive intent.
31. The non-transitory computer-readable medium of claim 30 , wherein the input device usage characteristic is received from a pointing device.
32. The non-transitory computer-readable medium of claim 31 , wherein the pointing device is a mouse, a joystick, a stylus, a trackball, a touch screen, or a touch pad.
33. The non-transitory computer-readable medium of claim 30 , wherein the input device characteristic is finger movement, precise timing, or applied pressure between an initial position of a pointer and a second position associated with an input selected by the subject.
34. The non-transitory computer-readable medium of claim 30 , wherein the program, when executed, further requests a second input from the subject based on the result associated with the subject's deceptive intent.
35. The non-transitory computer-readable medium of claim 34 , wherein the second input is selected using a decision tree structure.
36. The non-transitory computer-readable medium of claim 31 , wherein the input device usage characteristic is detected 20 to 100 milliseconds after a signal is generated by the pointing device.
37. The non-transitory computer-readable medium of claim 30 , wherein the representative data is averaged into fixed time intervals.
38. The non-transitory computer-readable medium of claim 30 , wherein the program, when executed, transmits the representative data to a remote server for comparison against reference behavioral biometric data.
39. The non-transitory computer-readable medium of claim 30 , wherein the result associated with the subject's deceptive intent is searchable on a management dashboard application.
40. The non-transitory computer-readable medium of claim 30 , wherein the behavioral biometric information is a number of times the subject changes an answer to a query.
41. The non-transitory computer-readable medium of claim 30 , wherein the behavioral biometric information is a total response time associated with the subject.
42. The non-transitory computer-readable medium of claim 30 , wherein the behavioral biometric information is an additional distance traveled by a cursor on the screen relative to an idealized response trajectory for said cursor.
43. The non-transitory computer-readable medium of claim 30 , wherein the behavioral biometric information is an amount of time between key presses.
44. The non-transitory computer-readable medium of claim 31 , wherein the behavioral biometric information is an average overall speed of the pointing device.
US11134086B2 (en) | 2016-06-10 | 2021-09-28 | OneTrust, LLC | Consent conversion optimization systems and related methods |
US11416798B2 (en) | 2016-06-10 | 2022-08-16 | OneTrust, LLC | Data processing systems and methods for providing training in a vendor procurement process |
US10510031B2 (en) | 2016-06-10 | 2019-12-17 | OneTrust, LLC | Data processing systems for identifying, assessing, and remediating data processing risks using data modeling techniques |
US10706174B2 (en) | 2016-06-10 | 2020-07-07 | OneTrust, LLC | Data processing systems for prioritizing data subject access requests for fulfillment and related methods |
US10685140B2 (en) | 2016-06-10 | 2020-06-16 | OneTrust, LLC | Consent receipt management systems and related methods |
US10454973B2 (en) | 2016-06-10 | 2019-10-22 | OneTrust, LLC | Data processing systems for data-transfer risk identification, cross-border visualization generation, and related methods |
US10642870B2 (en) | 2016-06-10 | 2020-05-05 | OneTrust, LLC | Data processing systems and methods for automatically detecting and documenting privacy-related aspects of computer software |
US11222139B2 (en) | 2016-06-10 | 2022-01-11 | OneTrust, LLC | Data processing systems and methods for automatic discovery and assessment of mobile software development kits |
US11328092B2 (en) | 2016-06-10 | 2022-05-10 | OneTrust, LLC | Data processing systems for processing and managing data subject access in a distributed environment |
US11438386B2 (en) | 2016-06-10 | 2022-09-06 | OneTrust, LLC | Data processing systems for data-transfer risk identification, cross-border visualization generation, and related methods |
US11151233B2 (en) | 2016-06-10 | 2021-10-19 | OneTrust, LLC | Data processing and scanning systems for assessing vendor risk |
US10853501B2 (en) * | 2016-06-10 | 2020-12-01 | OneTrust, LLC | Data processing and scanning systems for assessing vendor risk |
US10726158B2 (en) | 2016-06-10 | 2020-07-28 | OneTrust, LLC | Consent receipt management and automated process blocking systems and related methods |
US11087260B2 (en) | 2016-06-10 | 2021-08-10 | OneTrust, LLC | Data processing systems and methods for customizing privacy training |
US11138242B2 (en) | 2016-06-10 | 2021-10-05 | OneTrust, LLC | Data processing systems and methods for automatically detecting and documenting privacy-related aspects of computer software |
US11586700B2 (en) | 2016-06-10 | 2023-02-21 | OneTrust, LLC | Data processing systems and methods for automatically blocking the use of tracking tools |
US10318761B2 (en) | 2016-06-10 | 2019-06-11 | OneTrust, LLC | Data processing systems and methods for auditing data request compliance |
US10769301B2 (en) | 2016-06-10 | 2020-09-08 | OneTrust, LLC | Data processing systems for webform crawling to map processing activities and related methods |
US10776517B2 (en) | 2016-06-10 | 2020-09-15 | OneTrust, LLC | Data processing systems for calculating and communicating cost of fulfilling data subject access requests and related methods |
US10997315B2 (en) | 2016-06-10 | 2021-05-04 | OneTrust, LLC | Data processing systems for fulfilling data subject access requests and related methods |
US11074367B2 (en) | 2016-06-10 | 2021-07-27 | OneTrust, LLC | Data processing systems for identity validation for consumer rights requests and related methods |
US11625502B2 (en) | 2016-06-10 | 2023-04-11 | OneTrust, LLC | Data processing systems for identifying and modifying processes that are subject to data subject access requests |
US11475136B2 (en) | 2016-06-10 | 2022-10-18 | OneTrust, LLC | Data processing systems for data transfer risk identification and related methods |
US10944725B2 (en) | 2016-06-10 | 2021-03-09 | OneTrust, LLC | Data processing systems and methods for using a data model to select a target data asset in a data migration |
US10740487B2 (en) | 2016-06-10 | 2020-08-11 | OneTrust, LLC | Data processing systems and methods for populating and maintaining a centralized database of personal data |
US10503926B2 (en) | 2016-06-10 | 2019-12-10 | OneTrust, LLC | Consent receipt management systems and related methods |
US11366909B2 (en) | 2016-06-10 | 2022-06-21 | OneTrust, LLC | Data processing and scanning systems for assessing vendor risk |
US10848523B2 (en) | 2016-06-10 | 2020-11-24 | OneTrust, LLC | Data processing systems for data-transfer risk identification, cross-border visualization generation, and related methods |
US10839102B2 (en) | 2016-06-10 | 2020-11-17 | OneTrust, LLC | Data processing systems for identifying and modifying processes that are subject to data subject access requests |
US11636171B2 (en) | 2016-06-10 | 2023-04-25 | OneTrust, LLC | Data processing user interface monitoring systems and related methods |
US10467432B2 (en) | 2016-06-10 | 2019-11-05 | OneTrust, LLC | Data processing systems for use in automatically generating, populating, and submitting data subject access requests |
US11200341B2 (en) | 2016-06-10 | 2021-12-14 | OneTrust, LLC | Consent receipt management systems and related methods |
US10878127B2 (en) | 2016-06-10 | 2020-12-29 | OneTrust, LLC | Data subject access request processing systems and related methods |
US10776518B2 (en) | 2016-06-10 | 2020-09-15 | OneTrust, LLC | Consent receipt management systems and related methods |
US11301796B2 (en) | 2016-06-10 | 2022-04-12 | OneTrust, LLC | Data processing systems and methods for customizing privacy training |
US11025675B2 (en) | 2016-06-10 | 2021-06-01 | OneTrust, LLC | Data processing systems and methods for performing privacy assessments and monitoring of new versions of computer code for privacy compliance |
US10496846B1 (en) | 2016-06-10 | 2019-12-03 | OneTrust, LLC | Data processing and communications systems and methods for the efficient implementation of privacy by design |
US10242228B2 (en) | 2016-06-10 | 2019-03-26 | OneTrust, LLC | Data processing systems for measuring privacy maturity within an organization |
US10846433B2 (en) | 2016-06-10 | 2020-11-24 | OneTrust, LLC | Data processing consent management systems and related methods |
US11228620B2 (en) | 2016-06-10 | 2022-01-18 | OneTrust, LLC | Data processing systems for data-transfer risk identification, cross-border visualization generation, and related methods |
US10509920B2 (en) | 2016-06-10 | 2019-12-17 | OneTrust, LLC | Data processing systems for processing data subject access requests |
US11057356B2 (en) | 2016-06-10 | 2021-07-06 | OneTrust, LLC | Automated data processing systems and methods for automatically processing data subject access requests using a chatbot |
US10796260B2 (en) | 2016-06-10 | 2020-10-06 | OneTrust, LLC | Privacy management systems and methods |
US10873606B2 (en) | 2016-06-10 | 2020-12-22 | OneTrust, LLC | Data processing systems for data-transfer risk identification, cross-border visualization generation, and related methods |
US11157600B2 (en) | 2016-06-10 | 2021-10-26 | OneTrust, LLC | Data processing and scanning systems for assessing vendor risk |
US11416589B2 (en) | 2016-06-10 | 2022-08-16 | OneTrust, LLC | Data processing and scanning systems for assessing vendor risk |
US12045266B2 (en) | 2016-06-10 | 2024-07-23 | OneTrust, LLC | Data processing systems for generating and populating a data inventory |
US10586075B2 (en) | 2016-06-10 | 2020-03-10 | OneTrust, LLC | Data processing systems for orphaned data identification and deletion and related methods |
US10607028B2 (en) | 2016-06-10 | 2020-03-31 | OneTrust, LLC | Data processing systems for data testing to confirm data deletion and related methods |
US10585968B2 (en) | 2016-06-10 | 2020-03-10 | OneTrust, LLC | Data processing systems for fulfilling data subject access requests and related methods |
US10565161B2 (en) | 2016-06-10 | 2020-02-18 | OneTrust, LLC | Data processing systems for processing data subject access requests |
US11295316B2 (en) | 2016-06-10 | 2022-04-05 | OneTrust, LLC | Data processing systems for identity validation for consumer rights requests and related methods |
US11675929B2 (en) | 2016-06-10 | 2023-06-13 | OneTrust, LLC | Data processing consent sharing systems and related methods |
US11222142B2 (en) | 2016-06-10 | 2022-01-11 | OneTrust, LLC | Data processing systems for validating authorization for personal data collection, storage, and processing |
US11100444B2 (en) | 2016-06-10 | 2021-08-24 | OneTrust, LLC | Data processing systems and methods for providing training in a vendor procurement process |
US11562097B2 (en) | 2016-06-10 | 2023-01-24 | OneTrust, LLC | Data processing systems for central consent repository and related methods |
US12118121B2 (en) | 2016-06-10 | 2024-10-15 | OneTrust, LLC | Data subject access request processing systems and related methods |
US11651106B2 (en) | 2016-06-10 | 2023-05-16 | OneTrust, LLC | Data processing systems for fulfilling data subject access requests and related methods |
US10592692B2 (en) | 2016-06-10 | 2020-03-17 | OneTrust, LLC | Data processing systems for central consent repository and related methods |
US11210420B2 (en) | 2016-06-10 | 2021-12-28 | OneTrust, LLC | Data subject access request processing systems and related methods |
US10706379B2 (en) | 2016-06-10 | 2020-07-07 | OneTrust, LLC | Data processing systems for automatic preparation for remediation and related methods |
US11392720B2 (en) | 2016-06-10 | 2022-07-19 | OneTrust, LLC | Data processing systems for verification of consent and notice processing and related methods |
US11294939B2 (en) | 2016-06-10 | 2022-04-05 | OneTrust, LLC | Data processing systems and methods for automatically detecting and documenting privacy-related aspects of computer software |
US10949170B2 (en) | 2016-06-10 | 2021-03-16 | OneTrust, LLC | Data processing systems for integration of consumer feedback with data subject access requests and related methods |
US11138299B2 (en) | 2016-06-10 | 2021-10-05 | OneTrust, LLC | Data processing and scanning systems for assessing vendor risk |
US10885485B2 (en) | 2016-06-10 | 2021-01-05 | OneTrust, LLC | Privacy management systems and methods |
US11651104B2 (en) | 2016-06-10 | 2023-05-16 | OneTrust, LLC | Consent receipt management systems and related methods |
US10776514B2 (en) | 2016-06-10 | 2020-09-15 | OneTrust, LLC | Data processing systems for the identification and deletion of personal data in computer systems |
US10509894B2 (en) | 2016-06-10 | 2019-12-17 | OneTrust, LLC | Data processing and scanning systems for assessing vendor risk |
US11144622B2 (en) | 2016-06-10 | 2021-10-12 | OneTrust, LLC | Privacy management systems and methods |
US10949565B2 (en) | 2016-06-10 | 2021-03-16 | OneTrust, LLC | Data processing systems for generating and populating a data inventory |
US12052289B2 (en) | 2016-06-10 | 2024-07-30 | OneTrust, LLC | Data processing systems for data-transfer risk identification, cross-border visualization generation, and related methods |
US10496803B2 (en) | 2016-06-10 | 2019-12-03 | OneTrust, LLC | Data processing systems and methods for efficiently assessing the risk of privacy campaigns |
US10169609B1 (en) | 2016-06-10 | 2019-01-01 | OneTrust, LLC | Data processing systems for fulfilling data subject access requests and related methods |
US11520928B2 (en) | 2016-06-10 | 2022-12-06 | OneTrust, LLC | Data processing systems for generating personal data receipts and related methods |
US11277448B2 (en) | 2016-06-10 | 2022-03-15 | OneTrust, LLC | Data processing systems for data-transfer risk identification, cross-border visualization generation, and related methods |
US11416109B2 (en) | 2016-06-10 | 2022-08-16 | OneTrust, LLC | Automated data processing systems and methods for automatically processing data subject access requests using a chatbot |
US11481710B2 (en) | 2016-06-10 | 2022-10-25 | OneTrust, LLC | Privacy management systems and methods |
US11188862B2 (en) | 2016-06-10 | 2021-11-30 | OneTrust, LLC | Privacy management systems and methods |
US11238390B2 (en) | 2016-06-10 | 2022-02-01 | OneTrust, LLC | Privacy management systems and methods |
US10896394B2 (en) | 2016-06-10 | 2021-01-19 | OneTrust, LLC | Privacy management systems and methods |
US10708305B2 (en) | 2016-06-10 | 2020-07-07 | OneTrust, LLC | Automated data processing systems and methods for automatically processing requests for privacy-related information |
US10282700B2 (en) | 2016-06-10 | 2019-05-07 | OneTrust, LLC | Data processing systems for generating and populating a data inventory |
US11146566B2 (en) | 2016-06-10 | 2021-10-12 | OneTrust, LLC | Data processing systems for fulfilling data subject access requests and related methods |
US10803200B2 (en) | 2016-06-10 | 2020-10-13 | OneTrust, LLC | Data processing systems for processing and managing data subject access in a distributed environment |
US10762236B2 (en) | 2016-06-10 | 2020-09-01 | OneTrust, LLC | Data processing user interface monitoring systems and related methods |
US11222309B2 (en) | 2016-06-10 | 2022-01-11 | OneTrust, LLC | Data processing systems for generating and populating a data inventory |
US11341447B2 (en) | 2016-06-10 | 2022-05-24 | OneTrust, LLC | Privacy management systems and methods |
US10606916B2 (en) | 2016-06-10 | 2020-03-31 | OneTrust, LLC | Data processing user interface monitoring systems and related methods |
US11416590B2 (en) | 2016-06-10 | 2022-08-16 | OneTrust, LLC | Data processing and scanning systems for assessing vendor risk |
US10592648B2 (en) | 2016-06-10 | 2020-03-17 | OneTrust, LLC | Consent receipt management systems and related methods |
US10783256B2 (en) | 2016-06-10 | 2020-09-22 | OneTrust, LLC | Data processing systems for data transfer risk identification and related methods |
US11343284B2 (en) | 2016-06-10 | 2022-05-24 | OneTrust, LLC | Data processing systems and methods for performing privacy assessments and monitoring of new versions of computer code for privacy compliance |
US11366786B2 (en) | 2016-06-10 | 2022-06-21 | OneTrust, LLC | Data processing systems for processing data subject access requests |
US10997318B2 (en) | 2016-06-10 | 2021-05-04 | OneTrust, LLC | Data processing systems for generating and populating a data inventory for processing data access requests |
US11188615B2 (en) | 2016-06-10 | 2021-11-30 | OneTrust, LLC | Data processing consent capture systems and related methods |
US10353673B2 (en) | 2016-06-10 | 2019-07-16 | OneTrust, LLC | Data processing systems for integration of consumer feedback with data subject access requests and related methods |
US10284604B2 (en) | 2016-06-10 | 2019-05-07 | OneTrust, LLC | Data processing and scanning systems for generating and populating a data inventory |
US11403377B2 (en) | 2016-06-10 | 2022-08-02 | OneTrust, LLC | Privacy management systems and methods |
US11336697B2 (en) | 2016-06-10 | 2022-05-17 | OneTrust, LLC | Data processing systems for data-transfer risk identification, cross-border visualization generation, and related methods |
US11354435B2 (en) | 2016-06-10 | 2022-06-07 | OneTrust, LLC | Data processing systems for data testing to confirm data deletion and related methods |
US10706176B2 (en) | 2016-06-10 | 2020-07-07 | OneTrust, LLC | Data-processing consent refresh, re-prompt, and recapture systems and related methods |
US10798133B2 (en) | 2016-06-10 | 2020-10-06 | OneTrust, LLC | Data processing systems for data-transfer risk identification, cross-border visualization generation, and related methods |
US10565397B1 (en) | 2016-06-10 | 2020-02-18 | OneTrust, LLC | Data processing systems for fulfilling data subject access requests and related methods |
US10282559B2 (en) | 2016-06-10 | 2019-05-07 | OneTrust, LLC | Data processing systems for identifying, assessing, and remediating data processing risks using data modeling techniques |
US11023842B2 (en) | 2016-06-10 | 2021-06-01 | OneTrust, LLC | Data processing systems and methods for bundled privacy policies |
US10572686B2 (en) | 2016-06-10 | 2020-02-25 | OneTrust, LLC | Consent receipt management systems and related methods |
US10416966B2 (en) | 2016-06-10 | 2019-09-17 | OneTrust, LLC | Data processing systems for identity validation of data subject access requests and related methods |
US10614247B2 (en) | 2016-06-10 | 2020-04-07 | OneTrust, LLC | Data processing systems for automated classification of personal information from documents and related methods |
US11418492B2 (en) | 2016-06-10 | 2022-08-16 | OneTrust, LLC | Data processing systems and methods for using a data model to select a target data asset in a data migration |
US11461500B2 (en) | 2016-06-10 | 2022-10-04 | OneTrust, LLC | Data processing systems for cookie compliance testing with website scanning and related methods |
US10713387B2 (en) | 2016-06-10 | 2020-07-14 | OneTrust, LLC | Consent conversion optimization systems and related methods |
US10909488B2 (en) | 2016-06-10 | 2021-02-02 | OneTrust, LLC | Data processing systems for assessing readiness for responding to privacy-related incidents |
US11038925B2 (en) | 2016-06-10 | 2021-06-15 | OneTrust, LLC | Data processing systems for data-transfer risk identification, cross-border visualization generation, and related methods |
US10915644B2 (en) | 2017-05-15 | 2021-02-09 | Forcepoint, LLC | Collecting data for centralized use in an adaptive trust profile event via an endpoint |
US10862927B2 (en) | 2017-05-15 | 2020-12-08 | Forcepoint, LLC | Dividing events into sessions during adaptive trust profile operations |
US10999296B2 (en) | 2017-05-15 | 2021-05-04 | Forcepoint, LLC | Generating adaptive trust profiles using information derived from similarly situated organizations |
US10129269B1 (en) | 2017-05-15 | 2018-11-13 | Forcepoint, LLC | Managing blockchain access to user profile information |
US9882918B1 (en) | 2017-05-15 | 2018-01-30 | Forcepoint, LLC | User behavior profile in a blockchain |
US10917423B2 (en) | 2017-05-15 | 2021-02-09 | Forcepoint, LLC | Intelligently differentiating between different types of states and attributes when using an adaptive trust profile |
US10999297B2 (en) | 2017-05-15 | 2021-05-04 | Forcepoint, LLC | Using expected behavior of an entity when prepopulating an adaptive trust profile |
EP3410329B1 (en) * | 2017-05-31 | 2022-04-06 | Deutsche Telekom AG | Method and system for detecting irregular inputs for data processing applications |
EP3410328A1 (en) * | 2017-05-31 | 2018-12-05 | Deutsche Telekom AG | Method and system to distinguish between a human and a robot as a user of a mobile smart device |
US10013577B1 (en) | 2017-06-16 | 2018-07-03 | OneTrust, LLC | Data processing systems for identifying whether cookies contain personally identifying information |
JP6972689B2 (en) * | 2017-06-16 | 2021-11-24 | コニカミノルタ株式会社 | Data processing device, data processing execution control method and program |
US10318729B2 (en) * | 2017-07-26 | 2019-06-11 | Forcepoint, LLC | Privacy protection during insider threat monitoring |
US11294898B2 (en) | 2017-07-31 | 2022-04-05 | Pearson Education, Inc. | System and method of automated assessment generation |
US10719592B1 (en) | 2017-09-15 | 2020-07-21 | Wells Fargo Bank, N.A. | Input/output privacy tool |
US10719832B1 (en) | 2018-01-12 | 2020-07-21 | Wells Fargo Bank, N.A. | Fraud prevention tool |
WO2019213376A1 (en) * | 2018-05-02 | 2019-11-07 | Arizona Board Of Regents On Behalf Of The University Of Arizona | Systems and methods for detecting unauthorized file access |
US10803202B2 (en) | 2018-09-07 | 2020-10-13 | OneTrust, LLC | Data processing systems for orphaned data identification and deletion and related methods |
US11144675B2 (en) | 2018-09-07 | 2021-10-12 | OneTrust, LLC | Data processing systems and methods for automatically protecting sensitive data within privacy management systems |
US11544409B2 (en) | 2018-09-07 | 2023-01-03 | OneTrust, LLC | Data processing systems and methods for automatically protecting sensitive data within privacy management systems |
US11159520B1 (en) | 2018-12-20 | 2021-10-26 | Wells Fargo Bank, N.A. | Systems and methods for passive continuous session authentication |
US11095641B1 (en) | 2018-12-20 | 2021-08-17 | Wells Fargo Bank, N.A. | Systems and methods for passive continuous session authentication |
CA3132057A1 (en) * | 2019-03-01 | 2020-09-10 | Mastercard Technologies Canada ULC | Multi-page online application origination (oao) service for fraud prevention systems |
WO2020178209A1 (en) | 2019-03-07 | 2020-09-10 | British Telecommunications Public Limited Company | Multi-level classifier based access control |
US10853496B2 (en) | 2019-04-26 | 2020-12-01 | Forcepoint, LLC | Adaptive trust profile behavioral fingerprint |
US11928683B2 (en) | 2019-10-01 | 2024-03-12 | Mastercard Technologies Canada ULC | Feature encoding in online application origination (OAO) service for a fraud prevention system |
CN111603183B (en) * | 2020-05-11 | 2022-09-16 | 天津印测科技有限公司 | System and method for obtaining symptom-free data through mobile internet progressive exploration stimulation |
US11797528B2 (en) | 2020-07-08 | 2023-10-24 | OneTrust, LLC | Systems and methods for targeted data discovery |
WO2022026564A1 (en) | 2020-07-28 | 2022-02-03 | OneTrust, LLC | Systems and methods for automatically blocking the use of tracking tools |
WO2022032072A1 (en) | 2020-08-06 | 2022-02-10 | OneTrust, LLC | Data processing systems and methods for automatically redacting unstructured data from a data subject access request |
US11436373B2 (en) | 2020-09-15 | 2022-09-06 | OneTrust, LLC | Data processing systems and methods for detecting tools for the automatic blocking of consent requests |
WO2022061270A1 (en) | 2020-09-21 | 2022-03-24 | OneTrust, LLC | Data processing systems and methods for automatically detecting target data transfers and target data processing |
US11397819B2 (en) | 2020-11-06 | 2022-07-26 | OneTrust, LLC | Systems and methods for identifying data processing activities based on data discovery results |
US11687528B2 (en) | 2021-01-25 | 2023-06-27 | OneTrust, LLC | Systems and methods for discovery, classification, and indexing of data in a native computing system |
WO2022170047A1 (en) | 2021-02-04 | 2022-08-11 | OneTrust, LLC | Managing custom attributes for domain objects defined within microservices |
US11494515B2 (en) | 2021-02-08 | 2022-11-08 | OneTrust, LLC | Data processing systems and methods for anonymizing data samples in classification analysis |
US20240098109A1 (en) | 2021-02-10 | 2024-03-21 | OneTrust, LLC | Systems and methods for mitigating risks of third-party computing system functionality integration into a first-party computing system |
WO2022178089A1 (en) | 2021-02-17 | 2022-08-25 | OneTrust, LLC | Managing custom workflows for domain objects defined within microservices |
WO2022178219A1 (en) | 2021-02-18 | 2022-08-25 | OneTrust, LLC | Selective redaction of media content |
US11533315B2 (en) | 2021-03-08 | 2022-12-20 | OneTrust, LLC | Data transfer discovery and analysis systems and related methods |
US11562078B2 (en) | 2021-04-16 | 2023-01-24 | OneTrust, LLC | Assessing and managing computational risk involved with integrating third party computing functionality within a computing system |
US12079423B2 (en) | 2021-05-27 | 2024-09-03 | Jonathan White | Rapidly capturing user input |
IT202100016136A1 (en) | 2021-06-21 | 2022-12-21 | G Lab S R L | User identification method during access via computer, and related system |
US11620142B1 (en) | 2022-06-03 | 2023-04-04 | OneTrust, LLC | Generating and customizing user interfaces for demonstrating functions of interactive user environments |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6307956B1 (en) * | 1998-04-07 | 2001-10-23 | Gerald R. Black | Writing implement for identity verification system |
US6539101B1 (en) * | 1998-04-07 | 2003-03-25 | Gerald R. Black | Method for identity verification |
US20020054082A1 (en) | 1999-01-02 | 2002-05-09 | Karpf Ronald S. | System and method for providing accurate geocoding of responses to location questions in a computer assisted self interview |
US7961917B2 (en) * | 1999-02-10 | 2011-06-14 | Pen-One, Inc. | Method for identity verification |
US6871287B1 (en) * | 2000-01-21 | 2005-03-22 | John F. Ellingson | System and method for verification of identity |
US7222360B1 (en) * | 2002-11-27 | 2007-05-22 | Sprint Communications Company L.P. | Continuous biometric authentication using frame preamble for biometric data |
US20040221171A1 (en) | 2003-05-02 | 2004-11-04 | Ahmed Ahmed Awad E. | Intrusion detector based on mouse dynamics analysis |
US7245218B2 (en) * | 2003-09-12 | 2007-07-17 | Curtis Satoru Ikehara | Input device to continuously detect biometrics |
US20070191691A1 (en) | 2005-05-19 | 2007-08-16 | Martin Polanco | Identification of guilty knowledge and malicious intent |
US8739278B2 (en) * | 2006-04-28 | 2014-05-27 | Oracle International Corporation | Techniques for fraud monitoring and detection using application fingerprinting |
US8086730B2 (en) * | 2009-05-13 | 2011-12-27 | International Business Machines Corporation | Method and system for monitoring a workstation |
US9531733B2 (en) * | 2010-11-29 | 2016-12-27 | Biocatch Ltd. | Device, system, and method of detecting a remote access user |
US9536071B2 (en) * | 2010-11-29 | 2017-01-03 | Biocatch Ltd. | Method, device, and system of differentiating among users based on platform configurations |
US9703953B2 (en) * | 2010-11-29 | 2017-07-11 | Biocatch Ltd. | Method, device, and system of differentiating among users based on user classification |
US10037421B2 (en) * | 2010-11-29 | 2018-07-31 | Biocatch Ltd. | Device, system, and method of three-dimensional spatial user authentication |
US8793790B2 (en) | 2011-10-11 | 2014-07-29 | Honeywell International Inc. | System and method for insider threat detection |
US20140078061A1 (en) * | 2012-09-20 | 2014-03-20 | Teledyne Scientific & Imaging, Llc | Cognitive biometrics using mouse perturbation |
US10248804B2 (en) * | 2014-01-31 | 2019-04-02 | The Arizona Board Of Regents On Behalf Of The University Of Arizona | Fraudulent application detection system and method of use |
- 2014
  - 2014-06-18 US US14/899,865 patent/US10524713B2/en active Active
  - 2014-06-18 WO PCT/US2014/043056 patent/WO2014205148A1/en active Application Filing
  - 2014-06-18 WO PCT/US2014/043057 patent/WO2014205149A1/en active Application Filing
- 2019
  - 2019-11-22 US US16/691,637 patent/US20200163605A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
US10524713B2 (en) | 2020-01-07 |
WO2014205148A1 (en) | 2014-12-24 |
US20160143570A1 (en) | 2016-05-26 |
WO2014205149A1 (en) | 2014-12-24 |
Similar Documents
Publication | Title |
---|---|
US20200163605A1 (en) | Automated detection method for insider threat |
US9703952B2 (en) | Device and method for providing intent-based access control |
US10248804B2 (en) | Fraudulent application detection system and method of use |
Charlton et al. | Emotional experiences and motivating factors associated with fingerprint analysis |
Meixner et al. | A mock terrorism application of the P300-based concealed information test |
Dror et al. | The vision in "blind" justice: Expert perception, judgment, and visual cognition in forensic pattern recognition |
Khan et al. | Targeted mimicry attacks on touch input based implicit authentication schemes |
Almehmadi et al. | On the possibility of insider threat prevention using intent-based access control (IBAC) |
Revett et al. | A survey of user authentication based on mouse dynamics |
Shen et al. | Performance evaluation of anomaly-detection algorithms for mouse dynamics |
Verschuere et al. | What's on your mind? |
US20170119295A1 (en) | Automated Scientifically Controlled Screening Systems (ASCSS) |
US10559145B1 (en) | Systems and methods for providing behavioral based intention detection |
Sutrop et al. | From identity verification to behavior prediction: Ethical implications of second generation biometrics |
US20180365784A1 (en) | Methods and systems for detection of faked identity using unexpected questions and computer input dynamics |
Monaro et al. | Identity verification using a kinematic memory detection technique |
Noonan | Spy the lie: Detecting malicious insiders |
Labkovsky et al. | A novel dual probe complex trial protocol for detection of concealed information |
Almehmadi et al. | On the possibility of insider threat detection using physiological signal monitoring |
Liapis et al. | Subjective assessment of stress in HCI: a study of the valence-arousal scale using skin conductance |
Ryu et al. | A comprehensive survey of context-aware continuous implicit authentication in online learning environments |
Siahaan et al. | Spoofing keystroke dynamics authentication through synthetic typing pattern extracted from screen-recorded video |
US20200250547A1 (en) | Behavioral application detection system |
Furnham et al. | Attitudes toward surveillance: Personality, belief and value correlates |
Ozer et al. | Extreme Reactions to Globalization: Investigating Indirect, Longitudinal, and Experimental Effects of the Globalization-Radicalization Nexus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |