US20190081968A1 - Method and Apparatus for Network Fraud Detection and Remediation Through Analytics - Google Patents


Info

Publication number
US20190081968A1
Authority
US
United States
Prior art keywords
event
entity
risk assessment
engine
service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/703,943
Inventor
Yanlin Wang
Weizhi Li
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cyberark Software Ltd
Original Assignee
Idaptive LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Idaptive LLC filed Critical Idaptive LLC
Priority to US15/703,943 priority Critical patent/US20190081968A1/en
Assigned to GOLUB CAPITAL LLC, AS AGENT reassignment GOLUB CAPITAL LLC, AS AGENT INTELLECTUAL PROPERTY SECURITY AGREEMENT Assignors: CENTRIFY CORPORATION
Assigned to CENTRIFY CORPORATION reassignment CENTRIFY CORPORATION RELEASE OF SECURITY INTEREST UNDER REEL/FRAME 46081/0609 Assignors: GOLUB CAPITAL LLC
Assigned to IDAPTIVE, LLC reassignment IDAPTIVE, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CENTRIFY CORPORATION
Assigned to CENTRIFY CORPORATION reassignment CENTRIFY CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LI, WEIZHI, WANG, YANLIN
Assigned to APPS & ENDPOINT COMPANY, LLC reassignment APPS & ENDPOINT COMPANY, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CENTRIFY CORPORATION
Assigned to IDAPTIVE, LLC reassignment IDAPTIVE, LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: APPS & ENDPOINT COMPANY, LLC
Publication of US20190081968A1 publication Critical patent/US20190081968A1/en
Assigned to CYBERARK SOFTWARE LTD. reassignment CYBERARK SOFTWARE LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CYBERARK SOFTWARE, INC.
Assigned to CYBERARK SOFTWARE, INC. reassignment CYBERARK SOFTWARE, INC. MERGER (SEE DOCUMENT FOR DETAILS). Assignors: IDAPTIVE, LLC
Priority to US17/108,612 priority patent/US11902307B2/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1425 Traffic logging, e.g. anomaly detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50 Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50 Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55 Detecting local intrusion or implementing counter-measures
    • G06F21/552 Detecting local intrusion or implementing counter-measures involving long-term monitoring or reporting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/10 Network architectures or network communication protocols for network security for controlling access to devices or network resources
    • H04L63/102 Entity profiles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1441 Countermeasures against malicious traffic
    • H04L67/22
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N99/005
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/20 Network architectures or network communication protocols for network security for managing network security; network security policies in general
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/535 Tracking the activity of the user

Definitions

  • Authentication and authorization are critical to the security of any computer network. Authentication verifies the identity of an entity (person, process, or device) that wants to access a network and its devices and services. Authorization determines what privileges an entity, once authenticated, has on a network during the entity's session, which lasts from log-on through log-off. Some privileged entities may access all resources, while other entities are limited to resources where they may do little to no harm. Without successful authentication and authorization, a network and its resources are open to fraudulent intrusion.
  • Authentication requires a method to identify and verify an entity that is requesting access. There is a wide variety of methods to do this that range from a simple username/password combination to smart cards, fingerprint readers, retinal scans, passcode generators, and other techniques. Multifactor authentication combines two or more of these authentication methods for added security. All of these methods have evolved to meet the constant challenge of defeating increasingly sophisticated unauthorized access.
  • Authorization occurs after authentication, typically through a network directory service or identity management service that lists privileges for each entity that may log into the network.
  • The privileges define what network resources—services, computers, and other devices—the logged-in entity may access.
  • When an entity requests access to a resource, the resource checks the entity's privileges and then either grants or denies access accordingly. If an entity does not have access to a resource, the network may allow the entity to log in again, providing a new identity and verification to gain additional privileges in a new session.
  • The standard authentication method of username and password is very easy to use—it requires only the simple text entry of two values—but it is also not very secure. It is often easy for an identity thief to guess or steal the username/password combination.
  • Requiring smart card insertion is more secure than a username/password combination, as is biometric authentication such as reading a fingerprint or scanning a retina.
  • These methods require special readers or scanners plugged into a log-in device, which can be expensive or inconvenient to attach. They can also be defeated by a determined identity thief who can steal a card or copy a fingerprint or retina pattern.
  • These authentication methods have an added disadvantage: they do not work for non-human entities such as a computer or a service.
  • Multifactor authentication (MFA) increases security by requiring two or more different authentication methods, such as a username/password combination followed by a smart-card insertion or a network request to the user's cell phone to confirm authentication. As the number of factors increases, authentication security increases exponentially.
  • MFA is not foolproof. It is still possible to steal or phish for data and devices necessary to satisfy each factor. And MFA can be difficult to use for anyone trying to log into a network. Each added factor takes added time and effort before a session starts. It is especially time-consuming when a user has to find one or more devices like a cell phone that may require its own authentication and then confirmation for network authentication. If a user has to perform multiple authentications within a session (to gain extra authorization or use special services, for example), it compounds MFA's difficulty.
  • Adaptive MFA is one solution to the difficulty of using MFA.
  • A network with adaptive MFA can change its authentication requirements depending on detected conditions at log-in. If the conditions indicate a secure environment, the network can require minimal authentication factors to make authentication simple for the entity. If conditions indicate a possible security threat, the network can require additional factors.
  • If, for example, an entity logs in at a usual time from a recognized device and location, the network may require no more authentication than a username and password. If the entity is logging in from an unknown IP address with multiple unsuccessful attempts at a username/password combination before getting it right, the network might require a smart card in addition before authentication is successful.
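The static, condition-based factor selection described above can be sketched as follows. This is an illustrative sketch only: the rule conditions, thresholds, prefix list, and factor names are assumptions for illustration, not details from the patent.

```python
# Hypothetical sketch of rule-based adaptive MFA. All conditions,
# thresholds, and factor names below are illustrative assumptions.

KNOWN_IP_PREFIXES = ("10.", "192.168.")  # assumed "familiar network" prefixes

def required_factors(ip_address: str, failed_attempts: int) -> list[str]:
    """Pick authentication factors from static, preset conditions."""
    factors = ["password"]
    if not ip_address.startswith(KNOWN_IP_PREFIXES):
        factors.append("smart_card")          # unknown network: add a factor
    if failed_attempts >= 3:
        factors.append("phone_confirmation")  # repeated failures: add another
    return factors

# A familiar login needs only a password...
assert required_factors("10.0.0.5", 0) == ["password"]
# ...while an unknown IP with repeated failures needs more.
assert required_factors("203.0.113.9", 4) == ["password", "smart_card", "phone_confirmation"]
```

Because the rules are preset, a scheme like this cannot react to an entity's behavior history, which is the limitation the patent's behavior-based approach addresses.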
  • Adaptive MFA is rule-based, though, which limits its effectiveness because those rules are static. Adaptive MFA can check conditions at log-in and change MFA requirements based on preset rules that grant access, but it is unaware of an entity's history—its past behavior and whether or not it has been usual or unusual before the current login—and wouldn't know how to act on that history even if known.
  • Once an entity has logged into a network, it can use any network resources for which it has privileges. If a malignant entity is fraudulently logged in, the entity may severely compromise confidential data, change network settings to aid further unauthorized intrusions, install and run malicious software, and cause significant damage to network resources.
  • Network security processes may notify a human administrator of a suspicious action such as an attempt to access extremely confidential information, but by the time a human can look into the entity's actions, it may be too late to take remedial action, especially if the entity is a computer process that acts swiftly. In many cases, malicious behavior remains undetected until a human administrator notices changes or until damage becomes so significant that it becomes readily apparent. By that time it is usually too late to take any kind of remedial action: the damage is done.
  • Some network security systems may employ machine learning to analyze potential threats to a network.
  • The machine learning may establish a risk assessment model that determines which entity events (such as log-in events) may pose a threat and which may not.
  • An entity event or simply event is defined as any activity carried out by a user, device, or process that is detected and reported by network monitoring mechanisms.
  • Other entity events detected and reported by network monitoring mechanisms include but are not limited to starting or ending applications, making requests of running applications, reading or writing files, changing an entity's authorization, monitoring network traffic, and logging out.
  • These systems typically work using historical event data collected after a user or a set of users gives permission for those events to be analyzed.
  • The systems do not revise their risk assessment model with new events as they happen, so the model may be out of date and ineffective.
  • A system may establish, for example, that logins from a specific location pose a threat, but if the network incorporates a branch office at that location, logins from that location may no longer pose a threat.
  • The risk assessment model may not be updated in time to avoid a number of incorrectly evaluated login attempts.
  • Embodiments of this invention monitor and store in real time entity events such as network logins and significant user activities after login such as application and device use.
  • An embodiment uses those events to learn standard entity behavior without human assistance, to detect possible risky behavior, and to take remedial action to prevent access or limit activity by a suspicious entity. Because embodiments do not require human assistance, they are much easier to use than prior art.
  • Each login event stored by an embodiment of the invention includes the conditions when the event occurred such as the device used to log in, location, date and time, and so on. Events after login may include the type of event, date and time, and any other data that might be pertinent to the event.
  • An embodiment of the invention builds an entity profile using an entity's live event stream and looks for patterns in the events to determine normal behavior patterns for the entity.
  • An entity profile is a collection of an entity's past events graphed by their parameters in a multi-dimensional array as described below. Each time an embodiment of the invention notes a new entity event, it compares the event to the entity's behavior history as recorded in its entity profile to determine aberrant behavior and therefore increased risk.
  • When an embodiment of the invention compares an event to behavior history, it simultaneously considers multiple event parameters, which gives it more accuracy in determining aberrant behavior than prior art that considers a single parameter at a time.
  • An embodiment also uses each new entity event to keep the entity profile up to date, an improvement over prior art where entity events are evaluated as historical data at infrequent intervals.
  • If an embodiment of the invention determines increased risk during or just prior to login (a new location, multiple password attempts, and/or an unusual time of day, for example), it can change login to require more authentication factors. It could, for example, normally require just a username and password for login, but if a login attempt looks risky based on behavior, it could also require using a smart card or even more for particularly risky attempts. If a login attempt looks unusually risky, it could even notify a human administrator.
  • If an embodiment of the invention determines increased risk after login, it can take remedial action. It might, for example, terminate the entity's session, close access to certain resources, change the entity's authorization to limit it to a safe subset of privileges, or contact a human administrator. This is an improvement over prior art that focuses on policing a single type of entity activity, typically login.
  • An embodiment of the invention defines a set of rules that an administrator can easily adjust to set sensitivity to aberrant behavior and an embodiment's assessment of what it considers risky behavior.
  • The administrator can also easily set the way an embodiment of the invention behaves when it detects risk—how it changes login requirements or authorization in risky situations, for example, or how it handles risky behavior within the network.
  • FIG. 1 shows a standard network security system without an embodiment of the invention.
  • An access control service 11 authenticates entities and can change authentication factors at login and at other authentication events according to preset rules within the service. It checks entity authentication and authorization with a directory service 13 , such as Active Directory®, that defines authentication requirements and authorization for each entity.
  • The access control service 11 reports login attempts to an event reporting agent 15.
  • The agent 15 may collect other events within the network as described later.
  • The agent 15 reports its collected events to an event logging service 17 that stores the details of the events for later retrieval.
  • A human administrator may use an admin web browser 19 as a console to manage and read data from the event logging service 17, the event reporting agent 15, the access control service 11, and the directory service 13.
  • FIG. 1 is a block diagram that shows the components of a prior art network security system as they work without an embodiment of the invention.
  • FIG. 2 is a block diagram that shows the components of an embodiment of the invention as they exist in a web portal along with non-invention components also within the portal that assist an embodiment of the invention.
  • The diagram shows possible event flows through the invention with thick arrows, and shows possible communication among components via API calls with thin arrows.
  • FIG. 3 is a diagram of a profile's event mapping in an N-dimensional array, in this example two dimensions that correspond to login location and login time.
  • FIG. 4 is a sequence diagram that shows how an embodiment of the invention handles an event that has already occurred within the web portal, in this case a user in a suspect location running a powerful application at an unusual time.
  • FIG. 5 is a sequence diagram that shows how an embodiment of the invention handles an attempted event, in this case a user request to access the web portal when the user is in an unusual location at a suspect time.
  • An embodiment of the invention operates within a computer network, web portal, or other computing environment that requires authentication and authorization to use the environment's resources.
  • FIG. 2 shows the embodiment's components 22 and the assisting non-invention components 21 .
  • An embodiment of the invention works with these assisting non-invention components 21 :
  • An event reporting agent 15 detects entity behavior and reports it to an embodiment of the invention as events, each event with a set of parameters.
  • The event reporting agent 15 can be part of a larger identity service such as a commercially available product known as Centrify Server Suite® available from Centrify Corporation.
  • Entity events typically come from an access control service 11 and can include:
  • Login events which can include parameters such as the IP address of the device used, the type of device used, physical location, number of login attempts, date and time, and more.
  • Application access events which can specify what application is used, application type, date and time of use, and more.
  • Privileged resource events such as launching an ssh session or an RDP session as an administrator.
  • Mobile device management events such as enrolling or un-enrolling a mobile device with an identity management service.
  • CLI command-use events such as UNIX commands or MS-DOS commands, which can specify the commands used, date and time of use, and more.
  • Authorization escalation events such as logging in as a super-user in a UNIX environment, which can specify login parameters listed above.
  • Risk-based access feedback events which report an embodiment of the invention's evaluations of the entity. For example, when the access control service 11 requests a risk evaluation from an embodiment of the invention at entity log-in, the action generates an event that contains the resulting evaluation and any resulting action based on the evaluation.
  • An access control service 11 authenticates entities and can change authentication factor requirements at login and at other authentication events.
  • The access control service may be part of a larger identity service such as the Centrify Server Suite®.
  • A directory service 13, such as Active Directory®, defines authentication requirements and authorization for each entity.
  • The directory service may be part of a larger identity service such as the Centrify Server Suite®.
  • An admin web browser 19 that an administrator can use to control an embodiment of the invention.
  • An embodiment of the invention has five primary components. Four of these components reside in the embodiment's core 23 where they have secure access to each other:
  • The event ingestion service 25 accepts event data from the event reporting agent 15, filters out events that are malformed or irrelevant, deletes unnecessary event data, and converts event data into values that the risk assessment engine 27 can use.
  • The risk assessment engine 27 accepts entity events from the event ingestion service 25 and uses them to build an entity profile for each entity. Whenever requested, the risk assessment engine 27 can compare an event or attempted event to the entity's profile to determine a threat level for the event.
  • The streaming threat remediation engine 29 accepts a steady stream of events from the risk assessment engine 27.
  • The streaming threat remediation engine 29 stores a rule queue. Each rule in the queue tests an incoming event and may take action if the rule detects certain conditions in the event.
  • A rule may, for example, check the event type, contact the risk assessment engine 27 to determine risk for the event and, if fraud risk is high, require additional login factors or terminate an entity's session.
  • The risk assessment service 31 is a front end for the risk assessment engine 27.
  • The service 31 allows components outside the embodiment core 23 to make authenticated connections to embodiment core components and then request service from the risk assessment engine 27.
  • A service request typically asks the engine to assess risk for a provided event or for an attempted event such as log-in.
  • An embodiment of the invention has a fifth component that resides outside the embodiment core 23 where non-invention components 21 may easily access it:
  • The on-demand threat remediation engine 33 is very similar to the streaming threat remediation engine 29. It contains a rule queue. The rules here, though, test attempted events such as log-in requests or authorization changes that may require threat assessment before a request is granted and the event takes place. An outside component such as the access control service 11 may contact the engine 33 with an attempted event.
  • The engine 33 can request risk assessment from an embodiment of the invention through the risk assessment service 31.
  • The event ingestion service 25 receives event data from the event reporting agent 15 through any of a variety of methods. It might, for example, subscribe to an event-reporting service maintained by the event reporting agent or query through an event-reporting API.
  • The event reporting agent 15 typically reports some events that are not of interest for entity risk analysis. They may be invalid events: a missing time stamp, a missing or wrong version number, or a report that did not come from a valid event reporting agent 15. Some event types may not be useful for entity behavior analysis: non-entity events such as cloud status reports, or entity events that report behavior not currently used for risk analysis, such as a financial billing event.
  • The event ingestion service 25 is set to recognize these events and filter them out.
  • The event reporting agent 15 may also report useful events in a format that is not usable by the risk assessment engine 27.
  • The data in an event may be in text format, for example, or it may include information that has nothing to do with risk analysis.
  • The event ingestion service 25 removes unusable data, converts the remaining data into values usable by the risk assessment engine 27, and passes the converted events on to the risk assessment engine 27.
  • The event ingestion service 25 looks for applicable event attributes within the event. Some of those attributes have numerical values; others have categorical values such as device type. The service 25 uses well-known statistical techniques such as one-hot conversion and binary conversion to convert categorical values into numerical values. It then scales and normalizes numerical values using well-known statistical techniques so that the values fall within a consistent and centered range when plotted in an array.
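A minimal sketch of those conversions follows. The attribute names, category list, and value ranges are assumptions for illustration; the patent names the techniques (one-hot conversion, scaling, normalization) but not these specifics.

```python
# Illustrative sketch of the event ingestion service's conversions:
# one-hot encoding for categorical attributes and min-max scaling for
# numerical attributes. Attribute names and ranges are assumptions.

DEVICE_TYPES = ["laptop", "phone", "tablet"]  # assumed known categories

def one_hot(value: str, categories: list[str]) -> list[float]:
    """Convert a categorical value into a one-hot numerical vector."""
    return [1.0 if value == c else 0.0 for c in categories]

def min_max_scale(value: float, lo: float, hi: float) -> float:
    """Scale a numerical value into [0, 1] given its expected range."""
    return (value - lo) / (hi - lo)

def convert_event(event: dict) -> list[float]:
    """Flatten an event's attributes into one numerical vector."""
    seconds = min_max_scale(event["seconds_past_midnight"], 0, 86_400)
    attempts = min_max_scale(event["login_attempts"], 0, 10)
    return [seconds, attempts] + one_hot(event["device_type"], DEVICE_TYPES)

vec = convert_event({"seconds_past_midnight": 43_200,   # noon
                     "login_attempts": 1,
                     "device_type": "phone"})
assert vec == [0.5, 0.1, 0.0, 1.0, 0.0]
```

Vectors in this consistent form are what the risk assessment engine can plot in its multi-dimensional profile array.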
  • The risk assessment engine 27 receives a stream of entity events from the event ingestion service 25.
  • The engine 27 uses well-known unsupervised real-time machine learning techniques to build an entity profile for each entity with reported events and then, when requested, to determine unusual behavior on the part of that entity.
  • FIG. 3 shows how the risk assessment engine 27 builds an entity profile.
  • The risk assessment engine 27 plots each of an entity's events on a multi-dimensional array 83.
  • The array 83 has an axis 85 for each type of event parameter. It could, for example, be a seven-dimensional array 83 with one axis 85 each for an event's date, time, location latitude, location longitude, device type, IP address, and number of log-in attempts.
  • In practice the array 83 may have many more dimensions to record other event parameter types. In FIG. 3, the array 83 has only two axes 85 to provide a simple visual representation of a multi-dimensional array 83: a vertical axis 87 plotting the physical location of a login attempt and a horizontal axis 89 plotting the time of day when a login attempt occurs.
  • An entity event's parameters are numerical values that represent the character of the parameter—the number of seconds past midnight for time of day, for example.
  • The engine 27 plots the event location 91 in the entity profile array 83 using those parameter values.
  • As events accumulate, clusters 93 of events with similar parameter values appear.
  • Those clusters 93 represent typical behavior for the entity.
  • In FIG. 3, two clusters 93 appear: one for a time of day when the user attempts login early in the day at or near her place of work, the other for a time of day when she attempts login after work at or near her home.
  • The risk assessment engine 27 detects those clusters 93.
  • When another component, typically one of the two remediation engines 29 and 33, requests risk assessment for an event, the engine 27 checks the event location's 91 proximity to existing clusters 93. If an event is too far from a cluster, it is an anomaly 95 because its parameters show unusual behavior by the entity.
  • The risk assessment engine 27 assigns a risk score and a confidence score to each assessed event.
  • The risk score is based on the event location's 91 distance from existing clusters 93: the farther the distance, the higher the risk score.
  • The confidence score is based on the number of events recorded in an entity's profile and the length of time over which the events have been reported. More events over a greater number of days provide more confidence because there is more data to analyze and a greater chance to detect behavior patterns that vary over time; fewer events over fewer days provide less confidence in behavior analysis.
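One way to realize these two scores can be sketched as below. The Euclidean metric, the scaling constants, and the caps are illustrative assumptions; the patent specifies only that risk grows with distance from clusters and confidence grows with event count and observation span.

```python
import math

# Hedged sketch of distance-based risk and data-volume-based confidence
# scoring. Formulas, metric, and constants are illustrative assumptions.

def risk_score(event_point, cluster_centroids, scale=1.0):
    """Higher the farther the event lies from every known cluster."""
    nearest = min(math.dist(event_point, c) for c in cluster_centroids)
    return min(1.0, nearest / scale)  # clamp to [0, 1]

def confidence_score(num_events, num_days, target_events=500, target_days=30):
    """More events over more days -> more confidence, capped at 1.0."""
    return min(1.0, num_events / target_events) * min(1.0, num_days / target_days)

clusters = [(0.2, 0.3), (0.8, 0.7)]               # typical behavior regions
assert risk_score((0.21, 0.31), clusters) < 0.05  # near a cluster: low risk
assert risk_score((0.0, 1.0), clusters) > 0.5     # far from both: high risk
assert confidence_score(500, 30) == 1.0           # rich history: full confidence
```

A requester can then combine the two scores, for example treating a high risk score as actionable only when confidence is also high.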
  • The risk assessment engine 27 may use the risk and confidence scores to assign one of five fraud risk levels to the assessed event.
  • The risk assessment engine 27 can decay clusters 93 in the entity profile—that is, give older clusters 93 less weight in analysis and possibly remove them entirely if they get too old. This helps the accuracy of behavior analysis by accommodating changing entity behavior over time. For example, a user might move to a new country where his location, IP address, and other behavior parameters change. After long enough in the new country, new event clusters 93 appear in the user's profile while old clusters 93 fade and eventually disappear.
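A minimal sketch of such decay, assuming an exponential weighting by cluster age; the half-life and pruning floor are hypothetical values, and the patent does not specify a decay formula:

```python
# Illustrative exponential decay of cluster weight with age, one way to
# realize the cluster decay the patent describes. The 90-day half-life
# and 0.05 pruning floor are assumptions for illustration.

def cluster_weight(age_days: float, half_life_days: float = 90.0) -> float:
    """Older clusters get exponentially less weight in analysis."""
    return 0.5 ** (age_days / half_life_days)

def prune(clusters, min_weight=0.05):
    """Drop clusters whose weight has decayed below a floor."""
    return [c for c in clusters if cluster_weight(c["age_days"]) >= min_weight]

assert cluster_weight(0) == 1.0     # a fresh cluster has full weight
assert cluster_weight(90) == 0.5    # one half-life later, half weight
clusters = [{"id": "home", "age_days": 10}, {"id": "old_office", "age_days": 500}]
assert [c["id"] for c in prune(clusters)] == ["home"]  # stale cluster removed
```

With this scheme, a user's old-country clusters from the patent's relocation example would fade over a few half-lives and eventually be pruned.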
  • The risk assessment engine 27 can return an event's risk score, confidence score, and fraud risk level to the requester, which can take action if appropriate.
  • An administrator can control the risk assessment engine's 27 behavior through a console that is typically provided through a web browser 19 connected to the engine 27 or another part of an embodiment of the invention connected to the engine.
  • The administrator can adjust behavior such as anomaly 95 detection, risk and confidence score assignment, and event decay time.
  • The risk assessment engine 27 is also capable of adjusting itself as it learns more effective analysis techniques with repeated exposure to events.
  • The streaming threat remediation engine 29 accepts the stream of events that came from the event ingestion service 25 and passed through the risk assessment engine 27.
  • The remediation engine 29 runs each event through a rule queue.
  • Each rule is a piece of code that executes to test the event's attributes such as event type, time of execution, and others.
  • A rule can request risk assessment of the event from the risk assessment engine 27 as an additional event attribute.
  • A rule can take action or not. That action might be to execute an associated script.
  • The script can work with other network components such as the access control service 11 or the directory service 13 to take remedial action for an event with assessed fraud risk.
  • The script might log an entity out, for example, or change the entity's authorization level.
  • A rule's action might also be to jump to another rule in the queue.
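The rule queue described above can be sketched as follows. The rule structure, the action names, and the 0.8 threshold are illustrative assumptions, not details from the patent.

```python
# Sketch of a rule queue: each rule tests event attributes, may request
# a risk assessment, and may fire a remedial action (e.g. run a script).
# Rule shape, action names, and the threshold are assumptions.

def high_risk_login_rule(event, assess):
    """Example rule: terminate the session of a very risky login event."""
    if event["type"] != "login":
        return None
    if assess(event) > 0.8:            # ask the risk assessment engine
        return "terminate_session"     # remedial action (would run a script)
    return None

def run_queue(rules, event, assess):
    """Run an event through the rule queue; collect any actions fired."""
    actions = []
    for rule in rules:
        action = rule(event, assess)
        if action:
            actions.append(action)
    return actions

# Stand-in for the risk assessment engine's scoring:
fake_assess = lambda e: 0.95 if e.get("location") == "unknown" else 0.1
risky = {"type": "login", "location": "unknown"}
safe = {"type": "login", "location": "office"}
assert run_queue([high_risk_login_rule], risky, fake_assess) == ["terminate_session"]
assert run_queue([high_risk_login_rule], safe, fake_assess) == []
```

In the patent's design an administrator can reorder this queue, add rules, and attach scripts; a jump-to-rule action would replace the simple linear loop above with an index-based dispatch.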
  • An administrator can control the streaming threat remediation engine 29 through the engine's console on a web browser 19 or through other interfaces such as an API.
  • The administrator may create rules, reorganize the rule queue, associate scripts to carry out remedial actions, and perform other administrative actions.
  • Components outside the embodiment core 23 cannot directly request risk assessment from the risk assessment engine 27 .
  • Outside access to the risk assessment engine 27 is important, though, for assessing attempted events such as log-in requests that are not yet granted and have not yet become internally processed events.
  • The risk assessment service 31 provides a front end for the risk assessment engine 27: it provides a contact point where external components can authenticate, establish a secure connection, and then request attempted event risk assessment from the risk assessment engine 27.
  • the risk assessment service 31 converts data in the supplied attempted event into values that the risk assessment engine 27 can use.
  • the risk assessment service 31 converts data in the same way that the event ingestion service 25 converts streaming event data into values usable by the risk assessment engine 27 . After data conversion, the risk assessment service 31 passes attempted event risk evaluation requests on to the risk assessment engine 27 and returns the results to the requester.
  • the on-demand threat remediation engine 33 is similar to the streaming threat remediation engine 29 . It contains a rule queue that tests and carries out conditional actions that may include executing scripts. It has two principal differences, though: it resides outside the embodiment core 23 so that it is easily accessible to external components, and it handles attempted events (events awaiting permission to execute) rather than already-executed events.
  • Attempted events are typically authentication requests such as log-in requests that come in through the access control service 11 .
  • the request must wait for access control service 11 approval before it is granted and the log-in becomes an executed event. While the request is pending, the access control service 11 can contact the on-demand threat remediation engine 33 with the attempted event, which includes pertinent properties such as the time and place of the request, the originating device for the request, and so on.
  • the on-demand threat remediation engine 33 runs an attempted event through its rule queue just as the streaming threat remediation engine 29 runs an executed event through its rule queue.
  • the rules in the queue test the attempted event's properties and may request risk assessment for some attempted events.
  • the on-demand threat remediation engine 33 contacts the risk assessment service 31 when it requests risk assessment.
  • the risk assessment service 31 passes the attempted event in a form that the risk assessment engine 27 can treat as an executed event.
  • the assessment engine 27 compares the attempted event's event location 91 with clusters 93 in the requesting entity's profile just as it would an executed event to determine risk and confidence scores and fraud risk.
  • the assessment returns to the risk assessment service 31 and then back to the on-demand threat remediation engine 33 .
  • the rule that triggered the assessment may then take action such as denying log-in through the access control service 11 if the attempted log-in's threat level is too high.
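A minimal sketch of that decision path, with invented names and thresholds: a rule obtains risk and confidence scores for the attempted event (as the engine 33 would via the risk assessment service 31) and denies the log-in when both scores are high. The toy scoring function is purely illustrative.

```python
# Hypothetical sketch of the attempted-event decision; all names, thresholds,
# and the toy scoring are illustrative, not from the patent.

RISK_THRESHOLD = 0.8
CONFIDENCE_THRESHOLD = 0.5

def assess_attempted_event(event, risk_service):
    """Return 'allow' or 'deny' for an attempted event such as a log-in."""
    risk, confidence = risk_service(event)  # stand-in for the risk assessment service 31
    if risk > RISK_THRESHOLD and confidence > CONFIDENCE_THRESHOLD:
        return "deny"
    return "allow"

def toy_risk_service(event):
    """Stand-in risk service: risk grows with distance from the usual log-in hour."""
    usual_hour = 9
    risk = min(abs(event["hour"] - usual_hour) / 12, 1.0)
    return risk, 0.9  # (risk score, confidence score)

print(assess_attempted_event({"hour": 23}, toy_risk_service))  # → deny
print(assess_attempted_event({"hour": 10}, toy_risk_service))  # → allow
```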
  • FIG. 4 shows how an implementation of the invention handles an executed event.
  • an executed event triggers remedial action through the implementation's streaming threat remediation engine 29 .
  • the user 35 , who is logged into a web portal that incorporates the invention, starts 37 an administration application that can be used to examine other users' email.
  • the user is a fraudulent user in a suspect location who is active during a time when the user account is not normally used.
  • the event reporting agent 15 reports 39 the application start event to the event ingestion service 25 .
  • the event contains among other things the application type, the user's location, and the date and time of the application start.
  • the event ingestion service 25 filters and converts 41 the event: the service 25 ensures that the event is not an extraneous event that shouldn't be analyzed, then converts the data in the event into a form that the risk assessment engine 27 can use.
  • the event ingestion service 25 sends 43 the converted event to the risk assessment engine 27 .
  • the risk assessment engine 27 adds 45 the event to the user's entity profile, where the event's event location 91 is plotted in the profile's multiple-dimensional array 83 .
  • the risk assessment engine 27 sends 47 the event to the streaming threat remediation engine 29 .
  • the streaming threat remediation engine 29 runs 49 the event through the engine's 29 rule queue.
  • the application type triggers 51 a request for risk and confidence scores for the event. It does so because one of the rules in the rule queue tests for application type, notices a high-risk application, and requests risk and confidence scores from the risk assessment engine 27 for the event.
  • the risk assessment engine 27 compares 53 the event location 91 to clusters 93 in the entity's profile and notices that the location and time are an anomaly 95 because they are not usual for the user.
  • the engine 27 therefore calculates a high risk score. Because (in this example) there are many events in the profile, the engine also calculates a high confidence score.
  • the risk assessment engine 27 returns the high risk and confidence scores to the streaming threat remediation engine 29 .
  • the high scores trigger 57 a script that requests user 35 disconnection from the access control service 11 .
  • the access control service 11 disconnects 59 the user 35 .
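The FIG. 4 sequence can be condensed into a hypothetical sketch: the event is recorded in the profile, a rule requests scoring for high-risk application types, and high risk with high confidence triggers a disconnect script. The function names, the similarity test, and the thresholds are all invented for illustration.

```python
# Condensed, hypothetical sketch of the executed-event flow in FIG. 4.
# All names and thresholds are illustrative, not from the patent.

def handle_executed_event(event, profile, disconnect):
    """Score an executed event against the entity profile; disconnect if risky."""
    profile.append(event)                      # step 45: record the event in the profile
    if event["app_type"] != "admin":           # rule: only high-risk apps need scoring
        return "no_action"
    # steps 51-53: risk from similarity to past events, confidence from history size
    similar = [e for e in profile[:-1]
               if e["location"] == event["location"] and abs(e["hour"] - event["hour"]) <= 2]
    risk = 0.0 if similar else 1.0
    confidence = min((len(profile) - 1) / 50, 1.0)
    if risk > 0.8 and confidence > 0.5:        # step 57: scripted remediation
        disconnect()
        return "disconnected"
    return "no_action"

# Usage: a fraudulent admin-app start at an unusual place and time.
profile = [{"app_type": "admin", "location": "NYC", "hour": 9}] * 60
actions = []
result = handle_executed_event(
    {"app_type": "admin", "location": "unknown", "hour": 3},
    profile, lambda: actions.append("disconnect"))
print(result)  # → disconnected
```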
  • FIG. 5 shows how an implementation of the invention handles an attempted event.
  • the attempted event (a log-in) triggers remedial action through the implementation's on-demand threat remediation engine 33 .
  • the user 35 requests 61 log-in to a web portal that incorporates the invention.
  • the log-in request goes to the portal's access control service 11 .
  • the access control service 11 sends 63 the access attempt with the attempt parameters (including request location and date and time) to the on-demand threat remediation engine 33 .
  • the on-demand threat remediation engine 33 runs 65 the access attempt event through the engine's 33 rule queue.
  • the event triggers a rule that recognizes an access attempt, which requires risk assessment, so the engine 33 requests 67 risk assessment of the access attempt from the risk assessment service 31 .
  • the risk assessment service 31 converts 68 the data in the access attempt into a form the risk assessment engine 27 can use, then requests 69 risk assessment for the access attempt from the risk assessment engine 27 .
  • the risk assessment engine 27 compares 71 the access attempt to access event clusters 93 in the entity's profile and notices that the location and time are not usual for the user 35 .
  • the engine 27 calculates 71 risk scores for the access attempt just as it would for an executed access event. In this case, it calculates high risk scores.
  • the risk assessment engine 27 returns 73 the high scores to the risk assessment service 31 .
  • the risk assessment service 31 returns 75 the high scores to the on-demand threat remediation engine 33 .
  • the high scores returned to the engine 33 trigger an access denial that the engine 33 sends 77 to the access control service 11 .
  • the access control service 11 denies 79 the user's 35 log-in request.
  • the event reporting agent 15 reports 81 the denied access event to the event ingestion service 25 .
  • the denied access event goes through an embodiment of the invention just as any other event would, as described previously.
  • the event is recorded in the risk assessment engine's 27 multi-dimensional array 83 and passes through the streaming threat remediation engine 29 for possible action on the event.
  • An embodiment of the invention may, for example, run within an organization's private network, or across large interconnected networks.
  • Embodiments of the invention may place components in different locations, either together within a core or scattered across various sites, and they may consolidate multiple components into a single component that performs the same functions as the consolidated components.
  • Embodiments of the invention may use methods other than multi-dimensional arrays 83 to assess an event's possible threat.
  • An embodiment of the invention may be a machine-readable medium having stored thereon instructions which cause a processor to perform operations as described above. In other embodiments, the operations might be performed by specific hardware components that contain hardwired logic. Those operations might alternatively be performed by any combination of programmed computer components and custom hardware components.
  • a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by any type of processing device.

Abstract

A system and method for assessing the identity fraud risk of an entity's (a user's, computer process's, or device's) behavior within a computer network and then taking appropriate action. The system uses real-time machine learning for its assessment. It records the entity's log-in behavior (conditions at log-in) and behavior once logged in to create an entity profile that helps identify behavior patterns. The system compares new entity behavior with the entity profile to determine a risk score and a confidence level for the behavior. If the risk score and confidence level indicate a credible identity fraud risk at log-in, the system can require more factors of authentication before log-in succeeds. If the system detects risky behavior after log-in, it can take remedial action such as ending the entity's session, curtailing the entity's privileges, or notifying a human administrator.

Description

    BACKGROUND
  • Authentication and authorization are critical to the security of any computer network. Authentication verifies the identity of an entity (person, process, or device) that wants to access a network and its devices and services. Authorization determines what privileges an entity, once authenticated, has on a network during the entity's session, which lasts from log-on through log-off. Some privileged entities may access all resources, while other entities are limited to resources where they may do little to no harm. Without successful authentication and authorization, a network and its resources are open to fraudulent intrusion.
  • Authentication requires a method to identify and verify an entity that is requesting access. There is a wide variety of methods to do this that range from a simple username/password combination to smart cards, fingerprint readers, retinal scans, passcode generators, and other techniques. Multifactor authentication combines two or more of these authentication methods for added security. All of these methods have evolved to meet the constant challenge of defeating increasingly sophisticated unauthorized access.
  • Authorization occurs after authentication, typically through a network directory service or identity management service that lists privileges for each entity that may log into the network. The privileges define what network resources—services, computers, and other devices—the logged-in entity may access. When an entity requests to use a network resource, the resource checks the entity's privileges and then either grants or denies access accordingly. If an entity does not have access to a resource, the network may allow the entity to log in again, providing a new identity and verification to gain additional privileges in a new session.
  • Problems with Prior Art
  • As authentication methods increase in effectiveness, they often get harder to use and may still have weaknesses that allow unauthorized access.
  • The standard authentication method of username and password is very easy to use—it requires only the simple text entry of two values—but it is also not very secure. It is often easy for an identity thief to guess or steal the username/password combination.
  • Requiring smart card insertion is more secure than a username/password combination, as is biometric authentication such as reading a fingerprint or scanning a retina. But these methods require special readers or scanners plugged into a log-in device that can be expensive or inconvenient to attach. They can also be defeated by a determined identity thief who can steal a card or copy a fingerprint or retina pattern. These authentication methods have an added disadvantage: they do not work for non-human entities such as a computer or a service.
  • Multifactor Authentication
  • Multifactor authentication (MFA) increases security by requiring two or more different authentication methods such as a user/password combination followed by a smart-card insertion or a network request to the user's cell phone to confirm authentication. As the number of factors increases, authentication security increases exponentially.
  • That said, MFA is not foolproof. It is still possible to steal or phish for data and devices necessary to satisfy each factor. And MFA can be difficult to use for anyone trying to log into a network. Each added factor takes added time and effort before a session starts. It is especially time-consuming when a user has to find one or more devices like a cell phone that may require its own authentication and then confirmation for network authentication. If a user has to perform multiple authentications within a session (to gain extra authorization or use special services, for example), it compounds MFA's difficulty.
  • Adaptive MFA
  • Adaptive MFA is one solution to the difficulty of using MFA. A network with adaptive MFA can change its authentication requirements depending on detected conditions at log-in. If the conditions indicate a secure environment, the network can require minimal authentication factors to make authentication simple for the entity. If conditions indicate a possible security threat, the network can require additional factors.
  • For example, if an entity is logging in from a known IP address with a single correct attempt at a username/password combination, the network may require no more authentication than that. If the entity is logging in from an unknown IP address with multiple unsuccessful attempts at a username/password combination before getting it right, the network might require a smart card in addition before authentication is successful.
  • Adaptive MFA is rule-based, though, which limits its effectiveness because those rules are static. Adaptive MFA can check conditions at log-in and change MFA requirements based on preset rules that grant access, but it is ignorant of an entity's history—its past behavior and whether or not it has been usual or unusual before the current login—and wouldn't know how to act on that history even if known.
  • For example, if a user works a night shift and typically logs in from midnight to 8 a.m., preset rules might require additional authentication factors if login occurs outside the business hours of 9 a.m. to 5 p.m. The night-time user would always have to provide extra authentication even though his behavior is normal and predictable and there is no extra risk to his login.
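The static-rule limitation can be illustrated with a toy rule (hypothetical, not drawn from any product): because the rule keys only on fixed business hours, the night-shift user is challenged on every normal log-in.

```python
# Toy illustration of a static adaptive-MFA rule; hypothetical, not from the patent.

def required_factors(login_hour, business_hours=range(9, 17)):
    """Static rule: require an extra authentication factor outside preset business hours."""
    if login_hour in business_hours:
        return ["password"]
    return ["password", "smart_card"]  # the night-shift user always pays this cost

print(required_factors(10))  # → ['password']
print(required_factors(2))   # → ['password', 'smart_card'], even if 2 a.m. is normal for this user
```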
  • Malicious Behavior within a Session
  • Once an entity has logged into a network, it can use any network resources for which it has privileges. If a malignant entity is fraudulently logged in, the entity may severely compromise confidential data, change network settings to aid further unauthorized intrusions, install and run malicious software, and cause significant damage to network resources.
  • This kind of malicious damage often remains undetected for long periods of time—entity behavior is usually unmonitored within a network. If behavior is monitored by security processes, those processes may follow rules to help deter damaging actions such as requiring further authentication, but the processes cannot judge whether the entity's overall actions are suspicious or not and cannot take remedial action.
  • Network security processes may notify a human administrator of a suspicious action such as an attempt to access extremely confidential information, but by the time a human can look into the entity's actions, it may be too late to take remedial action, especially if the entity is a computer process that acts swiftly. In many cases, malicious behavior remains undetected until a human administrator notices changes or until damage becomes so significant that it becomes readily apparent. By that time it is usually too late to take any kind of remedial action: the damage is done.
  • Machine Learning for Threat Analysis
  • Some network security systems may employ machine learning to analyze potential threats to a network. The machine learning may establish a risk assessment model that determines which entity events (such as log-in events) may pose a threat and which may not. An entity event or simply event is defined as any activity carried out by a user, device, or process that is detected and reported by network monitoring mechanisms. In addition to log-in events, other entity events detected and reported by network monitoring mechanisms include but are not limited to starting or ending applications, making requests of running applications, reading or writing files, changing an entity's authorization, monitoring network traffic, and logging out.
  • These systems typically work using historical event data collected after a user or a set of users gives permission for those events to be analyzed. The systems do not revise their risk assessment model with new events as they happen, so the model may be out-of-date and ineffective. The system may establish, for example, that logins from a specific location pose a threat, but if the network incorporates a branch office at that location, the location may no longer be enough to pose a threat. The risk assessment model may not be updated in time to avoid a number of incorrectly evaluated login attempts.
  • Current machine learning systems may also require human supervision where humans evaluate and annotate a set of events before giving them to the machine learning system to “teach” the system which types of events are good and which are bad. This takes time and effort, and further ensures that the system's evaluation parameters will be out-of-date.
  • Current machine learning systems set up a risk assessment model for each event parameter (login time, for example, or location, or number of login attempts), evaluate each parameter of an event, and then aggregate the evaluations to determine total risk. Because the risk assessment does not consider an event's parameters in combination, it misses parameter combinations that should raise alarms even though the individual parameters may not seem risky. A login time that seems safe in itself, from a login location that also seems safe, may nevertheless be alarming when the two are taken together because logins do not typically occur at that time in that location.
  • Current machine learning systems typically apply their risk assessment model to a single network function, usually authentication. The model is not easily adapted to detect threats to other functions such as user activity within the network. Reworking the machine learning system to apply to other functions takes human administrative work that may prevent applying the system to those other functions.
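The parameter-combination limitation described above can be shown numerically. In this invented example, a new event's hour and location each fall within past behavior when checked one axis at a time, yet the (hour, location) pair is far from every past event.

```python
# Illustrative (hypothetical) numbers showing why joint evaluation matters:
# each parameter alone falls within past behavior, but the combination does not.
import math

# Past log-ins as (hour, location) pairs: office at ~9h (loc 0), home at ~20h (loc 5).
history = [(9, 0), (9.5, 0), (20, 5), (20.5, 5)]

new_event = (9, 5)  # an office hour, but from the home location

# Per-parameter check (prior-art style): each value has been seen before.
hours = [h for h, _ in history]
locs = [l for _, l in history]
hour_ok = min(abs(new_event[0] - h) for h in hours) < 1
loc_ok = min(abs(new_event[1] - l) for l in locs) < 1
print(hour_ok, loc_ok)  # → True True: looks fine parameter by parameter

# Joint check: distance to the nearest past (hour, location) point.
joint_dist = min(math.dist(new_event, p) for p in history)
print(joint_dist)  # → 5.0: far from both clusters, so jointly anomalous
```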
  • SUMMARY OF THE INVENTION
  • Embodiments of this invention monitor and store in real time entity events such as network logins and significant user activities after login such as application and device use. An embodiment uses those events to learn standard entity behavior without human assistance, to detect possible risky behavior, and to take remedial action to prevent access or limit activity by a suspicious entity. Because embodiments do not require human assistance, they are much easier to use than prior art.
  • Each login event stored by an embodiment of the invention includes the conditions when the event occurred such as the device used to log in, location, date and time, and so on. Events after login may include the type of event, date and time, and any other data that might be pertinent to the event.
  • To analyze an entity's events, an embodiment of the invention builds an entity profile using an entity's live event stream and looks for patterns in the events to determine normal behavior patterns for the entity. An entity profile is a collection of an entity's past events graphed by their parameters in a multi-dimensional array as described below. Each time an embodiment of the invention notes a new entity event, it compares the event to the entity's behavior history as recorded in its entity profile to determine aberrant behavior and therefore increased risk.
  • When an embodiment of the invention compares an event to behavior history, the embodiment simultaneously considers multiple event parameters, which gives it more accuracy in determining aberrant behavior than prior art that considers a single parameter at a time. An embodiment also uses each new entity event to keep the entity profile up-to-date, an improvement over prior art where entity events are evaluated as historical data at infrequent intervals.
  • If an embodiment of the invention determines increased risk during or just prior to login (a new location, multiple password attempts, and/or an unusual time of day, for example), it can change login to require more authentication factors. It could, for example, normally require just a username and password for login, but if a login attempt looks risky based on behavior, it could also require using a smart card or even more for particularly risky attempts. If a login attempt looks unusually risky, it could even notify a human administrator.
  • If an embodiment of the invention notices unusual behavior after login, such as using unusual applications or accessing sensitive devices, an embodiment of the invention can take remedial action. It might, for example, terminate the entity's session, close access to certain resources, change the entity's authorization to limit it to a safe subset of privileges, or contact a human administrator. This is an improvement over prior art that focuses on policing a single type of entity activity, typically login.
  • An embodiment of the invention defines a set of rules that an administrator can easily adjust to set sensitivity to aberrant behavior and an embodiment's assessment of what it considers risky behavior. The administrator can also easily set the way an embodiment of the invention behaves when it detects risk—how it changes login requirements or authorization in risky situations, for example, or how it handles risky behavior within the network.
  • Prior Art Security
  • FIG. 1 shows a standard network security system without an embodiment of the invention.
  • An access control service 11 authenticates entities and can change authentication factors at login and at other authentication events according to preset rules within the service. It checks entity authentication and authorization with a directory service 13, such as Active Directory®, that defines authentication requirements and authorization for each entity.
  • The access control service 11 reports login attempts to an event reporting agent 15. The agent 15 may collect other events within the network as described later. The agent 15 reports its collected events to an event logging service 17 that stores the details of the events for later retrieval.
  • A human administrator may use an admin web browser 19 as a console to manage and read data from the event logging service 17, the event reporting agent 15, the access control service 11, and the directory service 13.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Embodiments of the invention are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings in which like references indicate similar elements. Note that references to “an” or “one” embodiment in this disclosure are not necessarily to the same embodiment, and such references mean “at least one.”
  • FIG. 1 is a block diagram that shows the components of a prior art network security system as they work without an embodiment of the invention.
  • FIG. 2 is a block diagram that shows the components of an embodiment of the invention as they exist in a web portal along with non-invention components also within the portal that assist an embodiment of the invention. The diagram shows possible event flows through the invention with thick arrows, and shows possible communication among components via API calls with thin arrows.
  • FIG. 3 is a diagram of a profile's event mapping in an N-dimensional array, in this example two dimensions that correspond to login location and login time.
  • FIG. 4 is a sequence diagram that shows how an embodiment of the invention handles an event that has already occurred within the web portal, in this case a user in a suspect location running a powerful application at an unusual time.
  • FIG. 5 is a sequence diagram that shows how an embodiment of the invention handles an attempted event, in this case a user request to access the web portal when the user is in an unusual location at a suspect time.
  • DETAILED DESCRIPTION OF THE INVENTION
  • An embodiment of the invention operates within a computer network, web portal, or other computing environment that requires authentication and authorization to use the environment's resources.
  • Assisting Non-Invention Components 21
  • FIG. 2 shows the embodiment's components 22 and the assisting non-invention components 21. An embodiment of the invention works with these assisting non-invention components 21:
  • An event reporting agent 15 detects entity behavior and reports it to an embodiment of the invention as events, each event with a set of parameters. The event reporting agent 15 can be part of a larger identity service such as a commercially available product known as Centrify Server Suite® available from Centrify Corporation. Entity events typically come from an access control service 11 and can include:
  • a) Login events, which can include parameters such as the IP address of the device used, the type of device used, physical location, number of login attempts, date and time, and more.
  • b) Application access events, which can specify what application is used, application type, date and time of use, and more.
  • c) Privileged resource events such as launching an ssh session or an RDP session as an administrator.
  • d) Mobile device management events such as enrolling or un-enrolling a mobile device with an identity management service.
  • e) CLI command-use events such as UNIX commands or MS-DOS commands, which can specify the commands used, date and time of use, and more.
  • f) Authorization escalation events, such as logging in as a super-user in a UNIX environment, which can specify login parameters listed above.
  • g) Risk-based access feedback events, which report an embodiment of the invention's evaluations of the entity. For example, when the access control service 11 requests a risk evaluation from an embodiment of the invention at entity log-in, the action generates an event that contains the resulting evaluation and any resulting action based on the evaluation.
  • An access control service 11 authenticates entities and can change authentication factor requirements at login and at other authentication events. The access control service may be part of a larger identity service such as the Centrify Server Suite®.
  • A directory service 13 such as Active Directory® defines authentication requirements and authorization for each entity. The directory service may be part of a larger identity service such as the Centrify Server Suite®.
  • An admin web browser 19 that an administrator can use to control an embodiment of the invention.
  • Embodiment Components 22
  • An embodiment of the invention has five primary components. Four of these components reside in the embodiment's core 23 where they have secure access to each other:
  • The event ingestion service 25 accepts event data from the event reporting agent 15, filters out events that are malformed or irrelevant, deletes unnecessary event data, and converts event data into values that the risk assessment engine 27 can use.
  • The risk assessment engine 27 accepts entity events from the event ingestion service 25 and uses them to build an entity profile for each entity. Whenever requested, the risk assessment engine 27 can compare an event or attempted event to the entity's profile to determine a threat level for the event.
  • The streaming threat remediation engine 29 accepts a steady stream of events from the risk assessment engine 27. The streaming threat remediation engine 29 stores a rule queue. Each rule in the queue tests an incoming event and may take action if the rule detects certain conditions in the event. A rule may, for example, check the event type, contact the risk assessment engine 27 to determine risk for the event and, if fraud risk is high, require additional login or terminate an entity's session.
  • The risk assessment service 31 is a front end for the risk assessment engine 27. The service 31 allows components outside the embodiment core 23 to make authenticated connections to embodiment core components and then request service from the risk assessment engine 27. Service is typically something such as assessing risk for a provided event or for an attempted event such as log-in.
  • An embodiment of the invention has a fifth component that resides outside the embodiment core 23 where non-invention components 21 may easily access it:
  • The on-demand threat remediation engine 33 is very similar to the streaming threat remediation engine 29. It contains a rule queue. The rules here, though, test attempted events such as log-in requests or authorization changes that may require threat assessment before the requests are granted and the event takes place. An outside component such as the access control service 11 may contact the engine 33 with an attempted event. The engine 33 can request risk assessment from an embodiment of the invention through the risk assessment service 31.
  • The Event Ingestion Service 25
  • The event ingestion service 25 receives event data from the event reporting agent 15 through any of a variety of methods. It might, for example, subscribe to an event-reporting service maintained by the event reporting agent or query through an event-reporting API.
  • The event reporting agent 15 typically reports some events that aren't of interest for entity risk analysis. They may be invalid events, for example, that have a missing time stamp, have a missing or wrong version number, or do not come from a valid event reporting agent 15. Some event types may not be useful for entity behavior analysis—non-entity events such as cloud status reports, or entity events that report behavior not currently used for risk analysis such as a financial billing event. The event ingestion service 25 is set to recognize these events and filter them out.
  • The event reporting agent 15 may also report useful events that may be in a format that is not usable by the risk assessment engine 27. The data in the event may be in text format, for example, or it may include information that has nothing to do with risk analysis. The event ingestion service 25 removes unusable data, converts data into values usable by the risk assessment engine 27, and passes the converted events on to the risk assessment engine 27.
  • To convert event data into values usable by the risk assessment engine 27, the event ingestion service 25 looks for applicable event attributes within the event. Some of those attributes have numerical values; others have categorical values such as device type. The event ingestion service 25 uses well-known statistical techniques such as one-hot conversion and binary conversion to convert categorical values into numerical values. The event ingestion service 25 then scales and normalizes numerical values using well-known statistical techniques so that the values fall within a consistent and centered range when plotted in an array.
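A minimal sketch of those two conversion steps, with hypothetical attribute names and ranges: one-hot encoding for a categorical attribute, then scaling numeric attributes into a common range.

```python
# Minimal sketch of the conversions described above; attribute names and
# ranges are hypothetical, not from the patent.

def one_hot(value, categories):
    """Convert a categorical value into a 0/1 vector over the known categories."""
    return [1.0 if value == c else 0.0 for c in categories]

def normalize(value, lo, hi):
    """Scale a numeric value into [0, 1] given its expected range."""
    return (value - lo) / (hi - lo)

DEVICE_TYPES = ["laptop", "phone", "tablet"]

raw_event = {"device_type": "phone", "hour": 18, "login_attempts": 3}

converted = (
    one_hot(raw_event["device_type"], DEVICE_TYPES)
    + [normalize(raw_event["hour"], 0, 24)]
    + [normalize(raw_event["login_attempts"], 0, 10)]
)
print(converted)  # → [0.0, 1.0, 0.0, 0.75, 0.3]
```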
  • The Risk Assessment Engine 27
  • The risk assessment engine 27 receives a stream of entity events from the event ingestion service 25. The engine 27 uses well-known unsupervised real-time machine learning techniques to build an entity profile for each entity with reported events and then, when requested, to determine unusual behavior on the part of that entity. FIG. 3 shows how the risk assessment engine 27 builds an entity profile.
  • To build an entity profile, the risk assessment engine 27 plots each of an entity's events in a multi-dimensional array 83. The array 83 has an axis 85 for each type of event parameter. It could, for example, be a seven-dimensional array 83 with one axis 85 each for an event's date, time, location latitude, location longitude, device type, IP address, and number of log-in attempts. In practice the array 83 may have many more dimensions to record other event parameter types. In FIG. 3, the array 83 has only two axes 85 to provide a simple visual representation of a multi-dimensional array 83: a vertical axis 87 plotting the physical location of a login attempt and a horizontal axis 89 plotting the time of day when a login attempt occurs.
  • An entity event's parameters are numerical values that represent the character of the parameter—the number of seconds past midnight for time of day, for example. The engine 27 plots the event location 91 in the entity profile array 83 using those parameter values. As the events accumulate in the array 83, clusters 93 of events with similar parameter values appear. Those clusters 93 represent typical behavior for the entity. In FIG. 3, two clusters 93 appear, one for a time of day when a user attempts login early in the day at or near her place of work, the other for a time of day when the user attempts login after work at or near her home.
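  • The cluster formation described above can be sketched as a simple online clustering step: each new event either joins the nearest cluster, updating its centroid as a running mean, or starts a new cluster. The joining radius and update rule are illustrative assumptions, since the patent refers only to well-known unsupervised techniques.

```python
import math

def add_event(clusters, point, radius=1.0):
    """Online clustering sketch. clusters is a list of
    {'centroid': [...], 'count': n} dicts; point is an event location."""
    best, best_d = None, float("inf")
    for c in clusters:
        d = math.dist(point, c["centroid"])
        if d < best_d:
            best, best_d = c, d
    if best is not None and best_d <= radius:
        # Join the nearest cluster: move its centroid toward the point.
        n = best["count"]
        best["centroid"] = [(cc * n + p) / (n + 1)
                            for cc, p in zip(best["centroid"], point)]
        best["count"] = n + 1
    else:
        # Too far from every cluster: start a new one.
        clusters.append({"centroid": list(point), "count": 1})
    return clusters
```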
  • The risk assessment engine 27 detects those clusters 93. When another component (typically one of the two remediation engines 29 and 33) requests risk assessment for an event, the engine 27 checks the event location's 91 proximity to existing clusters 93. If an event is too far from any cluster 93, it is an anomaly 95 because its parameters show unusual behavior by the entity.
  • The risk assessment engine 27 assigns a risk score and a confidence score for an assessed event. The risk score is based on the event location's 91 distance from existing clusters 93: the greater the distance, the higher the risk score. The confidence score is based on the number of events recorded in an entity's profile and the length of time over which the events have been reported. More events over a greater number of days provide more confidence because there is more data to analyze and a greater chance to detect behavior patterns that vary over time; fewer events over fewer days provide less confidence in the behavior analysis.
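  • One plausible scoring scheme consistent with this description: risk grows with the distance to the nearest cluster centroid, and confidence grows with profile size and observation window. The formulas and scaling constants are illustrative assumptions; the patent does not specify them.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def risk_score(event_point, cluster_centroids, scale=1.0):
    """Return a 0..100 risk score from the distance to the nearest
    cluster centroid, or None when there is no profile to compare to."""
    if not cluster_centroids:
        return None
    d = min(euclidean(event_point, c) for c in cluster_centroids)
    return min(100.0, 100.0 * (1.0 - math.exp(-d / scale)))

def confidence_score(event_count, days_observed,
                     min_events=50, min_days=14):
    """Return a 0..100 confidence score: full confidence only once the
    profile holds enough events over a long enough span of days."""
    return 100.0 * min(1.0, event_count / min_events) \
                 * min(1.0, days_observed / min_days)
```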
  • The risk assessment engine 27 may use the risk and confidence scores to assign one of five fraud risk levels to the assessed event:
  • a) Unknown: there are not enough events in the entity profile over a long enough period of time to successfully determine fraud risk.
  • b) Normal: the event looks legitimate.
  • c) Low Risk: some aspects of the event are abnormal, but not many.
  • d) Medium Risk: some important aspects of the event are abnormal, some are not.
  • e) High Risk: many key aspects of the event are abnormal.
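  • The five levels above might be derived from the two scores as in the following sketch; the thresholds and the minimum-confidence cutoff are illustrative assumptions, not values from the patent.

```python
def fraud_risk_level(risk, confidence, min_confidence=30.0):
    """Map a 0..100 risk score and confidence score to one of the
    five fraud risk levels (thresholds are illustrative)."""
    if risk is None or confidence < min_confidence:
        return "Unknown"   # not enough profile data to judge
    if risk < 25.0:
        return "Normal"
    if risk < 50.0:
        return "Low Risk"
    if risk < 75.0:
        return "Medium Risk"
    return "High Risk"
```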
  • The risk assessment engine 27 can decay clusters 93 in the entity profile—that is, give older clusters 93 less weight in analysis and possibly remove them entirely if they get too old. This helps the accuracy of behavior analysis by accommodating changing entity behavior over time. For example, a user might move to a new country where his location, IP address, and other behavior parameters change. After long enough in the new country, new event clusters 93 appear in the user's profile while old clusters 93 fade and eventually disappear.
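  • Cluster decay can be sketched as an exponential down-weighting by age, removing clusters whose weight falls below a cutoff. The half-life and cutoff values are illustrative assumptions.

```python
def decay_clusters(clusters, now, half_life_days=90.0, cutoff=0.05):
    """Each cluster is a dict with a 'last_seen' day number. Its weight
    halves every half_life_days; clusters below the cutoff weight are
    dropped entirely. Returns the surviving clusters, annotated with
    their current weight."""
    survivors = []
    for c in clusters:
        age = now - c["last_seen"]
        weight = 0.5 ** (age / half_life_days)
        if weight >= cutoff:
            survivors.append({**c, "weight": weight})
    return survivors
```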
  • The risk assessment engine 27 can return an event's risk score, confidence score, and fraud risk level to the requester, which can take action if appropriate.
  • An administrator can control the risk assessment engine's 27 behavior through a console that is typically provided through a web browser 19 connected to the engine 27 or another part of an embodiment of the invention connected to the engine. The administrator can adjust behavior such as anomaly 95 detection, risk and confidence score assignment, and event decay time. The risk assessment engine 27 is also capable of adjusting itself as it learns more effective analysis techniques with repeated exposure to events.
  • The Streaming Threat Remediation Engine 29
  • The streaming threat remediation engine 29 accepts the stream of events that came from the event ingestion service 25 and passed through the risk assessment engine 27. The remediation engine 29 runs each event through a rule queue. Each rule is a piece of code that executes to test the event's attributes such as event type, time of execution, and others. A rule can request risk assessment of the event from the risk assessment engine 27 as an additional event attribute.
  • Depending on the results of the event's property tests (which can include testing the risk assessment attribute), a rule can take action or not. That action might be to execute an associated script. The script can work with other network components such as the access control service 11 or the directory service 13 to take remedial action for an event with assessed fraud risk. The script might log an entity out, for example, or change the entity's authorization level. The rule's action might also be to jump to another rule in the queue.
  • If a rule takes no action, the event passes to the next rule in the queue. Most events pass completely through the rule queue without triggering any action.
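  • The rule queue described above can be sketched as follows. The rule representation (a test plus an action that may jump forward in the queue) is an illustrative assumption; real rules would also be able to request risk assessment and run remediation scripts.

```python
def run_rule_queue(event, rules):
    """Run an event through an ordered rule queue.
    Each rule is a (test, action) pair where test(event) -> bool and
    action(event) -> None or ("jump", index). Jumps are expected to
    move forward in the queue. Returns the indexes of rules that fired."""
    taken = []
    i = 0
    while i < len(rules):
        test, action = rules[i]
        if test(event):
            result = action(event)
            taken.append(i)
            if isinstance(result, tuple) and result[0] == "jump":
                i = result[1]   # jump to another rule in the queue
                continue
        i += 1                  # no action (or no jump): next rule
    return taken
```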
  • An administrator can control the streaming threat remediation engine 29 through the engine's console on a web browser 19 or through other interfaces such as an API. The administrator may create rules, reorganize the rule queue, associate scripts to carry out remedial actions, and perform other administrative actions.
  • The Risk Assessment Service 31
  • Components outside the embodiment core 23—on another server, for example—cannot directly request risk assessment from the risk assessment engine 27. Outside access to the risk assessment engine 27 is important, though, for assessing attempted events such as log-in requests that are not yet granted and have not yet become internally processed events.
  • The risk assessment service 31 provides a front end for the risk assessment engine 27: it provides a contact point where external components can authenticate, establish a secure connection, and then request attempted event risk assessment from the risk assessment engine 27. The risk assessment service 31 converts data in the supplied attempted event into values that the risk assessment engine 27 can use, in the same way that the event ingestion service 25 converts streaming event data. After data conversion, the risk assessment service 31 passes attempted event risk evaluation requests on to the risk assessment engine 27 and returns the results to the requester.
  • The On-Demand Threat Remediation Engine 33
  • The on-demand threat remediation engine 33 is similar to the streaming threat remediation engine 29. It contains a rule queue that tests events and carries out conditional actions that may include executing scripts. It has two principal differences, though: it resides outside the embodiment core 23 so that it is easily accessible to external components, and it handles attempted events (events awaiting permission to execute) rather than already-executed events.
  • Attempted events are typically authentication requests such as log-in requests that come in through the access control service 11. The request must wait for access control service 11 approval before it is granted and the log-in becomes an executed event. While the request is pending, the access control service 11 can contact the on-demand threat remediation engine 33 with the attempted event, which includes pertinent properties such as the time and place of the request, the originating device for the request, and so on.
  • The on-demand threat remediation engine 33 runs an attempted event through its rule queue just as the streaming threat remediation engine 29 runs an executed event through its rule queue. The rules in the queue test the attempted event's properties and may request risk assessment for some attempted events.
  • The on-demand threat remediation engine 33 contacts the risk assessment service 31 when it requests risk assessment. The risk assessment service 31 passes the attempted event in a form that the risk assessment engine 27 can treat as an executed event. When the risk assessment service 31 passes the attempted event on to the risk assessment engine 27, the assessment engine 27 compares the attempted event's event location 91 with clusters 93 in the requesting entity's profile just as it would for an executed event, determining risk and confidence scores and fraud risk. The assessment returns to the risk assessment service 31 and then back to the on-demand threat remediation engine 33. The rule that triggered the assessment may then take action, such as denying log-in through the access control service 11 if the attempted log-in's threat level is too high.
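  • The attempted-event assessment path can be sketched as a thin front end that converts the attempt and forwards it to the core engine. All class and method names here are illustrative assumptions, not names from the patent.

```python
class RiskAssessmentService:
    """Front end for the risk assessment engine: converts an attempted
    event and passes it to the engine as if it were an executed event."""

    def __init__(self, engine, converter):
        self.engine = engine        # the core risk assessment engine
        self.converter = converter  # same conversion as event ingestion

    def assess_attempt(self, attempted_event):
        converted = self.converter(attempted_event)
        # The engine assesses the converted attempt like an executed
        # event, comparing it to the entity's profile clusters.
        return self.engine.assess(converted)
```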
  • Handling an Executed Event
  • FIG. 4 shows how an implementation of the invention handles an executed event. In this example, an executed event triggers remedial action through the implementation's streaming threat remediation engine 29.
  • The user 35, who is logged into a web portal that incorporates the invention, starts 37 an administration application that can be used to examine other users' email. The user is a fraudulent user in a suspect location who is active during a time when the user account is not normally used.
  • The event reporting agent 15 reports 39 the application start event to the event ingestion service 25. The event contains among other things the application type, the user's location, and the date and time of the application start.
  • The event ingestion service 25 filters and converts 41 the event: the service 25 ensures that the event is not an extraneous event that shouldn't be analyzed, then converts the data in the event into a form that the risk assessment engine 27 can use.
  • The event ingestion service 25 sends 43 the converted event to the risk assessment engine 27.
  • The risk assessment engine 27 adds 45 the event to the user's entity profile, where the event's event location 91 is plotted in the profile's multiple-dimensional array 83.
  • The risk assessment engine 27 sends 47 the event to the streaming threat remediation engine 29.
  • The streaming threat remediation engine 29 runs 49 the event through the engine's 29 rule queue.
  • In the streaming threat remediation engine 29, the application type triggers 51 a request for risk and confidence scores for the event: one of the rules in the rule queue tests for application type, notices a high-risk application, and requests risk and confidence scores for the event from the risk assessment engine 27.
  • The risk assessment engine 27 compares 53 the event location 91 to clusters 93 in the entity's profile and notices that the location and time are an anomaly 95 because they are not usual for the user. The engine 27 calculates a high risk score because of that. Because (in this example) there are many events in the profile, the engine also calculates a high confidence score.
  • The risk assessment engine 27 returns the high risk and confidence scores to the streaming threat remediation engine 29.
  • In the streaming threat remediation engine 29, the high scores trigger 57 a script that requests user 35 disconnection from the access control service 11.
  • The access control service 11 disconnects 59 the user 35.
  • Handling an Attempted Event
  • FIG. 5 shows how an implementation of the invention handles an attempted event. The attempted event (a log-in) triggers remedial action through the implementation's on-demand threat remediation engine 33.
  • The user 35, a hacker from a suspicious location at an unusual time who is not the real user, requests 61 log-in to a web portal that incorporates the invention. The log-in request goes to the portal's access control service 11.
  • The access control service 11 sends 63 the access attempt with the attempt parameters (including request location and date and time) to the on-demand threat remediation engine 33.
  • The on-demand threat remediation engine 33 runs 65 the access attempt event through the engine's 33 rule queue.
  • In the on-demand threat remediation engine 33, the event triggers a rule that recognizes that the event attempts access, which requires risk assessment, so the engine 33 requests 67 risk assessment of the access attempt from the risk assessment service 31.
  • The risk assessment service 31 converts 68 the data in the access attempt into a form the risk assessment engine 27 can use, then requests 69 risk assessment for the access attempt from the risk assessment engine 27.
  • The risk assessment engine 27 compares 71 the access attempt to access event clusters 93 in the entity's profile and notices that the location and time are not usual for the user 35. The engine 27 calculates 71 risk scores for the access attempt just as it would for an executed access event. In this case, it calculates high risk scores.
  • The risk assessment engine 27 returns 73 the high scores to the risk assessment service 31.
  • The risk assessment service 31 returns 75 the high scores to the on-demand threat remediation engine 33.
  • In the on-demand threat remediation engine 33, the high scores trigger an access denial that the engine 33 sends 77 to the access control service 11.
  • The access control service 11 denies 79 the user's 35 log-in request.
  • The event reporting agent 15 reports 81 the denied access event to the event ingestion service 25.
  • From this point on, the denied access event goes through an embodiment of the invention just as any other event would, as described previously. The event is recorded in the risk assessment engine's 27 multi-dimensional array 83 and passes through the streaming threat remediation engine 29 for possible action on the event.
  • Other Implementations of the Invention
  • The invention may be implemented in alternative ways. An embodiment of the invention may, for example, run within an organization's private network, or across large interconnected networks. Embodiments of the invention may locate components in different locations that may be together within a core or scattered across various locations, and they may consolidate multiple components within a single component that performs the same functions as the consolidated components. Embodiments of the invention may use methods other than multi-dimensional arrays 83 to assess an event's possible threat.
  • An embodiment of the invention may be a machine-readable medium having stored thereon instructions which cause a processor to perform operations as described above. In other embodiments, the operations might be performed by specific hardware components that contain hardwired logic. Those operations might alternatively be performed by any combination of programmed computer components and custom hardware components.
  • A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by any type of processing device.
  • Although specific examples of how the invention may be implemented are described, the invention is not limited by the specified examples, and is limited only by the scope of the following claims.

Claims (12)

1. A system including an event reporting agent, an access control service, a directory service, and an administrative access portal for detecting and remediating fraudulent attempts to access a network, said system comprising:
a) an event ingestion service which i) receives data corresponding to an event from said event reporting agent, ii) filters out malformed or irrelevant events corresponding to said received event data, and iii) prepares each event's data by deleting unnecessary event data and converting remaining event data, if necessary, into values;
b) a risk assessment engine which i) receives said values from said event ingestion service and builds and periodically updates an entity profile for each entity using said values corresponding to said filtered and prepared event data for that entity, and ii) accepts requests from one of a streaming threat remediation engine and a risk assessment service to perform a risk assessment for an event by comparing said filtered and prepared event data to the entity's entity profile and returning a result of said risk assessment to one of said streaming threat remediation engine and risk assessment service;
c) said streaming threat remediation engine which receives said filtered and prepared event data from said risk assessment engine and applies an ordered sequence of rules to said filtered and prepared event data, each rule testing each event for conditions that may require action, which action said streaming threat remediation engine then initiates if required;
d) said risk assessment service which i) accepts authenticated connections from an on-demand threat remediation engine, ii) receives risk assessment requests for events or attempted events from said on-demand threat remediation engine, iii) requests said risk assessments from said risk assessment engine, and iv) returns said risk assessments for each request to said on-demand threat remediation engine;
e) said on-demand threat remediation engine which i) receives data corresponding to requests for risk assessment of an external event or attempted external event and ii) applies said ordered sequence of rules to said data corresponding to said external event or attempted external event, each rule testing each external event or attempted external event for conditions that may require action, which action said on-demand threat remediation engine then initiates if required.
2. The system defined by claim 1 wherein said request for said risk assessment for said event from said risk assessment engine initiated by said streaming threat remediation engine, is followed by a further application of rules within said ordered sequence of rules for testing each event for conditions that may require action, which action said streaming threat remediation engine then initiates if required.
3. The system defined by claim 1 wherein said request for said risk assessment for said event from said risk assessment service initiated by said on-demand threat remediation engine for said event or attempted event, is followed by a further application of rules within said ordered sequence of rules for testing each event or attempted event for conditions that may require action, which action said on-demand threat remediation engine then initiates if required.
4. The system defined by claim 1 wherein said requests for risk assessment of said external event or attempted external event are initiated by said access control service, or said directory service.
5. A method for detecting and remediating risky entity activity in a network based on an executed entity event comprising:
a) sending an entity event to an event ingestion service which determines whether said entity event is an event appropriate for analysis;
b) if said event is appropriate for analysis, said entity event ingestion service converting said entity event's data into a form usable for analysis and sending said converted entity event to a risk assessment engine;
c) said risk assessment engine receiving said converted entity event and using it to build and periodically update an entity profile for each entity by adding said converted entity event to previously converted entity events for the same entity;
d) said risk assessment engine passing each of said received entity events along to a streaming threat remediation engine;
e) said streaming threat remediation engine evaluating said entity event through an ordered sequence of rules;
f) if said rule sequence detects a condition in said entity event that requires risk assessment, then said streaming threat remediation engine requesting a risk assessment for said entity event from said risk assessment engine;
g) said risk assessment engine calculating a risk assessment score and confidence score by comparing said entity event to said entity profile and then providing said risk assessment score and confidence score to said streaming threat remediation engine;
h) if said streaming threat remediation engine determines that said risk assessment score and confidence score constitute a threat to the network, then instructing an access control service to take appropriate action to mitigate said entity's activity.
6. The method defined by claim 5 wherein said requesting for said risk assessment is followed by a further application of rules within said ordered sequence of rules testing each event for conditions that may require action, which action said streaming threat remediation engine then initiates if required.
7. The method defined by claim 6 wherein said action initiated by said streaming threat remediation engine is instructing said access control service to take appropriate action to mitigate said entity's activity.
8. The method defined by claim 5 wherein said requesting for risk assessment of said executed entity event is initiated by said access control service, or a directory service.
9. A method for detecting and remediating a fraudulent attempt to access a network based on an attempted entity event comprising:
a) sending an attempted entity event to an on-demand threat remediation engine which determines whether said entity event is an event which requires a threat assessment prior to being authorized;
b) if said entity event requires said threat assessment, said on-demand threat remediation engine passing said entity event along with a risk assessment request to a risk assessment service;
c) said risk assessment service converting values of said entity event into a form usable for risk assessment and passing said converted entity event to a risk assessment engine with a request to assess risk for said event;
d) said risk assessment engine receiving said converted entity event and comparing said entity event to an entity profile created for each entity with reported entity events, forming a risk assessment score and confidence score and providing said risk assessment score and confidence score to said risk assessment service;
e) said risk assessment service sending said risk assessment score and confidence score to said on-demand threat remediation engine.
10. The method defined by claim 9 wherein said request for said risk assessment is followed by a further application of rules within said ordered sequence of rules testing each event for conditions that may require action, which action said on-demand threat remediation engine then initiates if required.
11. The method defined by claim 9 wherein said action initiated by said on-demand threat remediation engine is instructing said access control service to take appropriate action to mitigate said attempted entity event.
12. The method defined by claim 9 wherein said request for risk assessment of said attempted entity event is initiated by said access control service or a directory service.
US15/703,943 2017-09-13 2017-09-13 Method and Apparatus for Network Fraud Detection and Remediation Through Analytics Abandoned US20190081968A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/703,943 US20190081968A1 (en) 2017-09-13 2017-09-13 Method and Apparatus for Network Fraud Detection and Remediation Through Analytics
US17/108,612 US11902307B2 (en) 2017-09-13 2020-12-01 Method and apparatus for network fraud detection and remediation through analytics


Publications (1)

Publication Number Publication Date
US20190081968A1 true US20190081968A1 (en) 2019-03-14





Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: GOLUB CAPITAL LLC, AS AGENT, ILLINOIS

Free format text: INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:CENTRIFY CORPORATION;REEL/FRAME:046081/0609

Effective date: 20180504

AS Assignment

Owner name: CENTRIFY CORPORATION, CALIFORNIA

Free format text: RELEASE OF SECURITY INTEREST UNDER REEL/FRAME 46081/0609;ASSIGNOR:GOLUB CAPITAL LLC;REEL/FRAME:046854/0246

Effective date: 20180815

AS Assignment

Owner name: IDAPTIVE, LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CENTRIFY CORPORATION;REEL/FRAME:047559/0103

Effective date: 20180815

AS Assignment

Owner name: CENTRIFY CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WANG, YANLIN;LI, WEIZHI;REEL/FRAME:047758/0647

Effective date: 20170908

Owner name: APPS & ENDPOINT COMPANY, LLC, DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CENTRIFY CORPORATION;REEL/FRAME:047759/0071

Effective date: 20180815

Owner name: IDAPTIVE, LLC, DELAWARE

Free format text: CHANGE OF NAME;ASSIGNOR:APPS & ENDPOINT COMPANY, LLC;REEL/FRAME:049010/0738

Effective date: 20180913

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STCV Information on status: appeal procedure

Free format text: NOTICE OF APPEAL FILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

AS Assignment

Owner name: CYBERARK SOFTWARE LTD., ISRAEL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CYBERARK SOFTWARE, INC.;REEL/FRAME:054333/0889

Effective date: 20201109

AS Assignment

Owner name: CYBERARK SOFTWARE, INC., MASSACHUSETTS

Free format text: MERGER;ASSIGNOR:IDAPTIVE, LLC;REEL/FRAME:054507/0763

Effective date: 20200731

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION