WO2023129351A1 - Detecting compromised cloud users

Detecting compromised cloud users

Info

Publication number: WO2023129351A1
Authority: WIPO (PCT)
Prior art keywords: anomalous, users, groups, machine learning, hierarchy
Application number: PCT/US2022/052193
Other languages: French (fr)
Inventors: Eran Goldstein, Idan Hen, Shalom Shay Shavit
Original assignee: Microsoft Technology Licensing, LLC
Priority claimed from US 17/689,330 (published as US20230216871A1)
Application filed by Microsoft Technology Licensing, LLC

Classifications

    • G06F21/552 Detecting local intrusion or implementing counter-measures involving long-term monitoring or reporting
    • G06F21/577 Assessing vulnerabilities and evaluating computer system security
    • H04L63/1408 Detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1433 Vulnerability analysis

Abstract

Compromised user accounts are identified by detecting anomalous cloud activities. Cloud activities are determined to be anomalous by comparing the behavior of a particular user with the previous behavior of that user as well as the previous behavior of other, related users. In some configurations, the related users are organized into one or more hierarchies, such as by geographic location or by a logical structure of a cloud service. The behavior of the related users is modeled at different levels in the hierarchy. Anomaly scores from different groups and levels of the hierarchy are compiled and filtered before being used to determine whether to send a security alert. In some configurations, the security alert indicates that the anomalous operation was detected, why the operation was determined to be anomalous, and in some cases, what harm the operation could lead to if the user is in fact compromised.

Description

DETECTING COMPROMISED CLOUD USERS
BACKGROUND
Cloud computing platforms are a common target for hackers. Hackers may compromise a user account by stealing a username and password, exploiting a vulnerability in software that utilizes cloud computing resources, or exploiting a vulnerability in the cloud computing infrastructure itself. Once a hacker has gained access to a user’s account, sensitive data may be stolen, data may be surreptitiously altered or destroyed, the user may be impersonated to the detriment of their reputation, computing resources may be misappropriated, security measures may be disabled, etc. As such, there is a continued interest in identifying when a cloud computing user’s account has been compromised.
Existing techniques attempt to identify a compromised user account by identifying abnormal user behavior. Behavior is considered abnormal if it is uncommon or unexpected based on a model of the user’s previous behavior. For example, an operation that reads a particular piece of data for the first time may be flagged as abnormal for a particular user. However, these models tend to misclassify newly-observed behavior as anomalous, even if it is benign.
If too many false positives are reported, then actual anomalous behavior may be lost within the noise. Furthermore, customers may ignore the results entirely if the burden of distinguishing false positives outweighs the benefits of identifying compromised accounts. It is with respect to these technical issues and others that the present disclosure is made.
SUMMARY
Compromised user accounts are identified by detecting anomalous cloud activities. Cloud activities are determined to be anomalous by comparing the behavior of a particular user with the previous behavior of that user as well as the previous behavior of other, related users. In some configurations, the related users are organized into one or more hierarchies, such as by geographic location or by a logical structure of a cloud service. The behavior of the related users is modeled at different levels in the hierarchy. For example, the behavior of users from the same city may be modeled, as well as the behavior of users from the same country, the same time zone, etc. When analyzing a particular action taken by the particular user, each model generates an anomaly score, a confidence score, and an explainability score. Anomaly scores indicate how anomalous the operation is, confidence scores indicate how sure the model is that the operation is anomalous, and explainability scores indicate why the operation was anomalous. These scores are compiled and filtered before being used to determine whether to send a security alert. In some configurations, the security alert indicates that the anomalous operation was detected, why the operation was determined to be anomalous, and in some cases, what harm the operation could lead to if the user is in fact compromised.
Cloud-based systems consist of various types of resources and users. Each resource and each user has a set of entities they may interact with within a cloud service, as well as a specific set of operations with which to access the entities. For example, a user may be limited to read-only operations when accessing a database. Cloud-based systems employ a resource manager for enforcing these access constraints.
If a hacker were to gain access to a user’s cloud portal - e.g. a website through which the user allocates cloud resources - or if the hacker were to compromise a user’s account, the hacker may be able to perform impactful operations. However, many impactful operations are commonly performed by users themselves as part of managing and interacting with the cloud service, and therefore it is not trivial to distinguish between malicious and benign use of such operations.
In some configurations, machine learning techniques are used to alert when compromised user accounts are detected. For example, a compromised user account may be detected by identifying anomalous invocations of impactful operations. As referred to herein, an operation is impactful if it is important, such as accessing sensitive data, performing sensitive actions, etc. An operation is determined to be anomalous based on machine learning models of past behavior of the user and machine learning models of past behavior of groups of related users - e.g. hierarchies of related users. Together, the results of these models are used to detect anomalous cloud operations.
In some configurations, user behavior is modeled with a multivariate anomaly detection model trained on various features of cloud operations. Features of cloud operations may include, for example, the day of the week when the operation was performed, the name of the operation that was executed, a user identifier, and a resource identifier. A multivariate model enables a rich, robust and expressive behavioral profile. Operations having an abnormal combination of such features can be a good indication of abnormal activity of the user - an indication that may have been missed if these features were inspected individually.
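As a concrete illustration, and only as an assumption since no specific model family is mandated here, a multivariate profile of this kind could be built by one-hot encoding the categorical features and fitting an off-the-shelf anomaly detector such as scikit-learn's IsolationForest:

```python
# A minimal sketch of a multivariate anomaly model over cloud-operation
# features. IsolationForest and one-hot encoding are illustrative choices,
# not the model prescribed by this disclosure.
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import OneHotEncoder

# Historical operations for one user; columns mirror the features named above.
history = pd.DataFrame([
    {"day_of_week": "Mon", "operation": "Storage/Read", "resource": "blob-01"},
    {"day_of_week": "Tue", "operation": "Storage/Read", "resource": "blob-01"},
    {"day_of_week": "Wed", "operation": "VM/List", "resource": "vm-01"},
] * 10)

encoder = OneHotEncoder(handle_unknown="ignore")  # tolerate unseen categories
X = encoder.fit_transform(history).toarray()
model = IsolationForest(random_state=0).fit(X)

# Score a new operation: an unusual *combination* of otherwise-seen feature
# values also scores as anomalous, which is the point of a multivariate model.
new_op = pd.DataFrame([{"day_of_week": "Sun",
                        "operation": "VM/RunCommand",
                        "resource": "vm-01"}])
anomaly = -model.score_samples(encoder.transform(new_op).toarray())[0]
print(f"anomaly score: {anomaly:.2f}")  # higher = more anomalous
```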
However, benign activity can appear as “malicious” if the user alters or expands their behavior - e.g., interacting with new resources, applying new types of operations, etc. In order to reduce such false alerts, an assessment is made as to whether the operation is considered anomalous at different levels in the hierarchy. This approach allows cloud operations to be modeled on several levels of granularity, offering additional perspectives on whether the operation is anomalous. By looking at a “bigger picture”, an operation that would be abnormal for a specific user may be considered non-anomalous if one or more levels of the hierarchy consider it not anomalous. In this way, determining that an operation is anomalous may be performed in part by determining that the operation is not non-anomalous.
In some configurations, multiple hierarchies may be evaluated. For example, in addition to a hierarchy based on the geographic locations of users, a hierarchy based on corporate structure or a hierarchy based on how cloud resources are provisioned may also be consulted. Modeling a user’s action at different levels of granularity across different hierarchies further improves the accuracy of an anomaly assessment compared to a single model that produces a single anomaly score.
In some configurations, when an anomalous operation is detected, one or more filters are applied to determine whether to generate an alert. A specific operation filter identifies specific operations or types of operations to be ignored. For example, an operation that adds a description to a cloud object may be anomalous, but it will be ignored if it is considered benign by the specific operation filter. Additionally, or alternatively, a minimum score filter may ignore any anomalous operations that do not have a minimum anomaly score, a minimum confidence score, a minimum explainability score, or a combination thereof.
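A minimal sketch of these two filters follows; the operation names and threshold values are invented for illustration and would in practice be supplied by security researchers:

```python
# Hedged sketch of the alert filters described above. The set contents and
# thresholds are hypothetical assumptions, not values from this disclosure.
BENIGN_OPERATIONS = {"Resources/Tags/Write"}  # specific operation filter list

MIN_ANOMALY_SCORE = 0.8
MIN_CONFIDENCE_SCORE = 0.6

def should_alert(operation_name: str, anomaly: float, confidence: float) -> bool:
    """Apply the specific-operation filter, then the minimum-score filter."""
    if operation_name in BENIGN_OPERATIONS:
        return False  # anomalous but predefined as benign, e.g. adding a description
    return anomaly >= MIN_ANOMALY_SCORE and confidence >= MIN_CONFIDENCE_SCORE
```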
Security alerts may include a description of the anomalous operation. Security alerts may also include an explanation of potential consequences of not addressing the alert and/or other contextual information indicating why the alert was issued. In some configurations, explanations are generated based on explainability scores. Explanations may also be enhanced with domain specific knowledge. For example, security-domain enrichments, such as a list of potential attack types associated with specific impactful operations, may be used to indicate why an identified anomaly is considered dangerous.
Features and technical benefits other than those explicitly described above will be apparent from a reading of the following Detailed Description and a review of the associated drawings. This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. The term “techniques,” for instance, may refer to system(s), method(s), computer-readable instructions, module(s), algorithms, hardware logic, and/or operation(s) as permitted by the context described above and throughout the document.
BRIEF DESCRIPTION OF THE DRAWINGS
The Detailed Description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same reference numbers in different figures indicate similar or identical items. References made to individual items of a plurality of items can use a reference number with a letter of a sequence of letters to refer to each individual item. Generic references to the items may use the specific reference number without the sequence of letters.
FIGURE 1A illustrates a cloud service including command logs that record operations made by users of the cloud service.
FIGURE 1B illustrates a logical hierarchy of users based on a resource allocation scheme of a cloud service.
FIGURE 1C illustrates an operation being performed by a cloud service.
FIGURE 2 illustrates generating an alert based on determinations made by models of each level in a hierarchy of users.
FIGURE 3 illustrates a cloud event being processed by models of each level of a logical hierarchy based on cloud resource provisioning.
FIGURE 4 illustrates a cloud event being processed by models of each level of a geographic hierarchy.
FIGURE 5 illustrates how anomaly scores and explainability scores are used to generate a rich, meaningful alert.
FIGURE 6 is a flow diagram showing aspects of a routine for the disclosed techniques.
FIGURE 7 is a computer architecture diagram showing an illustrative computer hardware and software architecture for a computing system capable of implementing aspects of the techniques and technologies presented herein.
DETAILED DESCRIPTION
FIGURE 1A illustrates a cloud service 102 including command logs 110 that store event entries 112 for operations performed by cloud service 102 on behalf of users 104. Cloud service 102 consists of various types of resources 108 and users 104. Resources 108 may refer to any physical or virtual asset hosted by or otherwise provided by cloud service 102, such as virtual machines, table storage, “blob” storage, etc. Users 104 may refer to user accounts associated with people, corporations, schools, or any other organization.
Each of users 104 and resources 108 may be associated with a set of entities - e.g. users, resources, network endpoints, security groups, or any other aspect of cloud service 102 - that they are authorized to interact with. Furthermore, users 104 and resources 108 may be authorized to interact with these entities with specific sets of operations. As referred to herein, operations refer to invocations of functionality provided by cloud service 102, such as a file upload operation. Resource manager 106 is the part of cloud service 102 that authenticates and authorizes operations requested by a user 104 to be performed using one of resources 108. One example of cloud service 102 is Microsoft® Azure.
As illustrated, command log 110A includes event entry 112A, which stores parameters that were used to invoke an operation. Event entry 112A includes parameters representing the day of week 114A, operation name 115A, principal identifier 116A, and resource identifier 117A. Day of week 114A indicates which day the operation was invoked on, e.g. Monday, Tuesday, etc. Operation name 115A is the name of the operation that was invoked. Principal identifier 116A identifies the user 104 that is invoking the operation. Resource identifier 117A identifies a resource that is impacted by the operation, e.g. read from, modified, deleted, etc. While a single resource identifier 117A is depicted, it is similarly contemplated that multiple resource identifiers 117 may be associated with particular operations. Command logs may record parameters from successful and unsuccessful invocations of operations.
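For illustration, an event entry carrying these parameters might be represented as follows; the field names and types are assumptions mirroring the description above, not a published schema:

```python
# Hypothetical in-memory form of an event entry such as 112A.
from dataclasses import dataclass, field

@dataclass
class EventEntry:
    day_of_week: str            # e.g. "Monday" (day of week 114A)
    operation_name: str         # name of the invoked operation (115A)
    principal_id: str           # user that invoked the operation (116A)
    resource_ids: list = field(default_factory=list)  # impacted resources (117A)
    succeeded: bool = True      # logs record successful and failed invocations
```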
FIGURE 1B illustrates a logical hierarchy 118 of users based on a resource allocation scheme of a cloud service. Cloud service providers often enable customers to define structures within which cloud-based resources are allocated and accessed. The example illustrated in FIG. 1B uses terminology for Microsoft® Azure, but comparable hierarchies from other cloud service providers are similarly contemplated. Hierarchy 118 includes tenant 120, which is the highest level in hierarchy 118. As part of Microsoft® Azure, a tenant refers to all of the computing services and resources associated with a particular directory server - e.g. Azure Active Directory. As illustrated, all users 104 and all resources 108 are part of tenant 120.
Tenant 120 is divided into two subscriptions, 122A and 122B. Subscriptions are logical collections of cloud resources that remain independent of one another. For example, different subscriptions may be created for different divisions within a corporation. Each of subscriptions 122 may be implemented on one or more of clusters 124, which represent collections of computers used to provide resources 108.
FIGURE 1C illustrates an operation 132 being performed by a cloud service 102. Client computing device 130 submits operation 132 to resource manager 106. Operation 132 includes principal identifier 116A, indicating which user 104 is making the request. Operation 132 also includes resource identifier 117A, indicating which resource 108 will be affected. Operation 132 is provided to resource manager 106, which authorizes or denies the requested operation 132. If the identified user is authenticated and authorized to perform the requested operation 132 on the identified resource 108A, then the operation is allowed to be performed. In either case, event entry 112A is added to log 110A.
FIGURE 2 illustrates generating an alert 230 based on determinations made by models of each level in a hierarchy of users. Properties extracted from event entry 112A are provided to tenant model 210, subscription model 212, resource model 214, and user model 216. Each of these models is part of a hierarchy of models 202, where each model is associated with a group of users that includes the user identified by principal identifier 116A.
Each of models 210, 212, 214, and 216 generates an anomaly score 220, a confidence score 224, and an explainability score 222. As referred to herein, anomaly scores 220 refer to a degree to which operation 132 is anomalous. In some configurations, operations that have historically been performed by the same user will have low anomaly scores, while operations that have rarely if ever been performed by the same user will have high anomaly scores. By analyzing the past behaviors of different but related groups of users, different perspectives on what operations are anomalous may be revealed.
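As a toy illustration of the per-user intuition just described (the exact scoring function is not specified here, so the frequency-based score below is an assumption):

```python
# Toy per-user anomaly score: operations the user performs often score low,
# operations performed rarely or never score high.
from collections import Counter

def user_anomaly_score(past_operations: list, operation_name: str) -> float:
    counts = Counter(past_operations)
    frequency = counts[operation_name] / max(len(past_operations), 1)
    return 1.0 - frequency  # 1.0 for a never-before-seen operation

print(user_anomaly_score(["Storage/Read"] * 9 + ["VM/List"], "VM/RunCommand"))  # 1.0
```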
Anomaly scores 220 and explainability scores 222 are provided to alert decision logic 204, which filters and compiles scores from different levels in a hierarchy of groups of users - and, in some configurations, from different hierarchies. Filtering and aggregating anomaly scores and explainability scores is described in more detail below in conjunction with FIG. 5. Once anomaly scores have been filtered, the remaining scores may be used to determine whether the operation 132 is anomalous - and if so, the confidence with which the determination is made. Once an operation 132 has been identified as anomalous, alert 230 may be generated to indicate the possibility that operation 132 is from a compromised user account. External knowledge associated with operation 132 may then be retrieved and used to augment alert 230 with text indicating why operation 132 was determined to be anomalous.
FIGURE 3 illustrates a cloud event entry 112A being processed by models of each level of a logical hierarchy based on cloud resource provisioning subdivisions. As discussed above in conjunction with FIG. IB, a logical hierarchy of users 104 and resources 108 may be established under tenant 120. As illustrated in FIG. 3, tenant model 304 is trained to generate anomaly score 220A and explainability score 222A based on the operations and other actions taken by users that are part of tenant 120.
Similarly, subscription model 310 is a machine learning model trained to generate anomaly scores 220B and explainability scores 222B for one of subscriptions 122. Specifically, whichever subscription 122 the user identified by principal identifier 116A belongs to is used to infer anomaly score 220B and explainability score 222B.
In the same way, cluster model 320 is a machine learning model trained to generate anomaly scores 220C and explainability scores 222C from the cluster 124 that contains the user 104 or the resource 108 identified by event entry 112A.
In some configurations, if the operation 132 referenced by event entry 112A is determined to be not anomalous - i.e. normal - by any of the levels of hierarchy 302, then the event is determined to be not anomalous, and no alert 230 is raised. However, it is similarly contemplated that only when a majority of levels of the hierarchy indicate the event is not anomalous is the event not considered anomalous. When weighing the results of different machine learning models, different levels in the hierarchy may be weighted differently. In some configurations, a rule-based approach is applied to the outputs of the models 304, 310, and 320 to determine whether event entry 112A is anomalous.
In some configurations, each group of users of hierarchy 302 - e.g. tenant 120, subscriptions 122A and 122B, and clusters 124A-D - is evaluated individually to determine whether the operation 132 is anomalous or not. In some configurations, if a machine learning model associated with any group at any level of the hierarchy 302 indicates that the operation is non-anomalous, then the operation will be interpreted as non-anomalous. In other scenarios, only if a threshold number or percentage of machine learning models associated with groups of users indicate that the operation is non-anomalous will the operation be identified as non-anomalous. In some configurations, a group-based analysis of the operation is combined with a hierarchical level-based analysis of the operation.
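The alternative decision rules in the two preceding paragraphs could be sketched as follows; the score-to-verdict cutoff and the policy names are assumptions made for illustration:

```python
# Sketch of the level-based decision rules: any group deeming the operation
# normal can veto an alert, or a threshold fraction of groups must deem it
# normal. Cutoff and fraction values are hypothetical.
def operation_is_anomalous(level_anomaly_scores, policy="any",
                           cutoff=0.5, required_fraction=0.5):
    """level_anomaly_scores: one score per group/level, higher = more anomalous."""
    non_anomalous = [score < cutoff for score in level_anomaly_scores]
    if policy == "any":
        # Any group considering the operation normal overrides the per-user verdict.
        return not any(non_anomalous)
    # Otherwise, the operation is normal only if enough groups say so.
    return sum(non_anomalous) / len(non_anomalous) < required_fraction
```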
FIGURE 4 illustrates an event entry 112A being processed by models of each level of a geographic hierarchy. Similar to the hierarchy 302 depicted in FIG. 3, FIG. 4 depicts a hierarchy 402 of user 104A, resource 108A, region identifier 422, and country identifier 432. User model 404 is trained to generate anomaly score 220D and explainability score 222D, resource model 410 is trained to generate anomaly score 220E and explainability score 222E, geographic region model 420 is trained to generate anomaly score 220F and explainability score 222F, and country model 430 is trained to generate anomaly score 220G and explainability score 222G. By providing event entry 112A to each of these models to infer the corresponding anomaly scores and explainability scores, a more accurate determination can be made as to whether the operation referenced in event entry 112A is anomalous and therefore likely to be made by a compromised user account.
FIGURE 5 illustrates how anomaly scores 220 and explainability scores 222 are used to generate a rich, meaningful alert 230. As illustrated in conjunction with FIGS. 3-4, machine learning models trained on cloud events generated by hierarchical groups of users are used to infer anomaly scores 220 and explainability scores 222 for a particular event. These scores are then filtered before being augmented with domain specific knowledge used when generating alert 230.
For example, impactful operation filter 502 removes from consideration any operation 132 that has been predefined by a security researcher to be benign. For example, an operation that labels cloud resources may be determined to be anomalous based on the anomaly score 220 and corresponding confidence level 224, but deemed harmless by its nature as an operation that merely names something. In other configurations, impactful operation filter 502 refers to a list of operations that are most interesting - e.g. that have the most impact - as determined by security researchers. Examples of interesting operations are running a command on a virtual machine, changing security settings - e.g. changing the settings about which operations are considered impactful - or installing a script.
Additionally, or alternatively, anomaly scores 220 and explainability scores 222 are processed by anomaly, confidence, and explainability filter 504. This filter culls any operations that have anomaly scores below a defined threshold value. Various combinations of thresholds for anomaly scores, confidence scores, and explainability scores are similarly contemplated as thresholds for determining when and when not to consider a cloud event as a candidate for an alert 230.
If a cloud event is determined to be impactful by impactful operation filter 502, and if the anomaly, confidence, and explainability scores surpass any predefined thresholds applied by filter 504, then alert decision logic 204 will generate an alert 230 that includes a description of the anomalous operation 520.
However, in order to convey to customers why the operation 132 is anomalous, domain specific knowledge enhancement module 506 retrieves external knowledge related to operation 132. For example, a pre-defined map 508 may associate impactful operations with attack types. For example, if an anomalous operation uploads a script file to a virtual machine and executes it, the impactful operation to attack type map 508 may indicate that this operation allows arbitrary code execution. In this way, instead of saying “we found an anomalous operation that was performed on your behalf”, alert 230 can indicate what the anomalous operation can lead to, e.g. “We detected some activity in your account that, in later stages, can cause execution of malicious code.”
Another type of external knowledge that can be referenced when generating alert 230 is a categorization of operations into “intent buckets”. Intent in this context refers to the intent of a hacker that has compromised a user account, and that has performed an operation illicitly. Examples of “intent buckets” include 1) Code execution, 2) Evade defenses, 3) Establish persistence, 4) Attack other resources. By referring to which “intent bucket” an operation falls into, the message of alert 230 can be augmented to indicate what the hacker may be up to.
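Both enrichment sources could be represented as simple lookup tables; the entries below are illustrative assumptions, not drawn from an actual security-domain knowledge base:

```python
# Hedged sketch of domain-specific alert enrichment. Map contents are
# hypothetical examples of security-researcher-curated knowledge.
ATTACK_TYPE_MAP = {
    "VM/RunCommand": "execution of malicious code",
    "Security/DisableSetting": "disabling of security defenses",
}
INTENT_BUCKETS = {
    "VM/RunCommand": "Code execution",
    "Security/DisableSetting": "Evade defenses",
}

def enrich_alert(operation_name: str, base_description: str) -> str:
    parts = [base_description]
    if operation_name in ATTACK_TYPE_MAP:
        parts.append(f"In later stages, this activity can cause "
                     f"{ATTACK_TYPE_MAP[operation_name]}.")
    if operation_name in INTENT_BUCKETS:
        parts.append(f"Likely attacker intent: {INTENT_BUCKETS[operation_name]}.")
    return " ".join(parts)
```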
Once the description of the anomaly 520 and the description of why the operation was determined to be anomalous 522 are available, they may be combined as part of alert 230 and sent to interested parties for review.
FIGURE 6 is a flow diagram showing aspects of a routine for the disclosed techniques. Routine 600 begins at step 602, where parameters 114-117 of an operation 132 are received.
Routine 600 then proceeds to step 604, where one or more hierarchies 118 of groups of users are identified that include the user 104.
The routine then proceeds to step 606, where the received parameters are provided to machine learning models that correspond to the groups of users.
The routine then proceeds to step 608, where an anomaly score 220, a confidence score 224, and an explainability score 222 are received from the machine learning models.
The routine then proceeds to step 610, where a determination is made that operation 132 is anomalous based on the anomaly scores 220 and the confidence scores 224.
The routine then proceeds to step 612, where a description as to why the operation 132 was identified as anomalous is generated based on the explainability scores 222.
The routine then proceeds to step 614, where an alert 230 is generated that includes the description of the anomalous operation 132 and the description of why the operation 132 was identified as anomalous.
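Tying the steps of routine 600 together, a compact sketch might look like the following; every helper reuses an assumed name from the earlier sketches rather than an API defined in this disclosure:

```python
# Assumed end-to-end flow for routine 600 (steps 602-614). `EventEntry`,
# `operation_is_anomalous`, and `enrich_alert` refer to the earlier sketches;
# each model's .score() method returning ModelOutput is also an assumption.
from typing import NamedTuple, Optional

class ModelOutput(NamedTuple):
    anomaly: float
    confidence: float
    explanation: str

def routine_600(event, models) -> Optional[str]:
    # Steps 602-608: provide the event's parameters to each group's model.
    outputs = [model.score(event) for model in models]
    # Step 610: decide from the anomaly (and confidence) scores.
    if not operation_is_anomalous([o.anomaly for o in outputs]):
        return None
    # Step 612: describe why, using the most confident model's explanation.
    why = max(outputs, key=lambda o: o.confidence).explanation
    # Step 614: assemble the alert, enriched with domain knowledge.
    return enrich_alert(event.operation_name, f"Anomalous operation detected. {why}")
```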
It should be understood that the operations of the methods disclosed herein are not necessarily presented in any particular order and that performance of some or all of the operations in an alternative order(s) is possible and is contemplated. The operations have been presented in the demonstrated order for ease of description and illustration. Operations may be added, omitted, and/or performed simultaneously, without departing from the scope of the appended claims.
It also should be understood that the illustrated methods can end at any time and need not be performed in their entirety. Some or all operations of the methods, and/or substantially equivalent operations, can be performed by execution of computer-readable instructions included on a computer-storage media and computer-readable media, as defined herein. The term “computer-readable instructions,” and variants thereof, as used in the description and claims, is used expansively herein to include routines, applications, application modules, program modules, programs, components, data structures, algorithms, and the like. Computer-readable instructions can be implemented on various system configurations, including single-processor or multiprocessor systems, minicomputers, mainframe computers, personal computers, hand-held computing devices, microprocessor-based programmable consumer electronics, combinations thereof, and the like.
Thus, it should be appreciated that the logical operations described herein are implemented (1) as a sequence of computer implemented acts or program modules running on a computing system and/or (2) as interconnected machine logic circuits or circuit modules within the computing system. The implementation is a matter of choice dependent on the performance and other requirements of the computing system. Accordingly, the logical operations described herein are referred to variously as states, operations, structural devices, acts, or modules. These operations, structural devices, acts, and modules may be implemented in software, in firmware, in special purpose digital logic, and any combination thereof.
Although routine 600 is described with reference to the components depicted in the present application, it can be appreciated that the operations of the routine 600 may also be implemented in many other ways. For example, the routine 600 may be implemented, at least in part, by a processor of another remote computer or a local circuit. In addition, one or more of the operations of the routine 600 may alternatively or additionally be implemented, at least in part, by a chipset working alone or in conjunction with other software modules.
FIG. 7 shows additional details of an example computer architecture 700 for a device, such as client computing device 130, capable of executing computer instructions (e.g., a module or a program component described herein). The computer architecture 700 illustrated in FIG. 7 includes processing unit(s) 702, a system memory 704, including a random-access memory 706 (“RAM”) and a read-only memory (“ROM”) 708, and a system bus 710 that couples the memory 704 to the processing unit(s) 702.
Processing unit(s), such as processing unit(s) 702, can represent, for example, a CPU-type processing unit, a GPU-type processing unit, a field-programmable gate array (FPGA), another class of digital signal processor (DSP), or other hardware logic components that may, in some instances, be driven by a CPU. For example, and without limitation, illustrative types of hardware logic components that can be used include Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), System-on-a-Chip Systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc.
A basic input/output system containing the basic routines that help to transfer information between elements within the computer architecture 700, such as during startup, is stored in the ROM 708. The computer architecture 700 further includes a mass storage device 712 for storing an operating system 714, application(s) 716 (e.g., models 202 or alert decision logic 204), and other data described herein.
The mass storage device 712 is connected to processing unit(s) 702 through a mass storage controller connected to the bus 710. The mass storage device 712 and its associated computer-readable media provide non-volatile storage for the computer architecture 700. Although the description of computer-readable media contained herein refers to a mass storage device, it should be appreciated by those skilled in the art that computer-readable media can be any available computer-readable storage media or communication media that can be accessed by the computer architecture 700.
Computer-readable media can include computer-readable storage media and/or communication media. Computer-readable storage media can include one or more of volatile memory, nonvolatile memory, and/or other persistent and/or auxiliary computer storage media, removable and nonremovable computer storage media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Thus, computer storage media includes tangible and/or physical forms of media included in a device and/or hardware component that is part of a device or external to a device, including but not limited to random access memory (RAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), phase change memory (PCM), read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, compact disc read-only memory (CD-ROM), digital versatile disks (DVDs), optical cards or other optical storage media, magnetic cassettes, magnetic tape, magnetic disk storage, magnetic cards or other magnetic storage devices or media, solid-state memory devices, storage arrays, network attached storage, storage area networks, hosted computer storage or any other storage memory, storage device, and/or storage medium that can be used to store and maintain information for access by a computing device.
In contrast to computer-readable storage media, communication media can embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, or other transmission mechanism. As defined herein, computer storage media does not include communication media. That is, computer-readable storage media does not include communications media consisting solely of a modulated data signal, a carrier wave, or a propagated signal, per se.
According to various configurations, the computer architecture 700 may operate in a networked environment using logical connections to remote computers through the network 718. The computer architecture 700 may connect to the network 718 through a network interface unit 720 connected to the bus 710. The computer architecture 700 also may include an input/output controller 722 for receiving and processing input from a number of other devices, including a keyboard, mouse, touch, or electronic stylus or pen. Similarly, the input/output controller 722 may provide output to a display screen, speaker, or other type of output device.
It should be appreciated that the software components described herein may, when loaded into the processing unit(s) 702 and executed, transform the processing unit(s) 702 and the overall computer architecture 700 from a general-purpose computing system into a special-purpose computing system customized to facilitate the functionality presented herein. The processing unit(s) 702 may be constructed from any number of transistors or other discrete circuit elements, which may individually or collectively assume any number of states. More specifically, the processing unit(s) 702 may operate as a finite-state machine, in response to executable instructions contained within the software modules disclosed herein. These computer-executable instructions may transform the processing unit(s) 702 by specifying how the processing unit(s) 702 transition between states, thereby transforming the transistors or other discrete hardware elements constituting the processing unit(s) 702.
In closing, although the various configurations have been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended representations is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing the claimed subject matter.

Claims

1. A method for generating a security alert, comprising: receiving a parameter of an operation that was performed by a user on a cloud service; determining a hierarchy of groups of users that include the user; for each of the groups of users, providing the received parameter to a corresponding machine learning model of a plurality of machine learning models; receiving, from each of the plurality of machine learning models, an anomaly score, a confidence score, and an explainability score; determining that the operation is anomalous based on the anomaly scores and the confidence scores; generating a description of why the operation was identified as anomalous based on the explainability scores; and generating an alert that includes a description of the anomalous operation and the description of why the operation was identified as anomalous.
2. The method of claim 1, further comprising: performing a security countermeasure to mitigate a security threat indicated by the identified anomalous operation.
3. The method of claim 1, further comprising: determining that the operation is non-anomalous based on a determination that an anomaly score of at least one of the groups of users of the hierarchy of groups of users indicates that the operation is non-anomalous.
4. The method of claim 3, wherein determining that the operation is anomalous is based in part on the determination that the operation is non-anomalous.
5. The method of claim 1, further comprising: determining that the operation is non-anomalous based on a determination that a defined number or a defined percentage of anomaly scores associated with the groups of users of the hierarchy of groups of users indicate that the operation is non-anomalous.
6. The method of claim 1, wherein the hierarchy of groups of users is one of a plurality of hierarchies of groups of users, and wherein determining that the operation is anomalous is based in part on anomaly scores generated by machine learning models associated with groups of users from the plurality of hierarchies.
7. The method of claim 1, wherein individual machine learning models of the plurality of machine learning models generate anomaly scores indicating anomalous behavior when historical interaction data associated with the user indicates a frequency of performing the operation is beneath a defined threshold.
8. A device comprising: one or more processors; and a computer-readable storage medium having encoded thereon computer-executable instructions that cause the one or more processors to: receive a parameter of an operation that was performed by a user on a cloud service; determine a hierarchy of groups of users that include the user; for each of the groups of users, provide the received parameter to a corresponding machine learning model of a plurality of machine learning models; receive, from each of the plurality of machine learning models, an anomaly score, a confidence score, and an explainability score; determine that the operation is anomalous based on the anomaly scores and the confidence scores; generate a description of why the operation was identified as anomalous based on the explainability scores; and generate an alert that includes a description of the anomalous operation and the description of why the operation was identified as anomalous.
9. The device of claim 8, wherein the instructions further cause the one or more processors to determine that the operation is non-anomalous based on inclusion in a list of benign operations.
10. The device of claim 9, wherein determining that the operation is anomalous is based in part on the determination that the operation is non-anomalous.
11. The device of claim 8, wherein generating the alert further comprises augmenting the description of the anomalous operation with external knowledge about the operation, wherein the external knowledge comprises an indication of an intention commonly held by hackers when performing the anomalous operation or an indication of a potential consequence if the anomalous operation is part of an attack performed by hackers.
12. The device of claim 8, wherein the hierarchy of groups of users is one of a plurality of hierarchies of groups of users, wherein a first hierarchy of groups of users is a geographic hierarchy and a second hierarchy of groups of users is based on a resource allocation scheme of the cloud service.
13. A computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to: receive a parameter of an operation that was performed by a user on a cloud service; determine a hierarchy of groups of users that include the user; for each of the groups of users, provide the received parameter to a corresponding machine learning model of a plurality of machine learning models; receive, from each of the plurality of machine learning models, an anomaly score and an explainability score; determine that the operation is anomalous based on the anomaly scores; generate a description of why the operation was identified as anomalous based on the explainability scores; and generate an alert that includes a description of the anomalous operation and the description of why the operation was identified as anomalous.
14. The computer-readable storage medium of claim 13, wherein the hierarchy of groups of users includes a top level and a lower level, and wherein a group of users in the top level comprises users found in two or more groups of users from the lower level.
15. The computer-readable storage medium of claim 14, wherein an individual anomaly score associated with the group of users in the top level is generated by a top level machine learning model trained on operations performed by the group of users associated with the top level, wherein properties of the operation are provided as input to an inference operation of the top level machine learning model to generate the individual anomaly score, wherein an individual anomaly score associated with a first group of users in the lower level is generated by a lower level machine learning model trained on operations performed by the first group of users associated with the lower level, and wherein properties of the operation are provided as input to an inference operation of the lower level machine learning model to generate the individual anomaly score.
PCT/US2022/052193 (priority date 2021-12-30, filed 2022-12-07): Detecting compromised cloud users, WO2023129351A1 (en)

Applications Claiming Priority (4)

Application Number    Priority Date    Filing Date    Title
US 63/295,119 (US202163295119P)    2021-12-30    2021-12-30
US 17/689,330 (published as US20230216871A1)    2021-12-30    2022-03-08    Detecting compromised cloud users

Publications (1)

WO2023129351A1 (en)

Family ID: 85036519

Family Applications (1)

Application Number    Title    Priority Date    Filing Date
PCT/US2022/052193 (WO2023129351A1)    Detecting compromised cloud users    2021-12-30    2022-12-07

Country Status (1)

WO: WO2023129351A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number    Priority date    Publication date    Assignee    Title
US20210075794A1 *    2019-09-08    2021-03-11    Microsoft Technology Licensing, Llc    Hardening based on access capability exercise sufficiency
US20210194924A1 *    2019-08-29    2021-06-24    Darktrace Limited    Artificial intelligence adversary red team
US20210273960A1 *    2020-02-28    2021-09-02    Darktrace Limited    Cyber threat defense system and method

Legal Events

121 EP: The EPO has been informed by WIPO that EP was designated in this application.
    Ref document number: 22847193; Country of ref document: EP; Kind code of ref document: A1