CN117596078A - Model-driven user risk behavior discriminating method based on rule engine implementation - Google Patents
Info
- Publication number: CN117596078A
- Application number: CN202410073356.1A
- Authority: CN (China)
- Prior art keywords: behavior, rule, risk, rules, abnormal
- Prior art date: 2024-01-18
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1408—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
- H04L63/1425—Traffic logging, e.g. anomaly detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
- G06F21/56—Computer malware detection or handling, e.g. anti-virus arrangements
- G06F21/566—Dynamic detection, i.e. detection performed at run-time, e.g. emulation, suspicious activities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/20—Network architectures or network communication protocols for network security for managing network security; network security policies in general
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/1396—Protocols specially adapted for monitoring users' activity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/535—Tracking the activity of the user
Landscapes
- Engineering & Computer Science (AREA)
- Computer Security & Cryptography (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Virology (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Debugging And Monitoring (AREA)
Abstract
The invention discloses a model-driven user risk behavior discrimination method implemented with a rule engine, comprising the following steps: S1: collecting log data containing user operation behaviors; S2: extracting the operation behavior subject; S3: constructing a behavior baseline for the subject; S4: discovering abnormal subject behavior and generating abnormal events; S5: constructing a plurality of risk scene models; S6: parsing the risk scene models with a rule engine and performing risk assessment based on the abnormal events and the behavior logs; S7: judging, according to the evaluation result of the rule engine, whether the user behavior constitutes a risk. The method discriminates user risk behavior more accurately, reduces the false alarm rate, and adapts quickly to new risk scenes and behavior patterns in complex and diverse business environments.
Description
Technical Field
The invention belongs to the technical field of data security, and particularly relates to a model-driven user risk behavior discrimination method implemented with a rule engine.
Background
In recent years, traditional network attacks such as viruses, Trojans and phishing are no longer the main threat to enterprise systems; the principal threats have shifted to APT attacks, social engineering and violations by internal personnel. This change shows that network threats continue to evolve and that attackers are adopting more advanced and sophisticated means to gain unauthorized access to enterprise systems and data.
In this new threat environment, enterprises can no longer rely solely on traditional antivirus and firewall software as their main line of defense; they need more advanced and powerful security measures, such as deploying data leakage prevention products and data security monitoring products.
Although these products differ in implementation and emphasis, they share the common feature of using behavior monitoring technology. By monitoring and analyzing user access behavior and comparing it against a policy library, risky abnormal behaviors that may threaten enterprise data security are identified and alarmed on or blocked in a timely manner.
Currently, risk behavior discrimination methods can generally be classified into the following three types:
Rule detection method: a set of rules or policies is formulated to define normal and abnormal behavior; when a user's behavior does not conform to these rules, the behavior is considered abnormal.
Anomaly detection method: on historical behavior data, statistical or machine learning techniques such as variance and standard deviation calculation, box plots, cluster analysis and outlier detection are used to identify patterns that are inconsistent with the majority of behaviors, and such patterns are flagged as abnormal behavior.
Historical behavior modeling: a model is built from a user's historical behavior data to form a behavior baseline of the user in specific dimensions, and an alarm is generated when unusual activity deviating from the baseline is detected.
The limitations of these methods are as follows:
The rule detection method offers good flexibility by supporting the formulation of rule policies. However, rule formulation, especially the setting of thresholds within rules, depends heavily on the experience of security specialists in specific fields; because of the diversity of scenarios and the variability of user behavior, rules formulated for one enterprise are difficult to reuse in another, which raises the barrier to use, and once a threshold is set unreasonably, false positives and false negatives result.
Although the anomaly detection method can find patterns in historical behavior data that do not match the majority of behaviors, it has difficulty classifying those patterns accurately. In addition, because enterprise systems often involve complex business processes and multiple user roles, the behavior patterns of some roles objectively differ from those of most users, and this method struggles to distinguish such cases accurately. Furthermore, anomaly detection usually analyzes all historical behavior data within a time interval; as a post-hoc analysis method, it offers essentially no real-time capability.
The historical behavior modeling method forms a user's behavior baseline by learning from and analyzing real historical data, which alleviates the difficulty of setting rule thresholds to some extent and allows newly generated user behavior to be checked promptly against the baseline to discover risk behavior. However, the method has difficulty adapting to complex business scenarios: a dedicated baseline learning model can only be custom-developed for single scene dimensions identified in advance, and baseline models cannot be built dynamically for new scene dimensions at run time.
The invention aims to overcome the limitations of these methods and provides a model-driven user risk behavior discrimination method based on a rule engine, which discriminates user risk behavior more accurately, reduces the false alarm rate, and adapts quickly to new risk scenes and behavior patterns in a dynamic environment.
Disclosure of Invention
In order to solve the problems described in the background art, the invention provides a model-driven user risk behavior discrimination method based on a rule engine, which addresses the false positives and false negatives, the lack of real-time capability, and the inability to adapt to complex business scenarios found in the prior art.
To achieve the above purpose, the invention provides the following technical solution:
A model-driven user risk behavior discrimination method implemented with a rule engine includes the following steps:
S1: collecting log data containing user operation behaviors;
S2: extracting the operation behavior subject; the network sites accessed by the operation behaviors are obtained from the request metadata in the log data, account extraction rules are configured, and the user account is extracted as the operation behavior subject;
S3: constructing a behavior baseline for the operation behavior subject; based on the log data and the subject extracted in S2, various behavior baselines of the subject are learned through machine learning or statistical algorithms at the preset label baseline granularity;
S4: discovering abnormal behavior of the operation behavior subject and generating an abnormal event; a baseline anomaly analysis module analyzes the log data in real time against the behavior baselines learned in S3 and judges whether the behavior indicators of the subject deviate from the baselines; if they deviate, an abnormal event is generated and the method proceeds to the next step, otherwise it returns to S1;
S5: constructing a plurality of risk scene models; each risk scene model comprises model basic information, an effective-range configuration and a rule configuration, and risk rules are defined in the rule configuration;
S6: parsing the risk scene models with a rule engine and performing risk assessment based on the abnormal events and the behavior logs;
the rule engine dynamically loads all currently defined risk scene models and parses the risk rules in the models, and performs risk assessment on the abnormal events of the operation behavior subject according to the risk rules in all the rule configurations;
S7: judging whether the user behavior constitutes a risk according to the evaluation result of the rule engine;
when any risk assessment hits, other risk-assessment hit events of the same operation behavior subject are retrieved from a risk-assessment hit cache, whether the subject hits a risk scene model is evaluated, and whether the user behavior constitutes a risk is judged.
Preferably, the operation of collecting log data in S1 specifically comprises the following steps:
S1.1: tapping, in bypass mode, the network traffic exchanged with the enterprise intranet business systems, parsing and restoring the multi-protocol traffic with a network protocol collection engine, and continuously generating operation log data of enterprise system users;
S1.2: extracting key indicators, request instructions, request objects and accessed content from the restored raw log data layer by layer;
S1.3: combining sensitivity and classification identification of the uplink and downlink request and response data, completing preliminary cleaning and tagging of the operation log data, and pushing it to a message component.
Preferably, the method for obtaining the user account in S2 is specifically: all network sites are sorted and merged into business systems, account extraction rules are configured for each individual business system, and the user account of the system is extracted synchronously while the log data is cleaned.
Preferably, in S4, after an abnormal event is generated, the event is scored according to the baseline weight and the degree of deviation;
the abnormal event comprises the abnormal subject, the abnormal baseline, the associated operation logs and the abnormality score;
the abnormal event may be pushed to the message component.
Preferably, in S5, the basic information includes the name, purpose, state and alarm object type of the risk scene model;
the effective-range configuration defines the log range that can trigger risk assessment;
the rule configuration defines the specific risk behavior decision rules;
the decision rules include must rules, optional rules, baseline rules, parameter rules, single rules and combination rules.
Preferably, the must rule and the optional rule are respectively specified as:
must rule: the operation behavior of the operation behavior subject can be confirmed to match the risk scene definition only if the must rule is hit;
optional rule: where several optional rules exist, each optional rule may individually miss, but if a number of optional rules greater than or equal to a preset number hit simultaneously, the operation behavior of the operation behavior subject is determined to match the risk scene definition.
Preferably, the baseline rule and the parameter rule are respectively specified as:
baseline rule: when the behavior of the operation behavior subject deviates from the baseline, a baseline abnormal event is generated, and the risk analysis program judges whether the baseline rule hits by asserting on the baseline-abnormal behavior log, the abnormal subject and the abnormality risk score;
parameter rule: a scene rule whose main decision basis is manually configured behavior threshold parameters; by processing the traffic logs in real-time or near-real-time stream batches, the risk analysis program asserts on the traffic logs and window statistics under the rule constraint conditions, and judges whether the parameter rule hits.
Preferably, the single rule and the combination rule are respectively specified as:
single rule: an independent, specific rule comprising a set of constraints and a corresponding assertion evaluation; when the given constraints are satisfied, the single rule triggers the corresponding assertion evaluation;
combination rule: a rule system composed of a plurality of single rules or combination rules that may be interrelated, nested or executed in a preset order; combination rules are used to handle complex relationships between conditions or to take actions in different contexts.
Preferably, the specific operation of S6 is: when the risk behavior discrimination program in the rule engine starts, it loads all currently defined risk scene models into an in-memory cache and supports subscribing to model change messages; the rule engine extracts all decision rules in the risk scene models and groups them;
the risk behavior discrimination program subscribes to user operation log messages and baseline abnormal event messages in real time, and executes the logic of the different decision rules according to the type of message consumed.
Compared with the prior art, the invention has the following beneficial effects:
1. Accuracy: a behavior baseline is learned for each user subject, and abnormal events are generated automatically according to the degree of deviation from the baseline, reducing manual threshold configuration and lowering the barrier to operation; baseline rules and parameter rules can be combined, combination logic is supported at every level (model rules, rule constraints, rule assertions and assertion operations), and pre-rule definitions and pre-rule checks map the temporal relationships of concrete situations, so real, complex risk scenes can be modeled. Together, these features allow user risk behavior to be discriminated more accurately and reduce the false alarm rate.
2. Dynamism: user risk behavior discrimination is driven by the disclosed risk scene model; risk scenes may be defined dynamically from the model according to actual requirements, and the rule engine loads risk scene models dynamically and drives different decision logic according to the rules defined by each model, adapting to new risk behavior patterns.
3. Efficiency: by processing the real-time and near-real-time data streams of user operation logs and abnormal event messages in batches and performing risk assessment with a model-driven rule engine, risk discrimination can be carried out quickly over large-scale user behavior data, improving efficiency.
In general, the invention discriminates user risk behavior more accurately, reduces the false alarm rate, and adapts quickly to new risk scenes and behavior patterns in complex and diverse business environments.
Drawings
Fig. 1 is a schematic flow chart of the present application.
Detailed Description
The present invention will be further described in detail below with reference to the accompanying drawings and specific examples, to help those skilled in the art understand its technical content. It should be understood that the specific examples described herein are intended only to illustrate the invention, not to limit it.
Example 1:
As shown in Fig. 1, a model-driven user risk behavior discrimination method implemented with a rule engine includes the following steps:
S1: collecting log data of user operation behaviors;
The network traffic exchanged with the enterprise intranet business systems is tapped in bypass mode, the multi-protocol traffic is parsed and restored by a network protocol collection engine, and operation log data of enterprise system users is generated continuously. Key indicators, request instructions, request objects and accessed content are then extracted from the restored raw log data layer by layer; combined with sensitivity and classification identification of the uplink and downlink request and response data, the preliminary cleaning and tagging of the operation log data is completed and the data is pushed to a message component.
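For concreteness, the following is a minimal sketch of what a cleaned, tagged operation-log record pushed to the message component could look like. The field names and the sensitivity labels are illustrative assumptions, not taken from the patent text:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class OperationLog:
    """One cleaned operation-log record (hypothetical field names)."""
    timestamp: datetime          # when the request was observed
    src_ip: str                  # client address seen in the mirrored traffic
    dst_ip: str                  # target server of the business system
    dst_port: int
    protocol: str                # e.g. "HTTP", restored by the protocol engine
    request_cmd: str             # request instruction, e.g. "POST /api/export"
    request_object: str          # request object, e.g. an API path or file name
    content_bytes: int           # size of the accessed content
    sensitivity: str = "unknown" # label added during cleaning, e.g. "sensitive"
    account: str = ""            # filled in later by the S2 account extraction

# Example record that the collector might push to the message component
log = OperationLog(datetime(2024, 1, 18, 10, 5), "10.0.0.8", "10.0.1.20", 443,
                   "HTTP", "POST /api/export", "/api/export", 52_428_800,
                   sensitivity="sensitive")
print(log)
```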
S2: extracting the operation behavior subject;
Based on request metadata in the operation logs, such as the target IP and port, the network sites accessed in the traffic are discovered and then sorted and merged into business systems according to the actual situation. Account extraction rules are configured for each individual business system, so that the user account of the system is extracted synchronously while the log data is being cleaned; this user account is taken as the operation behavior subject.
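A minimal sketch of a per-business-system account extraction rule, assuming the account appears somewhere in the request metadata; the system names and regular expressions are illustrative assumptions:

```python
import re
from typing import Optional

# Hypothetical account extraction rules, keyed by business system.
# Each rule maps a (dst_ip, dst_port) site to a regex that captures the account.
ACCOUNT_RULES = {
    "crm": {"sites": {("10.0.1.20", 443)}, "pattern": re.compile(r"[?&]uid=(\w+)")},
    "erp": {"sites": {("10.0.1.30", 8080)}, "pattern": re.compile(r"X-User:\s*(\w+)")},
}

def extract_subject(dst_ip: str, dst_port: int, raw_request: str) -> Optional[str]:
    """Return the user account (operation behavior subject) or None."""
    for system, rule in ACCOUNT_RULES.items():
        if (dst_ip, dst_port) in rule["sites"]:
            m = rule["pattern"].search(raw_request)
            if m:
                return f"{system}:{m.group(1)}"   # subject = system + account
    return None

print(extract_subject("10.0.1.20", 443, "GET /home?uid=alice HTTP/1.1"))  # crm:alice
```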
S3: learning the subject behavior baseline;
Based on the system user operation logs and the operation behavior subject extracted in the previous step, various behavior baselines of the subject are learned through machine learning or statistical algorithms at the preset label baseline granularity.
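As one of many possible statistical baselines, the sketch below learns, for each subject, the mean and standard deviation of its hourly request counts from historical logs. The granularity (per subject, per hour) and the chosen statistic are assumptions for illustration:

```python
from collections import defaultdict
from statistics import mean, pstdev

def learn_hourly_baseline(history):
    """history: iterable of (subject, day, hour, request_count).
    Returns {subject: {"mean": m, "std": s}} as a simple behavior baseline."""
    per_subject = defaultdict(list)
    for subject, _day, _hour, count in history:
        per_subject[subject].append(count)
    return {s: {"mean": mean(c), "std": pstdev(c)} for s, c in per_subject.items()}

# Toy history: one subject, working hours of a single day
history = [("crm:alice", "2024-01-15", h, 20 + (h % 3)) for h in range(9, 18)]
baselines = learn_hourly_baseline(history)
print(baselines["crm:alice"])
```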
S4: discovering abnormal subject behavior and generating abnormal events;
The baseline anomaly analysis module analyzes the operation log data in real time against the subject behavior baselines learned in the previous step and, combining statistical and time-series algorithms, judges whether the subject's behavior indicators deviate from the baselines. If the deviation is significant, an abnormal event is generated and the event is scored according to factors such as the baseline weight and the degree of deviation. The abnormal event includes the abnormal subject, the abnormal baseline, the associated operation logs, the abnormality score and so on, and may be pushed to the message component.
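Continuing the previous sketch, deviation from the learned baseline can be checked with a z-score and turned into an abnormal event carrying a weighted score. The threshold of 3 and the weighting scheme are illustrative assumptions, not the patent's scoring formula:

```python
def check_deviation(subject, hourly_count, baselines,
                    baseline_weight=1.0, z_threshold=3.0):
    """Return an abnormal event dict if the indicator deviates significantly, else None."""
    base = baselines.get(subject)
    if base is None or base["std"] == 0:
        return None
    z = (hourly_count - base["mean"]) / base["std"]
    if abs(z) < z_threshold:
        return None                      # within the baseline, no event
    return {
        "subject": subject,              # abnormal subject
        "baseline": "hourly_request_count",
        "score": round(baseline_weight * abs(z), 2),   # score = weight * deviation
        "associated_logs": [],           # filled with the triggering operation logs
    }

baselines = {"crm:alice": {"mean": 21.0, "std": 1.0}}  # e.g. output of the S3 sketch
print(check_deviation("crm:alice", 95, baselines))
```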
S5: constructing the risk scene models;
The invention discloses a risk scene model for risk behavior discrimination. The model supports dynamic definition, allowing an administrator to adjust and refine it as needed to match the real risk scenes of the enterprise system.
The risk scene model consists mainly of three parts: model basic information, effective-range configuration and rule configuration. The basic information part describes the name, purpose, state and alarm object type of the model; the effective-range configuration part defines the log range that can trigger risk assessment; the rule configuration part defines the specific risk behavior decision rules.
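A minimal sketch of how such a three-part risk scene model might be written down declaratively (here as a Python dict; a JSON or database representation would work the same way). Every field name and value below is an assumption for illustration:

```python
# Hypothetical declarative form of one risk scene model.
RISK_SCENE_MODEL = {
    "basic_info": {
        "name": "bulk sensitive data export",
        "purpose": "detect accounts exporting sensitive data outside normal behavior",
        "state": "enabled",
        "alarm_object_type": "account",
    },
    "effective_range": {
        # only logs in this range can trigger risk assessment
        "business_systems": ["crm"],
        "log_types": ["operation_log", "baseline_abnormal_event"],
    },
    "rule_config": {
        "min_optional_rule_hits": 1,
        "rules": [
            {"id": "r1", "necessity": "must", "kind": "baseline",
             "baseline": "hourly_request_count",
             "assertion": {"object": "event_score", "op": ">", "value": 3.0}},
            {"id": "r2", "necessity": "optional", "kind": "parameter",
             "constraint": {"object": "sensitivity", "op": "==", "value": "sensitive"},
             "assertion": {"object": "content_bytes", "window": "sum",
                           "window_seconds": 3600, "op": ">", "value": 100 * 2**20}},
        ],
    },
}
print(RISK_SCENE_MODEL["basic_info"]["name"])
```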
The rule configuration part of the risk scene model is the core of the model definition. The disclosed risk scene model divides rules into must rules and optional rules from the perspective of necessity; into baseline rules and parameter rules from the perspective of implementation; and into single rules and combination rules from the perspective of construction. They are described in turn below; a small worked example of the single and combination rules follows this list.
Must rule: the scene rule must hit before the subject behavior can be identified as matching the risk scene definition.
Optional rule: the scene rule may miss, but at least the configured "minimum optional rule hit count" of all the optional rules must hit before the subject behavior can be considered to match the risk scene definition.
Baseline rule: a scene rule whose main decision basis is the subject behavior baseline and the abnormal events generated by deviating from it. When the subject behavior deviates significantly from the baseline, a baseline abnormal event is generated, and the risk analysis program judges whether the baseline rule hits by asserting on factors such as the baseline-abnormal behavior log, the abnormal subject and the abnormality risk score.
Parameter rule: a scene rule whose main decision basis is manually configured behavior threshold parameters. By processing the traffic logs in real-time or near-real-time stream batches, the risk analysis program asserts on factors such as the traffic logs and window statistics under the rule constraint conditions, and judges whether the parameter rule hits.
Single rule: an independent, specific rule that typically contains a set of constraints and a corresponding assertion. When the given constraints are satisfied, the single rule triggers the corresponding assertion evaluation. For example, a simple single rule might be: during working hours (09:00-18:00), alarm on any account that calls more than 100 APIs. In this rule, the constraint is "the access time is between 09:00 and 18:00" and the assertion is "the number of API interfaces accessed is greater than 100".
Combination rule: a more complex rule system composed of single rules or other combination rules. These rules may be interrelated, nested or executed in a preset order to support more complex contextual behavior. Combination rules are typically used to handle complex relationships between multiple conditions or to take different actions in different contexts. For example: the business system is accessed more than 1000 times within an hour, and sensitive data is downloaded in bulk (more than 100 MB), or accessed frequently (more than 100 times), or downloaded in chunks larger than 50 MB more than 50 times. Here, "the business system is accessed more than 1000 times within an hour" is a single rule, the sensitive-data clause is itself a combination rule, and the two are joined by a logical AND into a new combination rule.
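The two examples above can be expressed as small predicate functions. The sketch below encodes the working-hours API rule and the sensitive-data combination rule under assumed log field names (time, sensitivity, content_bytes); it is an illustration of the rule semantics, not the patent's rule syntax:

```python
from datetime import datetime

def single_rule_api_calls(logs):
    """Single rule: during 09:00-18:00, more than 100 API calls by one account."""
    in_hours = [l for l in logs if 9 <= l["time"].hour < 18]   # constraint
    return len(in_hours) > 100                                  # assertion

def combo_rule_sensitive(logs):
    """Combination rule: bulk download (>100 MB) OR frequent access (>100 times)
    OR more than 50 downloads each larger than 50 MB, over sensitive data."""
    sens = [l for l in logs if l.get("sensitivity") == "sensitive"]
    bulk = sum(l["content_bytes"] for l in sens) > 100 * 2**20
    frequent = len(sens) > 100
    big_downloads = sum(1 for l in sens if l["content_bytes"] > 50 * 2**20) > 50
    return bulk or frequent or big_downloads

def combined(logs):
    """The ">1000 accesses in an hour" single rule AND the sensitive-data combination rule."""
    return len(logs) > 1000 and combo_rule_sensitive(logs)

logs = [{"time": datetime(2024, 1, 18, 10, 0), "sensitivity": "sensitive",
         "content_bytes": 60 * 2**20}] * 1200
print(single_rule_api_calls(logs), combined(logs))   # True True
```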
In the risk scene model, a single rule, besides general attributes such as its name and the scene model it belongs to, generally also contains a list of pre-rules, which expresses the ordering between rules and maps the temporal relationships of concrete situations in a real, complex scene.
In particular, any single rule may be either a baseline rule or a parameter rule. The former must specify the baseline anomaly that triggers the rule evaluation, the latter must specify the parameter constraints that trigger the rule evaluation, and both must specify the assertion used to evaluate whether the rule hits.
A rule assertion mainly contains the following: the limited object, the window function, the statistics window, additional window-statistics dimensions and the assertion operation. The limited object may be the score of an abnormal event, an attribute of an operation log, an attribute of the abnormal subject or an attribute of a pre-rule decision result, and represents the left operand of the assertion operation. The window function performs a preliminary window evaluation over the limited object; the supported window functions include sum, average, minimum, maximum, count and distinct count, and in particular the distinct-count function can be stacked on top of the other window functions. The statistics window defines the time interval over which the window function is computed, and the additional dimensions allow a finer statistical granularity to be specified. Finally, the assertion operation, typically a comparison operation, takes the value computed by the window function over the limited object as its left operand and compares it against a specified right operand; when the operation evaluates to true, the rule evaluation is judged a hit. In particular, rule assertions and the assertion operations within them also support logical combination.
A baseline rule identifies, through the baseline it is associated with, which subject baseline abnormal events can trigger its evaluation. A parameter rule triggers its evaluation through configured parameter constraints, which are similar in structure to assertions but no longer contain the attributes and logic related to window statistics.
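A minimal sketch of how an assertion with a limited object, a window function, a statistics window and a comparison operation might be evaluated. The assertion structure and function names follow the earlier model sketch and are assumptions, not the patent's concrete format:

```python
import operator
from datetime import datetime, timedelta

WINDOW_FUNCS = {
    "sum": sum,
    "avg": lambda xs: sum(xs) / len(xs),
    "min": min,
    "max": max,
    "count": len,
    "distinct_count": lambda xs: len(set(xs)),
}
OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt,
       "==": operator.eq, "!=": operator.ne}

def evaluate_assertion(assertion, logs, now):
    """assertion example:
    {"object": "content_bytes", "window": "sum", "window_seconds": 3600,
     "op": ">", "value": 100 * 2**20}"""
    window = assertion.get("window")
    if window is None:
        left = logs[-1][assertion["object"]]          # single-log evaluation
    else:
        start = now - timedelta(seconds=assertion["window_seconds"])
        values = [l[assertion["object"]] for l in logs if l["time"] >= start]
        if not values:
            return False
        left = WINDOW_FUNCS[window](values)           # window function gives the left operand
    return OPS[assertion["op"]](left, assertion["value"])

now = datetime(2024, 1, 18, 11, 0)
logs = [{"time": now - timedelta(minutes=m), "content_bytes": 30 * 2**20} for m in range(5)]
a = {"object": "content_bytes", "window": "sum", "window_seconds": 3600,
     "op": ">", "value": 100 * 2**20}
print(evaluate_assertion(a, logs, now))   # True: 5 * 30 MB > 100 MB
```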
S6: parsing the risk scene model rules with the rule engine and performing risk assessment based on the abnormal events and behavior logs;
The invention discloses a rule engine that can parse the risk scene model rules and perform risk assessment. The rule engine is the core component of the user risk behavior discrimination program: it dynamically loads all currently defined risk scene models, parses the rules in the models, evaluates according to the rule definitions whether the user behavior is risk behavior, and raises the corresponding alarms. The details are set out below.
When the risk behavior discrimination program starts, all currently defined risk scene models are loaded into an in-memory cache, and model change messages are subscribed to so that the cached risk scene models can be updated in time. The rule engine extracts all rules from the loaded risk scene models, including single rules and combination rules, and then groups them according to whether a rule has a pre-rule requirement and whether its type is baseline rule or parameter rule. For a rule with a pre-rule requirement, hit evaluation can be skipped while its pre-rule has not hit, which improves the running efficiency of the program.
The risk behavior discrimination program subscribes to user operation log messages and baseline abnormal event messages in real time and executes different logic according to the type of message consumed: for user operation log messages, the discrimination program executes the parameter rule evaluation logic; for baseline abnormal event messages, it executes the baseline rule evaluation logic.
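A minimal sketch of this dispatch step: the discrimination program consumes a message and routes operation-log messages to parameter-rule evaluation and baseline-abnormal-event messages to baseline-rule evaluation. The message shapes and handler names are assumptions; the handlers are placeholders for the evaluation logic described in the following paragraphs:

```python
def evaluate_parameter_rules(op_log, rules):
    # placeholder for the parameter-rule evaluation logic described below
    return [r["id"] for r in rules if r["kind"] == "parameter"]

def evaluate_baseline_rules(event, rules):
    # placeholder for the baseline-rule evaluation logic described below
    return [r["id"] for r in rules if r["kind"] == "baseline"
            and r["baseline"] == event["baseline"]]

def dispatch(message, rules):
    """Route a consumed message to the matching evaluation logic."""
    if message["type"] == "operation_log":
        return evaluate_parameter_rules(message["payload"], rules)
    if message["type"] == "baseline_abnormal_event":
        return evaluate_baseline_rules(message["payload"], rules)
    return []

rules = [{"id": "r1", "kind": "baseline", "baseline": "hourly_request_count"},
         {"id": "r2", "kind": "parameter"}]
msg = {"type": "baseline_abnormal_event",
       "payload": {"baseline": "hourly_request_count", "subject": "crm:alice"}}
print(dispatch(msg, rules))   # ['r1']
```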
When parameter rule evaluation is executed, the rule engine parses the rule constraints and judges, according to the constraint definitions, whether the currently consumed user operation log satisfies the constraint conditions. If it does, the rule assertion is evaluated further, and if the assertion succeeds the parameter rule is considered hit. For a parameter rule assertion, the limited object may be configured as an operation log attribute, a subject attribute, a pre-rule hit result attribute, and so on. If the assertion is configured without a window function, the attribute of the limited object is extracted from the single operation log, the comparison operation is performed, and the operation result determines whether the assertion succeeds. If the assertion is configured with a window function, the limited object and the additional window-statistics dimensions defined by the rule are used to select, within the statistics window, the set of logs that also satisfy the constraint conditions; the window function is then executed, its result is compared, and the operation result determines whether the assertion succeeds.
When baseline rule evaluation is executed, the rule engine first selects, according to the abnormal baseline from which the abnormal event deviates, the baseline rules configured for that baseline; it then extracts or queries other information of the abnormal event, such as the event score, the associated log set and the abnormal subject, and makes the further rule assertion judgment. If the assertion succeeds, the baseline rule is considered hit. For a baseline rule assertion, the limited object may be configured as the event score, an abnormal-behavior log attribute, an abnormal subject attribute, a pre-rule hit result attribute, and so on. When the limited object is an abnormal-behavior log attribute: if the assertion is configured without a window function, the relevant attribute of the most recent log in the abnormal associated log set is compared, and the operation result determines whether the assertion succeeds; if the assertion is configured with a window function, the window function is executed over the corresponding log attribute of the abnormal associated log set, the function result is then compared, and the operation result determines whether the assertion succeeds.
If the rule whose evaluation hits is a member of a combination rule, it is further necessary to determine whether the combination rule hits according to the combination operator. If the combination operation is a logical AND, it must be checked whether the subject of the hit rule has also hit the other rules in the combination; the combination rule hits only when all member rules have hit. If the combination operation is a logical OR, the combination rule hits directly.
When any rule evaluation hits, whether it is a single rule or a combination rule, the hit cache records the rule hit event, and the program re-checks whether any model rule with a pre-rule hit requirement has now satisfied that requirement; if so, the model rule that has satisfied its pre-requirement enters the evaluation queue and waits to be evaluated.
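A minimal sketch of the combination-rule decision and the rule-hit cache described above: rule hits are recorded per subject, a logical AND requires all member rules to have hit for that subject, and a logical OR hits immediately. The cache structure and names are assumptions for illustration:

```python
from collections import defaultdict

# rule-hit cache: subject -> set of rule ids that have evaluated to hit
HIT_CACHE = defaultdict(set)

def record_hit(subject, rule_id):
    HIT_CACHE[subject].add(rule_id)

def combination_hit(subject, combo):
    """combo example: {"op": "and", "members": ["r1", "r2"]}"""
    hits = HIT_CACHE[subject]
    if combo["op"] == "or":
        return any(m in hits for m in combo["members"])   # OR: any member hit suffices
    return all(m in hits for m in combo["members"])       # AND: every member must hit

record_hit("crm:alice", "r1")
combo = {"op": "and", "members": ["r1", "r2"]}
print(combination_hit("crm:alice", combo))   # False: r2 has not hit yet
record_hit("crm:alice", "r2")
print(combination_hit("crm:alice", combo))   # True
```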
S7: judging whether the user behavior constitutes a risk according to the evaluation result of the rule engine;
When any rule evaluation hits, other rule hit events of the same subject are retrieved from the rule hit cache according to the subject of the hit rule; combined with the must rules, the optional rules and the minimum optional rule hit count defined by the risk scene model, it is evaluated whether the risk scene model is hit, and thereby whether the user behavior constitutes a risk.
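A minimal sketch of this final decision: for the subject of the newly hit rule, the hit cache is consulted and the scene is judged hit only if every must rule has hit and at least the minimum number of optional rules has hit. Names follow the earlier model sketch and are assumptions:

```python
def scene_hit(subject_hits, model):
    """subject_hits: set of rule ids already hit by this subject (from the hit cache).
    model: risk scene model with a rule_config as in the earlier sketch."""
    cfg = model["rule_config"]
    must = [r["id"] for r in cfg["rules"] if r["necessity"] == "must"]
    optional = [r["id"] for r in cfg["rules"] if r["necessity"] == "optional"]
    if any(r not in subject_hits for r in must):
        return False                                   # a must rule missed: no risk yet
    optional_hits = sum(1 for r in optional if r in subject_hits)
    return optional_hits >= cfg.get("min_optional_rule_hits", 0)

model = {"rule_config": {"min_optional_rule_hits": 1, "rules": [
    {"id": "r1", "necessity": "must"}, {"id": "r2", "necessity": "optional"}]}}
print(scene_hit({"r1"}, model))          # False: optional quota not met
print(scene_hit({"r1", "r2"}, model))    # True: user behavior constitutes a risk
```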
Claims (9)
1. A model-driven user risk behavior discrimination method implemented with a rule engine, characterized by comprising the following steps:
S1: collecting log data containing user operation behaviors;
S2: extracting the operation behavior subject; the network sites accessed by the operation behaviors are obtained from the request metadata in the log data, account extraction rules are configured, and the user account is extracted as the operation behavior subject;
S3: constructing a behavior baseline for the operation behavior subject; based on the log data and the operation behavior subject extracted in S2, various behavior baselines of the subject are learned through machine learning or statistical algorithms at the preset label baseline granularity;
S4: discovering abnormal behavior of the operation behavior subject and generating an abnormal event; a baseline anomaly analysis module analyzes the log data in real time against the behavior baselines learned in S3 and judges whether the behavior indicators of the subject deviate from the baselines; if they deviate, an abnormal event is generated and the method proceeds to the next step, otherwise it returns to S1;
S5: constructing a plurality of risk scene models; each risk scene model comprises model basic information, an effective-range configuration and a rule configuration, and risk rules are defined in the rule configuration;
S6: parsing the risk scene models with a rule engine and performing risk assessment based on the abnormal events and the behavior logs;
the rule engine dynamically loads all currently defined risk scene models and parses the risk rules in the models, and performs risk assessment on the abnormal events of the operation behavior subject according to the risk rules in all the rule configurations;
S7: judging whether the user behavior constitutes a risk according to the evaluation result of the rule engine;
when any risk assessment hits, other risk-assessment hit events of the same operation behavior subject are retrieved from a risk-assessment hit cache, whether the subject hits a risk scene model is evaluated, and whether the user behavior constitutes a risk is judged.
2. The model-driven user risk behavior discrimination method implemented with a rule engine according to claim 1, wherein the operation of collecting log data in S1 specifically comprises the following steps:
S1.1: tapping, in bypass mode, the network traffic exchanged with the enterprise intranet business systems, parsing and restoring the multi-protocol traffic with a network protocol collection engine, and continuously generating operation log data of enterprise system users;
S1.2: extracting key indicators, request instructions, request objects and accessed content from the restored raw log data layer by layer;
S1.3: combining sensitivity and classification identification of the uplink and downlink request and response data, completing preliminary cleaning and tagging of the operation log data, and pushing it to a message component.
3. The model-driven user risk behavior discrimination method implemented with a rule engine according to claim 1, wherein the method for obtaining the user account in S2 is specifically: all network sites are sorted and merged into business systems, account extraction rules are configured for each individual business system, and the user account of the system is extracted synchronously while the log data is cleaned.
4. The model-driven user risk behavior discrimination method implemented with a rule engine according to claim 1, wherein in S4, after an abnormal event is generated, the event is scored according to the baseline weight and the degree of deviation;
the abnormal event comprises the abnormal subject, the abnormal baseline, the associated operation logs and the abnormality score;
the abnormal event may be pushed to the message component.
5. The model-driven user risk behavior discrimination method implemented with a rule engine according to claim 1, wherein in S5 the basic information includes the name, purpose, state and alarm object type of the risk scene model;
the effective-range configuration defines the log range that can trigger risk assessment;
the rule configuration defines the specific risk behavior decision rules;
the decision rules include must rules, optional rules, baseline rules, parameter rules, single rules and combination rules.
6. The model-driven user risk behavior discrimination method implemented with a rule engine according to claim 5, wherein the must rule and the optional rule are respectively specified as:
must rule: the operation behavior of the operation behavior subject can be confirmed to match the risk scene definition only if the must rule is hit;
optional rule: where several optional rules exist, each optional rule may individually miss, but if a number of optional rules greater than or equal to a preset number hit simultaneously, the operation behavior of the operation behavior subject is determined to match the risk scene definition.
7. The model-driven user risk behavior discrimination method implemented with a rule engine according to claim 5, wherein the baseline rule and the parameter rule are respectively specified as:
baseline rule: when the behavior of the operation behavior subject deviates from the baseline, a baseline abnormal event is generated, and the risk analysis program judges whether the baseline rule hits by asserting on the baseline-abnormal behavior log, the abnormal subject and the abnormality risk score;
parameter rule: a scene rule whose main decision basis is manually configured behavior threshold parameters; by processing the traffic logs in real-time or near-real-time stream batches, the risk analysis program asserts on the traffic logs and window statistics under the rule constraint conditions, and judges whether the parameter rule hits.
8. The model-driven user risk behavior discrimination method implemented with a rule engine according to claim 5, wherein the single rule and the combination rule are respectively specified as:
single rule: an independent, specific rule comprising a set of constraints and a corresponding assertion evaluation; when the given constraints are satisfied, the single rule triggers the corresponding assertion evaluation;
combination rule: a rule system composed of a plurality of single rules or combination rules that may be interrelated, nested or executed in a preset order; combination rules are used to handle complex relationships between conditions or to take actions in different contexts.
9. The model-driven user risk behavior discrimination method implemented with a rule engine according to claim 1, wherein the specific operation of S6 is: when the risk behavior discrimination program in the rule engine starts, it loads all currently defined risk scene models into an in-memory cache and supports subscribing to model change messages; the rule engine extracts all decision rules in the risk scene models and groups them;
the risk behavior discrimination program subscribes to user operation log messages and baseline abnormal event messages in real time, and executes the logic of the different decision rules according to the type of message consumed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410073356.1A CN117596078B (en) | 2024-01-18 | 2024-01-18 | Model-driven user risk behavior discriminating method based on rule engine implementation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117596078A true CN117596078A (en) | 2024-02-23 |
CN117596078B CN117596078B (en) | 2024-04-02 |
Family
ID=89918735
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410073356.1A Active CN117596078B (en) | 2024-01-18 | 2024-01-18 | Model-driven user risk behavior discriminating method based on rule engine implementation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117596078B (en) |
- 2024-01-18: CN application CN202410073356.1A filed; granted as CN117596078B (status: active)
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107888574A (en) * | 2017-10-27 | 2018-04-06 | 深信服科技股份有限公司 | Method, server and the storage medium of Test database risk |
KR20210133598A (en) * | 2020-04-29 | 2021-11-08 | 주식회사 오케이첵 | Method for monitoring anomaly about abuse of private information and device for monitoring anomaly about abuse of private information |
CN112149749A (en) * | 2020-09-29 | 2020-12-29 | 北京明朝万达科技股份有限公司 | Abnormal behavior detection method and device, electronic equipment and readable storage medium |
CN112769747A (en) * | 2020-11-12 | 2021-05-07 | 成都思维世纪科技有限责任公司 | 5G data security risk evaluation method and evaluation system |
CN112685711A (en) * | 2021-02-02 | 2021-04-20 | 杭州宁达科技有限公司 | Novel information security access control system and method based on user risk assessment |
CN114218569A (en) * | 2021-12-17 | 2022-03-22 | 中国建设银行股份有限公司 | Data analysis method, device, equipment, medium and product |
CN114553720A (en) * | 2022-02-28 | 2022-05-27 | 中国工商银行股份有限公司 | User operation abnormity detection method and device |
CN115396109A (en) * | 2022-03-03 | 2022-11-25 | 四川中电启明星信息技术有限公司 | Scene-based data dynamic authorization control method and system |
CN115859240A (en) * | 2022-11-30 | 2023-03-28 | 群硕软件开发(上海)有限公司 | Log-based main body anomaly detection scoring method |
CN116011640A (en) * | 2022-12-30 | 2023-04-25 | 中国联合网络通信集团有限公司 | Risk prediction method and device based on user behavior data |
CN116185802A (en) * | 2023-03-10 | 2023-05-30 | 中国工商银行股份有限公司 | User risk behavior monitoring method and device |
CN117370548A (en) * | 2023-09-06 | 2024-01-09 | 中国电信股份有限公司 | User behavior risk identification method, device, electronic equipment and medium |
Non-Patent Citations (1)
Title |
---|
Li Qixin, Tian Xiuxia: "Database anomalous access detection based on deep feature synthesis and association rules" (基于深度特征合成和关联规则的数据库异常访问检测), Journal of Shanghai University of Electric Power (上海电力大学学报), vol. 38, no. 2, 30 April 2022 (2022-04-30) *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117978836A (en) * | 2024-03-05 | 2024-05-03 | 安徽中杰信息科技有限公司 | Large-screen situation awareness system applied to cloud monitoring service platform |
Also Published As
Publication number | Publication date |
---|---|
CN117596078B (en) | 2024-04-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110958220B (en) | Network space security threat detection method and system based on heterogeneous graph embedding | |
US7114183B1 (en) | Network adaptive baseline monitoring system and method | |
US9369484B1 (en) | Dynamic security hardening of security critical functions | |
CN117596078B (en) | Model-driven user risk behavior discriminating method based on rule engine implementation | |
CN116662989B (en) | Security data analysis method and system | |
CN103368979A (en) | Network security verifying device based on improved K-means algorithm | |
CN110896386B (en) | Method, device, storage medium, processor and terminal for identifying security threat | |
TW200530805A (en) | Database user behavior monitor system and method | |
US9961047B2 (en) | Network security management | |
CN108123939A (en) | Malicious act real-time detection method and device | |
CN113904881B (en) | Intrusion detection rule false alarm processing method and device | |
KR101692982B1 (en) | Automatic access control system of detecting threat using log analysis and automatic feature learning | |
CN112039858A (en) | Block chain service security reinforcement system and method | |
CN113157652A (en) | User line image and abnormal behavior detection method based on user operation audit | |
CN113965341A (en) | Intrusion detection system based on software defined network | |
CN115795330A (en) | Medical information anomaly detection method and system based on AI algorithm | |
JP2004054706A (en) | Security risk management system, program, and recording medium thereof | |
CN111159702B (en) | Process list generation method and device | |
CN118101250A (en) | Network security detection method and system | |
RU180789U1 (en) | DEVICE OF INFORMATION SECURITY AUDIT IN AUTOMATED SYSTEMS | |
CN117640240A (en) | Dynamic white list admittance release method and system based on machine learning | |
CN115632884B (en) | Network security situation perception method and system based on event analysis | |
CN114925366A (en) | Method, system, terminal and storage medium for virus detection and blocking | |
Sun et al. | Intelligent log analysis system for massive and multi-source security logs: MMSLAS design and implementation plan | |
CN112988327A (en) | Container safety management method and system based on cloud edge cooperation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |