CN114553720A - User operation anomaly detection method and device - Google Patents

User operation anomaly detection method and device

Info

Publication number: CN114553720A
Application number: CN202210190006.4A
Authority: CN (China)
Prior art keywords: user, log, detected, detection, information
Legal status: Pending (the status listed is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 黄英盾, 徐雯
Current and original assignee: Industrial and Commercial Bank of China Ltd (ICBC)
Application filed by Industrial and Commercial Bank of China Ltd (ICBC)
Priority to CN202210190006.4A
Publication of CN114553720A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14: Network analysis or design
    • H04L 41/145: Network analysis or design involving simulating, designing, planning or modelling of a network
    • H04L 43/00: Arrangements for monitoring or testing data switching networks
    • H04L 43/10: Active monitoring, e.g. heartbeat, ping or trace-route
    • H04L 63/00: Network architectures or network communication protocols for network security
    • H04L 63/10: Network security for controlling access to devices or network resources
    • H04L 63/101: Access control lists [ACL]
    • H04L 63/14: Network security for detecting or protecting against malicious traffic
    • H04L 63/1408: Detecting or protecting against malicious traffic by monitoring network traffic
    • H04L 63/1416: Event detection, e.g. attack signature detection
    • H04L 63/1425: Traffic logging, e.g. anomaly detection

Abstract

The invention provides a method and a device for detecting user operation anomalies, which can be used in the technical field of information security. The method comprises the following steps: parsing the obtained system log file of the user operation to obtain a log to be detected; determining at least one detection model according to user information and the log to be detected; and detecting the log to be detected according to the at least one detection model to obtain an anomaly detection result. The method and the device can improve the accuracy and efficiency of detecting user operations.

Description

User operation anomaly detection method and device
Technical Field
The present invention relates to the field of network security technologies, and in particular, to a method and an apparatus for detecting user operation anomalies.
Background
At present, network attacks take various forms, involving vulnerability exploitation, virus propagation, phishing mails and the like. User login, as the entrance to an information system, is the first pass for attackers seeking legitimate access credentials, and faces security problems such as brute-force cracking, privilege escalation, authentication bypass and illegal enabling of accounts. To monitor attack behavior from the perspective of the user, some small organizations perform log analysis and alerting with detection models, but when such models are applied to large organizations they suffer from problems such as low detection accuracy and low efficiency.
Disclosure of Invention
The invention aims to provide a user operation anomaly detection method that improves the accuracy and efficiency of detecting user operations. Another object of the present invention is to provide a user operation anomaly detection apparatus. A further object of this invention is to provide a computer device. A further object of this invention is to provide a computer-readable medium.
In order to achieve the above objects, the present invention discloses a user operation anomaly detection method, comprising:
parsing the obtained system log file of the user operation to obtain a log to be detected;
determining at least one detection model according to user information and the log to be detected;
and detecting the log to be detected according to the at least one detection model to obtain an anomaly detection result.
Preferably, the parsing of the obtained system log file of the user operation to obtain the log to be detected specifically comprises:
extracting logs from the obtained system log file of the user operation based on a preset parsing rule to obtain logs to be detected of different log types.
Preferably, the determining at least one detection model according to the user information and the log to be detected specifically includes:
determining a risk mark of a user according to the user information and a preset user information base;
determining at least one abnormal behavior characteristic to be detected from a preset detection rule base according to the risk mark and the log to be detected;
and determining the detection models corresponding to the at least one abnormal behavior characteristic to be detected respectively.
Preferably, before parsing the obtained system log file of the user operation to obtain the log to be detected, the method further comprises:
setting corresponding risk marks for the users according to the user information;
and storing the user information and the corresponding risk marks into a preset information base.
Preferably, the setting of the corresponding risk flag for the user according to the user information specifically includes:
acquiring authority information of a user from a target server;
and determining a risk mark corresponding to the user according to the authority information of the user and the target server information.
Preferably, before parsing the obtained system log file of the user operation to obtain the log to be detected, the method further comprises:
determining a plurality of abnormal behavior features;
determining abnormal behavior characteristics corresponding to different risk marks of a user;
and obtaining detection rules for the different risk marks according to the risk marks, the corresponding abnormal behavior characteristics and the types of the logs to be detected, and forming a preset detection rule base.
Preferably, the method further comprises the step of establishing the detection model in advance:
determining log keywords corresponding to different abnormal behavior characteristics, and determining a first detection model for detecting the log keywords;
determining a graph algorithm corresponding to different abnormal behavior characteristics, and determining a second detection model that performs detection based on the graph algorithm.
Preferably, the method further comprises the following steps:
and if the anomaly detection result indicates that an anomaly exists, blocking the IP of the user terminal operated by the user and setting the user information to an unavailable state.
Preferably, the method further comprises the following steps:
if the anomaly detection result indicates that an anomaly exists, determining whether automatic recovery is needed according to the user information;
and if so, restoring the object data corresponding to the user operation to the initial state.
Preferably, the method further comprises the following steps:
if the anomaly detection result indicates that an anomaly exists, acquiring authority approval information, and determining according to the authority approval information whether the user operation has been approved; if not, determining that the user operation is abnormal;
if so, feeding back abnormal behavior information to operation and maintenance personnel so that they confirm whether it is an attack; if it is, determining that the user operation is abnormal; otherwise, receiving the non-attack confirmation information fed back by the operation and maintenance personnel and updating the detection model.
The invention also discloses a user operation anomaly detection apparatus, comprising:
a log file parsing module 11, which parses the obtained system log file of the user operation to obtain a log to be detected;
a detection rule determining module 12, which determines at least one detection model according to user information and the log to be detected;
and a log anomaly detection module 13, which detects the log to be detected according to the at least one detection model to obtain an anomaly detection result.
The invention also discloses a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor,
the processor, when executing the program, implements the method as described above.
The invention also discloses a computer-readable medium, having stored thereon a computer program,
which when executed by a processor implements the method as described above.
The user operation anomaly detection method of the invention parses the obtained system log file of the user operation to obtain a log to be detected, determines at least one detection model according to the user information and the log to be detected, and detects the log to be detected according to the at least one detection model to obtain an anomaly detection result. By parsing the system log file formed by user operations to obtain the log to be detected, a large number of junk logs can be removed; a corresponding detection model is then determined according to the characteristics of the user in the user information and the log type of the log to be detected, and the log to be detected is detected to obtain the anomaly detection result. Because the log file is parsed and the detection model is determined according to the user information and the log to be detected, a suitable detection model can be selected, through different detection rules, for different logs to be detected and different user information, which improves the accuracy and efficiency of detecting user operations.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of user operation anomaly detection in the prior art;
FIG. 2 is a flowchart of a user operation anomaly detection method according to an exemplary embodiment of the present invention;
FIG. 3 is a flowchart of S100 of a user operation anomaly detection method according to a specific embodiment of the present invention;
FIG. 4 is a flowchart of S200 of a user operation anomaly detection method according to a specific embodiment of the present invention;
FIG. 5 is a flowchart of S000 of a user operation anomaly detection method according to a specific embodiment of the present invention;
FIG. 6 is a flowchart of S010 of a user operation anomaly detection method according to a specific embodiment of the present invention;
FIG. 7 is a flowchart of S030 of a user operation anomaly detection method according to a specific embodiment of the present invention;
FIG. 8 is a flowchart of S040 of a user operation anomaly detection method according to a specific embodiment of the present invention;
FIG. 9 is a flowchart of S400 of a user operation anomaly detection method according to a specific embodiment of the present invention;
FIG. 10 is a flowchart of S500 of a user operation anomaly detection method according to a specific embodiment of the present invention;
FIG. 11 is a flowchart of S600 of a user operation anomaly detection method according to a specific embodiment of the present invention;
FIG. 12 is a schematic structural diagram of an embodiment of a user operation anomaly detection apparatus according to the present invention;
FIG. 13 is a schematic structural diagram of an information presetting module in an exemplary embodiment of a user operation anomaly detection apparatus according to the present invention;
FIG. 14 is a schematic structural diagram of a computer device suitable for implementing embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the method and the device for detecting the user operation anomaly disclosed by the present application can be used in the technical field of information security, and can also be used in any field except the technical field of information security.
In the prior art, monitoring of attack behavior against the many forms of network attack is usually implemented from the perspective of the user, and some small organizations perform log analysis and alerting with models developed in-house. For example, as shown in fig. 1, in the user-attack monitoring of small organizations, a monitoring model is typically built by analyzing raw server log data and writing SQL statements based on static rules, so that some relatively intuitive abnormal-behavior alarms are produced; the alarms are sent out by mail and a centralized monitoring system and are followed up, confirmed and handled manually. Such a monitoring model has a simple framework; its input data processing, rule logic and monitored objects are coarse, and it lacks a complete and reasonable evaluation and disposal system: comprehensive monitored objects and types, an objective risk-assessment process, efficient processing methods, objective alarm grading and automated alarm disposal are all missing.
Therefore, when the existing detection models are applied to a large organization, problems exist in the following six aspects:
1. The monitoring scope and categories are not comprehensive. A large system has many users but no complete user information base or abnormal-access-behavior feature base; the monitoring dimension is single, and illegal operations without obvious attack characteristics are easily missed. For example, if only privileged users with UID 0 are monitored, ordinary users whose privileges can be escalated are easily ignored; if only brute-force login within a specific time window is monitored, the password-spraying intrusion pattern cannot be found.
2. Alarm accuracy is low. Monitoring based on static rules outputs a large number of false alarms, most of which are actually caused by normal operations of operation and maintenance personnel or by normal program interaction, so the model is not accurate enough.
3. Alarms lack an objective risk level. The generated alarms have no objective grading standard and are handled according to the experience of operation and maintenance personnel, so alarm types with serious impact may be handled slowly.
4. Model execution efficiency is low. A large system has numerous servers, a huge log volume and limited resources, so the model runs slowly.
5. The risk of service interruption is high. For a monitored user whose availability is damaged, for example locked out after too many password attempts, manual emergency handling is inefficient and normal operation of the service may be affected.
6. The risk of system intrusion is high. For behavior confirmed as an attack, the relevant department must be notified by mail and the relevant specialist must log in to the protection system to perform operations such as IP blocking, so the attack cannot be blocked immediately and the system may be further damaged.
Therefore, the existing user operation anomaly detection methods have problems such as low accuracy and low efficiency.
To facilitate understanding of the technical solutions provided in the present application, relevant contents of these solutions are first described below. The user operation anomaly detection method provided by the embodiment of the invention parses the system log file formed by user operations to obtain the log to be detected, which removes a large number of junk logs; it then determines a corresponding detection model according to the user information and the log to be detected, detects the log to be detected to obtain an anomaly detection result, and thereby establishes a planned, targeted and standardized model.
The user operation anomaly detection system provided by the embodiment of the invention comprises a production system to be monitored, which provides services for users, and a user operation anomaly detection apparatus. The production system can comprise a plurality of servers; the user operations performed by a user connected to the production system form operation logs in the system, which are further assembled into log files and stored in the system.
The user operation anomaly detection apparatus can parse the obtained system log file of the user operation to obtain a log to be detected; determine at least one detection model according to user information and the log to be detected; and detect the log to be detected according to the at least one detection model to obtain an anomaly detection result.
It should be noted that the user operation anomaly detection apparatus of the present invention may be deployed separately or integrated with the production system; those skilled in the art may deploy it according to the actual situation, which is not limited by the present invention.
The following describes an implementation of the user operation anomaly detection method provided by the embodiment of the present invention, taking the user operation anomaly detection apparatus as the execution subject. It can be understood that the execution subject of the method includes, but is not limited to, the user operation anomaly detection apparatus.
According to one aspect of the invention, this embodiment discloses a user operation anomaly detection method. As shown in fig. 2, in this embodiment, the method comprises:
S100: parsing the obtained system log file of the user operation to obtain a log to be detected.
S200: determining at least one detection model according to user information and the log to be detected.
S300: detecting the log to be detected according to the at least one detection model to obtain an anomaly detection result.
The user operation anomaly detection method parses the obtained system log file of the user operation to obtain a log to be detected, determines at least one detection model according to the user information and the log to be detected, and detects the log to be detected according to the at least one detection model to obtain an anomaly detection result. By parsing the system log file formed by user operations to obtain the log to be detected, a large number of junk logs can be removed; a corresponding detection model is then determined according to the characteristics of the user in the user information and the log type of the log to be detected, and the log to be detected is detected to obtain the anomaly detection result. Because the log file is parsed and the detection model is determined according to the user information and the log to be detected, a suitable detection model can be selected, through different detection rules, for different logs to be detected and different user information, which improves the accuracy and efficiency of detecting user operations.
In a preferred embodiment, as shown in fig. 3, the parsing of the obtained system log file of the user operation in S100 to obtain the log to be detected specifically comprises:
S110: extracting logs from the obtained system log file of the user operation based on a preset parsing rule to obtain logs to be detected of different log types.
Specifically, after a user connects to the production system and performs the corresponding user operation on it, the servers of the production system record the process of the user operation in log files. The system log files of the user operation can therefore be obtained from all servers of the production system and then parsed to obtain log files of different types as the logs to be detected. Preferably, attribute information corresponding to the different log types can be preset, and the logs of each type can be obtained by parsing the attribute information in the log files.
For example, in a specific example, the log types may include a login log, a user authority change log and an operation log, and the attribute information of the different log types may include the characters, key commands and formats of each type. The obtained log file can be matched against the attribute information to separate the logs by type, thereby screening out a large number of junk logs. Furthermore, the different types of logs can be sent to corresponding data pools for temporary storage.
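By way of illustration only, the following Python sketch shows one possible form of such type-based log extraction; the regular expressions, type names and sample lines are illustrative assumptions and are not part of the disclosed embodiment.

    import re
    from collections import defaultdict

    # Hypothetical preset parsing rules: each log type is recognised by a regular
    # expression over the raw line (its characters, key commands and format).
    PARSE_RULES = {
        "login_log":     re.compile(r"sshd\[\d+\]: (Accepted|Failed) \w+ for"),
        "privilege_log": re.compile(r"\b(usermod|groupmod|visudo|passwd)\b"),
        "operation_log": re.compile(r"\bCOMMAND="),
    }

    def classify_logs(raw_lines):
        """Split a raw system log file into per-type pools of logs to be detected;
        lines matching no rule are treated as junk and discarded."""
        pools = defaultdict(list)
        for line in raw_lines:
            for log_type, pattern in PARSE_RULES.items():
                if pattern.search(line):
                    pools[log_type].append(line)
                    break  # the first matching type wins
        return pools

    sample = [
        "Jan 10 10:00:01 host sshd[123]: Accepted password for alice from 10.1.2.3",
        "Jan 10 10:00:05 host CRON[456]: (root) CMD (run-parts /etc/cron.hourly)",
        "Jan 10 10:01:00 host sudo: bob : COMMAND=/usr/sbin/usermod carol -g wheel",
    ]
    for log_type, lines in classify_logs(sample).items():
        print(log_type, len(lines))   # the CRON line matches no rule and is dropped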
Preferably, in order to ensure data integrity and to recover the log file in time after an erroneous operation, the log file can be backed up before being parsed to obtain the log to be detected.
In a preferred embodiment, as shown in fig. 4, the determining of at least one detection model in S200 according to the user information and the log to be detected specifically comprises:
S210: determining the risk mark of the user according to the user information and a preset user information base.
S220: determining at least one abnormal behavior characteristic to be detected from a preset detection rule base according to the risk mark and the log to be detected.
S230: determining the detection models respectively corresponding to the at least one abnormal behavior characteristic to be detected.
Specifically, in this preferred embodiment, in order to apply different anomaly detection to different users, the risk mark of the user can be determined from the user information of the user performing the operation, and at least one corresponding abnormal behavior characteristic can be set in advance for users with each risk mark. In this way, users with different risk marks undergo different anomaly detection: for a user whose risk mark indicates high risk, more abnormal behavior characteristics to be detected can be set than for a low-risk user, so that high-risk users are checked strictly and the flexibility of user operation anomaly detection is improved.
In a preferred embodiment, as shown in fig. 5, before parsing the obtained system log file of the user operation to obtain the log to be detected, the method further comprises S000:
S010: setting a corresponding risk mark for the user according to the user information.
S020: storing the user information and the corresponding risk mark into a preset information base.
Optionally, the risk level of the user can be determined from the user information, and a risk mark is then set for the user information. By forming a user information base from the user information and the corresponding risk marks, multiple anomaly checks can be performed on high-risk users, while low-risk users only need the necessary anomaly checks.
The abnormal behavior characteristics can be determined in advance according to enterprise specification requirements and daily operation and maintenance requirements. An abnormal behavior characteristic can be a network attack, or a behavior such as logging in while bypassing the bastion host, mutual access between different applications, operations outside the change window, high-risk operations or logins from abnormal terminals. The abnormal behavior characteristics corresponding to different risk marks are determined according to the user operations that need to be monitored for users with each risk mark.
In a preferred embodiment, as shown in fig. 6, the setting of a corresponding risk mark for the user in S010 according to the user information specifically comprises:
S011: acquiring the authority information of the user from the target server.
S012: determining the risk mark corresponding to the user according to the authority information of the user and the target server information.
Specifically, user authority information such as the locking state, authority, usage mode, whether the user may be locked, the personnel the user belongs to, the type of server it belongs to and the services it carries can be obtained in advance from the servers of the production system. The risk level of the user can be determined from this authority information, and the risk mark of the user is determined accordingly; the user information is then associated with the risk mark and stored in the user information base, which can be a database.
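As a minimal, purely illustrative sketch (the permission fields, threshold logic and example users below are assumptions, not part of the disclosure), the risk mark could be derived from the authority information and stored in a user information base as follows:

    def assign_risk_mark(perm):
        """perm: authority information pulled from the target server, here a dict
        with hypothetical keys such as 'uid', 'lockable', 'carries_core_service'
        and 'server_type'."""
        if perm.get("uid") == 0 or perm.get("carries_core_service"):
            return "high"
        if not perm.get("lockable", True) or perm.get("server_type") == "production":
            return "medium"
        return "low"

    def build_user_info_base(users):
        """users: iterable of (user_name, authority-info dict); returns the preset
        user information base mapping each user to its permissions and risk mark."""
        return {
            name: {"permissions": perm, "risk_mark": assign_risk_mark(perm)}
            for name, perm in users
        }

    user_info_base = build_user_info_base([
        ("appadmin", {"uid": 0, "lockable": False, "carries_core_service": True,
                      "server_type": "production"}),
        ("reporter", {"uid": 1003, "lockable": True, "carries_core_service": False,
                      "server_type": "test"}),
    ])
    print(user_info_base["appadmin"]["risk_mark"])   # -> high
    print(user_info_base["reporter"]["risk_mark"])   # -> low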
In a preferred embodiment, as shown in fig. 7, before parsing the obtained system log file of the user operation to obtain the log to be detected, the method further comprises S030:
S031: determining a plurality of abnormal behavior characteristics.
S032: determining the abnormal behavior characteristics corresponding to the different risk marks of users.
S033: obtaining detection rules for the different risk marks according to the risk marks, the corresponding abnormal behavior characteristics and the types of the logs to be detected, and forming a preset detection rule base.
Specifically, for each risk mark, at least one corresponding abnormal behavior characteristic can be determined. Combining the risk marks of the users that need attention with the specific abnormal behavior characteristics yields a variety of detection rules and thereby comprehensive monitoring content, and all the detection rules together form the preset detection rule base.
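A minimal sketch of such a preset detection rule base is given below, assuming it is keyed by the pair (risk mark, log type); the feature names echo the examples mentioned above and the structure is an illustrative assumption only.

    # Illustrative preset detection rule base: (risk_mark, log_type) maps to the
    # abnormal behaviour characteristics that must be checked for that pair.
    DETECTION_RULE_BASE = {
        ("high", "login_log"):     ["bypass_bastion_login", "abnormal_terminal_login"],
        ("high", "privilege_log"): ["illegal_privilege_grant"],
        ("high", "operation_log"): ["high_risk_command", "off_change_window_operation"],
        ("low",  "login_log"):     ["abnormal_terminal_login"],
    }

    def features_to_detect(risk_mark, log_type):
        """Return the abnormal behaviour characteristics selected for one
        (user risk mark, log type) pair; unknown pairs get no extra checks."""
        return DETECTION_RULE_BASE.get((risk_mark, log_type), [])

    print(features_to_detect("high", "operation_log"))  # two features for high risk
    print(features_to_detect("low", "operation_log"))   # [] -> only essential checks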
In a preferred embodiment, as shown in fig. 8, the method further comprises a step S040 of establishing the detection models in advance:
S041: determining log keywords corresponding to different abnormal behavior characteristics, and determining a first detection model for detecting the log keywords.
S042: determining a graph algorithm corresponding to different abnormal behavior characteristics, and determining a second detection model that performs detection based on the graph algorithm.
It can be understood that, for different abnormal behavior characteristics, anomaly detection can be performed on the log to be detected through detection models corresponding to static rules and to dynamic rules, which ensures the accuracy of anomaly detection. Specifically, for a static rule, log keywords can be set and the first detection model matches the log keywords in the log file to detect anomalies; the set log keywords generally do not change. For a dynamic rule, a daily behavior baseline is established with a graph algorithm from the user operation behaviors and frequent mutual-access behaviors observed over a period of time, such as the user's common commands, the terminals frequently logged in from and the applications that normal services need to access; a second detection model is then formed with a machine-learning clustering algorithm to identify abnormal behaviors, and it is continuously optimized and dynamically adjusted according to the learning results.
The detection models mainly comprise the first detection model and the second detection model, formed from static and dynamic rules respectively, for example models for illegal authorization creation and illegal source-address login. When a detection result is abnormal, alarm information can be output to operation and maintenance personnel by mail or through a centralized monitoring platform. The output alarm information is graded, by combining the user information base and the abnormal behavior characteristics, according to factors such as the risk degree of the user and the sensitivity of the behavior.
In one specific example, the illegal privilege grant model uses a static rule, namely that an ordinary user is granted special-level rights. The first detection model corresponding to the abnormal behavior characteristic of the high-risk operation command 'usermod <user name> -g <privileged user group>' is selected from the abnormal behavior characteristic library; the log to be detected is compared with this command by the first detection model, and if the log to be detected matches the command, alarm information is output.
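A possible shape of such a static-rule first detection model is sketched below in Python; the regular expressions and log lines are illustrative assumptions, with the privilege-grant pattern mirroring the usermod example above.

    import re

    # Static-rule first detection model: one keyword/pattern per abnormal
    # behaviour characteristic.  The patterns below are illustrative only.
    KEYWORD_MODELS = {
        "illegal_privilege_grant": re.compile(r"usermod\s+\S+\s+-g\s+(root|wheel|sudo)"),
        "high_risk_command":       re.compile(r"(rm -rf /|mkfs|dd if=)"),
    }

    def run_first_detection_model(feature, logs_to_detect):
        """Return the log lines that match the keyword rule of the given
        abnormal behaviour characteristic; each hit would raise an alarm."""
        pattern = KEYWORD_MODELS[feature]
        return [line for line in logs_to_detect if pattern.search(line)]

    hits = run_first_detection_model(
        "illegal_privilege_grant",
        ["sudo: ops1 : COMMAND=/usr/sbin/usermod carol -g wheel",
         "sudo: ops1 : COMMAND=/usr/bin/ls /var/log"],
    )
    print(hits)   # only the usermod line triggers an alarm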
In another specific example, illegal source-address login uses a dynamic rule, namely that a user logs in from a terminal that it does not log in from daily. The login records of the user over one month are acquired, an 'access source - user - destination server' graph is created to form a baseline of legal access, and the log to be detected is checked by the second detection model; if the user logs in to a destination server from a new network segment on some day, an alarm is output.
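The following heavily simplified sketch illustrates the idea of the dynamic-rule second detection model, keeping the 'access source - user - destination server' baseline as a set of learned edges; the real model described above would additionally use graph algorithms and a machine-learning clustering algorithm, so this is an assumption-laden illustration rather than the claimed implementation.

    class LoginBaselineModel:
        """Dynamic-rule second detection model, reduced to its simplest form:
        the legal-access baseline is the set of (source segment, user, destination
        server) edges seen during roughly one month of login logs."""

        def __init__(self):
            self.baseline_edges = set()

        def fit(self, history):
            # history: iterable of (source_segment, user, destination_server)
            self.baseline_edges = set(history)

        def detect(self, login):
            # A login whose edge is absent from the baseline deviates from the
            # daily behaviour baseline and is reported as a candidate anomaly.
            return login not in self.baseline_edges

    model = LoginBaselineModel()
    model.fit([("10.1.2.0/24", "alice", "srv-app-01"),
               ("10.1.2.0/24", "alice", "srv-app-02")])
    print(model.detect(("192.168.9.0/24", "alice", "srv-app-01")))  # True: new segment
    print(model.detect(("10.1.2.0/24", "alice", "srv-app-02")))     # False: in baseline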
In a preferred embodiment, as shown in fig. 9, the method further comprises:
s400: and if the abnormal detection result indicates that the user terminal is abnormal, the user terminal IP operated by the user is sealed and the user information is set to be in an unavailable state.
Specifically, in the preferred embodiment, when there is an anomaly in the anomaly detection result, in order to timely stop the attack behavior, an automated process may be adopted to timely stop the network attack. If the anomaly detection result is abnormal, a network attack danger may exist, measures for carrying out blocking on an ip address of a user terminal operated by a user can be executed to block an external attack source, and meanwhile, the bastion machine can be linked to lock the user, so that the user cannot use the bastion machine, and the internal transverse expansion behavior is blocked.
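A sketch of such an automated disposal step is shown below; block_ip() and lock_user() are hypothetical placeholders standing in for the security protection system and the bastion host, since no concrete product interface is specified in the disclosure.

    def block_ip(ip):
        # Placeholder for the security protection system: block the source IP.
        print(f"[protection system] blocking source IP {ip}")

    def lock_user(user):
        # Placeholder for the bastion host: set the user to an unavailable state.
        print(f"[bastion host] locking user {user}")

    def handle_anomaly(result):
        """result: dict carrying at least 'is_anomalous', 'source_ip' and 'user'."""
        if result.get("is_anomalous"):
            block_ip(result["source_ip"])   # cut off the external attack source
            lock_user(result["user"])       # stop internal lateral movement

    handle_anomaly({"is_anomalous": True, "source_ip": "203.0.113.7", "user": "carol"})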
In a preferred embodiment, as shown in fig. 10, the method further comprises S500:
S510: if the anomaly detection result indicates that an anomaly exists, determining whether automatic recovery is needed according to the user information.
S520: if so, restoring the object data corresponding to the user operation to its initial state.
Specifically, some user information is highly important and must not be changed arbitrarily. In this case the user information can be retrieved from the user information base, and whether automatic unlocking or recovery of the user's authority is required is determined according to that information; if the conditions are met, the availability of the user is restored automatically. For example, if a user is marked in the information base as 'not lockable', 'user authority uid is 0' and 'carries core services', the user is judged to have high requirements on availability and authority, and once a change of its state is detected, it must be restored to the initial state immediately.
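Illustratively, with the flag names below taken from the marks quoted above and restore_initial_state() left as a placeholder, the automatic-recovery decision could look like this:

    def needs_auto_recovery(user_entry):
        """Decide from the user information base whether availability must be
        restored automatically (the same criteria as the marks quoted above)."""
        p = user_entry["permissions"]
        return (not p.get("lockable", True)
                or p.get("uid") == 0
                or p.get("carries_core_service", False))

    def restore_initial_state(user_name):
        # Placeholder: unlock the account and restore its original authority.
        print(f"restoring {user_name} to the initial state")

    def auto_recover_if_needed(user_name, user_info_base):
        entry = user_info_base.get(user_name)
        if entry and needs_auto_recovery(entry):
            restore_initial_state(user_name)

    info_base = {"appadmin": {"permissions": {"uid": 0, "lockable": False,
                                              "carries_core_service": True}}}
    auto_recover_if_needed("appadmin", info_base)   # restored automatically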
In a preferred embodiment, as shown in fig. 11, the method further comprises S600:
S610: if the anomaly detection result indicates that an anomaly exists, acquiring authority approval information and determining according to it whether the user operation has been approved; if not, judging that the user operation is abnormal.
S620: if so, feeding back the abnormal behavior information to operation and maintenance personnel so that they confirm whether it is an attack; if it is, determining that the user operation is abnormal; otherwise, receiving the non-attack confirmation information fed back by the operation and maintenance personnel and updating the detection model.
Specifically, the anomaly detection result of a detection model may be a false alarm. In this preferred embodiment, when the anomaly detection result indicates an anomaly, authority approval information is obtained, that is, it is determined whether the anomalous user operation was applied for temporarily and has been approved. If it has not been approved, the user operation is abnormal. If it has been approved, the operation may be a normal user operation performed under a temporary application, and it must be fed back to the operation and maintenance personnel to further determine whether the operation is abnormal: if the personnel confirm an attack, the anomaly detection result is accurate and the user operation is abnormal; if the personnel confirm that it is not an attack, the non-attack confirmation information they send back is received, indicating that the anomaly detection result output by the detection model is inaccurate and a false alarm exists. The anomaly detection results confirmed as attack or non-attack can be used as training samples to further train the detection model, so that the model is continuously updated and its detection accuracy improves.
In a specific example, to confirm automatically whether an operation is an attack, the system can be linked with the authority approval system of the enterprise to confirm whether the anomalous user operation has been approved by operation and maintenance personnel; if it has not been approved but suspicious operations occur, it is judged to be an attack. If the user has applied by e-mail and the operation and maintenance personnel have approved the application, the task system is linked, an e-mail is sent automatically to the operation and maintenance personnel, and whether it is an attack is judged from their feedback. If an attack is confirmed, the security protection system is linked and measures such as blocking the IP of the user terminal operated by the user and setting the user information to an unavailable state are implemented; if it is confirmed not to be an attack, the dynamic rule is updated to prevent the false alarm from recurring.
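The confirmation flow can be summarised by the following sketch, in which the approval lookup, the operation and maintenance confirmation and the model update are stubbed out with hypothetical callables; in a real deployment they would correspond to the authority approval system, the mail/task system and the model-training pipeline respectively.

    def handle_alarm(alarm, approval_lookup, ask_operations_staff, update_model):
        """Return 'attack' or 'false_alarm' for one anomaly detection result."""
        if not approval_lookup(alarm):           # no approved change application
            return "attack"                      # the user operation is abnormal
        if ask_operations_staff(alarm):          # approved, but staff confirm attack
            return "attack"
        update_model(alarm, label="not_attack")  # false alarm: update dynamic rules
        return "false_alarm"

    verdict = handle_alarm(
        {"user": "carol", "action": "usermod carol -g wheel"},
        approval_lookup=lambda alarm: True,           # an approved application exists
        ask_operations_staff=lambda alarm: False,     # staff say it is not an attack
        update_model=lambda alarm, label: print("feeding back sample:", alarm, label),
    )
    print(verdict)   # -> false_alarm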
In summary, compared with the user operation anomaly detection methods in the prior art, the method has the following advantages:
1. Comprehensive monitoring coverage: a complete user information base is formed with the bastion host of a large organization and users are marked, so that important users are monitored intensively while secondary users receive secondary attention; a variety of monitoring rules are formed on the basis of the abnormal-access-behavior feature library, covering malicious behaviors without obvious attack characteristics.
2. High alarm accuracy: dynamic rules are built with machine learning and the prediction model is continuously adjusted, achieving accurate monitoring of user behavior.
3. Alarms graded by objective risk: the generated alarms are graded according to factors such as the risk degree of the user, the importance level of the application and the sensitivity of the behavior.
4. High model execution efficiency: the logs are split into tables and into large and small partitions, and junk data is filtered out, which reduces the amount of underlying data the model reads and improves model efficiency.
5. Reduced risk of service interruption: for a monitored user whose availability is damaged, a confirmation process is initiated automatically and the user state is restored automatically, recovering production and service at the first opportunity.
6. Reduced risk of system intrusion: for behavior confirmed as an attack, the security protection system is linked automatically to block the threat, cutting off the attack source at the network layer, or the user is locked automatically and the user authority is restored, preventing the attacker from intruding further.
Based on the same principle, this embodiment also discloses a user operation anomaly detection apparatus. As shown in fig. 12, the apparatus comprises a log file parsing module 11, a detection rule determining module 12 and a log anomaly detection module 13.
The log file parsing module 11 is configured to parse the obtained system log file of the user operation to obtain a log to be detected.
The detection rule determining module 12 is configured to determine at least one detection model according to the user information and the log to be detected.
The log anomaly detection module 13 is configured to detect the log to be detected according to the at least one detection model to obtain an anomaly detection result.
In a preferred embodiment, the log file parsing module 11 is specifically configured to extract logs from the obtained system log file of the user operation based on a preset parsing rule to obtain logs to be detected of different log types.
In a preferred embodiment, the detection rule determining module 12 is specifically configured to determine a risk flag of a user according to user information and a preset user information base; determining at least one abnormal behavior characteristic to be detected from a preset detection rule base according to the risk mark and the log to be detected; and determining the detection models respectively corresponding to the at least one abnormal behavior characteristic to be detected.
In a preferred embodiment, as shown in fig. 13, the apparatus further comprises an information presetting module 10. The information presetting module 10 is configured to set a corresponding risk mark for the user according to the user information before the obtained system log file of the user operation is parsed to obtain a log to be detected, and to store the user information and the corresponding risk mark into a preset information base.
In a preferred embodiment, the information presetting module 10 is specifically configured to obtain the authority information of the user from the target server; and determining a risk mark corresponding to the user according to the authority information of the user and the target server information.
In a preferred embodiment, the information presetting module 10 is further configured to determine a plurality of abnormal behavior characteristics before analyzing the obtained system log file of the user operation to obtain a log to be detected; determining abnormal behavior characteristics corresponding to different risk marks of a user; and obtaining detection rules of different risk marks according to different risk marks, corresponding abnormal behavior characteristics and the types of the logs to be detected, and forming a preset detection rule base.
In a preferred embodiment, the information presetting module 10 is further configured to pre-establish the detection models: determining log keywords corresponding to different abnormal behavior characteristics, and determining a first detection model for detecting the log keywords; and determining a graph algorithm corresponding to different abnormal behavior characteristics, and determining a second detection model that performs detection based on the graph algorithm.
In a preferred embodiment, the log anomaly detection module 13 is further configured to, if the anomaly detection result indicates that an anomaly exists, block the user terminal IP operated by the user and set the user information to be in an unavailable state.
In a preferred embodiment, the log anomaly detection module 13 is further configured to determine whether automatic recovery is required according to user information if the anomaly detection result indicates that an anomaly exists; and if so, restoring the object data corresponding to the user operation to the initial state.
In a preferred embodiment, the log anomaly detection module 13 is further configured to, if the anomaly detection result indicates that an anomaly exists, obtain permission approval information, and determine whether the user operation has been approved according to the permission approval information; if yes, feeding back abnormal behavior information to the operation and maintenance personnel to enable the operation and maintenance personnel to confirm whether the abnormal behavior information is an attack or not, if not, receiving non-attack confirmation information fed back by the operation and maintenance personnel, and updating the detection model.
Since the principle of the device for solving the problems is similar to the method, the implementation of the device can refer to the implementation of the method, and the detailed description is omitted here.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer device, which may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
In a typical example, the computer device specifically comprises a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method performed by the client as described above when executing the program, or the processor implementing the method performed by the server as described above when executing the program.
Referring now to FIG. 14, shown is a schematic block diagram of a computer device 600 suitable for use in implementing embodiments of the present application.
As shown in fig. 14, the computer device 600 includes a central processing unit (CPU) 601 which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the device 600. The CPU 601, the ROM 602 and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse and the like; an output section 607 including a cathode ray tube (CRT), a liquid crystal display (LCD) and the like, as well as a speaker and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read out from it is installed into the storage section 608 as needed.
In particular, according to an embodiment of the present invention, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the invention include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (13)

1. A user operation anomaly detection method, characterized by comprising:
parsing the obtained system log file of the user operation to obtain a log to be detected;
determining at least one detection model according to user information and the log to be detected;
and detecting the log to be detected according to the at least one detection model to obtain an abnormal detection result.
2. The user operation anomaly detection method according to claim 1, wherein the parsing of the obtained system log file of the user operation to obtain the log to be detected specifically comprises:
extracting logs from the obtained system log file of the user operation based on a preset parsing rule to obtain logs to be detected of different log types.
3. The method according to claim 1, wherein the determining at least one detection model according to the user information and the log to be detected specifically comprises:
determining a risk mark of a user according to the user information and a preset user information base;
determining at least one abnormal behavior characteristic to be detected from a preset detection rule base according to the risk mark and the log to be detected;
and determining the detection models corresponding to the at least one abnormal behavior characteristic to be detected respectively.
4. The method according to claim 1, further comprising, before parsing the obtained system log file of the user operation to obtain a log to be detected:
setting corresponding risk marks for the users according to the user information;
and storing the user information and the corresponding risk marks into a preset information base.
5. The method according to claim 4, wherein the setting of the corresponding risk flag for the user according to the user information specifically includes:
acquiring authority information of a user from a target server;
and determining a risk mark corresponding to the user according to the authority information of the user and the target server information.
6. The method according to claim 4, further comprising, before analyzing the obtained system log file of the user operation to obtain a log to be detected:
determining a plurality of abnormal behavior features;
determining abnormal behavior characteristics corresponding to different risk marks of a user;
and obtaining detection rules of different risk marks according to different risk marks, corresponding abnormal behavior characteristics and the types of the logs to be detected, and forming a preset detection rule base.
7. The user operation abnormality detection method according to claim 4, further comprising a step of establishing the detection models in advance:
determining log keywords corresponding to different abnormal behavior features, and determining a first detection model for detecting the log keywords;
and determining a graph algorithm corresponding to different abnormal behavior features, and determining a second detection model that performs detection based on the graph algorithm.
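Illustrative sketch (not part of the claims): claim 7 names two kinds of model without fixing their internals. The sketch below shows one plausible reading, a first model that matches log keywords tied to abnormal behavior features, and a second, graph-based model that flags a user's access to a host absent from that user's historical access graph. The particular graph check is an assumption.

```python
# Illustrative first (keyword) and second (graph-based) detection models for claim 7.
from collections import defaultdict

# First detection model: log keywords associated with abnormal behavior features.
KEYWORDS = {
    "privilege_escalation": ["sudo su", "chmod 777"],
    "data_destruction":     ["rm -rf", "DROP TABLE"],
}

def keyword_model(log_line):
    """Return the abnormal behavior features whose keywords appear in the line."""
    return [feature for feature, words in KEYWORDS.items()
            if any(word in log_line for word in words)]


# Second detection model: a simple check over a historical user -> host access graph.
class AccessGraph:
    def __init__(self):
        self.edges = defaultdict(set)              # user -> hosts seen in history

    def learn(self, user, host):
        self.edges[user].add(host)

    def is_anomalous(self, user, host):
        """An access to a host never seen for this user is treated as anomalous."""
        return host not in self.edges[user]


graph = AccessGraph()
graph.learn("alice", "db-01")
print(keyword_model("alice ran: sudo su - root"))   # ['privilege_escalation']
print(graph.is_anomalous("alice", "db-02"))         # True
```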
8. The user operation abnormality detection method according to claim 1, further comprising:
if the abnormality detection result indicates an anomaly, blocking the IP of the user terminal from which the user operates and setting the user information to an unavailable state.
9. The user operation abnormality detection method according to claim 1, further comprising:
if the abnormality detection result indicates an anomaly, determining whether automatic recovery is needed according to the user information;
and if so, restoring the object data corresponding to the user operation to its initial state.
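Illustrative sketch (not part of the claims): the automatic recovery of claim 9 can be pictured as restoring the operated object from a snapshot captured before the user operation; the data structures and the policy flag below are placeholders.

```python
# Hypothetical automatic recovery for claim 9: restore object data from a pre-operation snapshot.

SNAPSHOTS = {}                                   # object id -> data captured before the operation
LIVE_DATA = {"config:app": {"debug": False}}     # current object data


def snapshot(obj_id):
    SNAPSHOTS[obj_id] = dict(LIVE_DATA[obj_id])


def needs_auto_recovery(user_info):
    """Claim 9 leaves the policy to the user information; a flag stands in for it here."""
    return user_info.get("auto_recover", False)


def recover(obj_id):
    LIVE_DATA[obj_id] = dict(SNAPSHOTS[obj_id])


snapshot("config:app")
LIVE_DATA["config:app"]["debug"] = True          # anomalous user operation changes the object
if needs_auto_recovery({"auto_recover": True}):
    recover("config:app")                        # object data restored to its initial state
print(LIVE_DATA["config:app"])                   # {'debug': False}
```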
10. The user operation abnormality detection method according to claim 1, further comprising:
if the abnormality detection result indicates an anomaly, acquiring permission approval information, and determining, according to the permission approval information, whether the user operation has been approved; if not, determining that the user operation is abnormal;
if so, feeding back the abnormal behavior information to operation and maintenance personnel so that the operation and maintenance personnel confirm whether it is an attack; if it is an attack, determining that the user operation is abnormal; otherwise, receiving the non-attack confirmation information fed back by the operation and maintenance personnel and updating the detection model.
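Illustrative sketch (not part of the claims): the branching in claim 10 reads more easily as code. In this sketch the approval lookup and the operation-and-maintenance feedback are placeholder inputs, and "updating the detection model" is reduced to recording the confirmed non-attack sample in a whitelist; all of that is assumed rather than specified by the claim.

```python
# Illustrative handling flow for claim 10; the data sources are hypothetical placeholders.

APPROVALS = {("alice", "restart db-01"): True}   # permission approval information
WHITELIST = set()                                # stands in for updating the detection model


def handle_anomaly(user, operation, ops_confirms_attack):
    approved = APPROVALS.get((user, operation), False)
    if not approved:
        return "abnormal: the operation was never approved"
    # Approved operation: ask operation and maintenance personnel whether it is an attack.
    if ops_confirms_attack:
        return "abnormal: confirmed attack"
    # Non-attack confirmation received: feed the sample back so it is not flagged again.
    WHITELIST.add((user, operation))
    return "normal: detection model updated with the non-attack sample"


print(handle_anomaly("alice", "restart db-01", ops_confirms_attack=False))
print(handle_anomaly("bob", "drop table users", ops_confirms_attack=True))
```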
11. A user operation abnormality detection apparatus, characterized by comprising:
a log file parsing module, configured to parse the acquired system log file of a user operation to obtain a log to be detected;
a detection rule determining module, configured to determine at least one detection model according to user information and the log to be detected;
and a log abnormality detection module, configured to detect the log to be detected according to the at least one detection model to obtain an abnormality detection result.
12. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor,
wherein the processor, when executing the program, implements the method according to any one of claims 1 to 10.
13. A computer-readable medium having a computer program stored thereon,
wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 10.
CN202210190006.4A 2022-02-28 2022-02-28 User operation abnormity detection method and device Pending CN114553720A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210190006.4A CN114553720A (en) 2022-02-28 2022-02-28 User operation abnormity detection method and device

Publications (1)

Publication Number Publication Date
CN114553720A 2022-05-27

Family

ID=81661095

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210190006.4A Pending CN114553720A (en) 2022-02-28 2022-02-28 User operation abnormity detection method and device

Country Status (1)

Country Link
CN (1) CN114553720A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107046550A (en) * 2017-06-14 2017-08-15 微梦创科网络科技(中国)有限公司 Detection method and device for abnormal login behavior
CN108304723A (en) * 2018-01-17 2018-07-20 链家网(北京)科技有限公司 Abnormal behavior detection method and device
CN110347547A (en) * 2019-05-27 2019-10-18 中国平安人寿保险股份有限公司 Deep-learning-based log abnormality detection method, device, terminal and medium
WO2021174870A1 (en) * 2020-09-02 2021-09-10 平安科技(深圳)有限公司 Network security risk inspection method and system, computer device, and storage medium
CN112149749A (en) * 2020-09-29 2020-12-29 北京明朝万达科技股份有限公司 Abnormal behavior detection method and device, electronic equipment and readable storage medium
CN112804196A (en) * 2020-12-25 2021-05-14 北京明朝万达科技股份有限公司 Log data processing method and device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115348338A (en) * 2022-08-05 2022-11-15 中国银行股份有限公司 Inter-system message exception handling method, device and related equipment
CN115348338B (en) * 2022-08-05 2024-02-23 中国银行股份有限公司 Inter-system message exception handling method and device and related equipment
CN116975934A (en) * 2023-09-20 2023-10-31 北京安天网络安全技术有限公司 File security detection method and system
CN116975934B (en) * 2023-09-20 2023-12-15 北京安天网络安全技术有限公司 File security detection method and system
CN117596078A (en) * 2024-01-18 2024-02-23 成都思维世纪科技有限责任公司 Model-driven user risk behavior discriminating method based on rule engine implementation
CN117596078B (en) * 2024-01-18 2024-04-02 成都思维世纪科技有限责任公司 Model-driven user risk behavior discriminating method based on rule engine implementation

Similar Documents

Publication Publication Date Title
CN114553720A (en) User operation abnormity detection method and device
US9344457B2 (en) Automated feedback for proposed security rules
US20190342341A1 (en) Information technology governance and controls methods and apparatuses
JP6334069B2 (en) System and method for accuracy assurance of detection of malicious code
CN112187792A (en) Network information safety protection system based on internet
US20100281543A1 (en) Systems and Methods for Sensitive Data Remediation
CN112787992A (en) Method, device, equipment and medium for detecting and protecting sensitive data
EP1476827A2 (en) Method and apparatus for monitoring a database system
CN110417718B (en) Method, device, equipment and storage medium for processing risk data in website
CN111224988A (en) Network security information filtering method
Vaidya et al. Security issues in language-based software ecosystems
CN113438249B (en) Attack tracing method based on strategy
Yadav et al. Assessment of SCADA system vulnerabilities
CN113311809A (en) Industrial control system-based safe operation and maintenance instruction blocking device and method
CN112637108B (en) Internal threat analysis method and system based on anomaly detection and emotion analysis
EP3738064B1 (en) System and method for implementing secure media exchange on a single board computer
CN113923037B (en) Anomaly detection optimization device, method and system based on trusted computing
Hakkoymaz Classifying Database Users for Intrusion Prediction and Detection in Data Security
CN113422776A (en) Active defense method and system for information network security
Shivakumara et al. Review Paper on Dynamic Mechanisms of Data Leakage Detection and Prevention
Kumar et al. Generic security risk profile of e-governance applications—a case study
CN117648100B (en) Application deployment method, device, equipment and storage medium
US11822916B2 (en) Correlation engine for detecting security vulnerabilities in continuous integration/continuous delivery pipelines
US20230336573A1 (en) Security threat remediation for network-accessible devices
CN117786725A (en) Identity security audit analysis method, system and device for information system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination