CN117370701A - Browser risk detection method, browser risk detection device, computer equipment and storage medium - Google Patents


Info

Publication number
CN117370701A
CN117370701A (application CN202311378204.4A)
Authority
CN
China
Prior art keywords
risk detection
browser
abnormal behavior
data
security
Prior art date
Legal status: Pending (assumed; not a legal conclusion)
Application number
CN202311378204.4A
Other languages
Chinese (zh)
Inventor
宫婉钰
李金泽
邓飞
陈海滨
Current Assignee
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202311378204.4A
Publication of CN117370701A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/958: Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50: Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55: Detecting local intrusion or implementing counter-measures
    • G06F21/56: Computer malware detection or handling, e.g. anti-virus arrangements
    • G06F21/566: Dynamic detection, i.e. detection performed at run-time, e.g. emulation, suspicious activities
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Virology (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The application relates to a browser risk detection method and apparatus, a computer device, a storage medium, and a computer program product in the field of information security. The method comprises: in response to a browser running instruction, starting a secure container environment and running a browser in the secure container environment; during browser operation, acquiring, through a security probe, interaction data between the browser and the external environment, the external environment being the environment outside the secure container environment; acquiring at least one target risk detection method from a plurality of preset risk detection methods according to the interaction data; and performing risk detection according to the at least one target risk detection method and the interaction data. Adopting this method improves browser security.

Description

Browser risk detection method, browser risk detection device, computer equipment and storage medium
Technical Field
The present invention relates to the field of information security, and in particular, to a browser risk detection method, apparatus, computer device, storage medium, and computer program product.
Background
With the development of internet technology, network security has received increasing attention from users. Widely used network security technologies include encrypted communication, website authentication, and security plug-ins, which help ensure that users browse and interact on the network securely.
In the traditional approach, taking encrypted communication as an example, a secure communication channel is established between the browser and the web server, and data is encrypted with a public key and decrypted with a private key to ensure its security and integrity during transmission.
However, keys risk leakage during transmission and storage, and the encryption algorithm itself may contain vulnerabilities, so browser security remains low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a browser risk detection method, apparatus, computer device, computer readable storage medium, and computer program product that can improve browser security.
In a first aspect, the present application provides a browser risk detection method. The method comprises the following steps: responding to a browser running instruction, starting a safe container environment and running a browser in the safe container environment; in the running process of the browser, acquiring interaction data between the browser and an external environment through a security probe; the external environment refers to an environment outside the secure container environment; acquiring at least one target risk detection method from a plurality of preset risk detection methods according to the interaction data; and performing risk detection according to the at least one target risk detection method and the interaction data.
In a second aspect, the present application further provides a browser risk detection apparatus. The apparatus comprises: a secure container starting module, configured to start a secure container environment in response to a browser running instruction and run a browser in the secure container environment; a data acquisition module, configured to acquire, through a security probe, interaction data between the browser and an external environment during browser operation, the external environment being the environment outside the secure container environment; a risk detection method determining module, configured to acquire at least one target risk detection method from a plurality of preset risk detection methods according to the interaction data; and a risk detection module, configured to perform risk detection according to the at least one target risk detection method and the interaction data.
In some embodiments, the risk detection method determining module is further configured to determine priorities corresponding to the multiple risk detection methods respectively according to the interaction data; and acquiring at least one target risk detection method from the multiple risk detection methods according to the order of the priority from high to low.
In some embodiments, the apparatus further includes a first model training module configured to obtain sample interaction data, and obtain priorities of the multiple risk detection methods for the sample interaction data as a first priority of the multiple risk detection methods; input the sample interaction data into a priority generation model to be trained to predict the priorities of the multiple risk detection methods, obtaining a second priority of the multiple risk detection methods; and train the priority generation model according to the difference between the first priority and the second priority of each risk detection method.
In some embodiments, the risk detection module is further configured to determine abnormal behavior data from the behavior data by the abnormal behavior detection method; and performing risk detection on the abnormal behavior data by adopting the abnormal behavior risk detection method to obtain a risk detection result corresponding to the abnormal behavior data.
In some embodiments, the risk detection module is further configured to input the abnormal behavior data into the behavior risk detection model to perform risk detection, so as to obtain a risk detection result corresponding to the abnormal behavior data.
In some embodiments, the apparatus further includes a second model training module, configured to obtain sample abnormal behavior data, and label intention information and label reason information corresponding to the sample abnormal behavior data; input the sample abnormal behavior data into a behavior risk detection model to be trained for prediction, obtaining a sample intention detection result and a sample reason detection result; and train the behavior risk detection model according to the difference between the label intention information and the sample intention detection result and the difference between the label reason information and the sample reason detection result.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps in the browser risk detection method when executing the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the browser risk detection method described above.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of the browser risk detection method described above.
According to the browser risk detection method, apparatus, computer device, storage medium, and computer program product described above, in response to a browser running instruction, a secure container environment is started and a browser runs in it; during browser operation, interaction data between the browser and the external environment (the environment outside the secure container environment) is acquired through a security probe; at least one target risk detection method is acquired from a plurality of preset risk detection methods according to the interaction data; and risk detection is performed according to the at least one target risk detection method and the interaction data. Because the secure container environment and the security probe protect the browser and detect its behavior, browser security is improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application or in the related art more clearly, the drawings required in the description of the embodiments or the related art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for a person of ordinary skill in the art, other drawings may be obtained from these drawings without inventive effort.
FIG. 1 is an application environment diagram of a browser risk detection method in one embodiment;
FIG. 2 is a flowchart of a browser risk detection method according to an embodiment;
FIG. 3 is a flow diagram of a secure container environment in one embodiment;
FIG. 4 is a flow chart of security probe detection and analysis in one embodiment;
FIG. 5 is a flowchart of a browser risk detection method according to another embodiment;
FIG. 6 is a flow diagram of security event detection in one embodiment;
FIG. 7 is a flow diagram of security policy management in one embodiment;
FIG. 8 is a flow chart of anomaly detection and defense in one embodiment;
FIG. 9 is a flow diagram of security logs and responses in one embodiment;
FIG. 10 is a block diagram of a browser risk detection apparatus in one embodiment;
FIG. 11 is an internal block diagram of a computer device in one embodiment;
FIG. 12 is an internal structure diagram of a computer device in another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The browser risk detection method provided by the embodiment of the application can be applied to an application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104 or may be located on a cloud or other network server.
Specifically, the server 104 initiates the secure container environment in response to a browser running instruction sent by the terminal 102. The terminal 102 runs a browser in a secure container environment. During the running process of the browser, the server 104 obtains interaction data between the browser and the external environment through the security probe. The server 104 obtains at least one target risk detection method from a plurality of preset risk detection methods according to the interaction data, and performs risk detection on the browser according to the at least one target risk detection method and the interaction data.
The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet-of-things devices, and portable wearable devices; the internet-of-things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle-mounted devices, and the like. The portable wearable device may be a smart watch, a smart bracelet, a headset, or the like. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In an exemplary embodiment, as shown in fig. 2, a browser risk detection method is provided, and the method is applied to the server 104 in fig. 1 for illustration, and includes the following steps 202 to 208. Wherein:
step 202, in response to the browser running instruction, starting the secure container environment and running the browser in the secure container environment.
Wherein the browser running instruction is a command instructing the browser to start running. The secure container environment is an isolation environment that separates the browser from the external environment, creating an independent execution environment that prevents malware from accessing data in the operating system or in other browser tabs.
Specifically, the terminal displays a browser interface, and sends a browser operation instruction to the server in response to an operation triggered on the browser interface. The server receives and responds to the browser running instruction, and starts the safe container environment so that the browser runs in the safe container environment.
In some embodiments, the secure container environment is started before the browser runs. The secure container may be implemented using sandbox technology. It isolates browser processes from the operating system and other applications, so that the impact of malware attacks or security vulnerabilities inside the browser on the operating system and other applications is minimized. Before the secure container environment is booted, security verification and an integrity check may be performed on it to ensure its trustworthiness and integrity; verifying the integrity of the components within the secure container prevents unauthorized alteration or tampering. Security verification may cover the identity information, digital signature, legitimacy information, access control settings, and other information of the secure container environment, while the integrity check may cover the container files, running processes, configuration, software packages, and the like. The secure container environment connects and communicates with the browser through a secure interface, which allows the browser to perform the necessary operations and returns the results of those operations to the secure container environment for processing; the secure interface also ensures that the browser runs in a restricted environment. The secure container environment further monitors browser behavior and resource usage in real time, so that potential security problems or risks in the browser can be detected and corresponding defensive and preventive measures taken in advance.
To cope with security threats and vulnerabilities that may emerge at any time, the secure container environment needs to be updated and maintained regularly, for example by promptly patching known security vulnerabilities and updating security policies and rules, while keeping the secure container environment compatible with the browser.
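The integrity check on the container components described above can be sketched in Python. This is a minimal sketch, not the patent's implementation: `verify_container_integrity` and the digest manifest are hypothetical names, and SHA-256 stands in for whatever digest the container image records.

```python
import hashlib


def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_container_integrity(manifest: dict) -> bool:
    """Check every container component file against its expected digest.

    `manifest` maps a file path to the SHA-256 hex digest recorded when
    the secure container image was built; any mismatch means the
    container may have been tampered with and must not be started.
    """
    return all(sha256_of(path) == digest for path, digest in manifest.items())
```

The same pattern extends to the other checks the text lists (running processes, configuration, software packages) by hashing or otherwise fingerprinting each artifact at build time and re-checking at boot.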
Step 204, in the running process of the browser, acquiring interaction data between the browser and an external environment through a security probe; the external environment refers to an environment outside the secure container environment.
Here, the security probe is a tool for monitoring and detecting browser activity; it captures and analyzes the browser's web requests, data exchanges, and code execution. By monitoring browser behavior, the security probe can identify potential security risks and take corresponding safeguards. The external environment is the environment outside the secure container environment. The interaction data are the data generated by interactive behavior between the browser and the external environment.
Specifically, during the running process of the browser, the server can acquire data generated under the condition of interaction between the browser and the external environment in real time by using the security probe.
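The probe's real-time capture of interaction data can be sketched as a minimal in-memory collector. The `SecurityProbe` and `InteractionEvent` names and the event categories are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class InteractionEvent:
    """One unit of interaction between the browser and the external environment."""
    kind: str      # e.g. "network_traffic", "web_content", "script", "system_call"
    source: str    # where the event originated (tab, script URL, process)
    payload: dict  # raw captured data kept for later analysis


@dataclass
class SecurityProbe:
    """Minimal probe: records events as the browser produces them."""
    events: List[InteractionEvent] = field(default_factory=list)

    def capture(self, kind: str, source: str, payload: dict) -> None:
        self.events.append(InteractionEvent(kind, source, payload))

    def by_kind(self, kind: str) -> List[InteractionEvent]:
        """Return only the events of one category, e.g. all script executions."""
        return [e for e in self.events if e.kind == kind]
```

A real probe would hook browser instrumentation rather than receive explicit `capture` calls, but the downstream analysis steps consume the same kind of event stream.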
Step 206, obtaining at least one target risk detection method from a plurality of preset risk detection methods according to the interaction data.
The risk detection method is used for identifying and evaluating potential risks in the running process of the browser. The target risk detection method is selected from a plurality of preset risk detection methods, and is finally used for detecting browser risk.
Specifically, the server receives and analyzes interaction data between the browser and the external environment, and selects an appropriate method from a plurality of preset risk detection methods as a target risk detection method according to an analysis result.
In some embodiments, the risk detection method may be network traffic detection, web content analysis, execution script detection, abnormal behavior monitoring, or security threat detection. The process of selecting the target risk detection method can be implemented by adopting a training model or manual comparison.
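The selection step can be sketched as a simple rule-based dispatch, a stand-in for the trained model or manual comparison mentioned above; the mapping and the method names are illustrative assumptions:

```python
# Hypothetical mapping from the kind of captured interaction data to the
# preset detection methods able to analyse it.
DETECTORS_BY_KIND = {
    "network_traffic": ["network_traffic_detection", "security_threat_detection"],
    "web_content":     ["web_content_analysis", "security_threat_detection"],
    "script":          ["execution_script_detection"],
    "behavior":        ["abnormal_behavior_monitoring"],
}


def select_target_detectors(event_kinds):
    """Pick every preset detector that applies to at least one observed
    data kind, preserving first-seen order and avoiding duplicates."""
    selected = []
    for kind in event_kinds:
        for det in DETECTORS_BY_KIND.get(kind, []):
            if det not in selected:
                selected.append(det)
    return selected
```

A model-based selector would replace the static table with learned scores, but would expose the same interface: interaction data in, a list of target detection methods out.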
Step 208, performing risk detection according to at least one target risk detection method and the interaction data.
Specifically, the server detects risk of the browser by using the selected target risk detection method based on interaction data between the browser and the external environment.
In some embodiments, because the criteria for selecting target risk detection methods differ, more than one target risk detection method may be selected, though there is always at least one. Taking detection of the browser's web page content as an example, web content analysis may serve as one target risk detection method and security threat detection as another.
In some embodiments, taking a browser risk detection method based on a security probe as an example, a specific implementation flow of detecting browser risk is as follows:
First, the security probe gathers the data generated by interaction between the browser and external resources, including but not limited to network traffic, web page content, script execution, and system calls. Then, the security probe monitors in real time the network traffic generated between the browser and external resources, covering at least one of the request, the response packets, the transmission protocol, the source address, and the target address. The security probe may also perform in-depth analysis of the web page content loaded by the browser; for example, it may detect malicious code, malicious links, or malicious scripts in a page to identify potential security threats. The security probe can likewise monitor scripts executed in the browser and identify security vulnerabilities such as malicious script injection and cross-site scripting attacks.
In some embodiments, the server may train a model of normal browser behavior using techniques such as machine learning or behavior analysis, so that while monitoring and analyzing browser behavior in real time it can identify behaviors inconsistent with the normal model; such abnormal behaviors may be, but are not limited to, abnormal access, abnormal login, abnormal traffic, or abnormal responses. The security probe may also compare the collected browser interaction data with known threat intelligence data to identify security threats that may exist in the browser, such as malware, phishing websites, or security vulnerabilities. With this security-probe-based browser risk detection method, the server can generate a security event report whose content includes, but is not limited to, the detected security events, abnormal behaviors, and possible security threats; the report may also include details such as a description of each security event, a risk assessment of the abnormal behavior, and defense advice. Corresponding defensive measures, such as alarm notification, interrupting the network connection, or blocking access to malicious websites, can be triggered according to the report; responding in real time defends against potential security threats to the browser and protects the security of users and systems.
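The security event report and the triggered defensive measure might be assembled as follows. The field names, severity levels, and thresholds are assumptions for illustration; the patent does not specify a report schema:

```python
def build_security_report(abnormal_behaviors, security_threats):
    """Assemble a security event report and a recommended defense.

    Severity is a crude illustrative rule: any confirmed threat is
    "high", anomalies without confirmed threats are "medium", and a
    clean run is "low".
    """
    if security_threats:
        severity = "high"
    elif abnormal_behaviors:
        severity = "medium"
    else:
        severity = "low"
    actions = {
        "high": "interrupt_network_connection",
        "medium": "alarm_notification",
        "low": "log_only",
    }
    return {
        "abnormal_behaviors": abnormal_behaviors,
        "security_threats": security_threats,
        "risk_assessment": severity,
        "recommended_defense": actions[severity],
    }
```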
In some embodiments, the secure container is typically implemented using virtualization technology or sandboxed mechanisms. As shown in fig. 3, a flow chart of a secure container environment is provided. As shown in fig. 4, a flow diagram of the detection and analysis of the security probe is provided.
In the browser risk detection method above, in response to a browser running instruction, a secure container environment is started and a browser runs in it; during browser operation, interaction data between the browser and the external environment (the environment outside the secure container environment) is acquired through a security probe; at least one target risk detection method is acquired from a plurality of preset risk detection methods according to the interaction data; and risk detection is performed according to the at least one target risk detection method and the interaction data. Because the secure container environment and the security probe protect the browser and detect its behavior, browser security is improved.
In an exemplary embodiment, at least one target risk detection method is obtained from a plurality of preset risk detection methods according to interaction data, including: determining priorities corresponding to the multiple risk detection methods respectively according to the interaction data; at least one target risk detection method is acquired from a plurality of risk detection methods in the order of priority from high to low.
The priority determines which of the browser risk detection methods should be applied first.
Specifically, the server determines the priority of the multiple risk detection methods based on the interaction data between the browser and the external environment, obtaining a priority ordering of the methods. One or more target detection methods are then selected in order of priority, from high to low, to detect browser risk. "Multiple" here means at least two.
In some embodiments, the multiple risk detection methods may be prioritized according to the data size of the interaction data, according to the estimated time required to detect the interaction data, or according to the probability that the interaction data has been intruded upon or threatened.
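A weighted scoring rule combining the three signals just listed could look like this; the weights, field names, and linear form are illustrative assumptions rather than anything the patent prescribes:

```python
def prioritize_detectors(detectors):
    """Order detectors so that the highest-priority method comes first.

    Each entry carries the three signals mentioned above: the estimated
    probability the data is under threat (higher is more urgent), the
    estimated detection time in seconds (shorter is better), and the
    data volume to scan in megabytes (smaller is better).
    """
    def score(d):
        return 2.0 * d["threat_prob"] - 0.5 * d["est_seconds"] - 0.1 * d["data_mb"]
    return sorted(detectors, key=score, reverse=True)
```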
In this embodiment, by determining the priority of the risk detection method, at least one target risk detection method can be obtained, thereby improving risk detection efficiency of the browser.
In some embodiments, the priorities are obtained using a priority generation model, the method further comprising a priority generation model training step comprising: acquiring sample interaction data, and acquiring priorities of multiple risk detection methods aiming at the sample interaction data to obtain a first priority of the multiple risk detection methods; inputting sample interaction data into a priority generation model to be trained, predicting priorities of multiple risk detection methods, and obtaining second priorities of the multiple risk detection methods; determining a first priority and a second priority of each risk detection method, and training a priority generation model.
Wherein the priority generation model is a neural network model trained to determine the priorities of the multiple risk detection methods. The sample interaction data are historical interaction data of a specified period drawn from the browser's interaction history. The first priority is the priority of the multiple risk detection methods determined for the sample interaction data. The second priority is the priority of the multiple risk detection methods predicted for the sample interaction data by the priority generation model to be trained.
Specifically, the server takes historical interaction data of a specified period from the browser's interaction history as sample interaction data. Different risk detection methods are applied to the sample interaction data, and the multiple risk detection methods are ordered according to the detection results to obtain the first priority. The server then inputs the sample interaction data into the priority generation model to be trained, which predicts the priorities of the multiple risk detection methods to give the second priority. The first priority is compared with the second priority, and the priority generation model is trained based on the difference between them.
In some embodiments, when obtaining the first priority, the methods may be ranked by the time taken for detection: the shorter the detection time, the higher the priority of the corresponding risk detection method. They may also be ranked by the total amount of data examined during detection: the smaller the amount of data examined, the higher the priority of the corresponding risk detection method.
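Building the first-priority label from measured detection runs, as just described, can be sketched as a sort over per-method measurements; the run-record field names are hypothetical:

```python
def first_priority_ranking(detection_runs, key="scanned_bytes"):
    """Build the 'first priority' label from actual detection runs.

    Each run record holds the method name plus measurements such as
    'scanned_bytes' or 'elapsed_s'. The smaller the chosen measurement
    (less data scanned, or less time taken with key='elapsed_s'), the
    higher the method ranks.
    """
    ordered = sorted(detection_runs, key=lambda run: run[key])
    return [run["method"] for run in ordered]
```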
In this embodiment, by training the priority generation model, the efficiency of obtaining priorities of multiple risk detection methods can be improved, so that the efficiency of risk detection of the browser is improved.
In some embodiments, the interaction data comprises behavioral data, and the at least one target risk detection method comprises an abnormal behavior detection method and an abnormal behavior risk detection method; performing risk detection according to at least one target risk detection method and interaction data, including: determining abnormal behavior data from the behavior data by an abnormal behavior detection method; and carrying out risk detection on the abnormal behavior data by adopting an abnormal behavior risk detection method to obtain a risk detection result corresponding to the abnormal behavior data.
The behavior data are data generated when the browser performs actions such as interaction. The abnormal behavior detection method judges whether a browser behavior is abnormal. The abnormal behavior risk detection method detects the specific risk of a behavior already determined to be abnormal. The abnormal behavior data are the data of abnormal browser behaviors. The risk detection result describes the cause of and intention behind the abnormal browser behavior.
Specifically, the server detects the historical behaviors of the browser in a specified period through an abnormal behavior detection method, and determines abnormal behavior data existing in the historical behavior data. And performing risk detection on the detected browser abnormal behavior data by adopting an abnormal behavior risk detection method to obtain a risk detection result corresponding to the browser abnormal behavior data.
In some embodiments, browser behavior may be monitored and analyzed in real-time using techniques such as machine learning or statistical analysis. And detecting abnormal behaviors which are inconsistent with the normal behaviors through comparison with the normal behavior model.
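As a minimal statistical stand-in for the comparison with a normal-behavior model, a z-score test flags a measurement (e.g. requests per minute) that deviates strongly from the historical baseline; the threshold of three standard deviations is an illustrative convention:

```python
from statistics import mean, stdev


def detect_abnormal(history, current, threshold=3.0):
    """Flag `current` as abnormal when it deviates from the baseline of
    `history` by more than `threshold` sample standard deviations."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        # A perfectly constant baseline: any deviation at all is abnormal.
        return current != mu
    return abs(current - mu) / sigma > threshold
```

A learned normal-behavior model generalizes this idea to many correlated features at once, but the interface is the same: behavior data in, abnormal-or-not out.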
In this embodiment, by detecting abnormal behavior data existing in the browser, a risk detection result may be obtained, thereby improving security of risk detection of the browser.
In some embodiments, the abnormal behavior risk detection method is implemented using a trained behavior risk detection model; performing risk detection on the abnormal behavior data by adopting an abnormal behavior risk detection method to obtain a risk detection result corresponding to the abnormal behavior data, wherein the risk detection result comprises the following steps: and inputting the abnormal behavior data into a behavior risk detection model to perform risk detection, and obtaining a risk detection result corresponding to the abnormal behavior data.
Wherein the behavioral risk detection model is a neural network model trained to detect specific risks of the determined abnormal behavior.
Specifically, the terminal sends the detected abnormal behavior data of the browser to the server. The server inputs the abnormal behavior data of the browser into a trained behavior risk detection model, and carries out risk detection on the abnormal behavior data of the browser to obtain a risk detection result.
In some embodiments, detecting abnormal behavior in the browser may also be implemented using a browser plug-in or abnormal behavior risk monitoring software.
In this embodiment, abnormal behavior data is detected through the trained behavior risk detection model, so that a risk detection result can be obtained, and the efficiency of risk detection of the browser is improved.
In some embodiments, the risk detection result includes an intention detection result and a cause detection result corresponding to the abnormal behavior data. The method further includes a behavior risk detection model training step, which includes: acquiring label intention information and label cause information corresponding to sample abnormal behavior data; inputting the sample abnormal behavior data into a behavior risk detection model to be trained for prediction, to obtain a sample intention detection result and a sample cause detection result; and training the behavior risk detection model according to the difference between the label intention information and the sample intention detection result and the difference between the label cause information and the sample cause detection result.
Wherein the intention detection result is the intention information of the detected abnormal behavior data, and the cause detection result is the cause and source of the detected abnormal behavior data. The sample abnormal behavior data are historical abnormal behavior data of a specified period selected from the browser historical abnormal behavior data. The label intention information is the true intention information obtained by analyzing the abnormal behavior data, and the label cause information is the true cause information obtained by analyzing the abnormal behavior data. The sample intention detection result is the intention information of the abnormal behavior data predicted by the behavior risk detection model to be trained, and the sample cause detection result is the corresponding predicted cause information.
Specifically, the server acquires historical abnormal behavior data of a specified period from the browser historical abnormal behavior data as the sample abnormal behavior data, and acquires the label intention information and label cause information corresponding to the sample abnormal behavior data. The server inputs the sample abnormal behavior data into the behavior risk detection model to be trained, which predicts the sample intention detection result and sample cause detection result corresponding to the sample abnormal behavior data. The label intention information is compared with the sample intention detection result to obtain the difference between them, and the label cause information is compared with the sample cause detection result to obtain the difference between them. The behavior risk detection model is then trained according to these two differences.
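Training on the two label–prediction differences described above can be illustrated with a toy combined loss: each head (intention and cause) contributes a cross-entropy term, and their weighted sum drives a single update. This sketch is illustrative only; the equal weighting and function names are assumptions, not part of the disclosure.

```python
from math import log

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class index."""
    return -log(probs[label])

def combined_loss(intent_probs, intent_label, cause_probs, cause_label,
                  w_intent=0.5, w_cause=0.5):
    """Weighted sum of the intention-head and cause-head losses, so that
    one training step reduces both label-prediction differences."""
    return (w_intent * cross_entropy(intent_probs, intent_label)
            + w_cause * cross_entropy(cause_probs, cause_label))

# A confident, correct two-head prediction yields a lower loss than an
# uncertain one, so minimizing this loss improves both heads at once.
good = combined_loss([0.9, 0.1], 0, [0.8, 0.2], 0)
bad = combined_loss([0.5, 0.5], 0, [0.5, 0.5], 0)
```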
In some embodiments, the risk detection result may further include information such as a generation manner of the abnormal behavior, a risk property, a hazard degree, and a hazard speed.
In this embodiment, by training the behavioral risk detection model, the efficiency of obtaining the risk detection result may be improved, thereby improving the efficiency of risk detection of the browser.
In some embodiments, as shown in FIG. 5, the following steps may be employed for browser risk detection.
Step 501, secure container environment is started.
The browser can run in a secure container environment, which ensures isolation between the browser and both the operating system and the external environment by isolating and limiting the processes, resources, rights, and the like of the browser.
Step 502, security probe monitoring and analysis.
The security probe can monitor, in real time, the behavior of the browser and the network traffic generated during its running, including the network interaction between the browser and external resources, the loading of web page content, and the execution of scripts, so that abnormal behaviors and potential security threats in the browser can be detected in time.
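The probe of step 502 can be thought of as an observer that captures browser events and fans each one out to registered analyzers. The following is a minimal, hypothetical model of such a probe; the class and field names are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class BrowserEvent:
    kind: str     # e.g. "network", "script", "page_load"
    detail: str

@dataclass
class SecurityProbe:
    """Captures browser events and forwards each one to registered analyzers."""
    analyzers: list = field(default_factory=list)
    captured: list = field(default_factory=list)

    def register(self, analyzer):
        self.analyzers.append(analyzer)

    def observe(self, event):
        self.captured.append(event)
        for analyzer in self.analyzers:
            analyzer(event)

# An analyzer that flags script-execution events for further inspection.
probe = SecurityProbe()
alerts = []
probe.register(lambda e: alerts.append(e) if e.kind == "script" else None)
probe.observe(BrowserEvent("network", "GET https://example.com"))
probe.observe(BrowserEvent("script", "eval() on page"))
```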
At step 503, security event detection.
Based on the monitoring result of the security probe, the security event detection module can analyze the behavior of the browser and identify potential security events and threats in the browser, wherein the potential security events and threats in the browser include, but are not limited to, malicious software activities, phishing attacks, vulnerability exploitation and the like.
Step 504, security policy management.
The security policy management module performs refined control on the behavior of the browser according to the detection result of the security event and the real-time security information of the browser. The security information is information obtained by collecting, analyzing, and interpreting data related to information security. Refined control means that the security policy management module divides and refines browser behaviors and, through precise data acquisition, monitoring, and analysis together with a reasonable control policy and control method, manages browser behaviors more accurately, so as to realize precise management of browser behaviors by the server. According to the policy configuration, the module can perform access control, traffic filtering, blacklisting and whitelisting of malicious websites, and the like.
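Access control with blacklists and whitelists, as mentioned above, can be reduced to a small decision function. The sketch below is illustrative only; the precedence rule (whitelist overrides blacklist) and the names are assumptions.

```python
def build_policy(blacklist, whitelist):
    """Return an access-control check: whitelisted hosts are always allowed,
    blacklisted hosts are denied, and everything else is allowed."""
    def allowed(host):
        if host in whitelist:
            return True
        return host not in blacklist
    return allowed

# Hypothetical policy configuration for a secure container environment.
check = build_policy(blacklist={"evil.example"}, whitelist={"intranet.example"})
```

Real policy configurations would also carry traffic-filtering rules and rights management, but the lookup structure is the same.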
Step 505, anomaly detection and defense.
The anomaly detection and defense module can monitor and analyze browser behaviors in real time by training a model of normal browser behavior. The training process of the normal behavior model can use machine learning, behavior analysis, and other techniques. The model can detect anomalies in browser behaviors, such as the installation of malicious plug-ins or the injection of malicious scripts, so that the anomaly detection and defense module can take defensive measures corresponding to the abnormal behavior.
Step 506, security log and response.
The security log and response module can record security events and behavior logs of the browser and establish a corresponding event response mechanism. The module can be used to track the cause and influence of a security event and to take appropriate measures in time to repair the browser and recover from the event.
In some embodiments, as shown in FIG. 6, the security event detection process in step 503 may be performed using the following steps.
And 601, collecting data.
The security event detection module may obtain information from data collected by the security probe, where the obtained information includes, but is not limited to, browser behavior, network traffic, web page content, script execution, and the like.
Step 602, data preprocessing.
The server preprocesses data collected by the security probe. Methods of preprocessing include, but are not limited to, data cleansing, format conversion, data normalization, and the like.
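A minimal illustration of such preprocessing, assuming probe records arrive as dictionaries with a possibly missing metric field (the field names and min-max normalization choice are hypothetical):

```python
def preprocess(records):
    """Clean raw probe records (drop entries missing the metric), convert the
    metric to float, and min-max normalize it into the range [0, 1]."""
    cleaned = [r for r in records if r.get("metric") not in (None, "")]
    values = [float(r["metric"]) for r in cleaned]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero when all values match
    return [{"name": r["name"], "metric": (float(r["metric"]) - lo) / span}
            for r in cleaned]

raw = [{"name": "a", "metric": "10"},
       {"name": "b", "metric": None},   # incomplete record, cleaned away
       {"name": "c", "metric": "30"}]
```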
And 603, establishing a normal behavior model.
The server may train the normal behavior model based on samples of normal browser behavior, so that while monitoring and analyzing browser behaviors in real time it can identify behaviors that are inconsistent with normal browser behavior. Such abnormal behaviors may be, but are not limited to, abnormal access, abnormal login, abnormal traffic, and abnormal response. The normal behavior samples may be, but are not limited to, browser history data, browser user behavior patterns, and browser security policies.
Step 604, abnormal behavior detection.
The server can monitor and analyze browser behaviors in real time using techniques such as machine learning or behavior analysis, and compare the monitored behaviors with the normal behavior model of the browser to identify abnormal behaviors that are inconsistent with normal browser behavior.
Step 605, security event identification.
The security event identification module can identify potential security events according to the characteristics and rules of abnormal browser behaviors. Security events include, but are not limited to, malware activity, phishing attacks or exploits, and the like.
Step 606, threat intelligence analysis.
The threat intelligence analysis module compares and analyzes the detected security event against known threat intelligence. Threat intelligence includes, but is not limited to, malware samples, blacklists, and whitelists, which facilitate determining whether a detected security event belongs to a known security threat.
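Comparing an event against known threat intelligence often reduces to a set-membership test on artifact hashes. The sketch below is illustrative only; the hashes are arbitrary sample values, not real intelligence data.

```python
# Illustrative sample hashes standing in for a threat-intelligence feed.
KNOWN_MALWARE_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",
    "9e107d9d372bb6826bd81d3542a419d6",
}

def match_threat_intel(event_hash, intel=KNOWN_MALWARE_HASHES):
    """True if the event's artifact hash matches known threat intelligence,
    i.e. the detected security event belongs to a known security threat."""
    return event_hash in intel
```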
And step 607, risk assessment.
The risk assessment module carries out risk assessment on the detected security event and determines threat levels of the detected security event to the system and the user so as to determine the priority of response and measures to be taken.
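One possible way to combine event severity with asset criticality into a coarse threat level that drives response priority; the severity table and thresholds here are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical severity scores (1-10) per security event type.
SEVERITY = {"malware": 9, "phishing": 7, "exploit": 8, "policy_violation": 3}

def risk_level(event_type, asset_criticality):
    """Combine event severity with asset criticality (1-5) into a coarse
    threat level used to prioritize the response to the security event."""
    score = SEVERITY.get(event_type, 1) * asset_criticality
    if score >= 30:
        return "high"
    if score >= 10:
        return "medium"
    return "low"
```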
At step 608, a security event is reported.
The security event report details the detected security event, abnormal behavior and related threat information, and may further include information such as description of the security event, risk assessment, and recommended response measures.
Step 609, threat response.
And the server takes corresponding threat response measures for the abnormal behavior or the security event according to the security event report and the risk assessment result. Threat response measures include, but are not limited to, alert notifications, blocking malicious activity, quarantining infected systems, etc., in order to reduce the impact of security events on the browser.
Step 610, security event recording and tracking.
The server records the detected browser security event into a security log so as to facilitate subsequent analysis and tracking of the browser security event. Security event tracking helps to understand the cause, scope of impact, and subsequent processing of security events.
In some embodiments, as shown in FIG. 7, the security policy management process in step 504 may be performed using the following steps.
Step 701, policy definition.
Wherein, at the initial stage of security policy management, a security policy applicable to the browser needs to be defined. Security policies include, but are not limited to, specifying security access policies, traffic filtering rules, rights management rules, and the like.
Step 702, risk assessment.
The server evaluates the security risk of the browser and determines the threat and security vulnerability possibly faced by the browser at present. The risk assessment process may be based on known security vulnerabilities, threat intelligence, security event analysis, and the like.
Step 703, policy formulation.
The server establishes a security policy corresponding to the browser based on the risk assessment result.
At step 704, the policy is enforced.
Specifically, the server applies the formulated security policy to the security container environment in which the browser is located. The process of applying the security policy to the browser may be implemented by configuring parameters of the security container and probe, setting access control lists, filtering rules, and the like.
Step 705, security policy update and maintenance.
The server updates and maintains the security policy periodically so as to adapt to newly emerging security threats and vulnerabilities. The process of updating and maintaining the security policy includes, but is not limited to, releasing security patches in time, updating blacklists and whitelists, and adjusting access control rules.
Step 706, security policy audit.
The security policy audit module performs periodic audit and evaluation on the implemented security policies. Through security policy audit, possible security problems and illegal actions can be found, and corresponding corrective measures and solutions can be timely taken.
Step 707, security policy response.
The server responds at the security policy level according to the results of real-time browser monitoring and security event detection. The response may involve adding defensive measures, updating the policy configuration, adjusting access rights, and the like, in order to cope with newly emerging security threats.
At step 708, the policy is optimized.
The optimization module can optimize and improve the security policy based on feedback of security events, user feedback and technical evolution according to implementation conditions and implementation effects of the security policy.
In some embodiments, as shown in FIG. 8, the anomaly detection and defense process in step 505 may be performed using the following steps.
Step 801, normal behavior modeling.
The server collects and analyzes historical normal behavior data of the browser and builds a normal behavior model. The historical normal behavior data of the browser may be, but is not limited to, user behavior patterns, system resource usage, and the like.
Step 802, monitoring in real time.
When the browser runs, the security probe monitors the behavior and traffic of the browser in real time.
Step 803, abnormal behavior detection.
Specifically, the server detects abnormal behaviors which are inconsistent with normal behaviors by comparing the behaviors of the browser monitored in real time with the normal behavior model.
Step 804, abnormal behavior analysis.
The abnormal behavior analysis module analyzes and researches the detected abnormal behavior by means of feature extraction or behavior association analysis and the like of the abnormal behavior so as to know the cause and potential threat of the abnormal behavior.
Step 805, threat assessment.
Wherein the server evaluates threat levels of the detected abnormal behavior to the system and the user.
Step 806, defensive measures trigger.
Specifically, the server triggers corresponding defensive measures according to threat level evaluation results of abnormal behaviors. Defensive measures include, but are not limited to, alarm notifications, breaking network connections or quarantining infected systems, etc.
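The mapping from an assessed threat level to defensive measures can be expressed as a small dispatch table; the action names below are illustrative placeholders for the measures listed above.

```python
def trigger_defense(threat_level):
    """Map an assessed threat level to the list of defensive actions to take.
    Unknown levels fall back to logging only."""
    actions = {
        "low":    ["log"],
        "medium": ["log", "alert"],
        "high":   ["log", "alert", "block_connection", "quarantine"],
    }
    return actions.get(threat_level, ["log"])
```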
Step 807, restoration and repair.
After detecting abnormal behavior in the browser, the server takes recovery and repair measures corresponding to the abnormal behavior; for example, the abnormality can be repaired by removing malicious software or patching system vulnerabilities.
Step 808, update and adjust in real time.
Specifically, the server can timely update and adjust the abnormality detection and defense system according to the results of abnormality behavior detection and defense. For example, the abnormal behavior model is updated regularly or a defense algorithm is optimized, so that accuracy and efficiency of risk detection of the browser are improved.
Step 809, security log recording and analysis.
The server records the detected abnormal browser behaviors and the corresponding defensive measures adopted, and records and analyzes the security log so as to facilitate follow-up security event tracking and analysis.
In some embodiments, as shown in FIG. 9, the following steps may be employed to perform the security log and response procedure in step 506.
Step 901, security event recording.
The security probe monitors the behavior and traffic of the browser in real time. The server records the detected browser security events into a security log.
Step 902, security log analysis.
The server analyzes and processes the browser security log by using log aggregation, association analysis, anomaly detection and other technologies so as to identify security events, threat trends and potential security vulnerabilities in the browser.
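Association analysis over the security log can be as simple as flagging sources whose events cluster in a short time window. The correlation rule below (three or more events within 60 seconds) is a hypothetical example of such a rule.

```python
from collections import defaultdict

def correlate(entries, window=60):
    """Group security-log entries by source, then flag sources whose events
    cluster inside `window` seconds -- a simple association-analysis rule."""
    by_source = defaultdict(list)
    for ts, source, msg in entries:
        by_source[source].append(ts)
    flagged = []
    for source, times in by_source.items():
        times.sort()
        if len(times) >= 3 and times[-1] - times[0] <= window:
            flagged.append(source)
    return flagged

# Three failed logins from one source within 40 seconds are correlated.
log = [(0, "10.0.0.5", "failed login"),
       (20, "10.0.0.5", "failed login"),
       (40, "10.0.0.5", "failed login"),
       (0, "10.0.0.9", "page load")]
```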
Step 903, security event response.
Specifically, the server takes corresponding security event response measures according to the analysis result of the security log. Security event response measures include, but are not limited to, alarm notification, breaking network connections, quarantining infected systems, etc., to prevent further expansion of the potential threat situation.
Step 904, threat investigation and analysis.
The server performs further investigation and analysis on the detected security events, and analysis content includes, but is not limited to, tracking attack sources of threats, analyzing attack means and targets of the threats, and the like, so as to facilitate understanding of the cause and influence scope of the threats.
At step 905, a security event is processed.
Wherein the server takes appropriate action to process the detected security event. Means of handling security events may be, but are not limited to, cleaning up malware, repairing system vulnerabilities, providing user suggestions and support, and the like.
Step 906, security event reporting and notification.
The server may generate a security event report including, but not limited to, details of the detected security event, response measures, and suggested follow-up steps, among others.
Step 907, restore and repair.
After the security event is processed, the server performs recovery and repair work of the system. Recovery and repair work of the system includes, but is not limited to, repairing configuration of the affected system, patching known vulnerabilities, updating security policies, and the like.
Step 908, attack tracing.
The server can conduct attack tracing on the security event, and the tracing process includes, but is not limited to, network forensics, log analysis, and assistance from partners.
Step 909, security event post-processing.
The server can also continuously track and subsequently process the security events. Subsequent processing includes, but is not limited to, assessing the effectiveness of the event response, revising security policies and measures, monitoring potential recurrence, and the like.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated. Unless explicitly stated herein, the steps are not strictly limited to this order and may be performed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include multiple steps or stages, which are not necessarily performed at the same time but may be performed at different times; the order of these steps or stages is not necessarily sequential, and they may be performed in turn or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a browser risk detection device for realizing the browser risk detection method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation in the embodiments of one or more browser risk detection devices provided below may refer to the limitation of the browser risk detection method hereinabove, and will not be repeated herein.
In an exemplary embodiment, as shown in fig. 10, there is provided a browser risk detection apparatus, including: a secure container startup module 1002, a data acquisition module 1004, a risk detection method determination module 1006, and a risk detection module 1008, wherein:
a secure container initiation module 1002 for initiating a secure container environment and running a browser in the secure container environment in response to a browser running instruction;
the data acquisition module 1004 is configured to acquire interaction data between the browser and an external environment through the security probe during the running process of the browser; the external environment refers to an environment outside the secure container environment;
a risk detection method determining module 1006, configured to obtain at least one target risk detection method from a plurality of preset risk detection methods according to the interaction data;
the risk detection module 1008 is configured to perform risk detection according to at least one target risk detection method and interaction data.
In some embodiments, the risk detection method determining module is further configured to determine priorities corresponding to the multiple risk detection methods respectively according to the interaction data; at least one target risk detection method is acquired from a plurality of risk detection methods in the order of priority from high to low.
In some embodiments, the apparatus further includes a first model training module, configured to obtain sample interaction data and obtain priorities of the multiple risk detection methods for the sample interaction data, to obtain a first priority of the multiple risk detection methods; input the sample interaction data into a priority generation model to be trained to predict the priorities of the multiple risk detection methods, obtaining a second priority of the multiple risk detection methods; and train the priority generation model according to the difference between the first priority and the second priority of each risk detection method.
In some embodiments, the risk detection module is further configured to determine abnormal behavior data from the behavior data by an abnormal behavior detection method; and carrying out risk detection on the abnormal behavior data by adopting an abnormal behavior risk detection method to obtain a risk detection result corresponding to the abnormal behavior data.
In some embodiments, the risk detection module is further configured to input the abnormal behavior data into the behavior risk detection model to perform risk detection, so as to obtain a risk detection result corresponding to the abnormal behavior data.
In some embodiments, the apparatus further includes a second model training module, configured to obtain sample abnormal behavior data, and label intention information and label cause information corresponding to the sample abnormal behavior data; input the sample abnormal behavior data into the behavior risk detection model to be trained for prediction, to obtain a sample intention detection result and a sample cause detection result; and train the behavior risk detection model according to the difference between the label intention information and the sample intention detection result and the difference between the label cause information and the sample cause detection result.
The respective modules in the browser risk detection apparatus described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one exemplary embodiment, a computer device is provided, which may be a server, and the internal structure thereof may be as shown in fig. 11. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is for storing session data. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a browser risk detection method.
In an exemplary embodiment, a computer device, which may be a terminal, is provided, and an internal structure thereof may be as shown in fig. 12. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input means. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a browser risk detection method. The display unit of the computer device is used for forming a visual picture, and can be a display screen, a projection device or a virtual reality imaging device. The display screen can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be a key, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structures shown in fig. 11 and 12 are block diagrams of only portions of structures that are relevant to the present application and are not intended to limit the computer device on which the present application may be implemented, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an exemplary embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor performing the steps of the method embodiments described above when the computer program is executed.
In an exemplary embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method embodiments described above.
In an exemplary embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by way of a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may include the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory can include random access memory (RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM is available in a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM), and the like. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples represent only a few embodiments of the present application; they are described in relative detail but are not to be construed as limiting the scope of the present application. It should be noted that various modifications and improvements could be made by those skilled in the art without departing from the spirit of the present application, and such modifications and improvements fall within the scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (10)

1. A browser risk detection method, the method comprising:
in response to a browser running instruction, starting a secure container environment and running a browser in the secure container environment;
during operation of the browser, acquiring, through a security probe, interaction data between the browser and an external environment; the external environment being the environment outside the secure container environment;
acquiring at least one target risk detection method from a plurality of preset risk detection methods according to the interaction data; and
performing risk detection according to the at least one target risk detection method and the interaction data.
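The flow of claim 1 (isolated container, probe, then detection) can be sketched in miniature. This is a hypothetical illustration, not the patent's implementation: `SecurityProbe`, `secure_container`, and the recorded events are invented names, and real container isolation (namespaces, a container runtime) is stubbed out.

```python
from contextlib import contextmanager

class SecurityProbe:
    """Collects interaction events between the browser and the external environment."""
    def __init__(self):
        self.events = []

    def capture(self, direction, payload):
        self.events.append({"direction": direction, "payload": payload})

@contextmanager
def secure_container():
    # Stand-in for starting an isolated container environment; the real
    # isolation mechanism is out of scope for this sketch.
    probe = SecurityProbe()
    yield probe

def run_browser_session():
    # The browser process would run inside the container scope; here we
    # only simulate the traffic the probe would observe.
    with secure_container() as probe:
        probe.capture("outbound", "GET https://example.com")
        probe.capture("inbound", "HTTP/1.1 200 OK")
    return probe.events
```

The captured event list is what the later claims call "interaction data": the input from which target risk detection methods are chosen.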
2. The method according to claim 1, wherein acquiring at least one target risk detection method from a plurality of preset risk detection methods according to the interaction data comprises:
determining, according to the interaction data, a priority corresponding to each of the plurality of risk detection methods; and
acquiring the at least one target risk detection method from the plurality of risk detection methods in descending order of priority.
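The priority-ordered selection in claim 2 amounts to ranking the preset methods and taking the top few. A minimal sketch, with an invented scoring rule (`toy_priority`) standing in for whatever priority computation an implementation would actually use:

```python
def select_target_methods(interaction_data, methods, priority_fn, top_k=2):
    # Rank every preset method by its priority for this interaction data,
    # then take the top_k in descending order of priority.
    ranked = sorted(methods, key=lambda m: priority_fn(interaction_data, m),
                    reverse=True)
    return ranked[:top_k]

def toy_priority(interaction_data, method):
    # Hypothetical rule: a method scores one point per event it matches.
    return sum(1 for event in interaction_data if method in event)

events = ["network:outbound", "script:eval", "network:inbound"]
methods = ["network", "script", "storage"]
targets = select_target_methods(events, methods, toy_priority)
# targets -> ["network", "script"]
```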
3. The method of claim 2, wherein the priorities are obtained using a priority generation model, the method further comprising a priority generation model training step comprising:
acquiring sample interaction data, and acquiring priorities of the plurality of risk detection methods for the sample interaction data to obtain first priorities of the plurality of risk detection methods;
inputting the sample interaction data into a priority generation model to be trained to predict the priorities of the plurality of risk detection methods, obtaining second priorities of the plurality of risk detection methods; and
training the priority generation model according to the difference between the first priority and the second priority of each risk detection method.
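The training step of claim 3 can be illustrated with the simplest possible model: a linear scorer fitted so that its predicted ("second") priorities approach the labelled ("first") priorities. The patent does not specify a model class; the feature vectors, learning rate, and squared-error objective below are all assumptions made for the sketch.

```python
def train_priority_model(samples, labels, epochs=200, lr=0.05):
    # Linear scorer: predicted (second) priority = w . x.
    # Stochastic gradient descent on the squared difference between the
    # labelled first priority and the predicted second priority.
    w = [0.0] * len(samples[0])
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

# One feature vector per sample of interaction data (illustrative values),
# each labelled with the first priority of a given risk detection method.
samples = [[1.0, 0.0], [0.0, 1.0]]
labels = [1.0, 0.5]
weights = train_priority_model(samples, labels)
```

After training, the model's predictions on the sample data track the labelled priorities, which is exactly the "difference between the first priority and the second priority" being driven toward zero.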
4. The method of claim 1, wherein the interaction data comprises behavior data, and wherein the at least one target risk detection method comprises an abnormal behavior detection method and an abnormal behavior risk detection method;
performing risk detection according to the at least one target risk detection method and the interaction data comprises:
determining abnormal behavior data from the behavior data by the abnormal behavior detection method; and
performing risk detection on the abnormal behavior data using the abnormal behavior risk detection method to obtain a risk detection result corresponding to the abnormal behavior data.
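Claim 4 describes a two-stage pipeline: first filter the behavior data down to the abnormal subset, then run risk detection only on that subset. A hedged sketch; the behavior names, the anomaly predicate, and the scoring rule are invented for illustration:

```python
def detect_abnormal(behavior_data, is_abnormal):
    # Stage 1: the abnormal behavior detection method filters the raw
    # behavior data down to the abnormal subset.
    return [b for b in behavior_data if is_abnormal(b)]

def assess_risk(abnormal_data, risk_score, threshold=0.5):
    # Stage 2: the abnormal behavior risk detection method scores only
    # the abnormal subset and yields a result per behavior.
    return {b: ("risky" if risk_score(b) >= threshold else "benign")
            for b in abnormal_data}

behaviors = ["open_tab", "mass_download", "read_clipboard"]
abnormal = detect_abnormal(behaviors, lambda b: b != "open_tab")
results = assess_risk(abnormal, lambda b: 0.9 if "download" in b else 0.2)
# results -> {"mass_download": "risky", "read_clipboard": "benign"}
```

Scoring only the abnormal subset is the point of the split: ordinary behavior never reaches the (presumably more expensive) risk detection stage.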
5. The method of claim 4, wherein the abnormal behavior risk detection method is implemented using a trained behavior risk detection model;
performing risk detection on the abnormal behavior data using the abnormal behavior risk detection method to obtain the risk detection result corresponding to the abnormal behavior data comprises:
inputting the abnormal behavior data into the behavior risk detection model for risk detection, obtaining the risk detection result corresponding to the abnormal behavior data.
6. The method of claim 5, wherein the risk detection result comprises an intent detection result and a reason detection result corresponding to the abnormal behavior data, the method further comprising a behavior risk detection model training step comprising:
acquiring sample abnormal behavior data together with label intent information and label reason information corresponding to the sample abnormal behavior data;
inputting the sample abnormal behavior data into a behavior risk detection model to be trained for prediction, obtaining a sample intent detection result and a sample reason detection result; and
training the behavior risk detection model according to the difference between the label intent information and the sample intent detection result and the difference between the label reason information and the sample reason detection result.
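The training objective in claim 6 combines two error terms, one per prediction head. A minimal numeric sketch, assuming scalar predictions and squared error standing in for whatever loss the real model would use:

```python
def combined_loss(pred_intent, label_intent, pred_reason, label_reason):
    # Joint objective: the model is penalised both for the intent
    # detection error and for the reason detection error, so one gradient
    # step improves both heads at once.
    intent_loss = (pred_intent - label_intent) ** 2
    reason_loss = (pred_reason - label_reason) ** 2
    return intent_loss + reason_loss
```

For example, a model that predicts intent 0.8 against label 1.0 and reason 0.4 against label 0.0 incurs loss 0.04 + 0.16 = 0.2.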
7. A browser risk detection apparatus, the apparatus comprising:
a secure container starting module, configured to start a secure container environment in response to a browser running instruction and run a browser in the secure container environment;
a data acquisition module, configured to acquire, through a security probe during operation of the browser, interaction data between the browser and an external environment; the external environment being the environment outside the secure container environment;
a risk detection method determining module, configured to acquire at least one target risk detection method from a plurality of preset risk detection methods according to the interaction data; and
a risk detection module, configured to perform risk detection according to the at least one target risk detection method and the interaction data.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 6.
9. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 6.
CN202311378204.4A 2023-10-23 2023-10-23 Browser risk detection method, browser risk detection device, computer equipment and storage medium Pending CN117370701A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311378204.4A CN117370701A (en) 2023-10-23 2023-10-23 Browser risk detection method, browser risk detection device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311378204.4A CN117370701A (en) 2023-10-23 2023-10-23 Browser risk detection method, browser risk detection device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117370701A true CN117370701A (en) 2024-01-09

Family

ID=89405500

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311378204.4A Pending CN117370701A (en) 2023-10-23 2023-10-23 Browser risk detection method, browser risk detection device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117370701A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118018550A (en) * 2024-04-09 2024-05-10 河北金锁安防工程股份有限公司 PaaS platform-based safety control method and system
CN118018550B (en) * 2024-04-09 2024-06-11 河北金锁安防工程股份有限公司 PaaS platform-based safety control method and system

Similar Documents

Publication Publication Date Title
US11785040B2 (en) Systems and methods for cyber security alert triage
US20220060497A1 (en) User and entity behavioral analysis with network topology enhancements
US10652274B2 (en) Identifying and responding to security incidents based on preemptive forensics
US9258321B2 (en) Automated internet threat detection and mitigation system and associated methods
JP6703616B2 (en) System and method for detecting security threats
EP3789896B1 (en) Method and system for managing security vulnerability in host system using artificial neural network
US10242187B1 (en) Systems and methods for providing integrated security management
CN105409164A (en) Rootkit detection by using hardware resources to detect inconsistencies in network traffic
CN117879970B (en) Network security protection method and system
US20220309171A1 (en) Endpoint Security using an Action Prediction Model
CA3234316A1 (en) Systems and methods for detecting malicious hands-on-keyboard activity via machine learning
CN117370701A (en) Browser risk detection method, browser risk detection device, computer equipment and storage medium
US20220385687A1 (en) Cybersecurity threat management using element mapping
CN117708880A (en) Intelligent security processing method and system for banking data
Ehis Optimization of security information and event management (SIEM) infrastructures, and events correlation/regression analysis for optimal cyber security posture
US20230421582A1 (en) Cybersecurity operations case triage groupings
CN117668400A (en) Front-end page operation abnormality identification method, device, equipment and medium
US20230156020A1 (en) Cybersecurity state change buffer service
US20230068946A1 (en) Integrated cybersecurity threat management
US11822651B2 (en) Adversarial resilient malware detector randomization method and devices
CN114490261A (en) Terminal security event linkage processing method, device and equipment
Weintraub et al. Continuous monitoring system based on systems' environment
US20240305664A1 (en) Cybersecurity operations mitigation management
Rehman et al. Enhancing Cloud Security: A Comprehensive Framework for Real-Time Detection Analysis and Cyber Threat Intelligence Sharing
CN116094847B (en) Honeypot identification method, honeypot identification device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination