CN117176459A - Security rule generation method and device - Google Patents


Info

Publication number
CN117176459A
CN117176459A (application number CN202311250655.XA)
Authority
CN
China
Prior art keywords
feature, model, data, determining, importance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311250655.XA
Other languages
Chinese (zh)
Inventor
姚志豪
李铜舒
黄晓东
张安清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Asiainfo Security Technology Co., Ltd.
Original Assignee
Asiainfo Security Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Asiainfo Security Technology Co., Ltd.
Priority to CN202311250655.XA
Publication of CN117176459A
Legal status: Pending

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a security rule generation method and device, relating to the field of network security, for solving the problems of subjective human bias, low efficiency, and difficulty adapting to dynamic environments in current security rule extraction methods. The method comprises the following steps: acquiring sample data, where the sample data comprises security data and label information, and the label information marks abnormal conditions of the security data; performing data preprocessing on the sample data and determining a preprocessing result; determining an optimal model according to the preprocessing result; determining a target feature combination according to the preprocessing result and the optimal model, where the importance threshold of each feature combination in the target feature combination meets a first preset condition and the importance threshold characterizes the degree of feature importance of each feature combination; and determining the security rule according to the feature combination and a decision tree. The method and device are used for generating security rules in a network.

Description

Security rule generation method and device
Technical Field
The present application relates to the field of network security, and in particular, to a method and apparatus for generating a security rule.
Background
Traditional security rule extraction methods generally depend on manual experience and hand-written rules. This approach suffers from subjective human bias, low efficiency, and difficulty adapting to dynamic environments.
In recent years, the rapid development of artificial intelligence technologies such as machine learning and deep learning has provided new possibilities for solving the problem of security rule extraction. Artificial intelligence techniques can learn patterns and rules from large-scale data and automatically discover implicit security rules and abnormal behavior.
Disclosure of Invention
The application provides a security rule generation method and device, which can solve the problems of subjective human bias, low efficiency, and difficulty adapting to dynamic environments in current security rule extraction methods.
In order to achieve the above purpose, the application adopts the following technical scheme:
In a first aspect, the present application provides a security rule generation method, including: acquiring sample data, where the sample data comprises security data and label information, and the label information marks abnormal conditions of the security data; performing data preprocessing on the sample data and determining a preprocessing result; determining an optimal model according to the preprocessing result; determining a target feature combination according to the preprocessing result and the optimal model, where the importance threshold of each feature combination in the target feature combination meets a first preset condition and characterizes the degree of feature importance of that feature combination; and determining the security rule according to the feature combination and a decision tree.
Based on this technical scheme, an automatic security rule extraction framework based on machine learning/deep learning is constructed, so that after samples of security data are collected, data preprocessing and optimal model selection can be completed automatically, and security rules are then generated from the samples. The application thus eliminates the incompleteness and one-sidedness of rules caused by the subjective bias of individual experts. Meanwhile, by simply correlating data across the related fields, the application can construct complex security rules spanning multiple fields. In addition, because the security rules obtained by the application are based on abnormal information that has actually occurred, the problem of lacking practical verification is avoided, the security rules can be updated rapidly, and timeliness is maintained.
In one possible implementation, the data preprocessing includes one or more of the following: performing null filling, categorical feature recognition, and category encoding on the sample data. The preprocessing result comprises: feature data, a category mapping table, and imbalanced-data-set attribution information. The feature data characterize the abnormal condition, level, and type of the security data; the category mapping table records the category encodings of the security data; and the imbalanced-data-set attribution information indicates whether the feature data are imbalanced.
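The preprocessing step described above (null filling, categorical feature recognition, category encoding, and production of a category mapping table) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the column names and fill strategy (median for numeric columns, a "missing" category for categorical ones) are assumptions.

```python
import pandas as pd

def preprocess(df: pd.DataFrame):
    """Null filling + category encoding; returns feature data and a category map."""
    df = df.copy()
    category_map = {}
    for col in df.columns:
        if df[col].dtype == object:                    # categorical feature recognition
            df[col] = df[col].fillna("missing")        # null filling
            codes, uniques = pd.factorize(df[col])     # category encoding
            category_map[col] = dict(enumerate(uniques))
            df[col] = codes
        else:
            df[col] = df[col].fillna(df[col].median()) # numeric null filling

    return df, category_map

# hypothetical sample data with missing values in both column types
demo = pd.DataFrame({"proto": ["tcp", "udp", None, "tcp"],
                     "bytes": [10.0, None, 30.0, 40.0]})
features, category_map = preprocess(demo)
```

The returned `category_map` plays the role of the category mapping table: it is what later allows encoded rule conditions to be mapped back to concrete category values.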
In one possible implementation, determining the optimal model according to the preprocessing result specifically includes: in the case that the feature data are balanced, inputting the feature data into a preset model set, obtaining the accuracy of each model in the set, and determining the model with the highest accuracy as the optimal model; in the case that the feature data are imbalanced, inputting the feature data into the preset model set, obtaining the F1 value of each model in the set, and determining the model with the highest F1 value as the optimal model. The preset model set includes one or more of the following: the XGBoost model, the LGBM model, the CatBoost model, the random forest model, and a deep learning model.
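The selection logic above (score every candidate, using accuracy on balanced data and F1 on imbalanced data, then keep the best scorer) can be sketched as below. The XGBoost/LGBM/CatBoost candidates named in the text are replaced here by scikit-learn stand-ins purely to keep the example runnable; the candidate names and data are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

def select_best_model(X, y, candidates, imbalanced):
    # balanced data is scored by accuracy, imbalanced data by F1 value
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    metric = f1_score if imbalanced else accuracy_score
    scores = {name: metric(y_te, m.fit(X_tr, y_tr).predict(X_te))
              for name, m in candidates.items()}
    best = max(scores, key=scores.get)
    return best, scores

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
candidates = {"random_forest": RandomForestClassifier(random_state=0),
              "logistic_regression": LogisticRegression(max_iter=1000)}
best_name, scores = select_best_model(X, y, candidates, imbalanced=False)
```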
In one possible implementation, the first preset condition is that the rank of the feature combination's importance threshold in the feature importance sequence is less than or equal to a preset threshold. Determining the target feature combination according to the preprocessing result and the optimal model specifically includes: in the case that the optimal model is a tree model, arranging the feature combinations output by the optimal model in descending order of feature importance to obtain a feature importance sequence, and determining the feature combinations whose rank in the sequence is less than or equal to the preset threshold as the target feature combination; in the case that the optimal model is a deep learning model, determining the importance threshold according to the accuracy or F1 value of the optimal model, arranging the feature combinations output by the optimal model in descending order of feature importance to obtain the feature importance sequence, and determining the feature combinations whose rank is less than or equal to the preset threshold as the target feature combination.
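The descending sort and rank-based cutoff described above amount to a simple top-k selection. A minimal sketch follows; the feature names and importance scores are hypothetical.

```python
def top_k_features(importances, k=5):
    # sort by importance descending and keep the features ranked 1..k
    ranked = sorted(importances.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:k]]

# illustrative importance scores, e.g. as output by a tree model
importances = {"src_port": 0.02, "dst_port": 0.31, "proto": 0.15,
               "bytes_out": 0.25, "duration": 0.08, "tcp_flags": 0.19}
selected = top_k_features(importances, k=5)
```

With the preset threshold set to 5, only the lowest-scoring feature (`src_port`) is dropped.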
In one possible implementation, in the case that the optimal model is a deep learning model, determining the importance threshold according to the accuracy or F1 value of the optimal model specifically includes: obtaining a plurality of feature columns according to the feature data and the model file of the optimal model; randomly perturbing each of the feature columns and obtaining its accuracy or F1 value, where the accuracy of a feature column is obtained in the case that its feature data are balanced and its F1 value is obtained in the case that its feature data are imbalanced; and recording the accuracy or F1 values of the feature columns and determining them as the importance thresholds of the feature columns.
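The column-perturbation idea above is essentially permutation importance: shuffle one feature column at a time and record how much the score degrades. A minimal sketch under stated assumptions follows; the toy "model" (a function of column 0 only) stands in for the trained deep learning model file, and accuracy is used as the score for this balanced toy data.

```python
import numpy as np

def permutation_importance(predict, X, y, score_fn, seed=0):
    """Score drop per feature column after randomly perturbing that column."""
    rng = np.random.default_rng(seed)
    baseline = score_fn(y, predict(X))
    drops = {}
    for col in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, col])            # randomly perturb one feature column
        drops[col] = baseline - score_fn(y, predict(Xp))
    return drops

# toy "model": predicts from column 0 only, so only column 0 should matter
X = np.random.default_rng(1).normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
predict = lambda M: (M[:, 0] > 0).astype(int)
accuracy = lambda t, p: float((t == p).mean())
drops = permutation_importance(predict, X, y, accuracy)
```

For imbalanced data the same loop would be run with an F1 score function in place of accuracy, per the case split in the text.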
In one possible implementation, determining the security rule according to the feature combination and the decision tree specifically includes: determining a decision tree model according to the target feature combination and the decision tree; determining the paths from the root node to the leaf nodes according to the structure of the trained decision tree model; for each path, determining the corresponding decision rule; for each decision rule, performing condition mapping on each condition according to the category mapping table, so as to resolve the decision rule content corresponding to each condition; and determining the condition-mapped decision rules as the security rules.
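The root-to-leaf traversal described above can be sketched with scikit-learn's tree internals. This is an illustrative reconstruction, not the patent's code; the feature names and toy data are assumptions.

```python
from sklearn.tree import DecisionTreeClassifier

def tree_to_rules(clf, feature_names):
    """Walk every root-to-leaf path and emit one (conditions, label) rule."""
    t = clf.tree_
    rules = []
    def walk(node, conds):
        if t.children_left[node] == -1:                  # leaf: emit one rule
            label = int(t.value[node].argmax())
            rules.append((" and ".join(conds) or "always", label))
            return
        name, thr = feature_names[t.feature[node]], t.threshold[node]
        walk(t.children_left[node], conds + [f"{name} <= {thr:.2f}"])
        walk(t.children_right[node], conds + [f"{name} > {thr:.2f}"])
    walk(0, [])
    return rules

# illustrative toy data: the "bytes" feature separates the two classes
X = [[0, 10], [1, 20], [0, 30], [1, 40]]
y = [0, 0, 1, 1]
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
rules = tree_to_rules(clf, ["proto", "bytes"])
```

Each emitted rule still refers to encoded values; the condition-mapping step then translates them back through the category mapping table.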
In a second aspect, the present application provides a security rule generation apparatus, comprising an acquisition unit and a processing unit. The acquisition unit is configured to acquire sample data; the sample data comprises security data and label information, and the label information marks abnormal conditions of the security data. The processing unit is configured to perform data preprocessing on the sample data and determine a preprocessing result; to determine an optimal model according to the preprocessing result; to determine a target feature combination according to the preprocessing result and the optimal model, where the importance threshold of each feature combination in the target feature combination meets a first preset condition and characterizes the degree of feature importance of that feature combination; and to determine the security rule according to the feature combination and a decision tree.
In one possible implementation, the data preprocessing includes one or more of the following: performing null filling, categorical feature recognition, and category encoding on the sample data. The preprocessing result comprises: feature data, a category mapping table, and imbalanced-data-set attribution information. The feature data characterize the abnormal condition, level, and type of the security data; the category mapping table records the category encodings of the security data; and the imbalanced-data-set attribution information indicates whether the feature data are imbalanced.
In a possible implementation, the processing unit is further configured to, in the case that the feature data are balanced, input the feature data into a preset model set, obtain the accuracy of each model in the set, and determine the model with the highest accuracy as the optimal model; and, in the case that the feature data are imbalanced, input the feature data into the preset model set, obtain the F1 value of each model in the set, and determine the model with the highest F1 value as the optimal model. The preset model set includes one or more of the following: the XGBoost model, the LGBM model, the CatBoost model, the random forest model, and a deep learning model.
In a possible implementation, the processing unit is further configured to, in the case that the optimal model is a tree model, arrange the feature combinations output by the optimal model in descending order of feature importance to obtain a feature importance sequence, and determine the feature combinations whose rank in the sequence is less than or equal to a preset threshold as the target feature combination; and, in the case that the optimal model is a deep learning model, determine the importance threshold according to the accuracy or F1 value of the optimal model, arrange the feature combinations output by the optimal model in descending order of feature importance to obtain the feature importance sequence, and determine the feature combinations whose rank is less than or equal to the preset threshold as the target feature combination.
In a possible implementation, the processing unit is further configured to obtain a plurality of feature columns according to the feature data and the model file of the optimal model; to randomly perturb each feature column and obtain its accuracy or F1 value, obtaining the accuracy of a feature column in the case that its feature data are balanced and its F1 value in the case that its feature data are imbalanced; and to record the accuracy or F1 values of the feature columns and determine them as the importance thresholds of the feature columns.
In a possible implementation, the processing unit is further configured to determine a decision tree model according to the target feature combination and the decision tree; to determine the paths from the root node to the leaf nodes according to the structure of the trained decision tree model; to determine, for each path, the corresponding decision rule; to perform, for each decision rule, condition mapping on each condition according to the category mapping table, so as to resolve the decision rule content corresponding to each condition; and to determine the condition-mapped decision rules as the security rules.
In addition, for the technical effects of the security rule generation apparatus according to the second aspect, reference may be made to the technical effects of the security rule generation method according to the first aspect, which are not repeated here.
In a third aspect, the present application provides a computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device of the present application, cause the electronic device to perform a security rule generation method as described in any one of the possible implementations of the first aspect and the first aspect.
In a fourth aspect, the present application provides an electronic device comprising: a processor and a memory; wherein the memory is for storing one or more programs, the one or more programs comprising computer-executable instructions, which when executed by the electronic device, cause the electronic device to perform the security rule generation method as described in any one of the possible implementations of the first aspect and the first aspect.
In a fifth aspect, the application provides a computer program product comprising instructions which, when run on a computer, cause an electronic device of the application to perform a security rule generation method as described in any one of the possible implementations of the first aspect and the first aspect.
In a sixth aspect, the present application provides a chip system applied to a security rule generating apparatus; the system-on-chip includes one or more interface circuits, and one or more processors. The interface circuit and the processor are interconnected through a circuit; the interface circuit is configured to receive a signal from a memory of the security rule generation device and to send the signal to the processor, the signal comprising computer instructions stored in the memory. When the processor executes the computer instructions, the security rule generating means performs the security rule generating method according to the first aspect and any one of its possible designs.
Drawings
Fig. 1 is a schematic diagram of an architecture of a security rule generating device according to an embodiment of the present application;
fig. 2 is a flow chart of a security rule generating method according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating another security rule generation method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a preprocessing result according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an XGBoost model result provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of an LGBM model according to an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating a portion of a security rule according to an embodiment of the present application;
FIG. 8 is a partial schematic diagram of a rule map provided by an embodiment of the present application;
fig. 9 is a schematic structural diagram of a security rule generating device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of another security rule generating device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The character "/" herein generally indicates that the associated object is an "or" relationship. For example, A/B may be understood as A or B.
The terms "first" and "second" in the description and claims of the application are used to distinguish between different objects, not to describe a particular order of objects. For example, a first edge service node and a second edge service node are used to distinguish different edge service nodes, not to describe a particular order of the edge service nodes.
Furthermore, references to the terms "comprising" and "having" and any variations thereof in the description of the present application are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed but may optionally include other steps or elements not listed or inherent to such process, method, article, or apparatus.
In addition, in the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or description. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, such words are intended to present concepts in a concrete fashion.
In order to facilitate understanding of the technical scheme of the present application, the technical terms related to the present application are explained as follows:
1. decision tree
A decision tree is a machine learning algorithm based on a tree structure, used for solving classification and regression problems. It classifies or predicts data by recursively partitioning the feature space to construct a tree.
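A minimal runnable illustration of the definition above, using scikit-learn's implementation on hypothetical one-feature data:

```python
from sklearn.tree import DecisionTreeClassifier

# four one-feature samples; the tree learns a single split separating the classes
X = [[0], [1], [2], [3]]
y = [0, 0, 1, 1]
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
pred = clf.predict([[0.5], [2.5]])
```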
2. XGBoost (eXtreme Gradient Boosting) model
The XGBoost model is a machine learning model based on the gradient boosting decision tree (Gradient Boosting Decision Tree, GBDT) algorithm. It was proposed in 2016 and has achieved very good results in machine learning competitions.
3. LGBM (Light Gradient Boosting Machine) model
LGBM is a machine learning model based on the gradient boosting decision tree (Gradient Boosting Decision Tree, GBDT) algorithm. It was developed by Microsoft Research in 2017 to provide a more efficient, faster gradient boosting decision tree implementation.
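Both XGBoost and LGBM implement the GBDT idea described above. As a runnable illustration, the sketch below uses scikit-learn's `GradientBoostingClassifier`, a stand-in from the same algorithm family (not the optimized implementations those libraries provide); the data are synthetic.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# fit an ensemble of sequentially boosted decision trees on synthetic data
X, y = make_classification(n_samples=200, random_state=0)
gbdt = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X, y)
train_acc = gbdt.score(X, y)
```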
4. Accuracy (Accuracy) and F1 value (F1 Score)
Both accuracy and the F1 value are indicators for evaluating the performance of a classification model, but they measure performance from different angles.
Accuracy refers to the proportion of samples correctly predicted by the classification model to the total number of samples, and is one of the most commonly used evaluation indexes. The calculation formula of accuracy is:
accuracy = number of correctly predicted samples / total number of samples
Accuracy measures the overall classification correctness of the model and is a suitable index for data sets with balanced class distributions. However, when the data set has a class imbalance, accuracy may be insufficient to reflect model performance, because the model may be biased toward predicting the majority class and ignore the minority class.
The F1 value is an index that jointly considers the precision (Precision) and recall (Recall) of the model, and copes better with class imbalance. Precision is the proportion of samples predicted positive that are truly positive; recall is the proportion of truly positive samples that the model predicts positive. The calculation formula of the F1 value is:
F1 value = 2 × (precision × recall) / (precision + recall)
The F1 value ranges between 0 and 1; the closer to 1, the better the model's classification performance. The F1 value combines precision and recall, jointly considering the exactness and completeness of the model, and is suitable for class-imbalanced situations.
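The two formulas above, implemented directly for binary labels (positive class = 1). The toy data are illustrative: on an 8:2 imbalanced set where the model finds only one of the two positives, accuracy stays high (0.9) while F1 drops to 2/3, showing why F1 is preferred for imbalanced data.

```python
def accuracy(y_true, y_pred):
    # proportion of correctly predicted samples to the total number of samples
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1(y_true, y_pred):
    # F1 = 2 * (precision * recall) / (precision + recall)
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# imbalanced toy data: 8 negatives, 2 positives; the model finds one positive
y_true = [0] * 8 + [1] * 2
y_pred = [0] * 8 + [1, 0]
```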
The technical terms related to the application are described above.
At present, traditional security rule extraction methods generally depend on manual experience and hand-written rules. This approach suffers from subjective human bias, low efficiency, and difficulty adapting to dynamic environments.
In recent years, the rapid development of artificial intelligence technologies such as machine learning and deep learning has provided new possibilities for solving the problem of security rule extraction. Artificial intelligence techniques can learn patterns and rules from large-scale data and automatically discover implicit security rules and abnormal behavior.
In view of this, to address the defects in the prior art, the present application provides a security rule generation method and apparatus. By constructing an automatic security rule extraction framework based on machine learning/deep learning, data preprocessing and optimal model selection can be completed automatically after samples of security data are collected, and security rules are then generated from the samples. The application thus eliminates the incompleteness and one-sidedness of rules caused by the subjective bias of individual experts. Meanwhile, by simply correlating data across the related fields, the application can construct complex security rules spanning multiple fields. In addition, because the security rules obtained by the application are based on abnormal information that has actually occurred, the problem of lacking practical verification is avoided, the security rules can be updated rapidly, and timeliness is maintained.
In development, the security rule generation method and apparatus provided by the application construct an automatic security rule extraction framework based on machine learning/deep learning and finally determine the security rules, with the following three value points: first, the security rules facilitate the work of an operations team under different scenarios and conditions; second, compared with detecting directly with a model, the security rules consume fewer resources and are more efficient; third, the security rules are more interpretable than the model and can complement it. As for their final use, the security rules can be embedded into a system for rule configuration and used alone or in combination with a model.
As shown in fig. 1, fig. 1 is a schematic architecture diagram of a security rule generating apparatus 100 according to the present application. The security rule generation device 100 includes: a data acquisition module 101, an automatic preprocessing module 102, a model automatic selection module 103, a feature automatic screening module 104 and a rule generation module 105.
The data acquisition module 101 is configured to acquire sample data; the sample data comprises security data and label information, and the label information marks abnormal conditions of the security data. Illustratively, the abnormal conditions include the security data being normal and the security data being abnormal.
The automatic preprocessing module 102 includes functions such as null filling, categorical feature encoding, and category count statistics, and is used for performing data preprocessing on the sample data to determine a preprocessing result.
Optionally, the data preprocessing includes one or more of the following: performing null filling, categorical feature recognition, and category encoding on the sample data.
optionally, the preprocessing result includes: characteristic data, a category mapping table and unbalanced data set attribution information; the characteristic data are used for representing abnormal conditions, grades and types of the safety data, the class mapping table is used for representing class codes of the safety data, and the unbalanced data set attribution information is used for representing whether the characteristic data are unbalanced data or not.
The model automatic selection module 103 is configured to automatically determine an optimal model according to the preprocessing result.
Illustratively, when the automatic model selection module 103 determines the optimal model, there are the following two cases:
In the first case, the feature data are balanced. The feature data are input into a preset model set, and the accuracy of each model in the set is obtained; the model with the highest accuracy is determined as the optimal model.
In the second case, the feature data are imbalanced. The feature data are input into the preset model set, and the F1 value of each model in the set is obtained; the model with the highest F1 value is determined as the optimal model.
Illustratively, the preset model set includes one or more of the following: the XGBoost model, the LGBM model, the CatBoost model, the random forest model, and a deep learning model.
The automatic feature screening module 104 is configured to determine a target feature combination according to the preprocessing result and the optimal model; the importance threshold of each feature combination in the target feature combination meets a first preset condition, and the importance threshold characterizes the degree of feature importance of each feature combination.
Optionally, the first preset condition is that the rank of the feature combination's importance threshold in the feature importance sequence is less than or equal to a preset threshold. For example, setting the preset threshold to 5 indicates that the top-5 feature combinations by importance threshold are to be extracted.
In one possible implementation, the automatic feature screening module 104 determines the target feature combination in two cases:
the optimal model in the first case is a tree model. The feature automatic screening module 104 performs feature importance descending order arrangement on feature combinations output by the optimal model to obtain a feature importance sequence; determining the feature combinations with the sequences smaller than or equal to a second preset threshold value in the feature importance sequence as target feature combinations;
and in the second case, the optimal model is a deep learning model. The feature automatic screening module 104 determines an importance threshold according to the accuracy rate and the F1 value of the optimal model; and determining the feature combination with the importance threshold value larger than or equal to a first preset threshold value as a target feature combination in the feature combinations output by the optimal model.
The rule generation module 105 is configured to determine security rules based on the feature combinations and the decision tree. Specifically, it may include two sub-modules: a decision tree module 1051 and a rule mapping module 1052. The decision tree module 1051 sets the depth of the decision tree and the number of child nodes, which determine the complexity of rule generation. The rule mapping module 1052 maps the rules generated by the decision tree model back to concrete feature parameter values.
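The mapping performed by the rule mapping module can be sketched as follows: a decision-tree condition over an encoded categorical feature (e.g. `proto <= 0.5`) is translated back into the concrete category values it admits, using the category mapping table produced during preprocessing. The mapping-table contents and feature names here are illustrative assumptions.

```python
def map_condition(feature, op, threshold, category_map):
    """Translate one encoded rule condition back to concrete feature values."""
    if feature not in category_map:          # numeric feature: keep the raw condition
        return f"{feature} {op} {threshold}"
    codes = category_map[feature]
    if op == "<=":
        admitted = [v for c, v in codes.items() if c <= threshold]
    else:                                    # op == ">"
        admitted = [v for c, v in codes.items() if c > threshold]
    return f"{feature} in {sorted(admitted)}"

# hypothetical category mapping table from the preprocessing step
category_map = {"proto": {0: "tcp", 1: "udp", 2: "icmp"}}
mapped = map_condition("proto", "<=", 0.5, category_map)
```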
The specific flow by which the rule generation module 105 determines the security rules is described in the following embodiments and is not repeated here.
The application scenario of the embodiments of the application is not limited. The system/device architecture and the service scenarios described in the embodiments of the present application are intended to describe the technical solution more clearly and do not constitute a limitation on it; those skilled in the art will appreciate that, with the evolution of network architectures and the emergence of new service scenarios, the technical solution provided by the embodiments of the present application remains applicable to similar technical problems.
As shown in fig. 2, fig. 2 is a flow chart of a security rule generating method according to the present application, which includes the following steps:
s201, the security rule generating device acquires sample data.
The sample data comprises security data and label information, and the label information marks abnormal conditions of the security data.
Illustratively, the abnormal conditions include the security data being normal and the security data being abnormal.
In one possible implementation, S201 may be performed by the data acquisition module described above, so that the security rule generating device acquires the sample data.
S202, the safety rule generating device performs data preprocessing on the sample data and determines a preprocessing result.
Optionally, the data preprocessing includes one or more of: performing null-value filling, classification feature recognition and category coding on the sample data;
optionally, the preprocessing result includes: characteristic data, a category mapping table and unbalanced data set attribution information; the characteristic data are used for representing abnormal conditions, grades and types of the safety data, the class mapping table is used for representing class codes of the safety data, and the unbalanced data set attribution information is used for representing whether the characteristic data are unbalanced data or not.
Optionally, the security rule generating device sets a threshold value; if the ratio of positive to negative samples in the feature data exceeds the threshold value, the feature data is output as unbalanced data, otherwise it is balanced data.
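As a minimal sketch of this balance check (the 3:1 majority-to-minority ratio used as the threshold here is an illustrative assumption, not a value fixed by the application):

```python
from collections import Counter

def is_imbalanced(labels, ratio_threshold=3.0):
    """Flag a label column as class-imbalanced when the majority/minority
    sample ratio exceeds the configured threshold (assumed 3:1 here)."""
    counts = Counter(labels)
    majority = max(counts.values())
    minority = min(counts.values())
    return (majority / minority) > ratio_threshold
```

A 95/5 label split would be reported as unbalanced, while a 50/45 split would not.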
Illustratively, the category mapping table may be as shown in table 1 below:
table 1 category mapping table
It should be noted that, only part of the content of the category mapping table is shown in table 1, and the content of the category mapping table in practical application can be expanded, which is not limited by the present application.
In a possible implementation, S202 may be executed by the automatic preprocessing module described above, so that the security rule generating device performs data preprocessing on the sample data, and determines a preprocessing result.
S203, the safety rule generating device determines an optimal model according to the preprocessing result.
It can be understood that the optimal model is the model with the highest accuracy or F1 value in the preset model set. The determination of the optimal model can be divided into two cases, described below:
in the first case, the characteristic data is balance data.
At this time, the safety rule generating device inputs the feature data into a preset model set, and obtains the accuracy of each model in the preset model set.
Further, the security rule generating means determines the model with the highest accuracy as the optimal model.
And in the second case, the characteristic data are unbalanced data.
At this time, the safety rule generating device inputs the feature data into a preset model set, and obtains an F1 value of each model in the preset model set.
Further, the security rule generating means determines the model with the highest F1 value as the optimal model.
In the above, the determination of the optimal model was explained in each case.
It can be understood that the security rule generating device outputs the model file and the model name of the optimal model after determining the optimal model, so as to facilitate the smooth proceeding of the subsequent steps.
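The two selection branches above can be sketched as follows. The metric definitions are standard; `select_optimal_model`, with each candidate model represented simply by its test-set predictions, is our illustrative simplification of the module, not the application's exact code:

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that match the labels.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_value(y_true, y_pred, positive=1):
    # Harmonic mean of precision and recall for the positive class.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def select_optimal_model(model_predictions, y_true, imbalanced):
    """Compare models on F1 for unbalanced data, on accuracy otherwise,
    and return the name of the best-scoring model."""
    metric = f1_value if imbalanced else accuracy
    return max(model_predictions,
               key=lambda name: metric(y_true, model_predictions[name]))
```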
In one possible implementation, S203 may be performed by the model automatic selection module described above, so that the security rule generating device determines an optimal model according to the preprocessing result.
S204, the safety rule generating device determines target feature combinations according to the preprocessing result and the optimal model.
The importance threshold of each feature combination in the target feature combination meets a first preset condition, and the importance threshold is used for representing the feature importance degree of each feature combination.
Optionally, the first preset condition is that the rank of the feature combination's importance threshold in the feature importance threshold sequence is less than or equal to a preset threshold. For example, setting the preset threshold to 5 means that the feature combinations ranked in the top 5 by importance threshold are extracted. The determination of the target feature combination can be divided into two cases, described below:
the optimal model in the first case is a tree model.
The security rule generating device sorts the feature combinations output by the optimal model in descending order of feature importance to obtain a feature importance sequence.
Further, the security rule generating device determines the feature combinations whose rank in the feature importance sequence is less than or equal to a second preset threshold as the target feature combinations.
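A minimal sketch of this tree-model branch, assuming the feature importances are already available as a name-to-score mapping (as tree libraries such as Xgboost typically expose):

```python
def top_k_feature_combinations(importances, k):
    """importances: {feature_name: importance score}.
    Return the k most important features, in descending order of score."""
    ranked = sorted(importances.items(), key=lambda item: item[1], reverse=True)
    return [name for name, _ in ranked[:k]]
```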
And in the second case, the optimal model is a deep learning model.
At this time, the security rule generating means determines the importance threshold value based on the accuracy of the optimal model and the F1 value.
Further, among the feature combinations output by the optimal model, a feature combination with an importance threshold value greater than or equal to a first preset threshold value is determined as a target feature combination.
In the second case, determining the importance threshold according to the accuracy and the F1 value of the optimal model is specifically:
1) According to the feature data determined in S202 and the model file of the optimal model, acquire the feature columns one at a time, obtaining a plurality of feature columns in total.
2) Randomly shuffle the feature columns and predict with the model to obtain the accuracy and the F1 value.
In this step, when the feature data to which the feature column belongs is balanced data, the accuracy of the feature column is obtained; when the feature data to which the feature column belongs is unbalanced data, the F1 value of the feature column is obtained.
Since whether the feature data is balanced has already been determined in the preceding step, the feature data to which a feature column belongs decides which of the accuracy and the F1 value is selected as the importance threshold of that feature column.
3) Record each feature column and its corresponding accuracy or F1 value; that value is the importance threshold of the feature column.
That is, if the feature data is balanced, the accuracy of the feature column is determined as its importance threshold; if unbalanced, its F1 value is. If the accuracy or F1 value obtained after shuffling a feature column is close to or larger than that of the base model, the feature column is less important to the optimal model; the larger the drop, the more important it is.
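The column-shuffling procedure above is essentially permutation importance. The following sketch, with the model reduced to a `predict` callable and toy data in the test, illustrates the idea (it is not the application's exact implementation):

```python
import random

def permutation_scores(predict, X, y, metric, seed=0):
    """For each feature column of X (a list of rows), shuffle that column,
    re-score the model, and record the drop relative to the baseline score.
    A drop near zero means the column matters little to the model."""
    rng = random.Random(seed)
    baseline = metric(y, [predict(row) for row in X])
    n_features = len(X[0])
    drops = []
    for j in range(n_features):
        column = [row[j] for row in X]
        rng.shuffle(column)
        # Rebuild the rows with only column j permuted.
        shuffled = [row[:j] + [column[i]] + row[j + 1:] for i, row in enumerate(X)]
        drops.append(baseline - metric(y, [predict(row) for row in shuffled]))
    return baseline, drops
```

`metric` would be accuracy for balanced data or the F1 value for unbalanced data, matching the branching described above.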
The above sub-scenario illustrates the determination of the target feature combination.
In one possible implementation, S204 may be performed by the feature auto-screening module described above, so that the security rule generating device determines the target feature combination according to the preprocessing result and the optimal model.
S205, the safety rule generating device determines the safety rule according to the feature combination and the decision tree.
Optionally, the security rule generating device sets the decision tree depth and the number of child nodes, and then builds a decision tree model in combination with the feature combination determined in S204. After that, the security rule generating device determines the paths from the root node to the leaf nodes according to the structure of the trained decision tree model. For each path, the security rule generating device determines the corresponding decision rule. For each decision rule, the security rule generating device performs condition mapping according to each condition of the decision rule and the category mapping table, so as to screen the decision rule content corresponding to each condition. The security rule generating device then determines the condition-mapped decision rules as the security rules. It should be noted that the specific process by which the security rule generating device determines the security rules according to the feature combination and the decision tree is described in S301-S305 below and is not repeated here.
In one possible implementation, S205 may be performed by the rule generation module described above, so that the security rule generation means determines the security rule from the feature combination and the decision tree.
Based on the above technical scheme, the embodiment of the application constructs an automatic security rule extraction framework based on machine learning/deep learning, so that after samples of security data are collected, data preprocessing and optimal model selection can be completed automatically, and the security rules are then generated in combination with the samples. The application thus eliminates the incompleteness and one-sidedness of rules caused by the subjective bias of individual experts; meanwhile, complex security rules spanning multiple fields can be constructed simply by correlating the data of the related fields. In addition, since the security rules obtained by the application are based on anomaly information that has actually occurred, the problem of lacking practical verification is avoided, the rules can be updated rapidly, and timeliness is taken into account.
As shown in fig. 3, fig. 3 is a schematic flow chart of another security rule generating method according to the present application, and the method includes the following steps:
S301, the safety rule generating device determines a decision tree model according to the target feature combination and the decision tree.
Optionally, the security rule generating device first determines the depth of the decision tree and the number of child nodes, and then combines the target feature combination to build a decision tree model.
In one possible way, S301 may be specifically executed by the decision tree module described above, so that the security rule generating device determines a decision tree model according to the target feature combination and the decision tree.
S302, the safety rule generating device determines a path from the root node to the leaf node according to the trained structure of the decision tree model.
It should be noted that, after the training is completed, the decision tree model has a structure represented as a tree diagram, and is composed of root nodes, internal nodes and leaf nodes. Each node represents a feature or attribute that is used to divide the input data.
At this point, by parsing the structure of the decision tree, a set of decision rules can be extracted that describe the paths from the root node to the leaf nodes and can be used to classify new input data. In this step, the action performed by the security rule generating device is to determine the paths.
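The root-to-leaf extraction can be sketched as follows, using a plain-dict tree representation assumed for illustration (internal nodes carry a feature, a threshold, and left/right children; leaves carry a label):

```python
def extract_decision_rules(node, path=()):
    """Walk a trained tree from the root to every leaf and return one rule
    per root-to-leaf path: (list of (feature, op, threshold) conditions,
    predicted label)."""
    if "label" in node:  # leaf node: emit the accumulated path as a rule
        return [(list(path), node["label"])]
    cond_left = (node["feature"], "<=", node["threshold"])
    cond_right = (node["feature"], ">", node["threshold"])
    return (extract_decision_rules(node["left"], path + (cond_left,))
            + extract_decision_rules(node["right"], path + (cond_right,)))
```

The feature names and thresholds in the test below are hypothetical, loosely echoing the worked example later in the text.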
In a possible manner, S302 may be specifically executed by the rule generating module described above, so that the security rule generating device determines the path from the root node to the leaf node according to the trained structure of the decision tree model.
S303, for each path, the security rule generating device determines a decision rule corresponding to each path.
In this step, the security rule generating device generates a decision rule along each path from the root node of the path in accordance with the structure of the decision tree.
In one possible manner, S303 may be specifically executed by the rule generating module described above, so that, for each path, the security rule generating means determines a decision rule corresponding to each path.
S304, for each decision rule, the security rule generating device performs condition mapping according to each condition and the category mapping table of the decision rule so as to screen the decision rule content corresponding to each condition.
Illustratively, the decision rule generated in S303 is: Event_original_type<=35.5 & event_type>4.5 & Event_original_type>7.0 & destination_IP_address<=16.5 & event_name>64.5.
The security rule generating device extracts each condition in the decision rule (for example, the first condition is Event_original_type<=35.5; the subsequent conditions follow in the same way), performs condition mapping according to the category mapping table output in S202 (table 2), determines the true values screened by each rule, and finally outputs the screening result of the rule and the related rule.
Illustratively, continuing the example above, the result of screening the first condition is: ["CMS attack", "XML injection", "XPATH", "XSS attack", "middleware attack", "other events", "other injections", "anti-serialization", "command injection", "sensitive file access", "server information leakage", "Trojan horse attack", "browser attack", "third party application attack", "network installation attack", "network scanning"].
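A sketch of this condition mapping, assuming the category mapping table is available as a code-to-name dictionary (the codes and names below are illustrative, not the actual table):

```python
def map_condition(threshold, op, code_to_name):
    """Translate a numeric split condition on a label-encoded feature back
    into the category names that the condition actually selects."""
    if op == "<=":
        codes = [c for c in code_to_name if c <= threshold]
    else:  # op == ">"
        codes = [c for c in code_to_name if c > threshold]
    return [code_to_name[c] for c in sorted(codes)]
```

For a condition like event_type<=1.5, this returns every category whose code falls at or below 1.5.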
In a possible manner, S304 may be specifically executed by the rule mapping module in the rule generating module described above, so that, for each decision rule, the security rule generating device performs condition mapping according to each condition and category mapping table of the decision rule, so as to filter the decision rule content corresponding to each condition.
S305, the security rule generating device determines the decision rule subjected to the condition mapping as a security rule.
Optionally, the security rule generating means outputs the finally determined security rule visually.
In a possible manner, S305 may be specifically executed by the rule generating module described above, so that the security rule generating device determines the decision rule after the condition mapping as the security rule.
Based on the above technical scheme, by constructing an automatic security rule extraction framework based on machine learning/deep learning, the embodiment of the application can automatically complete data preprocessing and optimal model selection after samples of security data are collected, and further generate the security rules in combination with the samples. The application thus eliminates the incompleteness and one-sidedness of rules caused by the subjective bias of individual experts; meanwhile, complex security rules spanning multiple fields can be constructed simply by correlating the data of the related fields. In addition, since the security rules obtained by the application are based on anomaly information that has actually occurred, the problem of lacking practical verification is avoided, the rules can be updated rapidly, and timeliness is taken into account.
The security rule generation method of the present application is described below with reference to sample data embodied as web application firewall (WAF) alarm data of an operator; the sample data is labeled data with predefined anomaly and normal tags:
S1, the data acquisition module acquires the WAF alarm data.
After the WAF alarm data is acquired, the data acquisition module sends the WAF alarm data to the automatic preprocessing module.
S2, the automatic preprocessing module performs data preprocessing on the WAF alarm data.
The data preprocessing includes: corresponding null-value filling, classification feature recognition and category coding. The module outputs the related category mapping table (partially displayed), the processed feature data, and whether the feature data constitutes a class-imbalanced data set. The specific process is as follows:
1) Check the null values of the data acquired in S1 and count the null-value proportion of each feature; if the proportion is high, eliminate the related feature, otherwise fill the nulls with "UK".
2) Automatically perform classification feature recognition: judge the data type of the filled data, and if it is of object type, treat it as a classification feature and encode it with class codes. Then count the proportion of each class of labels, and output the class-code mapping relation and whether the data is balanced, as shown in table 1.
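Steps 1) and 2) can be sketched together as follows; the "UK" fill token comes from the text, while the 50% null-ratio cutoff is an assumed value for illustration:

```python
def encode_categorical(values, null_ratio_limit=0.5, fill_token="UK"):
    """Fill nulls with a placeholder, then label-encode the column.
    Returns (codes, name-to-code mapping), or None when the column has too
    many nulls and should be dropped entirely."""
    null_ratio = sum(v is None for v in values) / len(values)
    if null_ratio > null_ratio_limit:
        return None  # too sparse: eliminate the feature
    filled = [fill_token if v is None else v for v in values]
    mapping = {name: code for code, name in enumerate(sorted(set(filled)))}
    return [mapping[v] for v in filled], mapping
```

The returned mapping is exactly the kind of class-code mapping table that S202 later uses for condition mapping.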
Illustratively, the partial preprocessing results of WAF alert data are shown in FIG. 4.
S3, the feature data obtained in S2 is passed into the automatic model selection module for automatic model selection, which outputs the name and model weights of the optimal model. The specific process is as follows:
1) Split the data (training set : test set = 7:3), train and test the models in the model library respectively, output the accuracy and F1 value of each model, and decide which index to compare according to whether the data set is balanced.
For example, the WAF alarm data is balanced data, so observing the accuracy is sufficient. Fig. 5 and fig. 6 show partial model results: fig. 5 is the result of the Xgboost model, and fig. 6 is the result of the LGBM model.
S4, according to S3, the Xgboost model is selected as the tree model. When the sub-trees of the model split, feature importance is determined by how often each feature is used: the more often a feature is used in splits, the more important it is. The features are ranked accordingly, and feature screening for the top 10 is performed according to the set feature importance threshold, yielding the feature combination that satisfies the threshold: ['Event type', 'Event Original Level', 'event level', 'event name', 'destination IP address', 'Source IP address', 'destination Port', 'Response', 'Source Port', 'Event original type'].
S5, according to the output characteristic combination, rule generation is carried out by using a rule generation module, and an interpretable safety rule is output, wherein the specific steps are as follows:
1) Model the selected features with a decision tree; no data segmentation is performed.
2) Extract the splitting conditions of the root node and the internal nodes in the decision tree to form decision rules.
3) Take out the decision conditions in each decision rule, match them against the previous mapping relation table, and output the screening result of each condition in each rule, enhancing the interpretability of the result. Partial results are illustratively shown in fig. 7 and fig. 8, where fig. 7 shows the security rules and fig. 8 shows a partial rule mapping.
The above introduces the security rule generation method with the sample data embodied as WAF alarm data of a certain operator.
The embodiment of the application can divide the functional modules or functional units of the security rule generating device according to the method example, for example, each functional module or functional unit can be divided corresponding to each function, and two or more functions can be integrated in one processing module. The integrated modules may be implemented in hardware, or in software functional modules or functional units. The division of the modules or units in the embodiment of the present application is schematic, which is merely a logic function division, and other division manners may be implemented in practice.
Illustratively, as shown in fig. 9, a schematic diagram of a possible configuration of a security rule generating device according to an embodiment of the present application is shown. The security rule generation apparatus 900 includes: an acquisition unit 901 and a processing unit 902.
Wherein, the acquisition unit 901 is used for acquiring sample data. The sample data comprises safety data and label information, and the label information is used for marking abnormal conditions of the safety data.
And a processing unit 902, configured to perform data preprocessing on the sample data, and determine a preprocessing result.
The processing unit 902 is further configured to determine an optimal model according to the preprocessing result.
The processing unit 902 is further configured to determine a target feature combination according to the preprocessing result and the optimal model. The importance threshold of each feature combination in the target feature combination meets a first preset condition, and the importance threshold is used for representing the feature importance degree of each feature combination.
The processing unit 902 is further configured to determine a security rule according to the feature combination and the decision tree.
Optionally, the processing unit 902 is further configured to, in a case where the feature data is balance data, input the feature data into a preset model set, and obtain an accuracy of each model in the preset model set. And determining the model with the highest accuracy as the optimal model.
Optionally, the processing unit 902 is further configured to, in a case where the feature data is unbalanced data, input the feature data into a preset model set, and obtain an F1 value of each model in the preset model set. And determining the model with the highest F1 value as the optimal model.
Optionally, the processing unit 902 is further configured to, in a case where the optimal model is a tree model, perform feature importance descending order on feature combinations output by the optimal model to obtain a feature importance sequence. And determining the feature combinations with the sequences smaller than or equal to a second preset threshold value in the feature importance sequence as target feature combinations.
Optionally, the processing unit 902 is further configured to determine, if the optimal model is a deep learning model, an importance threshold according to an accuracy rate and an F1 value of the optimal model; and determining the feature combinations with the sequences smaller than or equal to a preset threshold value in the feature importance sequence as target feature combinations.
Optionally, the processing unit 902 is further configured to obtain a plurality of feature columns according to the feature data and a model file of the optimal model.
Optionally, the processing unit 902 is further configured to randomly scramble the plurality of feature columns to obtain accuracy or F1 values of the plurality of feature columns; under the condition that the characteristic data of the characteristic column is balance data, acquiring the accuracy of the characteristic column; and when the characteristic data to which the characteristic column belongs is unbalanced data, acquiring an F1 value of the characteristic column.
Optionally, the processing unit 902 is further configured to record the accuracy or the F1 value of the plurality of feature columns, and determine the accuracy or the F1 value of the plurality of feature columns as the importance threshold of the plurality of feature columns.
Optionally, the processing unit 902 is further configured to determine a decision tree model according to the target feature combination and the decision tree.
Optionally, the processing unit 902 is further configured to determine a path from the root node to the leaf node according to the trained structure of the decision tree model.
Optionally, the processing unit 902 is further configured to determine, for each path, a decision rule corresponding to each path.
Optionally, the processing unit 902 is further configured to, for each decision rule, perform condition mapping according to each condition and the category mapping table of the decision rule, so as to filter the decision rule content corresponding to each condition.
Optionally, the processing unit 902 is further configured to determine the decision rule after the condition mapping as a security rule.
Alternatively, the security rule generating apparatus 900 may further include a storage unit (shown in a dashed line box in fig. 9) storing a program or instructions that, when executed by the acquiring unit 901 and the processing unit 902, enable the security rule generating apparatus to perform the security rule generating method described in the above-described method embodiment.
In addition, the technical effects of the security rule generating apparatus described in fig. 9 may refer to the technical effects of the security rule generating method described in the foregoing embodiments, and will not be described herein.
Fig. 10 is a schematic diagram illustrating still another possible configuration of the security rule generation device according to the above embodiment. As shown in fig. 10, the security rule generation device 1000 includes: a processor 1002.
The processor 1002 is configured to control and manage the actions of the security rule generating device, for example, perform the steps performed by the acquiring unit 901 and the processing unit 902, and/or perform other processes of the technical solutions described herein.
The processor 1002 may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure. The processor may be a central processing unit, a general purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The processor may also be a combination performing computing functions, for example, a combination comprising one or more microprocessors, or a combination of a DSP and a microprocessor.
Optionally, the security rule generation apparatus 1000 may further comprise a communication interface 1003, a memory 1001 and a bus 1004. Wherein the communication interface 1003 is used to support communication of the security rule generation device 1000 with other network entities. The memory 1001 is used for storing program codes and data of the security rule generating apparatus.
Wherein the memory 1001 may be a memory in the security rule generating means, which may comprise a volatile memory, such as a random access memory; the memory may also include non-volatile memory, such as read-only memory, flash memory, hard disk or solid state disk; the memory may also comprise a combination of the above types of memories.
Bus 1004 may be an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus or the like. The bus 1004 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 10, but this does not mean that there is only one bus or one type of bus.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of functional modules is illustrated, and in practical application, the above-described functional allocation may be implemented by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to implement all or part of the functions described above. The specific working processes of the above-described systems, devices and modules may refer to the corresponding processes in the foregoing method embodiments, which are not described herein.
An embodiment of the present application provides a computer program product containing instructions, which when run on an electronic device of the present application, cause the computer to perform the security rule generating method described in the above method embodiment.
The embodiment of the application also provides a computer readable storage medium, wherein the computer readable storage medium stores instructions, and when the computer executes the instructions, the electronic equipment executes each step executed by the security rule generating device in the method flow shown in the method embodiment.
The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, random access memory (Random Access Memory, RAM), read-only memory (Read-Only Memory, ROM), erasable programmable read-only memory (Erasable Programmable Read Only Memory, EPROM), registers, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any other form of computer-readable storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application specific integrated circuit (Application Specific Integrated Circuit, ASIC). In embodiments of the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The foregoing is merely illustrative of specific embodiments of the present application, and the scope of the present application is not limited thereto, but any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application should be subject to the protection scope of the claims.

Claims (14)

1. A security rule generation method, the method comprising:
acquiring sample data; the sample data comprises safety data and label information, wherein the label information is used for marking abnormal conditions of the safety data;
performing data preprocessing on the sample data, and determining a preprocessing result;
determining an optimal model according to the preprocessing result;
determining a target feature combination according to the preprocessing result and the optimal model; the importance threshold of each feature combination in the target feature combination meets a first preset condition, and the importance threshold is used for representing the feature importance degree of each feature combination;
and determining a safety rule according to the characteristic combination and the decision tree.
2. The method of claim 1, wherein the data preprocessing comprises one or more of: performing null-value filling, classification feature recognition and category coding on the sample data;
The pretreatment result comprises: the characteristic data, the category mapping table and the unbalanced data set attribution information; the characteristic data are used for representing abnormal conditions, grades and types of the safety data, the class mapping table is used for representing class codes of the safety data, and the unbalanced data set attribution information is used for representing whether the characteristic data are unbalanced data or not.
3. The method according to claim 2, wherein determining an optimal model according to the preprocessing result specifically comprises:
under the condition that the feature data is balanced data, inputting the feature data into a preset model set, and acquiring the accuracy of each model in the preset model set; and determining the model with the highest accuracy as the optimal model;
under the condition that the feature data is unbalanced data, inputting the feature data into the preset model set, and acquiring the F1 value of each model in the preset model set; and determining the model with the highest F1 value as the optimal model;
wherein the preset model set comprises one or more of: an XGBoost model, a gradient boosting decision tree (LGBM) model, a CatBoost model, a random forest model, and a deep learning model.
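The model-selection step of claim 3 amounts to scoring every candidate with accuracy (balanced data) or F1 (unbalanced data) and keeping the arg-max. A minimal sketch, assuming each candidate exposes a `predict` method; the metric implementations and the stub interface are illustrative, not the patent's:

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that match the labels.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1(y_true, y_pred, positive=1):
    # Harmonic mean of precision and recall for the positive class.
    tp = sum(t == p == positive for t, p in zip(y_true, y_pred))
    fp = sum(p == positive and t != positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def select_optimal(models, X, y, unbalanced):
    # models: dict of name -> fitted model exposing .predict(X).
    # Use F1 for unbalanced data, accuracy otherwise, as in claim 3.
    metric = f1 if unbalanced else accuracy
    scores = {name: metric(y, m.predict(X)) for name, m in models.items()}
    return max(scores, key=scores.get), scores
```

In practice the candidates would be trained XGBoost/LGBM/CatBoost/random forest/deep learning models; stubs suffice to show the selection logic.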
4. The method according to claim 3, wherein the first preset condition is that the rank of the importance threshold of a feature combination in the feature importance sequence is less than or equal to a preset threshold;
and determining a target feature combination according to the preprocessing result and the optimal model specifically comprises:
under the condition that the optimal model is a tree model, arranging the feature combinations output by the optimal model in descending order of feature importance to obtain the feature importance sequence; and determining the feature combinations whose rank in the feature importance sequence is less than or equal to the preset threshold as the target feature combination;
under the condition that the optimal model is a deep learning model, determining the importance threshold according to the accuracy and the F1 value of the optimal model; arranging the feature combinations output by the optimal model in descending order of feature importance to obtain the feature importance sequence; and determining the feature combinations whose rank in the feature importance sequence is less than or equal to the preset threshold as the target feature combination.
5. The method according to claim 4, wherein, in the case where the optimal model is a deep learning model, determining the importance threshold according to the accuracy and the F1 value of the optimal model specifically comprises:
acquiring a plurality of feature columns according to the feature data and the model file of the optimal model;
randomly shuffling the plurality of feature columns to obtain the accuracy or F1 value of each feature column; acquiring the accuracy of a feature column under the condition that the feature data of the feature column is balanced data; and acquiring the F1 value of a feature column under the condition that the feature data of the feature column is unbalanced data;
and recording the accuracy or F1 values of the plurality of feature columns, and determining the accuracy or F1 values of the plurality of feature columns as the importance thresholds of the plurality of feature columns.
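Claim 5 describes what is commonly known as permutation feature importance: shuffle one feature column at a time and record the model's score on the shuffled data. A hedged sketch under that reading, with an assumed list-of-rows data layout; as in the claim, the recorded score itself serves as the column's importance threshold (a lower shuffled score indicates a more important feature, since the model degrades more without it):

```python
import random

def column_importance(model, X, y, metric, n_repeats=5, seed=0):
    # metric is accuracy for balanced data or F1 for unbalanced data.
    rng = random.Random(seed)
    scores = {}
    for j in range(len(X[0])):
        vals = []
        for _ in range(n_repeats):
            # Shuffle column j while leaving every other column intact.
            col = [row[j] for row in X]
            rng.shuffle(col)
            Xp = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            vals.append(metric(y, model.predict(Xp)))
        # Average over repeats to smooth out shuffle randomness.
        scores[j] = sum(vals) / n_repeats
    return scores
```

A column the model never uses keeps its full score after shuffling, while a decisive column's score drops.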
6. The method according to claim 5, wherein determining a security rule according to the feature combination and the decision tree specifically comprises:
determining a decision tree model according to the target feature combination and the decision tree;
determining the paths from the root node to the leaf nodes according to the trained structure of the decision tree model;
for each path, determining a decision rule corresponding to each path;
for each decision rule, performing condition mapping according to each condition of the decision rule and the category mapping table, so as to screen the decision rule content corresponding to each condition;
and determining the decision rule subjected to the condition mapping as the security rule.
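The rule extraction of claim 6 walks each root-to-leaf path of the trained decision tree and joins the split conditions along the path into one rule. A simplified sketch over an assumed nested-dict tree layout (the claim's condition mapping back through the category mapping table is omitted here for brevity):

```python
def extract_rules(node, conditions=None):
    """Return (rule_text, leaf_label) pairs, one per root-to-leaf path."""
    conditions = conditions or []
    if "label" in node:  # leaf reached: the path's conditions form one rule
        return [(" AND ".join(conditions) or "TRUE", node["label"])]
    feat, thr = node["feature"], node["threshold"]
    rules = []
    # Left branch carries the "<= threshold" condition, right carries ">".
    rules += extract_rules(node["left"], conditions + [f"{feat} <= {thr}"])
    rules += extract_rules(node["right"], conditions + [f"{feat} > {thr}"])
    return rules
```

With a tree trained on the target feature combination, each returned pair reads directly as a security rule: a conjunction of feature conditions plus the verdict at the leaf.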
7. A security rule generation device, characterized in that the security rule generation device comprises: an acquisition unit and a processing unit;
the acquisition unit is used for acquiring sample data; wherein the sample data comprises security data and label information, and the label information is used for marking an abnormal condition of the security data;
the processing unit is used for carrying out data preprocessing on the sample data and determining a preprocessing result;
the processing unit is also used for determining an optimal model according to the preprocessing result;
the processing unit is further used for determining a target feature combination according to the preprocessing result and the optimal model; the importance threshold of each feature combination in the target feature combination meets a first preset condition, and the importance threshold is used for representing the feature importance degree of each feature combination;
the processing unit is further used for determining a security rule according to the feature combination and the decision tree.
8. The security rule generation apparatus of claim 7, wherein the data preprocessing comprises one or more of: performing null-value filling, categorical feature recognition, and category encoding on the sample data;
the preprocessing result comprises: feature data, a category mapping table, and unbalanced-dataset attribution information; wherein the feature data is used for representing the abnormal condition, grade, and type of the security data, the category mapping table is used for representing the category codes of the security data, and the unbalanced-dataset attribution information is used for representing whether the feature data is unbalanced data.
9. The security rule generating apparatus according to claim 8, wherein,
the processing unit is further configured to input the feature data into a preset model set under the condition that the feature data is balanced data, and obtain the accuracy of each model in the preset model set; and determine the model with the highest accuracy as the optimal model;
the processing unit is further configured to input the feature data into the preset model set under the condition that the feature data is unbalanced data, and obtain the F1 value of each model in the preset model set; and determine the model with the highest F1 value as the optimal model;
wherein the preset model set comprises one or more of: an XGBoost model, a gradient boosting decision tree (LGBM) model, a CatBoost model, a random forest model, and a deep learning model.
10. The security rule generating apparatus according to claim 9, wherein,
the processing unit is further configured to, under the condition that the optimal model is a tree model, arrange the feature combinations output by the optimal model in descending order of feature importance to obtain the feature importance sequence; and determine the feature combinations whose rank in the feature importance sequence is less than or equal to the preset threshold as the target feature combination;
the processing unit is further configured to determine the importance threshold according to the accuracy and the F1 value of the optimal model under the condition that the optimal model is a deep learning model; arrange the feature combinations output by the optimal model in descending order of feature importance to obtain the feature importance sequence; and determine the feature combinations whose rank in the feature importance sequence is less than or equal to the preset threshold as the target feature combination.
11. The security rule generating apparatus according to claim 9, wherein,
the processing unit is further used for acquiring a plurality of feature columns according to the feature data and the model file of the optimal model;
the processing unit is further configured to randomly shuffle the plurality of feature columns, and obtain the accuracy or F1 value of each feature column; acquire the accuracy of a feature column under the condition that the feature data of the feature column is balanced data; and acquire the F1 value of a feature column under the condition that the feature data of the feature column is unbalanced data;
the processing unit is further configured to record the accuracy or F1 values of the plurality of feature columns, and determine the accuracy or F1 values of the plurality of feature columns as the importance thresholds of the plurality of feature columns.
12. The security rule generating apparatus according to claim 11, wherein,
the processing unit is further used for determining a decision tree model according to the target feature combination and the decision tree;
the processing unit is further used for determining the paths from the root node to the leaf nodes according to the trained structure of the decision tree model;
the processing unit is further configured to determine, for each path, a decision rule corresponding to each path;
the processing unit is further configured to perform, for each decision rule, condition mapping according to each condition of the decision rule and the category mapping table, so as to screen the decision rule content corresponding to each condition;
the processing unit is further configured to determine, as the security rule, a decision rule after the condition mapping is performed.
13. An electronic device, comprising: a processor and a memory; wherein the memory is configured to store computer-executable instructions that, when executed by the processor, cause the electronic device to perform the security rule generation method of any one of claims 1-6.
14. A computer readable storage medium comprising instructions that, when executed by an electronic device, enable the electronic device to perform the security rule generation method of any one of claims 1-6.
CN202311250655.XA 2023-09-25 2023-09-25 Security rule generation method and device Pending CN117176459A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311250655.XA CN117176459A (en) 2023-09-25 2023-09-25 Security rule generation method and device


Publications (1)

Publication Number Publication Date
CN117176459A true CN117176459A (en) 2023-12-05

Family

ID=88933695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311250655.XA Pending CN117176459A (en) 2023-09-25 2023-09-25 Security rule generation method and device

Country Status (1)

Country Link
CN (1) CN117176459A (en)

Similar Documents

Publication Publication Date Title
CN107659570A (en) Webshell detection methods and system based on machine learning and static and dynamic analysis
CN109933984B (en) Optimal clustering result screening method and device and electronic equipment
CN111881289B (en) Training method of classification model, and detection method and device of data risk class
CN108833139B (en) OSSEC alarm data aggregation method based on category attribute division
CN110674360B (en) Tracing method and system for data
CN111368289B (en) Malicious software detection method and device
CN111338692A (en) Vulnerability classification method and device based on vulnerability codes and electronic equipment
CN108491228A (en) A kind of binary vulnerability Code Clones detection method and system
US20240036841A1 (en) Method and Apparatus for Compatibility Detection, Device and Non-transitory computer-readable storage medium
CN111338622B (en) Supply chain code identification method, device, server and readable storage medium
CN109002712B (en) Pollution data analysis method and system based on value dependency graph and electronic equipment
CN113132311A (en) Abnormal access detection method, device and equipment
CN111414402A (en) Log threat analysis rule generation method and device
CN110517154A (en) Data model training method, system and computer equipment
CN110990523A (en) Legal document determining method and system
CN113098989B (en) Dictionary generation method, domain name detection method, device, equipment and medium
CN117376228A (en) Network security testing tool determining method and device
CN116702157A (en) Intelligent contract vulnerability detection method based on neural network
CN110263618A (en) The alternative manner and device of one seed nucleus body model
CN110389897A (en) SDK logic test method, device, storage medium and server
CN117176459A (en) Security rule generation method and device
KR102217092B1 (en) Method and apparatus for providing quality information of application
CN111385342B (en) Internet of things industry identification method and device, electronic equipment and storage medium
CN107239704A (en) Malicious web pages find method and device
CN117311806B (en) Weighted directed coupling network-based software structure risk identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination