CN112149818A - Threat identification result evaluation method and device

Threat identification result evaluation method and device

Info

Publication number
CN112149818A
CN112149818A
Authority
CN
China
Prior art keywords
threat
evaluation
training
neural network
feedback
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910569491.4A
Other languages
Chinese (zh)
Other versions
CN112149818B (en)
Inventor
陈哲 (Chen Zhe)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shuan Xinyun Information Technology Co., Ltd.
Original Assignee
Beijing Shuan Xinyun Information Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shuan Xinyun Information Technology Co., Ltd.
Priority to CN201910569491.4A
Publication of CN112149818A
Application granted
Publication of CN112149818B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1441 Countermeasures against malicious traffic

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The patent relates to the field of electric digital data processing, and more specifically to intelligent network security information event management. It addresses the problem that the applicability of a preset threat judgment rule base cannot be assessed from the detected threats alone. The method comprises the following steps: collecting a user's feedback labels on threat events obtained by automatic detection, and constructing training samples from the feedback labels; training with the training samples to obtain an evaluation neural network; and evaluating the accuracy of subsequently generated threat events using the evaluation neural network. The technical scheme provided by the invention realizes threat rule applicability evaluation based on an intelligent neural network.

Description

Threat identification result evaluation method and device
Technical Field
The present disclosure relates to the field of intelligent network security information event management, and in particular, to a method and an apparatus for evaluating threat identification results, a storage medium, and a computer device.
Background
Cyber threat identification products can generally be divided into two parts: a behavior feature calculation algorithm and a threat judgment rule base that operates on the feature data. Because a preset threat judgment rule base must balance factors such as universality and applicability, its recall rate and accuracy rate are not high. Moreover, there is no reliable mechanism for evaluating whether a preset threat judgment rule base is applicable to a particular network environment.
Disclosure of Invention
To overcome the problems in the related art, a threat identification result evaluation method and apparatus are provided.
According to a first aspect herein, there is provided a threat identification result evaluation method comprising:
collecting a user's feedback labels on threat events obtained by automatic detection, and constructing training samples from the feedback labels;
training with the training samples to obtain an evaluation neural network;
and evaluating the accuracy of subsequently generated threat events using the evaluation neural network.
Preferably, the step of collecting the user's feedback labels on the automatically detected threat events and constructing the training samples from the feedback labels includes:
receiving from a user a feedback label for a threat event obtained by the automatic detection, wherein the feedback label indicates that the threat event is correct or incorrect;
converting each behavior feature of the threat event and the feedback label into numerical form to form a piece of evaluation data;
and forming an input matrix from the evaluation data of multiple threat events for the same attack cause under the same domain name, each piece of evaluation data being one row of the input matrix.
Preferably, the step of training with the training samples to obtain the evaluation neural network comprises:
running a training algorithm for iterative training with the input matrix as input;
and ending the training when the loss value no longer decreases, to obtain the evaluation neural network.
Preferably, the step of using the evaluation neural network to evaluate the accuracy of a subsequently generated threat event comprises:
inputting all the features of a threat event detected according to a preset threat judgment rule base into the evaluation neural network to obtain an evaluation score of the threat event.
According to another aspect of the present disclosure, there is also provided a threat identification result evaluation apparatus including:
a sample acquisition module, configured to collect the user's feedback labels on the automatically detected threat events and construct training samples from the feedback labels;
a neural network training module, configured to train with the training samples to obtain an evaluation neural network;
and an intelligent evaluation module, configured to evaluate the accuracy of subsequently generated threat events using the evaluation neural network.
Preferably, the sample acquisition module comprises:
a feedback collection unit, configured to receive from a user a feedback label for a threat event obtained by the automatic detection, where the feedback label indicates that the threat event is correct or incorrect;
a data conversion unit, configured to convert each behavior feature of the threat event and the feedback label into numerical form to form a piece of evaluation data;
and a sample generation unit, configured to form an input matrix from the evaluation data of multiple threat events for the same attack cause under the same domain name, each piece of evaluation data being one row of the input matrix.
Preferably, the neural network training module includes:
an iterative operation unit, configured to run a training algorithm for iterative training with the input matrix as input;
and a training management unit, configured to end the training when the loss value no longer decreases, to obtain the evaluation neural network.
Preferably, the intelligent evaluation module is specifically configured to input all the features of a threat event detected according to a preset threat judgment rule base into the evaluation neural network to obtain an evaluation score of the threat event.
According to another aspect herein, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed, performs the steps of the above-described threat identification result evaluation method.
According to another aspect of the present document, there is also provided a computer device comprising a processor, a memory, and a computer program stored on the memory; the processor implements the steps of the above-described threat identification result evaluation method when executing the computer program.
Provided herein are a threat identification result evaluation method and device. The user's evaluations of threat events obtained by automatic detection are collected and used as training samples to train an evaluation neural network, which is then used to evaluate the accuracy of subsequently generated threat events. This realizes threat rule applicability evaluation based on an intelligent neural network, solves the problem that it cannot be determined whether the threat judgment rule base is applicable, allows a specific applicability evaluation to be made for each rule in the preset threat judgment rule base, and provides effective reference data for modifying and adjusting the rules.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the disclosure. In the drawings:
fig. 1 schematically illustrates a flow of a threat identification result evaluation method provided by an embodiment of the present disclosure.
Fig. 2 exemplarily shows a specific flow of step 101 in fig. 1.
Fig. 3 exemplarily shows a specific flow of step 102 in fig. 1.
Fig. 4 schematically illustrates a structure of a threat identification result evaluation apparatus provided in an embodiment of the present disclosure.
Fig. 5 schematically shows the structure of the sample acquisition module 401 in fig. 4.
Fig. 6 exemplarily shows the structure of the neural network training module 402 in fig. 4.
Fig. 7 exemplarily shows a block diagram of a computer apparatus (a general structure of a server) provided in an embodiment herein.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.
Because a preset threat judgment rule base must balance factors such as universality and applicability, its recall rate and accuracy rate are not high. Moreover, there is no reliable mechanism for evaluating whether a preset threat judgment rule base is applicable to a particular network environment.
In order to solve the above problem, embodiments of the present invention provide a threat identification result evaluation method, apparatus, storage medium, and computer device. By evaluating the threat identification result, the applicability of the preset threat judgment rule in the current network environment can be accurately judged, and the problem that whether the threat judgment rule base is applicable cannot be determined is solved.
An embodiment of the present invention provides a method for evaluating threat identification results; the process of evaluating the accuracy of a preset threat judgment rule base using the method is shown in fig. 1, and includes:
Step 101, collecting the user's feedback labels on the threat events obtained by automatic detection, and constructing training samples from the feedback labels.
In the embodiment of the invention, the preset threat judgment rule base comprises judgment rules for a plurality of different threat events. Each attack cause corresponds to at least one rule, and when any rule under an attack cause is satisfied, a threat event related to that attack cause is considered to have occurred. Each rule in turn corresponds to a number of behavior features, such as: the number of page views (PV) within the window, the number of URIs visited, whether URI visits form a loop, various status-code statistics, average request time, average request content length, and so on.
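By way of illustration only (this sketch is not part of the patent disclosure), the relationship between behavior features, judgment rules, and threat events described above can be expressed in Python roughly as follows; the attack-cause name, feature names, and thresholds are invented for the example.

```python
# Hypothetical sketch: each attack cause owns one or more rules, each rule is a logical
# expression over behavior features, and a threat event is raised when any rule holds.
from typing import Callable, Dict, List

Features = Dict[str, float]
Rule = Callable[[Features], bool]

# Illustrative rules for one assumed attack cause; real rule bases are preconfigured.
RULES_BY_CAUSE: Dict[str, List[Rule]] = {
    "brute_force_login": [
        lambda f: f["pv_in_window"] > 500 and f["avg_request_time"] < 0.2,
        lambda f: f["uri_count"] < 3 and f["status_401_count"] > 100,
    ],
}

def detect_threat_events(features: Features) -> List[str]:
    """Return the attack causes whose rules are satisfied by the observed features."""
    return [cause for cause, rules in RULES_BY_CAUSE.items()
            if any(rule(features) for rule in rules)]
```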
As shown in fig. 2, the steps include:
and step 1011, receiving feedback labels of the threat events obtained by the automatic detection by the user.
After the threat event is detected according to the preset threat judgment rule base, the threat event can be output to a user to prompt the user to carry out labeling operation so as to obtain feedback labeling. The feedback annotation indicates that the threat event is correct or incorrect.
Preferably, in an embodiment of the present invention, in order to further improve efficiency, not all threat events need to be output to the user for feedback labeling; instead, a subset of threat events can be screened out for this operation. Specifically, in the threat judgment rule base, the threat scores of different rules are computed with the same logic, and a threat score reflects the possible consequences or degree of impact of the corresponding threat event. Because a rule is a logical expression composed of logical operations, the rule only determines whether an event is a threat event, while the threat score quantifies the threat degree of that event. In general, a higher threat score indicates that the corresponding threat event is more consistent with the characteristics of attack behavior, and a lower threat score indicates that it is more consistent with the characteristics of normal behavior. When a threat event is established, the threat score calculated from its behavior features is the threat score of that threat event.
A portion of the threat events with lower threat scores is selected as the threat events requiring feedback labels and output to the user, and the user's feedback labels are collected; the collection may be performed within a time window.
For example, every 5 minutes the threat events of the previous 5 minutes are searched, and for each attack cause under each domain name, the threat event with the lowest threat score is taken; these threat events and their corresponding behavior features are stored in a recommendation candidate library for labeling recommendation. Threat events in the recommendation candidate library are output to the user with priority.
In addition, since raw logs are retained for only a limited time, older threat events are less convenient to trace back and judge once the raw logs are gone. Therefore, considering the timeliness of the collected samples, threat events collected too early (e.g., 24 hours or more ago) may also be removed from the recommendation candidate library.
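A minimal sketch of the labeling-recommendation flow above, assuming hypothetical event fields ('domain', 'cause', 'score', 'timestamp'); neither the data layout nor the helper name comes from the patent.

```python
# Every 5 minutes: keep the lowest-scoring threat event per (domain name, attack cause)
# in the recommendation candidate library, and cull events older than about 24 hours,
# since their raw logs may no longer be available for back-tracking.
RETENTION_SECONDS = 24 * 3600

def update_candidate_library(candidates: dict, recent_events: list, now: float) -> dict:
    """candidates maps (domain, cause) -> event dict; recent_events covers the last window."""
    for ev in recent_events:
        key = (ev["domain"], ev["cause"])
        if key not in candidates or ev["score"] < candidates[key]["score"]:
            candidates[key] = ev  # lowest threat score wins the labeling recommendation
    return {k: v for k, v in candidates.items()
            if now - v["timestamp"] <= RETENTION_SECONDS}
```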
The evaluation for a threat event is then formed by combining all the behavior features contained in the threat event with its feedback label.
Step 1012, converting each behavior feature of the threat event and the feedback label into numerical form to form a piece of evaluation data.
In this step, a first prediction score is first assigned to an evaluation whose feedback label indicates the threat event is correct, and a second prediction score is assigned to an evaluation whose feedback label indicates the threat event is incorrect, so that the content of the feedback label is expressed numerically; the first prediction score is greater than the second prediction score. For example, the first prediction score assigned for a feedback label indicating a correct judgment is 100 points, and the second prediction score assigned for a feedback label indicating an incorrect judgment is -100 points.
Feature engineering is then performed on each feature of the threat event to obtain feature data suitable for the input matrix, converting each feature value into a corresponding numerical representation; the specific conversion can be configured according to actual application requirements.
After this processing, all the behavior features of a threat event and its feedback label have been converted into numerical form, forming the evaluation data corresponding to that threat event.
Step 1013, forming an input matrix from the evaluation data of multiple threat events for the same attack cause under the same domain name.
In this step, the evaluation data of multiple threat events for that attack cause under that domain name are taken and converted into an input matrix with a uniform format, each piece of evaluation data being one row of the input matrix.
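The conversion in steps 1012 and 1013 might look like the following sketch; the feature list, the use of the ±100 prediction scores from the example above, and the use of NumPy are all assumptions rather than requirements of the patent.

```python
# Turn each labeled threat event into numeric evaluation data (features + label score),
# then stack the evaluation data for one (domain, attack cause) group into an input
# matrix with one row per threat event.
import numpy as np

FIRST_PREDICTION_SCORE = 100.0    # feedback label: the threat event was judged correct
SECOND_PREDICTION_SCORE = -100.0  # feedback label: the threat event was judged incorrect

FEATURE_ORDER = ["pv_in_window", "uri_count", "uri_loop", "avg_request_time"]  # assumed

def to_evaluation_row(features: dict, label_correct: bool) -> np.ndarray:
    """One piece of evaluation data: numeric feature values followed by the label score."""
    x = [float(features[name]) for name in FEATURE_ORDER]
    y = FIRST_PREDICTION_SCORE if label_correct else SECOND_PREDICTION_SCORE
    return np.array(x + [y])

def build_input_matrix(labeled_events: list) -> np.ndarray:
    """labeled_events: list of (features_dict, label_correct) pairs for one (domain, cause)."""
    return np.vstack([to_evaluation_row(f, ok) for f, ok in labeled_events])
```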
Step 102, training with the training samples to obtain an evaluation neural network.
Once enough evaluation data has been collected to serve as training samples, neural network training can begin. For example, training starts when the threat event set corresponding to a certain attack cause under a certain domain name has been updated, the number of threat events with feedback labels exceeds 20, and no new feedback label has been added for 5 minutes.
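A small sketch of that trigger condition, with the thresholds taken from the example values above (more than 20 labeled events, 5 minutes without a new label); the function name is hypothetical.

```python
MIN_LABELED_EVENTS = 20
QUIET_SECONDS = 5 * 60

def should_start_training(labeled_count: int, seconds_since_last_label: float) -> bool:
    """Start training a (domain, attack cause) group once enough labels have settled."""
    return labeled_count > MIN_LABELED_EVENTS and seconds_since_last_label >= QUIET_SECONDS
```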
In this step, an evaluation neural network is generated per unit of the same attack cause under the same domain name; a plurality of different evaluation neural networks may therefore be generated for different domain names and different attack causes. As shown in fig. 3, the method includes:
and 1021, running a training algorithm to perform iterative training by taking the input matrix as input.
And step 1022, when the loss value is not reduced any more, ending the training to obtain the evaluation neural network.
In this step, when the loss value of the neural network is no longer reduced, for example, when no smaller loss value is obtained for 5000 consecutive iterations, it indicates that the evaluation neural network has been trained, and the training is ended.
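A minimal training sketch for steps 1021 and 1022, written with PyTorch, which the patent does not prescribe; the network shape, optimizer, and loss are assumptions, while the stop-after-5000-stale-iterations rule mirrors the example in the text.

```python
import torch
import torch.nn as nn

def train_evaluation_network(input_matrix, patience=5000, max_iters=200_000):
    data = torch.tensor(input_matrix, dtype=torch.float32)
    x, y = data[:, :-1], data[:, -1:]                    # last column holds the label score
    model = nn.Sequential(nn.Linear(x.shape[1], 32), nn.ReLU(), nn.Linear(32, 1))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    best_loss, stale = float("inf"), 0
    for _ in range(max_iters):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        if loss.item() < best_loss:
            best_loss, stale = loss.item(), 0
        else:
            stale += 1
            if stale >= patience:                        # loss has stopped decreasing
                break
    return model
```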
Step 103, evaluating the accuracy of subsequently generated threat events using the evaluation neural network.
In this step, all the features of a threat event detected according to the preset threat judgment rule base are input into the evaluation neural network to obtain the evaluation score of the threat event. The evaluation score lies between the second prediction score and the first prediction score, inclusive. The closer the evaluation score is to the first prediction score, the more accurate the judgment of the corresponding threat event, and the stronger the applicability of the corresponding rule in the preset threat judgment rule base to the current network environment; the closer the evaluation score is to the second prediction score, the less accurate the judgment, and the weaker the applicability of the corresponding rule to the current network environment. After the system has run for a period of time, applicability information can be obtained for each threat event (or its corresponding rule) in the current network environment.
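Finally, step 103 could be sketched as below, reusing FEATURE_ORDER and the two prediction-score constants from the earlier sketch; clamping the raw network output to the [second, first] score range is one simple way to honor the bound described above, not a requirement of the patent.

```python
import torch

def evaluate_threat_event(model, features: dict) -> float:
    """Score a newly detected threat event with the trained evaluation network."""
    x = torch.tensor([[float(features[name]) for name in FEATURE_ORDER]],
                     dtype=torch.float32)
    with torch.no_grad():
        score = model(x).item()
    # Keep the score between the second and the first prediction score.
    return max(SECOND_PREDICTION_SCORE, min(FIRST_PREDICTION_SCORE, score))

# Interpretation: a score near +100 suggests the triggering rule fits the current network
# environment well; a score near -100 suggests the rule tends to misfire and needs tuning.
```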
An embodiment of the present invention further provides a threat identification result evaluation apparatus, a structure of which is shown in fig. 4, including:
the sample acquisition module 401, configured to collect the user's feedback labels on the automatically detected threat events and construct training samples from the feedback labels;
a neural network training module 402, configured to train with the training samples to obtain an evaluation neural network;
and an intelligent evaluation module 403, configured to evaluate the accuracy of subsequently generated threat events using the evaluation neural network.
Preferably, the structure of the sample acquisition module 401 is shown in fig. 5 and includes:
a feedback collection unit 4011, configured to receive from a user a feedback label for an automatically detected threat event, where the feedback label indicates that the threat event is correct or incorrect;
and a data conversion unit 4012, configured to convert each behavior feature of the threat event and the feedback label into numerical form to form a piece of evaluation data.
The data conversion unit 4012 is specifically configured to assign a first prediction score to an evaluation whose feedback label indicates the threat event is correct and a second prediction score to an evaluation whose feedback label indicates the threat event is incorrect, where the first prediction score is greater than the second prediction score, and to perform feature engineering on each feature of the threat event to obtain feature data.
The sample generation unit 4013 is configured to combine the evaluation data of multiple threat events for the same attack cause under the same domain name into an input matrix, each piece of evaluation data being one row of the input matrix.
Preferably, the neural network training module 402 is structured as shown in fig. 6 and includes:
an iterative operation unit 4021, configured to run a training algorithm for iterative training with the input matrix as input;
and a training management unit 4022, configured to end the training when the loss value no longer decreases, so as to obtain the evaluation neural network.
Preferably, the intelligent evaluation module 403 is specifically configured to input all the features of a threat event detected according to a preset threat judgment rule base into the evaluation neural network to obtain an evaluation score of the threat event. Preferably, the evaluation score is equal to or greater than the second prediction score and equal to or less than the first prediction score.
An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored, which, when executed, implements the steps of the threat identification result assessment method as provided by an embodiment of the present invention.
An embodiment of the present invention further provides a computer device; fig. 7 is a block diagram of the computer device 700. For example, the computer device 700 may be provided as a server. Referring to fig. 7, the computer device 700 includes a processor component 701, which may consist of one or more processors; the number of processors can be configured as needed. The computer device 700 further comprises a memory component 702 for storing instructions, e.g., application programs, executable by the processor component 701. The memory component 702 may consist of one or more memories, the amount of memory being configurable as needed, and may store one or more application programs. The processor component 701 is configured to execute instructions to perform the above-described method.
The embodiments of the invention provide a threat identification result evaluation method, apparatus, storage medium, and computer device. The user's evaluations of threat events obtained by automatic detection are collected and used as training samples to train an evaluation neural network, which is then used to evaluate the accuracy of subsequently generated threat events. This realizes threat rule applicability evaluation based on an intelligent neural network, solves the problem that it cannot be determined whether the threat judgment rule base is applicable, allows a specific applicability evaluation to be made for each rule in the preset threat judgment rule base, and provides effective reference data for modifying and adjusting the rules.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus (device), or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied in the medium. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, including, but not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer, and the like. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to those skilled in the art.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such article or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the article or apparatus comprising that element.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, it is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.

Claims (10)

1. A method for evaluating threat identification results, comprising:
collecting a user's feedback labels on threat events obtained by automatic detection, and constructing training samples from the feedback labels;
training with the training samples to obtain an evaluation neural network;
and evaluating the accuracy of subsequently generated threat events using the evaluation neural network.
2. The threat identification result evaluation method according to claim 1, wherein the step of collecting the user's feedback labels on the automatically detected threat events comprises:
receiving from a user a feedback label for a threat event obtained by the automatic detection, wherein the feedback label indicates that the threat event is correct or incorrect;
converting each behavior feature of the threat event and the feedback label into numerical form to form a piece of evaluation data;
and forming an input matrix from the evaluation data of multiple threat events for the same attack cause under the same domain name, each piece of evaluation data being one row of the input matrix.
3. The threat identification result evaluation method according to claim 2, wherein the step of training with the training samples to obtain the evaluation neural network comprises:
running a training algorithm for iterative training with the input matrix as input;
and ending the training when the loss value no longer decreases, to obtain the evaluation neural network.
4. The threat identification result evaluation method according to claim 2 or 3, wherein the step of evaluating the accuracy of a subsequently generated threat event using the evaluation neural network comprises:
inputting all the features of a threat event detected according to a preset threat judgment rule base into the evaluation neural network to obtain an evaluation score of the threat event.
5. A threat identification result evaluation apparatus, comprising:
a sample acquisition module, configured to collect the user's feedback labels on the automatically detected threat events and construct training samples from the feedback labels;
a neural network training module, configured to train with the training samples to obtain an evaluation neural network;
and an intelligent evaluation module, configured to evaluate the accuracy of subsequently generated threat events using the evaluation neural network.
6. The threat identification result evaluation apparatus according to claim 5, wherein the sample acquisition module comprises:
a feedback collection unit, configured to receive from a user a feedback label for a threat event obtained by the automatic detection, where the feedback label indicates that the threat event is correct or incorrect;
a data conversion unit, configured to convert each behavior feature of the threat event and the feedback label into numerical form to form a piece of evaluation data;
and a sample generation unit, configured to form an input matrix from the evaluation data of multiple threat events for the same attack cause under the same domain name, each piece of evaluation data being one row of the input matrix.
7. The threat identification result evaluation apparatus according to claim 6, wherein the neural network training module comprises:
an iterative operation unit, configured to run a training algorithm for iterative training with the input matrix as input;
and a training management unit, configured to end the training when the loss value no longer decreases, to obtain the evaluation neural network.
8. The threat identification result evaluation apparatus according to claim 6 or 7, wherein
the intelligent evaluation module is specifically configured to input all the features of a threat event detected according to a preset threat judgment rule base into the evaluation neural network to obtain an evaluation score of the threat event.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed, implements the steps of the method according to any one of claims 1-4.
10. A computer device comprising a processor, a memory, and a computer program stored on the memory, characterized in that the steps of the method according to any one of claims 1-4 are implemented when the computer program is executed by the processor.
CN201910569491.4A 2019-06-27 2019-06-27 Threat identification result evaluation method and device Active CN112149818B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910569491.4A CN112149818B (en) 2019-06-27 2019-06-27 Threat identification result evaluation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910569491.4A CN112149818B (en) 2019-06-27 2019-06-27 Threat identification result evaluation method and device

Publications (2)

Publication Number Publication Date
CN112149818A true CN112149818A (en) 2020-12-29
CN112149818B CN112149818B (en) 2024-04-09

Family

ID=73868799

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910569491.4A Active CN112149818B (en) 2019-06-27 2019-06-27 Threat identification result evaluation method and device

Country Status (1)

Country Link
CN (1) CN112149818B (en)


Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102298728A (en) * 2011-08-17 2011-12-28 电子科技大学 Method for evaluating target threat degree
WO2015103520A1 (en) * 2014-01-06 2015-07-09 Cisco Technolgy, Inc. Distributed training of a machine learning model to detect network attacks in a computer network
CN104506385A (en) * 2014-12-25 2015-04-08 西安电子科技大学 Software defined network security situation assessment method
US20160217022A1 (en) * 2015-01-23 2016-07-28 Opsclarity, Inc. Anomaly detection using circumstance-specific detectors
CN105046560A (en) * 2015-07-13 2015-11-11 江苏秉信成金融信息服务有限公司 Third-party credit supervision and risk assessment system and method
CN105407103A (en) * 2015-12-19 2016-03-16 中国人民解放军信息工程大学 Network threat evaluation method based on multi-granularity anomaly detection
CN106951778A (en) * 2017-03-13 2017-07-14 步步高电子商务有限责任公司 A kind of intrusion detection method towards complicated flow data event analysis
CN107483887A (en) * 2017-08-11 2017-12-15 中国地质大学(武汉) The early-warning detection method of emergency case in a kind of smart city video monitoring
CN107798390A (en) * 2017-11-22 2018-03-13 阿里巴巴集团控股有限公司 A kind of training method of machine learning model, device and electronic equipment
CN108337223A (en) * 2017-11-30 2018-07-27 中国电子科技集团公司电子科学研究院 A kind of appraisal procedure of network attack
CN108289221A (en) * 2018-01-17 2018-07-17 华中科技大学 The non-reference picture quality appraisement model and construction method of rejecting outliers
CN108306894A (en) * 2018-03-19 2018-07-20 西安电子科技大学 A kind of network security situation evaluating method and system that confidence level occurring based on attack
CN108900513A (en) * 2018-07-02 2018-11-27 哈尔滨工业大学 A kind of DDOS effect evaluation method based on BP neural network
CN109190667A (en) * 2018-07-31 2019-01-11 中国电子科技集团公司第二十九研究所 A kind of Object Threat Evaluation method, model and model building method based on electronic reconnaissance signal
CN109150868A (en) * 2018-08-10 2019-01-04 海南大学 network security situation evaluating method and device
CN109308494A (en) * 2018-09-27 2019-02-05 厦门服云信息科技有限公司 LSTM Recognition with Recurrent Neural Network model and network attack identification method based on this model
CN109858018A (en) * 2018-12-25 2019-06-07 中国科学院信息工程研究所 A kind of entity recognition method and system towards threat information
CN109743325A (en) * 2019-01-11 2019-05-10 北京中睿天下信息技术有限公司 A kind of Brute Force attack detection method, system, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WEI, Y. C., et al.: "Performance evaluation of the recommendation mechanism of information security risk identification", Neurocomputing, no. 279, 1 May 2018, pages 48-53 *
ZHANG, Hui: "Research on information security risk assessment in the operation and maintenance phase of information systems", Network Security Technology & Application, no. 06, 30 June 2018, pages 15-17 *

Also Published As

Publication number Publication date
CN112149818B (en) 2024-04-09

Similar Documents

Publication Publication Date Title
CN106815530B (en) Data storage method, data verification method and device
CN111078513B (en) Log processing method, device, equipment, storage medium and log alarm system
CN108197177B (en) Business object monitoring method and device, storage medium and computer equipment
CN111177485B (en) Parameter rule matching based equipment fault prediction method, equipment and medium
CN110263869B (en) Method and device for predicting duration of Spark task
CN110275992B (en) Emergency processing method, device, server and computer readable storage medium
CN115048370B (en) Artificial intelligence processing method for big data cleaning and big data cleaning system
CN110740356A (en) Live broadcast data monitoring method and system based on block chain
CN116089224B (en) Alarm analysis method, alarm analysis device, calculation node and computer readable storage medium
WO2018182442A1 (en) Machine learning system and method for generating a decision stream and automonously operating device using the decision stream
CN102546235A (en) Performance diagnosis method and system of web-oriented application under cloud computing environment
CN112149818B (en) Threat identification result evaluation method and device
CN110457332B (en) Information processing method and related equipment
CN112152968B (en) Network threat detection method and device
CN116991455A (en) API asset identification method and device
CN115098362B (en) Page test method, page test device, electronic equipment and storage medium
CN110880117A (en) False service identification method, device, equipment and storage medium
CN109587198B (en) Image-text information pushing method and device
CN111935279B (en) Internet of things network maintenance method based on block chain and big data and computing node
CN114090372A (en) Fault processing method and device
CN112925831A (en) Big data mining method and big data mining service system based on cloud computing service
CN114417817B (en) Session information cutting method and device
CN117952547A (en) Intelligent examination and approval method and device for express expense based on machine learning
CN116108331A (en) Method and device for generating industrial equipment monitoring data prediction curve
CN112765196A (en) Data processing and data recognition method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant