CN112417440B - Method for reducing false alarm rate of system - Google Patents
- Publication number
- CN112417440B CN202011105390.0A
- Authority
- CN
- China
- Prior art keywords
- instruction
- white list
- false alarm
- training
- characteristic value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/52—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow
- G06F21/54—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow by adding security routines or objects to programs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/57—Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
- G06F21/577—Assessing vulnerabilities and evaluating computer system security
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/64—Protecting data integrity, e.g. using checksums, certificates or signatures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses a method for reducing the false alarm rate of a system, comprising the following specific steps: (1) constructing a white list model; (2) analyzing program dependencies; (3) constructing an instruction list; (4) comparing instruction lists; (5) performing false alarm analysis. The step (2) comprises: step (2.1) dividing the instructions in a function into a plurality of instruction units; step (2.2) building a corresponding instruction list for each instruction unit; step (2.3) removing instructions that do not change the original state; step (2.4) removing from the preceding instruction unit the instruction units corresponding to variables that are redefined but not used in subsequent instruction units. According to the invention, by analyzing program dependency relationships against the white list, analysis speed is improved, false alarms are effectively analyzed and handled, and the false alarm rate is reduced.
Description
Technical Field
The invention relates to the technical field of program analysis, in particular to a method for reducing the false alarm rate of a system.
Background
Program dependencies consist mainly of the control dependencies and data dependencies of a program: changing one class may force a change in another. For example, the Cart class depends on the Product class because Product is used as a parameter of the "add" operation in Cart. A user class temporarily depends on a provider class when it performs one of the following operations: using the provider class with global scope, using the provider class as a parameter of one of its operations, using the provider class as a local variable of one of its operations, or sending a message to the provider class. Dependencies between modules arise mainly through data references and function calls; whether the degree of coupling between modules is reasonable is judged mainly by how easily a change propagates. Calls between software modules fall into three modes: synchronous calls, callbacks, and asynchronous calls (the send-callback mechanism of asynchronous messages). A synchronous call is a one-way dependency; a callback is a bi-directional dependency; an asynchronous call is usually accompanied by a message registration operation and is therefore also bi-directional in nature. If an object is a member variable of another object, the first object can use the functions and data provided by the second. Conversely, depending too heavily on an object means bearing the full impact of every change to it: if the object is stable, such as an object from a standard library, the impact is small; if the object is volatile, the consequences are serious, because every change to it forces the associated classes to be modified. Member-variable dependencies are therefore tight. Two further kinds of dependency occur in methods: parameters and return values.
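The Cart/Product example above can be sketched in code; the class and method names below are illustrative only and are not part of any claimed embodiment:

```python
class Product:
    def __init__(self, name, price):
        self.name = name
        self.price = price


class Cart:
    """Cart depends on Product: Product appears as a parameter of add()."""

    def __init__(self):
        self.items = []

    def add(self, product):
        # Parameter dependency: changing Product may force changes here.
        self.items.append(product)

    def total(self):
        # Return-value dependency: callers rely on this returning a number.
        return sum(p.price for p in self.items)


cart = Cart()
cart.add(Product("pen", 2.5))
cart.add(Product("pad", 4.0))
print(cart.total())  # 6.5
```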
Program dependency analysis is generally used to detect whether variables are affected by input data. It is widely applied to the detection of attacks and vulnerabilities and has become an important method in the program analysis field.
The concept of a whitelist is the counterpart of a "blacklist". The principle is to identify whether a process or file in a system has approved properties: a common process name, file name, publisher name, or digital signature. Whitelisting techniques let enterprises approve which processes are allowed to run on a particular system. Some vendor products cover only executable files, while others also cover scripts and macros and can block a wider range of files. An increasingly popular whitelisting approach, known as "application control", focuses specifically on managing the behavior of endpoint applications.
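A minimal sketch of the "application control" idea, assuming a whitelist that stores approved (process name, publisher) pairs; real products additionally verify digital signatures and file hashes:

```python
# Hypothetical approved properties; illustrative entries only.
APPROVED = {
    ("svchost.exe", "Microsoft"),
    ("backup.sh", "OpsTeam"),
}


def is_allowed(process_name, publisher):
    """Application control: permit only processes with approved properties."""
    return (process_name, publisher) in APPROVED


print(is_allowed("svchost.exe", "Microsoft"))  # True
print(is_allowed("dropper.exe", "Unknown"))    # False
```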
Traditional approaches to analyzing program dependency relationships mostly rely on dynamic and static analysis methods, which are slow; the analysis of large-scale programs is especially time-consuming, and false alarms cannot be handled effectively and promptly.
Disclosure of Invention
The present invention is directed to a method for reducing the false alarm rate of a system, so as to solve the problems set forth in the background art.
In order to achieve the above purpose, the present invention provides the following technical solutions: a method for reducing the false alarm rate of a system comprises the following specific steps:
(1) Constructing a white list model;
(2) Analyzing the program dependency relationship;
(3) Constructing an instruction list;
(4) Comparing instruction lists;
(5) And performing false alarm analysis.
Preferably, in the step (1), the method includes the following steps: step (1.1), constructing a white list training module and a white list model through an artificial neural network; and (1.2) constructing a white list dependency characteristic value set M and a dependency index set Q in the white list model.
Preferably, the artificial neural network in the step (1.1) includes a plurality of convolution layers.
Preferably, the white list training module trains the white list model through a set training algorithm, and when the training result converges, the training is ended.
Preferably, the training of the white list comprises white list identification training, white list tampering detection training and white list updating training.
Preferably, in the step (2), analyzing the program dependency relationship comprises the following steps: step (2.1) dividing the instructions in a function into a plurality of instruction units, and independently assigning each instruction unit an instruction characteristic value Y; step (2.2) building a corresponding instruction list for each instruction unit; step (2.3) analyzing the instruction characteristic value of each instruction unit to remove instructions that do not change the original state; and step (2.4) analyzing the instruction characteristic value of each instruction unit, and removing from the preceding instruction unit the instruction units corresponding to variables that are redefined but not used in subsequent instruction units.
Preferably, in the step (2.1), the instruction characteristic value Y = (y1, y2, y3), wherein y1 is the instruction unit feature value, y2 is the calling relation feature value, and y3 is the called relation feature value.
Preferably, in the step (3), a set N for recording instruction characteristic values is defined, the instruction list is defined as P, and P is a set formed by a plurality of instruction characteristic value sets N.
Preferably, the step (4) specifically comprises the following steps: step (4.1) constructing a comparison buffer area; step (4.2) constructing a white list model interface; step (4.3) calling the white list dependency characteristic value set M; step (4.4) comparing M with P through a comparison algorithm to obtain a dependence index q; step (4.5) putting q into Q for comparison and returning the result.
Preferably, in the step (5), if the returned result value is 0, a false alarm may be determined and the alarm is stopped; if the returned result value is 1, a non-false alarm may be determined.
Compared with the prior art, the invention has the beneficial effects that:
1. according to the invention, a white list training module and a white list model are constructed through an artificial neural network; training algorithms are set in the white list training module, and different training algorithms are selected according to the training content to train the white list model. White list identification training improves the white list identification capability of the model; white list tampering detection training improves the model's ability to judge whether the white list has been tampered with, improving the security of the white list model; and white list updating training gives the model the capacity to update the white list autonomously, improving the comprehensiveness of its recognition;
2. the invention simultaneously constructs an instruction list; when comparing instruction lists, a comparison buffer area is constructed, a white list model interface is built, the white list dependency characteristic value set M is called, the dependence index q is obtained by comparing M with P through a comparison algorithm, q is put into Q for comparison, and the result is returned; if the returned result value is 0, a false alarm is determined and the alarm is stopped; if the returned result value is 1, a non-false alarm is determined. Comparison and analysis operations can be executed rapidly through the white list model, saving program dependency analysis and comparison time, and the white list model can update and optimize itself, so that the speed of judgment and analysis improves continuously, false alarms are handled rapidly, and the false alarm rate is reduced.
Drawings
Fig. 1 is a block diagram of the overall steps of a method for reducing the false alarm rate of a system according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, the present invention provides a technical solution: a method for reducing the false alarm rate of a system comprises the following specific steps: (1) constructing a white list model; (2) analyzing program dependencies; (3) constructing an instruction list; (4) comparing instruction lists; and (5) performing false alarm analysis.
The step (1) comprises the following steps: step (1.1), constructing a white list training module and a white list model through an artificial neural network; and (1.2) constructing a white list dependency characteristic value set M and a dependency index set Q in the white list model.
The artificial neural network in the step (1.1) comprises a plurality of convolution layers.
The white list training module trains the white list model through a set training algorithm, and when the training result converges, the training is ended.
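The invention does not specify a convergence criterion; the sketch below assumes training ends when the absolute change in loss falls below a tolerance, which is one common choice:

```python
def train_until_converged(step, tol=1e-4, max_epochs=1000):
    """Run training steps until the change in loss falls below tol.

    `step` is any callable returning the current loss. The absolute-delta
    criterion is an illustrative assumption; the invention only states
    that training ends when the training result converges.
    """
    prev = float("inf")
    for epoch in range(max_epochs):
        loss = step()
        if abs(prev - loss) < tol:
            return epoch, loss
        prev = loss
    return max_epochs, prev


# Toy "training" whose loss decays geometrically toward zero.
state = {"loss": 1.0}


def toy_step():
    state["loss"] *= 0.5
    return state["loss"]


epochs, final_loss = train_until_converged(toy_step)
print(epochs, final_loss)
```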
The training of the whitelist comprises whitelist identification training, whitelist tampering detection training and whitelist updating training.
In the step (2), analyzing the program dependency relationship comprises the following steps: step (2.1) dividing the instructions in a function into a plurality of instruction units, and independently assigning each instruction unit an instruction characteristic value Y; step (2.2) building a corresponding instruction list for each instruction unit; step (2.3) analyzing the instruction characteristic value of each instruction unit to remove instructions that do not change the original state; and step (2.4) analyzing the instruction characteristic value of each instruction unit, and removing from the preceding instruction unit the instruction units corresponding to variables that are redefined but not used in subsequent instruction units.
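Steps (2.1) through (2.4) can be sketched as follows; the tuple form of the characteristic value Y and the dead-definition rule used for pruning are illustrative assumptions, since the invention does not define them concretely:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class InstructionUnit:
    """One instruction unit with its characteristic value Y = (y1, y2, y3)."""
    text: str
    defines: Optional[str]  # variable defined by the instruction, if any
    uses: frozenset         # variables read by the instruction
    y: tuple                # (unit value, calling relation, called relation)


def prune(units):
    """Steps (2.3)-(2.4): drop no-ops, then dead redefinitions.

    A unit is a no-op if it defines and uses nothing (the original state
    is unchanged); an earlier definition is dead if the same variable is
    redefined later without being used in between.
    """
    # Step (2.3): remove instructions that do not change the original state.
    live = [u for u in units if u.defines or u.uses]

    # Step (2.4): scan backwards; keep only the latest unused definition.
    kept, pending = [], set()  # pending = vars redefined later, not yet used
    for u in reversed(live):
        if u.defines is not None and u.defines in pending:
            continue           # the earlier definition is dead: drop it
        pending -= u.uses      # a use revives earlier definitions
        if u.defines is not None:
            pending.add(u.defines)
        kept.append(u)
    return list(reversed(kept))


units = [
    InstructionUnit("x = 1", "x", frozenset(), (1, 0, 0)),
    InstructionUnit("nop", None, frozenset(), (0, 0, 0)),
    InstructionUnit("x = 2", "x", frozenset(), (1, 0, 0)),
    InstructionUnit("y = x + 1", "y", frozenset({"x"}), (1, 1, 0)),
]
print([u.text for u in prune(units)])  # the dead 'x = 1' and 'nop' are dropped
```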
In the step (2.1), the instruction characteristic value Y = (y1, y2, y3), wherein y1 is the instruction unit feature value, y2 is the calling relation feature value, and y3 is the called relation feature value.
In the step (3), a set N for recording instruction characteristic values is defined, the instruction list is defined as P, and P is a set formed by a plurality of instruction characteristic value sets N.
The step (4) specifically comprises the following steps: step (4.1) constructing a comparison buffer area; step (4.2) constructing a white list model interface; step (4.3) calling the white list dependency characteristic value set M; step (4.4) comparing M with P through a comparison algorithm to obtain a dependence index q; step (4.5) putting q into Q for comparison and returning the result.
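A minimal sketch of steps (4.3) through (4.5), assuming the dependence index q is the fraction of instruction feature value sets N in P that are covered by M; the invention does not fix a concrete comparison formula, so this ratio is an assumption:

```python
def compare(M, P):
    """Hypothetical comparison algorithm: the dependence index q is the
    fraction of instruction feature value sets N in the list P that are
    covered by the white list dependency feature value set M."""
    if not P:
        return 0.0
    covered = sum(1 for N in P if N <= M)  # N <= M: N is a subset of M
    return covered / len(P)


def false_alarm_result(q, Q):
    """Step (4.5): compare q against the dependency index set Q.

    Returns 0 (false alarm, stop alarming) when q matches a whitelisted
    dependence index, otherwise 1 (treat as a genuine alarm)."""
    return 0 if q in Q else 1


M = frozenset({"read", "hash", "log"})                 # white list features
P = [frozenset({"read"}), frozenset({"read", "log"})]  # instruction list
Q = {1.0}                                              # dependency index set

q = compare(M, P)
print(q, false_alarm_result(q, Q))  # 1.0 0 -> judged a false alarm
```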
In the step (5), if the returned result value is 0, a false alarm may be determined and the alarm is stopped; if the returned result value is 1, a non-false alarm may be determined.
Working principle: a white list training module and a white list model are constructed through an artificial neural network; training algorithms are set in the white list training module, and different training algorithms are selected according to the training content to train the white list model. White list identification training improves the white list identification capability of the model; white list tampering detection training improves the model's ability to judge whether the white list has been tampered with, improving the security of the white list model; white list updating training gives the model the capacity to update the white list autonomously, improving the comprehensiveness of its recognition. A comparison buffer area is constructed, a white list model interface is built, the white list dependency characteristic value set M is called, the dependence index q is obtained by comparing M with P through a comparison algorithm, q is put into Q for comparison, and the result is returned; if the returned result value is 0, a false alarm is determined and the alarm is stopped; if the returned result value is 1, a non-false alarm is determined. Comparison and analysis operations can be executed rapidly through the white list model, saving program dependency analysis and comparison time, and the white list model can update and optimize itself, so that the speed of judgment and analysis improves continuously, false alarms are handled rapidly, and the false alarm rate is reduced.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (4)
1. A method for reducing the false alarm rate of a system is characterized by comprising the following specific steps:
(1) Building a whitelist model, comprising:
step (1.1), constructing a white list training module and a white list model through an artificial neural network;
step (1.2), a white list dependency characteristic value set M and a dependency index set Q are built in a white list model;
step (2) analyzing program dependency relationship; in the step (2), when analyzing the program dependency relationship, the method includes the following steps:
step (2.1) dividing the instructions in a function into a plurality of instruction units, and independently assigning each instruction unit an instruction characteristic value Y;
the instruction characteristic value Y = (y1, y2, y3), wherein y1 is the instruction unit feature value, y2 is the calling relation feature value, and y3 is the called relation feature value;
step (2.2) building a corresponding instruction list for each instruction unit;
step (2.3) analyzing the instruction characteristic value of each instruction unit to remove instructions that do not change the original state;
step (2.4) analyzing the instruction characteristic value of each instruction unit, and removing from the preceding instruction unit the instruction units corresponding to variables that are redefined but not used in subsequent instruction units;
(3) Constructing an instruction list, wherein a set N for recording the characteristic values of the instructions is defined in the step (3), the instruction list is defined as P, and the P is a set formed by a plurality of instruction characteristic value sets N;
(4) Comparing instruction lists;
step (4.1), constructing a comparison buffer area;
step (4.2) constructing a white list model interface;
step (4.3) calling a white list dependency characteristic value set M;
step (4.4) comparing M with P through a comparison algorithm to obtain a dependence index q;
step (4.5), putting q into Q for comparison, and returning a result; if the returned result value is 0, a false alarm is determined and the alarm is stopped; if the returned result value is 1, a non-false alarm is determined;
(5) Performing false alarm analysis;
meanwhile, an instruction list is constructed; when comparing instruction lists, a comparison buffer area is constructed, a white list model interface is built, the white list dependency characteristic value set M is called, the dependence index q is obtained by comparing M with P through a comparison algorithm, q is put into Q for comparison, and the result is returned; if the returned result value is 0, a false alarm is determined and the alarm is stopped; if the returned result value is 1, a non-false alarm is determined.
2. The method for reducing the false alarm rate of a system according to claim 1, wherein: the artificial neural network in the step (1.1) comprises a plurality of convolution layers.
3. The method for reducing the false alarm rate of a system according to claim 1, wherein: the white list training module trains the white list model through a set training algorithm, and when the training result converges, the training is ended.
4. A method for reducing the false alarm rate of a system according to claim 3, wherein: the training of the whitelist comprises whitelist identification training, whitelist tampering detection training and whitelist updating training.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011105390.0A CN112417440B (en) | 2020-10-15 | 2020-10-15 | Method for reducing false alarm rate of system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112417440A (en) | 2021-02-26 |
CN112417440B (en) | 2023-10-17 |
Family
ID=74854583
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011105390.0A Active CN112417440B (en) | 2020-10-15 | 2020-10-15 | Method for reducing false alarm rate of system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112417440B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101515320A (en) * | 2009-04-10 | 2009-08-26 | 中国科学院软件研究所 | Vulnerability testing method in attack and system thereof |
CN105897676A (en) * | 2015-12-01 | 2016-08-24 | 乐视网信息技术(北京)股份有限公司 | User resource access behavior processing method and device |
CN107241224A (en) * | 2017-06-09 | 2017-10-10 | 珠海市鸿瑞软件技术有限公司 | The network risks monitoring method and system of a kind of transformer station |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8347272B2 (en) * | 2008-07-23 | 2013-01-01 | International Business Machines Corporation | Call graph dependency extraction by static source code analysis |
US10728250B2 (en) * | 2017-07-31 | 2020-07-28 | International Business Machines Corporation | Managing a whitelist of internet domains |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||