US12056241B2 - Integrated static and dynamic analysis for malware detection - Google Patents

Integrated static and dynamic analysis for malware detection

Info

Publication number
US12056241B2
US12056241B2
Authority
US
United States
Prior art keywords
dynamic
file
verdict
classification
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US17/646,128
Other versions
US20230205883A1
Inventor
Sergey ULASEN
Vladimir STROGOV
Serguei Beloussov
Stanislav Protasov
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Acronis International GmbH
Original Assignee
Acronis International GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Acronis International GmbH filed Critical Acronis International GmbH
Priority to US17/646,128
Assigned to ACRONIS INTERNATIONAL GMBH. Assignment of assignors interest (see document for details). Assignors: BELOUSSOV, SERGUEI; PROTASOV, STANISLAV; STROGOV, VLADIMIR; ULASEN, SERGEY
Publication of US20230205883A1
Application granted
Publication of US12056241B2
Legal status: Active
Expiration: Adjusted

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/50: Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F 21/52: Monitoring during program execution, e.g. stack integrity; preventing unwanted data erasure; buffer overflow
    • G06F 21/53: Monitoring during program execution by executing in a restricted environment, e.g. sandbox or secure virtual machine
    • G06F 21/55: Detecting local intrusion or implementing counter-measures
    • G06F 21/56: Computer malware detection or handling, e.g. anti-virus arrangements
    • G06F 21/562: Static detection
    • G06F 21/565: Static detection by checking file integrity
    • G06F 21/566: Dynamic detection, i.e. detection performed at run-time, e.g. emulation, suspicious activities
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G06N 20/20: Ensemble learning
    • G06F 2221/00: Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 2221/03: Indexing scheme relating to G06F21/50, monitoring users, programs or devices to maintain the integrity of platforms
    • G06F 2221/033: Test or assess software

Definitions

  • Malware classification machine learning model 412 also receives verdicts from dynamic analysis of the given file.
  • Feature set (Type A) 420 comprises features of a first type. These features are configured for passing to a Type A dynamic analysis machine learning model 422.
  • The Type A dynamic analysis machine learning model 422 comprises a training dataset 424 and rules 426.
  • Machine learning model 422 outputs a verdict 428 based on Type A features.
  • A second feature set (Type B) 430 comprises features of a second type. These features are configured for passing to a Type B dynamic analysis machine learning model 432.
  • The Type B dynamic analysis machine learning model 432 comprises a training dataset 434 and rules 436.
  • Machine learning model 432 outputs a verdict 438 based on Type B features.
  • Malware classification model 412 is configured to classify the file and pass this classification 440 to a verification and supervising process 442.
  • This process 442 is configured to output a final verdict 444 with respect to the file.
  • The final verdict 444 for the unknown file is then added to a training dataset that can be used to update and correct loss in malware classification model 412.
  • FIG. 5 shows method 500, which starts at step 502 with classifying an unknown file with a static analysis machine learning model based on static features extracted from the file before execution.
  • The verdict of static analysis classification includes a rate of conformity to at least one class of files.
  • The unknown file is executed (run) on the target computing system.
  • System events and attributes related to file execution on a target computing system are collected at step 506.
  • The method continues with extracting dynamic features of a first dynamic feature set during execution of the file at step 508 and classifying the file with a first dynamic analysis machine learning model based on extracted dynamic features of the first dynamic feature set at step 510.
  • The verdict at step 510 includes a rate of conformity to at least one class of files.
  • At step 512, dynamic features of a second dynamic feature set are extracted during execution of the file. Then at step 514 the file is classified with a second dynamic analysis machine learning model based on extracted dynamic features of the second dynamic feature set.
  • The verdict at step 514 includes a rate of conformity to at least one class of files.
  • At step 516, the file is classified with a malware classification machine learning model based on a verdict of static analysis and a verdict of at least one dynamic analysis.
  • The result of step 516 is passed to an endpoint protection agent for processing a malware classification verdict at step 518.
  • A classification is reached at step 520 that determines whether the file is malware. If not, the method loops back to step 508 (or 512) and repeats steps 508 through 520 (or 512 through 520). If malware is detected at step 520, a detection response action is performed at step 522 to counter the malware.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Virology (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A system and method for malware detection use static and dynamic analysis to train a machine learning model. During training, static and dynamic features are extracted from training datasets and used to train a malware classification model. The malware classification model then classifies unknown files based on verdicts from both the static and dynamic models.

Description

FIELD OF THE INVENTION
The invention pertains to the field of computer security, in particular the analysis of untrusted files and processes for malicious behavior.
BACKGROUND OF THE INVENTION
Malicious processes in computer systems can be detected using dynamic analysis and static analysis. Dynamic analysis, also called "behavior analysis," focuses on how an untrusted file or process acts. Static analysis, on the other hand, is concerned with what can be known about an untrusted file or process before runtime.
Static analysis and behavior analysis are perceived as very different approaches to malware detection, which confines each tool to its own strengths. Behavior analysis, for example, although effective for detecting malware at runtime, lacks the depth of static analysis when used on its own. There is a need for more effective malware analysis tools that augment the usefulness of behavior analysis and static analysis.
SUMMARY OF THE INVENTION
Metadata from static analyzers is used during behavior analysis of an untrusted file or process. For example, static Portable Executable (PE) metadata is combined with behavioral tools such as stack traces and Application Programming Interface (API) call sequences.
The invention comprises systems and methods for detecting and classifying malware in an unknown file on a target computing system. In an embodiment, a detection and classification method comprising the following steps is executed on a processor associated with the target computing system. First, an unknown file is classified with a static analysis machine-learning model based on static features extracted from the file before execution. The verdict of static classification includes a rate of conformity to at least one class of files. Then the file is executed on the target computing system. Alternatively, the target file is executed in a secure environment, such as a sandbox or an isolated virtual machine. The secure environment preferably resembles the target computing system so that results obtained in the secure environment are generally predictive of behavior on the target computing system.
The method continues with collecting data related to file execution on the target computing system. Dynamic features of first and second dynamic feature sets are extracted from the collected data. The file is classified with a first dynamic analysis machine-learning model based on extracted dynamic features of the first dynamic feature set. The verdict of the first dynamic classification includes a rate of conformity to at least one class of files. The file is also classified with a second dynamic analysis machine-learning model based on extracted dynamic features of the second dynamic feature set. The verdict of the second dynamic classification also includes a rate of conformity to at least one class of files.
The file is then classified with a malware classification machine learning (“ML”) model based on the verdict of the static classification, the verdict of the first dynamic classification, and the verdict of the second dynamic classification. The malware classification verdict is processed by an endpoint protection agent to detect malware. A detection response action is performed at the endpoint protection agent to counter the malware.
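The summary above describes a stacked arrangement: three conformity rates (one static, two dynamic) feed a final malware classification model. A minimal Python sketch of that arrangement follows; every function body here is a hypothetical placeholder (including the string and API names), not the patented models.

```python
# Sketch of the verdict combination described above. Each base model
# returns a "rate of conformity" in [0, 1]; all function bodies are
# hypothetical placeholders rather than the patented models.

def static_verdict(strings):
    # Placeholder static model: fraction of extracted strings that are suspicious.
    suspicious = {"getstartupinfo", "getmodulefilename"}
    return sum(1 for s in strings if s in suspicious) / max(len(strings), 1)

def dynamic_verdict_a(stack_traces):
    # Placeholder Type A model: scaled maximum stack-trace depth.
    return min(1.0, max((len(t) for t in stack_traces), default=0) / 10.0)

def dynamic_verdict_b(api_calls):
    # Placeholder Type B model: fraction of risky API calls in the sequence.
    risky = {"WriteProcessMemory", "CreateRemoteThread"}
    return sum(1 for c in api_calls if c in risky) / max(len(api_calls), 1)

def malware_classification(v_static, v_dyn_a, v_dyn_b, threshold=0.5):
    # Meta-classifier over the three verdicts; a fixed weighted vote here
    # stands in for the trained malware classification ML model.
    score = 0.4 * v_static + 0.3 * v_dyn_a + 0.3 * v_dyn_b
    return "malware" if score >= threshold else "not malware"
```

In the patent the final stage is itself a trained ML model; the weighted vote is only a stand-in to show the data flow.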
DESCRIPTION OF THE DRAWINGS
FIG. 1 shows a system configuration for creating a malware classification model in accordance with the invention.
FIG. 2 shows a system configuration for feature extraction comprising dynamic and static analysis.
FIGS. 3A, 3B, and 3C show a timeline of steps implementing a method of malware classification in accordance with the invention.
FIG. 4 shows a system configuration for reaching a verdict about an unknown file in accordance with the invention.
FIG. 5 shows method steps for detecting the presence of malware in an unknown file in accordance with the invention.
DETAILED DESCRIPTION
To improve malware detection, a constructed static model is supplemented with the functions of a behavioral analyzer. The static model is built independently of behavioral attributes and creates added helper functions that identify malicious and safe files with the required accuracy.
To further improve detection, the constructed dynamic model is supplemented with the features of the static analyzer model. The dynamic model is built as if nothing is known about the static data and the static analyzer model is built independently from the dynamic analyzer. After being created in the training process, the dynamic model is supplemented with auxiliary attributes of the static analyzer. This approach improves the accuracy of the dynamic analyzer and reduces the number of false positives.
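The augmentation step described above can be pictured as extending the dynamic model's input with auxiliary static attributes after both models are built. A small sketch, with hypothetical feature names; namespacing keeps the two feature sources from colliding:

```python
# Sketch of supplementing a trained dynamic model's input with auxiliary
# static attributes. The feature names are hypothetical illustrations.

def augment_dynamic_features(dynamic_features, static_attributes):
    # Prefix each key so dynamic and static features cannot collide.
    augmented = {f"dyn:{k}": v for k, v in dynamic_features.items()}
    augmented.update({f"static:{k}": v for k, v in static_attributes.items()})
    return augmented
```

The dynamic model then consumes the augmented vector, which is the sense in which static metadata participates in behavior analysis.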
While processing files and processes, the static analyzer and the dynamic analyzer fill feature tables for system objects. These tables are used to build a machine learning model for detecting threats.
Features in this context refer to input variables used in making predictions. Examples of static features include byte n-grams and opcode n-grams. Static features also include strings. String features are based on plain text encoded into executables. Examples of strings found in a Microsoft Windows environment include “windows,” “getversion,” “getstartupinfo,” “getmodulefilename,” “message box,” “library,” and so on. Static features may also be extracted from .exe files. For example, data from a PE header describes the physical and logical structure of a PE binary. Dynamic features are extracted during runtime of an unknown file. Such features are generally function based, such as stack traces, API calls, instruction sets, control flow graphing, function parameter analysis, and system calls.
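Two of the static feature kinds named above, strings and byte n-grams, can be extracted from raw bytes in a few lines of Python. This toy extractor is an illustration, not the analyzer described in the patent; a real static analyzer would also parse the PE header.

```python
import re
from collections import Counter

# Toy extractor for two of the static features named above: printable
# strings and byte n-grams. Works on any byte sequence.

def extract_strings(data, min_len=4):
    # Runs of at least min_len printable ASCII bytes, lowercased.
    pattern = rb"[ -~]{%d,}" % min_len
    return [m.group().decode().lower() for m in re.finditer(pattern, data)]

def byte_ngrams(data, n=2):
    # Counts of overlapping n-byte subsequences.
    return Counter(data[i:i + n] for i in range(len(data) - n + 1))
```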
A machine learning model refers to a file that has been trained to recognize patterns by being passed a training dataset and being provided an algorithm that can be used to analyze and learn from that training dataset. For a supervised learning model, the training dataset includes labels. These labels correspond to the output of the algorithm. A typical model attempts to apply correct labels for the data by applying an algorithm. For example, when the training dataset comprises files to be classified, a predicted label for a given file is calculated. These calculations are then compared to the actual label for that file. The degree of error, the variation between the predicted label and the actual label, is calculated by way of another algorithm, such as a loss function. By repeated attempts (epochs) at classifying the training data, the model will iteratively improve its accuracy. When the accuracy of the model on the training data is optimal, the trained machine learning model can then be used to analyze testing data. Optimization in this context refers to a model that is trained to classify the test data with an acceptable level of accuracy but not overtrained to the point that the model is so sensitive to idiosyncrasies in the training dataset that testing dataset results suffer. Testing data refers to data that has not been seen before.
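The training loop just described (repeated epochs, a loss-driven measure of the gap between predicted and actual labels, iterative improvement) can be illustrated with single-feature logistic regression. This is a generic textbook sketch, not the patent's model.

```python
import math

# Generic illustration of the training loop described above: repeated
# epochs over labeled data, a gradient step derived from the log-loss
# (the "degree of error"), and iteratively improving weights.

def train(samples, labels, epochs=200, lr=0.5):
    w, b = 0.0, 0.0
    for _ in range(epochs):                            # one epoch = one pass
        for x, y in zip(samples, labels):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))   # predicted probability
            w -= lr * (p - y) * x                      # log-loss gradient step
            b -= lr * (p - y)
    return w, b

def predict(w, b, x):
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5 else 0
```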
Modules in this context refer to a file containing a set of functions, arrays, dictionaries, objects, and so on. In the Python language, for example, a module is created by saving program code in a file with the extension .py.
The results of classification by a machine learning model depend on the classification task. For example, in malware detection the task is to determine whether an unknown file is malware or not. To simplify calculations, the strings “malware” and “not malware” are converted to integers. In this context, the label “0” can be assigned to “not malware” and the label “1” can be assigned to “malware.” A suitable algorithm for binary classification is then chosen. Some examples of such algorithms include logistic regression, k-nearest neighbors, decision trees, support vector machines, or Bayesian networks. Alternatively, neural networks may be chosen, including neural networks configured for binary classification.
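As an illustration of the label encoding above together with one of the listed algorithms, k-nearest neighbors (here with k = 1), consider this toy classifier; the feature vectors are hypothetical two-dimensional examples.

```python
# Toy binary classifier: integer labels per the encoding above, and a
# 1-nearest-neighbor rule over hypothetical feature vectors.

LABELS = {"not malware": 0, "malware": 1}

def nearest_neighbor(training_pairs, x):
    # training_pairs: list of (feature_vector, integer_label) tuples.
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(training_pairs, key=lambda pair: sq_dist(pair[0], x))[1]
```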
FIG. 1 shows system 100, which combines static and dynamic analysis models to create a malware classification model. Static analysis machine learning ("ML") model 102 is linked with static analysis feature extractor 104 and static analysis ML module 106. Static ML module 106 is configured to pass the results of static analysis to malware classification ML module 108, and the results are stored as malware classification ML model 110. The dynamic ML model for dynamic attribute types A 112 and B 114 is linked with dynamic analysis feature extractor 116 and dynamic analysis ML module 118. Type A and type B dynamic attributes can be divided in several ways. For example, type A dynamic attributes can be stack traces and type B dynamic attributes can be API call sequences. Alternatively, type A dynamic attributes can relate to operations with files and type B dynamic attributes to operations with the registry or network. Or type A dynamic attributes can relate to file modifications and type B to reading files. Dynamic analysis ML module 118 is configured to pass the results of dynamic analysis to malware classification module 108. This malware classification module 108 is configured to be saved as malware classification machine learning model 110.
As shown in FIG. 2, system 200 is directed to analysis of unknown file 202 in storage medium 204. Static feature extractor 206 extracts features 1 to N (208, 210, 212, 214, 216, and 218). Unknown file 202 is also loaded into computer memory 205 and executed by way of application 220. System events from application 220 are recorded by system events collection module 222. Dynamic feature extractor 224 identifies dynamic features N+1, N+2, N+3, . . . N+M (226, 228, 230, 232, 234, 236). These dynamic features are grouped into categories, for example Type A 238, Type B 240, and Type C 242. The dynamic feature categories are of multiple types. For example, two, three, or more types are used. In FIG. 2, three types are shown as examples. These types are chosen from function-based features, such as stack traces, API calls, and system calls.
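The event-collection step above can be sketched as grouping recorded system events into type categories. The event kinds and the mapping to Types A, B, and C here are hypothetical illustrations.

```python
from collections import defaultdict

# Sketch of the event-collection step: recorded system events are grouped
# into dynamic feature categories. Event kinds and mapping are hypothetical.

CATEGORY = {"stack_trace": "A", "api_call": "B", "sys_call": "C"}

def group_events(events):
    # events: list of (kind, payload) tuples recorded during execution.
    grouped = defaultdict(list)
    for kind, payload in events:
        grouped[CATEGORY.get(kind, "other")].append(payload)
    return dict(grouped)
```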
FIGS. 3A, 3B, and 3C show method 300 and its steps performed over a period of time T. FIG. 3A shows a first timeline, ending at point A (301), where static features are extracted from a collection of malware sample files at step 302. Static features are then extracted from a collection of trusted files at step 304. The features extracted at steps 302 and 304 are used at step 306 to train and save a static analysis machine learning model.
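Training at step 306 can be sketched with a deliberately simple model. A nearest-centroid classifier stands in for whatever ML model is actually used; the function names and the conformity-rate formula are assumptions made for illustration:

```python
def train_static_model(malware_feats, trusted_feats):
    """Toy nearest-centroid model: one centroid per class of files."""
    def centroid(rows):
        n = len(rows)
        return [sum(col) / n for col in zip(*rows)]
    return {"malware": centroid(malware_feats), "trusted": centroid(trusted_feats)}

def static_verdict(model, feats):
    """Return a rate of conformity in [0, 1] toward the malware class."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    d_mal = dist(feats, model["malware"])
    d_tru = dist(feats, model["trusted"])
    denom = d_mal + d_tru
    return 0.5 if denom == 0 else d_tru / denom  # closer to malware => higher rate
```

The verdict is a conformity rate rather than a hard label, which matches the claim language ("a rate of conformity to at least one class of files").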
FIG. 3B continues the timeline of FIG. 3A from point A (301) to point B (303). First, a file sample is loaded for analysis at step 310, and static features are extracted from it at step 312. At step 314, a static machine learning model verdict is calculated for the extracted static features. Next, a first type of dynamic feature (Type A) is extracted from the file sample at step 315, and the static analysis machine learning model is augmented with a dynamic machine learning model based on Type A features at step 316. A second type of dynamic feature (Type B) is extracted from the same file at step 317, and the static analysis machine learning model is further augmented with a dynamic machine learning model based on Type B features at step 318. Based on the verdicts of the static analysis machine learning model and the dynamic machine learning models, a malware classification model is trained to classify the sample at step 320.
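The training at step 320 consumes verdict triples (static, Type A, Type B) rather than raw features. A minimal sketch, assuming the meta-model is just a learned decision threshold over the mean verdict (the patent does not specify the meta-model's form):

```python
def train_meta_threshold(samples):
    """samples: list of ((static_v, type_a_v, type_b_v), is_malware) pairs.
    Pick the threshold on the mean verdict that minimizes training errors."""
    def score(verdicts):
        return sum(verdicts) / len(verdicts)
    best_errors, best_t = None, None
    for i in range(101):                      # candidate thresholds 0.00 .. 1.00
        t = i / 100
        errors = sum((score(v) >= t) != label for v, label in samples)
        if best_errors is None or errors < best_errors:
            best_errors, best_t = errors, t
    return best_t
```

A production system would likely train a proper classifier (gradient boosting, logistic regression, and so on) over the verdict vector; the threshold search only illustrates that the meta-model learns from the base models' outputs.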
FIG. 3C focuses on classification of an unknown file that has been received for analysis at step 330 starting from point B (303) on the timeline. Static features are extracted from the sample file at step 332. Then the static machine learning model's verdict is calculated for the extracted static features at step 334. Dynamic features are then extracted at step 335 while the unknown file is executed. The related dynamic machine learning model calculates a verdict for the extracted dynamic features at step 336. The unknown file is then classified with the malware classification machine learning model at step 340.
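The classification flow of FIG. 3C can be glued together as below. The callables are injected so the sketch stays self-contained; their names and signatures are assumptions, not part of the claimed method:

```python
def classify_unknown_file(file_bytes, static_verdict, run_and_collect,
                          dynamic_verdict, meta_classify):
    """Sketch of FIG. 3C: static verdict, then dynamic verdict gathered
    during execution, then the final malware classification."""
    sv = static_verdict(file_bytes)        # step 334: static model verdict
    events = run_and_collect(file_bytes)   # step 335: execute and record events
    dv = dynamic_verdict(events)           # step 336: dynamic model verdict
    return meta_classify([sv, dv])         # step 340: malware classification model
```

For example, with stub models:

```python
result = classify_unknown_file(
    b"sample",
    static_verdict=lambda b: 0.9,
    run_and_collect=lambda b: ["CreateFile", "WriteFile"],
    dynamic_verdict=lambda ev: 0.8,
    meta_classify=lambda vs: sum(vs) / len(vs) >= 0.5,
)
```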
FIG. 4 shows system 400 for malware classification. Feature set 402 comprises static features 1-N, which are configured for passing to static analysis machine learning model 404. The static analysis machine learning model 404 comprises training set data 406 and rules 408. For a given file, static analysis machine learning model 404 outputs a verdict 410 based on static features. This verdict 410 is configured to be passed to malware classification model 412 comprising rules 414 and training dataset 416.
Malware classification machine learning model 412 also receives verdicts from dynamic analysis of the given file. Feature set (Type A) 420 comprises features of a first type. These features are configured for passing to a Type A dynamic analysis machine learning model 422. The Type A dynamic analysis machine learning model 422 comprises a training dataset 424 and rules 426. For the same file, machine learning model 422 outputs a verdict 428 based on Type A features. A second feature set (Type B) 430 comprises features of a second type. These features are configured for passing to a Type B dynamic analysis machine learning model 432. The Type B dynamic analysis machine learning model 432 comprises a training dataset 434 and rules 436. For the same file, machine learning model 432 outputs a verdict 438 based on Type B features.
Having received verdicts 410, 428, and 438 with respect to a given file, malware classification model 412 is configured to classify the file and pass this classification 440 to a verification and supervising process 442. This process 442 is configured to output a final verdict 444 with respect to the file. The final verdict 444 for the unknown file is then added to the training data set, which can be used to update malware classification model 412 and correct its loss.
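The feedback path in FIG. 4 can be sketched as a meta-model that both combines verdicts and accumulates verified final verdicts as new training data. The class name, threshold combiner, and method names are illustrative assumptions:

```python
class MalwareClassificationModel:
    """Toy stand-in for model 412: combines base-model verdicts and
    records verified final verdicts (444) for later retraining."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.training_set = []

    def classify(self, verdicts):
        # Combine verdicts such as 410, 428, 438 into classification 440.
        return sum(verdicts) / len(verdicts) >= self.threshold

    def add_verified_verdict(self, verdicts, final_verdict):
        # The supervising process (442) feeds the final verdict (444) back
        # as labeled training data used to update and correct the model.
        self.training_set.append((tuple(verdicts), final_verdict))
```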
FIG. 5 shows method 500, which starts at step 502 with classifying an unknown file with a static analysis machine learning model based on static features extracted from the file before execution. The verdict of static analysis classification includes a rate of conformity to at least one class of files. At step 504 the unknown file is executed (run) on the target computing system. System events and attributes related to file execution on the target computing system are collected at step 506. The method continues with extracting dynamic features of a first dynamic feature set during execution of the file at step 508 and classifying the file with a first dynamic analysis machine learning model based on extracted dynamic features of the first dynamic feature set at step 510. The verdict at step 510 includes a rate of conformity to at least one class of files. In parallel with steps 508 and 510, at step 512 dynamic features of a second dynamic feature set are extracted during execution of the file. Then at step 514 the file is classified with a second dynamic analysis machine learning model based on extracted dynamic features of the second dynamic feature set. The verdict at step 514 includes a rate of conformity to at least one class of files.
At step 516, the file is classified with a malware classification machine learning model based on a verdict of static analysis and a verdict of at least one dynamic analysis. The result of step 516 is passed to an endpoint protection agent for processing a malware classification verdict at step 518. A classification is reached at step 520 that determines whether the file is malware. If not, the method loops back to step 508 or 512 and repeats steps 508 through 520 or 512 through 520, respectively. If malware is detected at step 520, a detection response action is performed at step 522 to counter the malware.
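The repeat-until-detection loop of method 500 can be sketched as follows. The stream of verdict tuples, the combiner, and the response callback are assumed interfaces for illustration only:

```python
def monitor_execution(verdict_stream, classify, respond):
    """Sketch of method 500's loop: keep classifying verdict tuples collected
    during execution; on detection, run the response action (step 522)."""
    for verdicts in verdict_stream:    # one tuple per collection cycle (506-514)
        if classify(verdicts):         # steps 516-520: combined classification
            respond()                  # step 522: counter the malware
            return "malware"
    return "not detected"
```

A response action here might mean terminating the process, quarantining the file, or rolling back its file-system changes, depending on the endpoint protection agent's policy.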

Claims (11)

The invention claimed is:
1. A computer implemented method for detecting and classifying malware in a file on a target computing system, the method executed on a processor of the target computing system, the method comprising:
a) Classifying the file with a static analysis machine-learning model based on static features extracted from the file before execution, wherein a verdict of static classification includes a rate of conformity to at least one class of files;
b) Executing the file on a target computing system;
c) Collecting data related to file execution on a target computing system;
d) Extracting dynamic features of a first dynamic feature set from the collected data;
e) Extracting dynamic features of a second dynamic feature set from the collected data;
f) Classifying the file with a first dynamic analysis machine-learning model based on the extracted dynamic features of the first dynamic feature set, wherein a verdict of the first dynamic classification includes a rate of conformity to at least one class of files;
g) Classifying the file with a second dynamic analysis machine-learning model based on extracted dynamic features of the second dynamic feature set, wherein a verdict of the second dynamic classification includes a rate of conformity to at least one class of files; and
h) Classifying the file with a malware classification machine learning model based on the verdict of the static classification, the verdict of the first dynamic classification and the verdict of the second dynamic classification.
2. The method of claim 1, further comprising the step of processing a malware classification verdict at an endpoint protection agent to detect malware.
3. The method of claim 2, further comprising the step of performing a detection response action at the endpoint protection agent to counter the malware.
4. The method of claim 1, wherein the extracted features of the first dynamic feature set comprises at least one of stack traces, operations with files, or file modifications and wherein the extracted features of the second dynamic feature set comprises at least one of Application Program Interface calls sequences, operations with a register or network, or reading files.
5. A system for detecting and classifying malware in a file on a target computing system comprising:
a) A processor coupled to a storage device configured for storing a plurality of machine learning models;
b) A static analysis machine learning model based on static features extracted from the file before execution, wherein a verdict of static classification includes a rate of conformity to at least one class of files;
c) A target computing system for executing the file;
d) Collected data related to file execution on the target computing system;
e) Extracted dynamic features of a first dynamic feature set from the collected data;
f) Dynamic features of a second dynamic feature set extracted from the collected data;
g) A first dynamic analysis machine-learning model based on the extracted dynamic features of the first dynamic feature set, wherein a verdict of the first dynamic classification includes a rate of conformity to at least one class of files; and
h) A second dynamic analysis machine-learning model based on the extracted dynamic features of the second dynamic feature set, wherein a verdict of the second dynamic classification includes a rate of conformity to at least one class of files; and
i) A malware classification machine learning model based on the verdict of the static classification, the verdict of the first dynamic classification and the verdict of the second dynamic classification.
6. The system of claim 5, wherein the target computing system further comprises an endpoint agent for performing detection of malicious files based on the verdicts of the static analysis machine learning model and the first and second dynamic analysis machine learning models.
7. The system of claim 6, wherein the endpoint agent evaluates independently the verdicts of the static analysis machine learning model and the first and second dynamic analysis machine learning models.
8. A computer implemented method for detecting and classifying malware in a file on a target computing system, the method executed on a processor of the target computing system, the method comprising:
a. Classifying the file with a static analysis machine-learning model trained on static features extracted from the file before execution, wherein a verdict of static classification includes a rate of conformity to at least one class of files;
b. Executing the file on a target computing system;
c. Collecting data related to file execution on a target computing system;
d. Extracting dynamic features of a first dynamic feature set from the collected data;
e. Extracting dynamic features of a second dynamic feature set from the collected data;
f. Classifying the file with a first dynamic analysis machine-learning model trained independently of the static analysis machine learning model on the extracted dynamic features of the first dynamic feature set, wherein a verdict of the first dynamic classification includes a rate of conformity to at least one class of files;
g. Classifying the file with a second dynamic analysis machine-learning model trained independently of the static analysis machine learning model and the first dynamic analysis machine-learning model, wherein the second dynamic analysis machine learning model is trained on the extracted dynamic features of the second dynamic feature set, and wherein a verdict of the second dynamic classification includes a rate of conformity to at least one class of files; and
h. Classifying the file with a malware classification machine learning model based on the verdict of the static classification, the verdict of the first dynamic classification and the verdict of the second dynamic classification.
9. The method of claim 8, further comprising the step of processing a malware classification verdict at an endpoint protection agent to detect malware.
10. The method of claim 9, further comprising the step of performing a detection response action at the endpoint protection agent to counter the malware.
11. The method of claim 8, wherein the extracted features of the first dynamic feature set comprises at least one of stack traces, operations with files, or file modifications and wherein the extracted features of the second dynamic feature set comprises at least one of Application Program Interface calls sequences, operations with a register or network, or reading files.
US17/646,128 2021-12-27 2021-12-27 Integrated static and dynamic analysis for malware detection Active 2042-07-30 US12056241B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/646,128 US12056241B2 (en) 2021-12-27 2021-12-27 Integrated static and dynamic analysis for malware detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US17/646,128 US12056241B2 (en) 2021-12-27 2021-12-27 Integrated static and dynamic analysis for malware detection

Publications (2)

Publication Number Publication Date
US20230205883A1 US20230205883A1 (en) 2023-06-29
US12056241B2 true US12056241B2 (en) 2024-08-06

Family

ID=86897839

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/646,128 Active 2042-07-30 US12056241B2 (en) 2021-12-27 2021-12-27 Integrated static and dynamic analysis for malware detection

Country Status (1)

Country Link
US (1) US12056241B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12229261B1 (en) * 2024-05-03 2025-02-18 Halcyon Tech, Inc. Antiransomware file analysis and scoring

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12518012B2 (en) * 2023-10-19 2026-01-06 Google Llc Structure-aware neural networks for malware detection
CN117521068B (en) * 2023-12-08 2024-07-26 北京云弈科技有限公司 Linux host malicious software detection method, system, device and medium


Patent Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8775333B1 (en) 2008-08-20 2014-07-08 Symantec Corporation Systems and methods for generating a threat classifier to determine a malicious process
US8122509B1 (en) 2009-03-30 2012-02-21 Kaspersky Lab, Zao Method for accelerating hardware emulator used for malware detection and analysis
US8401982B1 (en) 2010-01-14 2013-03-19 Symantec Corporation Using sequencing and timing information of behavior events in machine learning to detect malware
US20150096022A1 (en) * 2013-09-30 2015-04-02 Michael Vincent Dynamically adaptive framework and method for classifying malware using intelligent static, emulation, and dynamic analyses
US10637874B2 (en) * 2016-09-01 2020-04-28 Cylance Inc. Container file analysis using machine learning model
US20190034632A1 (en) 2017-07-25 2019-01-31 Trend Micro Incorporated Method and system for static behavior-predictive malware detection
US20190044964A1 (en) 2017-08-03 2019-02-07 International Business Machines Corporation Malware Clustering Approaches Based on Cognitive Computing Techniques
US20190132334A1 (en) * 2017-10-27 2019-05-02 Fireeye, Inc. System and method for analyzing binary code for malware classification using artificial neural network techniques
CN109840417A (en) 2017-11-28 2019-06-04 清华大学 A kind of malware detection method and device
US20200004956A1 (en) * 2018-06-29 2020-01-02 AO Kaspersky Lab System and method for detecting malicious files using two-stage file classification
US10997291B2 (en) * 2018-07-19 2021-05-04 Juniper Networks, Inc. Extending dynamic detection of malware using static and dynamic malware analyses
US20200026851A1 (en) * 2018-07-19 2020-01-23 Juniper Networks, Inc. Extending dynamic detection of malware using static and dynamic malware analyses
US20200082083A1 (en) 2018-09-06 2020-03-12 Wins Co., Ltd. Apparatus and method for verifying malicious code machine learning classification model
CN109492395A (en) 2018-10-31 2019-03-19 厦门安胜网络科技有限公司 A kind of method, apparatus and storage medium detecting rogue program
US10880328B2 (en) 2018-11-16 2020-12-29 Accenture Global Solutions Limited Malware detection
US20200175152A1 (en) 2018-11-29 2020-06-04 Palo Alto Networks, Inc. Application-level sandboxing on devices
US20200210575A1 (en) * 2018-12-28 2020-07-02 Mcafee, Llc Methods and apparatus to detect adversarial malware
RU2739865C2 (en) 2018-12-28 2020-12-29 Акционерное общество "Лаборатория Касперского" System and method of detecting a malicious file
KR20200109677A (en) 2019-03-14 2020-09-23 주식회사 에프원시큐리티 An apparatus and method for detecting malicious codes using ai based machine running cross validation techniques
CN110674497A (en) 2019-09-27 2020-01-10 厦门安胜网络科技有限公司 Malicious program similarity calculation method and device
KR20210089849A (en) 2020-01-09 2021-07-19 주식회사 씨티아이랩 Malware Detection System and Method based on API Function Extraction
CN111680297A (en) 2020-07-09 2020-09-18 腾讯科技(深圳)有限公司 Method, device and electronic device for detecting script file based on artificial intelligence
US20220237289A1 (en) * 2021-01-27 2022-07-28 AVAST Software s.r.o. Automated malware classification with human-readable explanations
US20230205880A1 (en) * 2021-12-27 2023-06-29 Acronis International Gmbh Augmented machine learning malware detection based on static and dynamic analysis

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Jaime Devesa, et. al: "Automatic Behaviour-based Analysis and Classification System for Malware Detection", Retrieved from the Internet: URL: https://www.researchgate.net/publication/220708645_Automatic_Behaviour-based_Analysis_and_Classification_System_for_Malware_Detection.
Joshua Cannell: "Five PE Analysis Tools Worth Looking At", May 28, 2014, Retrieved from the Internet: URL: https://www.malwarebytes.com/blog/news/2014/05/five-pe-analysis-tools-worth-looking-at.
Kaspersky: "Emulator", Retrieved from the Internet: URL: https://www.kaspersky.com/enterprise-security/wiki-section/products/emulator.
Kaspersky: "Sandbox", Retrieved from the Internet: URL: https://www.kaspersky.com/enterprise-security/wiki-section/products/sandbox.
Kaspersky: "System behavior analyzer", Retrieved from the Internet: URL: https://support.kaspersky.com/KESWin/10SP2/en-us/128012.htm.
Subash Poudyal: "PEFile Analysis: A Static Approach to Ransomware Analysis", Retrieved from the Internet: URL: https://www.researchgate.net/publication/336813424_PEFile_Analysis_A_Static_Approach_To_Ransomware_Analysis.


Also Published As

Publication number Publication date
US20230205883A1 (en) 2023-06-29

Similar Documents

Publication Publication Date Title
US11977633B2 (en) Augmented machine learning malware detection based on static and dynamic analysis
Demırcı et al. Static malware detection using stacked BiLSTM and GPT-2
Demirkıran et al. An ensemble of pre-trained transformer models for imbalanced multiclass malware classification
Aslan et al. A new malware classification framework based on deep learning algorithms
US20240220617A1 (en) Deep learning based detection of malicious shell scripts
Cakir et al. Malware classification using deep learning methods
US12056241B2 (en) Integrated static and dynamic analysis for malware detection
Kakisim et al. Sequential opcode embedding-based malware detection method
Ghiasi et al. Dynamic VSA: a framework for malware detection based on register contents
Abawajy et al. Iterative classifier fusion system for the detection of Android malware
Rahul et al. Analysis of machine learning models for malware detection
Yesir et al. Malware detection and classification using fasttext and bert
CN113709134A (en) Malicious software detection method and system based on N-gram and machine learning
CN111753290B (en) Software type detection method and related equipment
Chowdhury et al. Protecting data from malware threats using machine learning technique
CN114969755A (en) Cross-language unknown executable program binary vulnerability analysis method
Or-Meir et al. Pay attention: Improving classification of pe malware using attention mechanisms based on system call analysis
US12265618B1 (en) System and method for efficient malicious code detection and malicious open-source software package detection using large language models
O'Kane et al. N-gram density based malware detection
Park et al. Birds of a feature: Intrafamily clustering for version identification of packed malware
Zhao et al. Malware detection using machine learning based on the combination of dynamic and static features
Seas et al. Automated vulnerability detection in source code using deep representation learning
Pektaş et al. Runtime-behavior based malware classification using online machine learning
Meng et al. A survey on machine learning-based detection and classification technology of malware
US20220129550A1 (en) Method for constructing behavioural software signatures

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: ACRONIS INTERNATIONAL GMBH, SWITZERLAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ULASEN, SERGEY;STROGOV, VLADIMIR;BELOUSSOV, SERGUEI;AND OTHERS;REEL/FRAME:063984/0947

Effective date: 20230508

STPP Information on status: patent application and granting procedure in general

Free format text: EX PARTE QUAYLE ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO EX PARTE QUAYLE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

ZAAA Notice of allowance and fees due

Free format text: ORIGINAL CODE: NOA

ZAAB Notice of allowance mailed

Free format text: ORIGINAL CODE: MN/=.

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE