CN114091046A - System and method for identifying encryptor encoding files of computer system - Google Patents


Info

Publication number
CN114091046A
CN114091046A (application CN202110685763.4A)
Authority
CN
China
Prior art keywords
file
identifying
identified
files
suspicious process
Prior art date
Legal status
Pending
Application number
CN202110685763.4A
Other languages
Chinese (zh)
Inventor
叶夫根尼·I·洛帕廷
德米特里·A·康德拉泰耶夫
Current Assignee
Kaspersky Lab AO
Original Assignee
Kaspersky Lab AO
Priority date
Filing date
Publication date
Priority claimed from RU2020128090A (RU2770570C2)
Application filed by Kaspersky Lab AO filed Critical Kaspersky Lab AO
Publication of CN114091046A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/602 - Providing cryptographic facilities or services
    • G06F 16/23 - Updating (information retrieval; database structures therefor)
    • G06F 18/24 - Classification techniques (pattern recognition)
    • G06F 21/566 - Dynamic detection, i.e. detection performed at run-time, e.g. emulation, suspicious activities
    • G06F 21/568 - Eliminating virus, restoring damaged files
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning
    • G06N 3/02 - Neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Virology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioethics (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Medical Informatics (AREA)
  • Storage Device Security (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present application relates to systems and methods for identifying an encryptor that encodes files of a computer system. An exemplary method comprises: identifying one or more files for which data entry was performed by a suspicious process; for each identified file, determining characteristics of the identified file; identifying a category of file modification using a trained machine learning model and the corresponding characteristics of the identified files; identifying the suspicious process as being associated with the encryptor based on the identified categories of file modification; and protecting the computer system from the encryptor.

Description

System and method for identifying encryptor encoding files of computer system
Technical Field
The present invention relates to the field of data security and, more particularly, to a system and method for identifying an encryptor that encodes files of a computer system.
Background
The rapid development of computer technology and the widespread distribution of computing devices (personal computers, laptops, tablets, smartphones, etc.) in recent years have strongly stimulated the use of these devices in various fields of activity and for a large number of tasks (from the processing and storage of private photographs to bank transfers and electronic document management). The growth in the number of computing devices and of the software running on them has been accompanied by a rapid growth in the amount of unwanted software.
There are currently many different types of unwanted software, most of which are intended to generate profit for their creators. One type steals a user's personal and confidential data (e.g., login names and passwords, bank account details, electronic documents, etc.) from the user's device. Another type forms a so-called botnet from users' devices to attack other computers, computer networks, or Web resources (e.g., through a denial-of-service attack or a brute-force attack). A third type imposes paid content on users by means of continuous advertising, paid subscriptions, sending SMS messages to premium-rate numbers, and the like.
One type of unwanted software comprises programs whose purpose is ransom (referred to as ransomware). When ransomware appears on a user's device, it may render the device inoperable (e.g., by blocking input to the device, corrupting data, restricting access to interface elements, etc.). The victim is typically urged to pay for the restoration of access to his/her files, but even then the malicious party does not always return control of the data or device to its legitimate owner. The most dangerous ransomware includes harmful software (encryptors) that encrypts files. Its harmful actions include destroying data that is valuable to the user (e.g., databases, Microsoft Office documents, photographs, video files, etc.). Data is corrupted by encrypting, renaming, or hiding the files containing the data. Protecting data is an important task, as the confidentiality and integrity of data are often highly valued.
One way to combat the above-mentioned threats is to detect harmful applications on the user's device in a timely manner and then deactivate them, thereby protecting the data from unauthorized modification, while also periodically creating backup copies of the data so that the data can be restored even in the event of unauthorized modification. However, new forms of encryptors are continually being created, and with their advent, new signatures must be written periodically to identify them. Significant computing resources are consumed in the task of updating the signatures. Moreover, in some cases the encryptor unpacks itself after launch using a unique unpacker, which adds complexity to the signature-based detection needed to ensure computational security against such attacks.
Another approach is based on behavior detection. Behavior detection is more flexible than signature detection, mainly because it monitors features characteristic of encryptor activity, such as the modification of a large number of files, which provides an opportunity to detect even previously unseen encryptors. However, these methods also have disadvantages.
First, a distinguishing feature of encryptors is that they deny access to many or even all files of a given type (images, financial documents, etc.) on the hard disk. By the time such behavior is detected, some data will already have been encrypted. To counteract the operation of the encryptor, backup copies of the affected files may be used, but this consumes further resources and therefore slows down the running of legitimate software.
Second, many programs exhibit behavior in the file system that resembles encryptor behavior. For example, archiver applications have many behavioral characteristics in common with encryptors: both types of software may quickly modify a large number of user files, and the files resulting from their operation have many similar characteristics. It is difficult to distinguish an encryptor from an archiver application using fixed rules or analysis algorithms, and an archiving program may therefore be mistakenly classified as harmful. In other words, class I errors (false positives) may increase. However, files produced by malicious software exhibit certain differences from legitimate user files, and it has been proposed that encrypted files be searched for these differences in order to detect Trojan encryptor activity.
However, most existing methods of detecting encryptors do not take advantage of the key feature of harmful encryption software, namely the creation of files on the victim's computer whose information cannot be read by the user without decryption by the malicious party. This creates a technical problem: a high level of class I errors (false positives) and class II errors (false negatives) in identifying unwanted software that encrypts files of a computer system.
Therefore, there is a need for a more optimized and efficient way to detect encryptors using the latest machine learning methods. In other words, there is a need for a system and method of identifying an encryptor that encodes files of a computer system.
Disclosure of Invention
Aspects of the present invention relate to data security, and more particularly, to systems and methods for identifying an encryptor that encodes a file for a computer system.
In one exemplary aspect, there is provided a method for identifying an encryptor encoding files of a computer system, the method comprising: identifying one or more files for which data entry was performed by a suspicious process; for each identified file, determining characteristics of the identified file; identifying a category of file modification using a trained machine learning model and the corresponding characteristics of the identified files; identifying the suspicious process as being associated with the encryptor based on the identified categories of file modification; and protecting the computer system from the encryptor.
In one aspect, the method further comprises: harmful software is detected by sequentially inspecting all processes of a computer system, wherein the sequential inspection includes identifying the inspected process as a suspicious process.
In an aspect, the suspicious process is associated with an updatable list of predetermined processes.
In one aspect, the categories of file modifications include at least one category of file modifications made by the encryptor and at least one other category of file modifications made by legitimate software.
In an aspect, the protection of the computer system comprises at least one of: stopping the suspicious process and all threads and other processes associated with the suspicious process; deleting or quarantining the file used to start the suspicious process; restoring, from a backup copy, the one or more files for which data entry was performed by the suspicious process, wherein the backup copy of the one or more files was created and stored before the suspicious process entered data into the one or more files; and updating the antivirus database and launching antivirus software to perform an on-demand scan.
In one aspect, for each identified file, the machine learning model determines a probability that the modification of the file belongs to one of the categories of file modification.
In one aspect, the method further comprises: determining, by the analyzer, the number of the one or more files for which the probability of an encryptor modification exceeds a first threshold; and identifying the suspicious process as being associated with the encryptor when the determined number of files exceeding the first threshold is greater than a second threshold.
In an aspect, identifying the suspicious process as being associated with the encryptor is performed using a trained second machine learning model that receives the identified categories of file modification as input data.
In one aspect, the trained second machine learning model also receives characteristics of the identified suspicious processes as input data.
In an aspect, the trained second machine learning model comprises a machine learning model trained based on at least one of: neural networks, decision trees, random forests, support vector machines, k-nearest neighbor methods, logistic regression methods, linear regression methods, bayesian classification methods, and gradient boosting methods.
In an aspect, identifying one or more files for which data input is performed is based on processing of system calls to operations that use streams and/or write streams.
In an aspect, identifying the one or more files includes identifying characteristics of each identified file, the characteristics including at least one of: the entropy of at least a portion of the file, metadata of at least a portion of the file, and information about an application or process that has entered data into the file.
In one aspect, a trained machine learning model for identifying a category of file modifications includes a first machine learning model based on at least one of: neural networks, decision trees, random forests, support vector machines, k-nearest neighbor methods, logistic regression methods, linear regression methods, bayesian classification methods, and gradient boosting methods.
In an aspect, identifying the suspicious process as being associated with the encryptor further comprises: identifying characteristics of the suspicious process, the characteristics including at least an identifier and a context of the suspicious process; and identifying events associated with the suspicious process, the events comprising one or more of: detections by an antivirus program, modification of the auto-launch list, Internet access, and information about the system.
According to one aspect of the invention, there is provided a system for identifying an encryptor encoding files of a computer system, the system comprising a hardware processor configured to: identify one or more files for which data entry was performed by a suspicious process; for each identified file, determine characteristics of the identified file; identify a category of file modification using a trained machine learning model and the corresponding characteristics of the identified files; identify the suspicious process as being associated with the encryptor based on the identified categories of file modification; and protect the computer system from the encryptor.
In one exemplary aspect, a non-transitory computer-readable medium is provided having stored thereon a set of instructions for identifying an encryptor that encodes files of a computer system, the set of instructions comprising instructions for: identifying one or more files for which data entry was performed by a suspicious process; for each identified file, determining characteristics of the identified file; identifying a category of file modification using a trained machine learning model and the corresponding characteristics of the identified files; identifying the suspicious process as being associated with the encryptor based on the identified categories of file modification; and protecting the computer system from the encryptor.
The method and system of the present invention are directed to providing data security in a more optimized and efficient manner by identifying unwanted software that encodes files of a computer system. The first technical result is an increased level of protection of computer system files from an encryptor, achieved by using a trained machine learning model to identify suspicious processes as being associated with the encryptor. The machine learning model receives as its input data the characteristics of files created or modified by the suspicious process. The second technical result is a reduction in class I and class II errors when using a trained machine learning model to identify suspicious processes associated with an encryptor.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate one or more exemplary aspects of the present invention and, together with the detailed description, serve to explain the principles and implementations of these exemplary aspects.
Fig. 1 shows a system for identifying encryptors.
FIG. 2a shows a first example of a computer directory after being operated on by an encryptor.
FIG. 2b shows a second example of a computer directory after being operated on by an encryptor.
FIG. 2c shows a third example of a computer directory after being operated on by an encryptor.
FIG. 2d shows a fourth example of a computer directory after being operated on by an encryptor.
FIG. 3 illustrates an exemplary protector of a computer system.
FIG. 4 illustrates an exemplary method for identifying processes associated with unwanted software encoding files of a computer system.
FIG. 5 presents an example of a general-purpose computer system on which aspects of the invention may be implemented.
Detailed Description
Exemplary aspects are described herein in the context of systems, methods, and computer programs for identifying processes associated with unwanted software encoding files of a computer system according to aspects of the present invention. Those of ordinary skill in the art will realize that the following description is illustrative only and is not intended to be in any way limiting. Other aspects will readily suggest themselves to such skilled persons having the benefit of this disclosure. Reference will now be made in detail to implementations of the exemplary aspects as illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings and the following description to refer to the same or like items.
The system and method of the present invention are capable of identifying software that causes damage to a computer system by encoding files of the computer system. The system and method also reduce class I and class II errors in the identification of suspicious processes associated with an encryptor using the trained machine learning model.
Fig. 1 illustrates an exemplary system 100 for identifying encryptors. The system 100 is implemented on a computing system (e.g., a computer) comprising real-world devices, systems, components, and combinations of components, implemented using hardware such as integrated microcircuits (application-specific integrated circuits, ASICs) or field-programmable gate arrays (FPGAs), in a combination of software and hardware (e.g., a microprocessor system and a set of program instructions), or on a neurosynaptic chip. The functions of the modules of the system may be implemented by hardware alone, or in a combined form in which some functions of the system modules are implemented by software and some by hardware. In certain aspects, some or all of the components, systems, etc. may execute on a processor of a general-purpose computer, such as the computer shown in FIG. 5. Further, system components may be implemented within a single computing device or may be distributed among multiple interconnected computing devices. Thus, the system 100 may be implemented using suitable hardware components and/or software modules, which may be arranged together or may reside at multiple locations or on multiple devices. The components and/or modules of the system may then interact or exchange data via one or more wireless communication lines, wired communication lines, cellular communication, a client/server architecture, a peer-to-peer architecture, or the like.
The system 100 includes a file handler 103 designed to identify at least one file 102 (also referred to as a modified file) to which data has been input by a suspicious process 101. Files 102 are identified based on the processing of system calls, particularly system calls for file operations used by the suspicious process 101 (e.g., the WinAPI CreateFile function). In an aspect, system calls for operations using a stream, writing a stream, etc. are also monitored. The file handler 103 is also designed to identify characteristics of each identified file 102. In an aspect, the characteristics of the file 102 include at least one of: the entropy (information entropy) of at least a portion of the file 102, and metadata (e.g., file extension, type, title, etc.) of the file 102 or of that portion of the file 102. In another aspect, the characteristics of the file 102 include information about the application or process that has entered data into the file 102. Other examples of characteristics of the file 102 are described below.
The system 100 also includes an analyzer 104, which is associated with the file handler 103 and designed to identify one or more modification categories for each file 102 identified as a modified file. The identification of the modification category is performed using a trained first machine learning model 106, which receives the above-described characteristics of the files 102 as its input data. In one aspect, the modification categories for a file 102 include at least the following categories: modifications made by an encryptor, and modifications made by legitimate software. The analyzer 104 is further designed to identify the suspicious process 101 as being associated with an encryptor based on the identified modification categories of the files 102.
The system 100 further comprises a trainer 105, the trainer 105 being designed for training the first machine learning model 106 on data of training samples comprising characteristics of files created or modified by at least one known process and associated with at least one known encryptor.
In one aspect, the first machine learning model 106 is one of the following types:
a) a neural network;
b) a decision tree;
c) a random forest;
d) a support vector machine;
e) the k-nearest neighbor method;
f) logistic regression;
g) linear regression;
h) Bayesian classification; and
i) gradient boosting.
It should be noted that, in an aspect, the training samples include only characteristics of files created or modified by at least one known process associated with at least one known encryptor. That is, the characteristics of the initial files, before they were modified by the process associated with the encryptor, are not used to train the first machine learning model 106. This approach has many advantages. In particular, the first machine learning model 106 trained on the training samples described above will have a high classification quality for encryptors and a low number of class I and class II errors, because the state of a file before its modification by the encryptor cannot, in general, be identified.
However, in another aspect, the training sample also includes characteristics of a file created or modified by at least one known process initiated by a legitimate file (application).
The first machine learning model 106 identifies whether the modification of the file belongs to one of two modification categories. In one aspect, the first machine learning model 106 used for this purpose is a classification (supervised learning) model that operates on the two categories described above. In another aspect, the first machine learning model used is a clustering model or an anomaly detection (unsupervised learning) model.
In an aspect, the first machine learning model 106 includes a learning model based on a fully connected neural network. In one aspect, the parameters of the neural network, such as the number of internal layers and neurons, the activation functions, etc., are selected to provide the best classification quality and the smallest number of class I and class II errors. In one aspect, the following functions may be used as activation functions: ReLU (rectified linear unit), softmax, the logistic function, the Heaviside function, etc.
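As an illustration of this classification step, the forward pass of a small fully connected network with ReLU and softmax activations can be sketched in pure Python. This is a minimal sketch with hand-set toy weights; the feature vector, layer sizes, and function names are illustrative assumptions, not the trained model 106 described above:

```python
import math

def relu(vector):
    return [max(0.0, x) for x in vector]

def softmax(vector):
    peak = max(vector)                      # subtract max for numerical stability
    exps = [math.exp(x - peak) for x in vector]
    total = sum(exps)
    return [e / total for e in exps]

def dense(vector, weights, biases):
    # weights: one row of input weights per output neuron
    return [sum(w * x for w, x in zip(row, vector)) + b
            for row, b in zip(weights, biases)]

def classify_modification(features, hidden_layer, output_layer):
    """Map a file-characteristic vector to probabilities over the two
    modification categories: [legitimate software, encryptor]."""
    hidden = relu(dense(features, *hidden_layer))
    return softmax(dense(hidden, *output_layer))

# Toy hand-set weights for illustration only; a real model would be
# trained on characteristics of files modified by known encryptors.
hidden_layer = ([[0.5, -0.2, 0.1], [-0.3, 0.8, 0.4]], [0.0, 0.1])
output_layer = ([[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0])
probs = classify_modification([7.9, 0.0, 1.0], hidden_layer, output_layer)
```

The softmax output sums to one, so each component can be read directly as the probability that the file modification belongs to the corresponding category.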
When an anomaly detection model is used for the classification of the first machine learning model 106, a training sample consisting of a single class may be used. That is, the training sample will include characteristics of files created or modified either by legitimate software or by an encryptor. Each group may then be checked for anomalies relative to the other: an anomaly detection corresponds to one category, while the absence of an anomaly corresponds to the other.
In one aspect, the training samples contain files created or modified by known encryptors. In this case, files created or modified by previously unknown encryptors will also be detected by the system 100, because the training attributes of the machine learning model capture characteristics common to encryptor-modified files. Thus, computer system files obtain a higher level of protection not only against known encryptors but also against previously unknown encryptors. By using the method of the present invention, class I and class II errors in identifying a suspicious process 101 associated with an encryptor may be reduced. In another aspect, the training samples include characteristics of files created or modified by legitimate software.
In yet another aspect, the trainer 105 is further designed to test the trained first machine learning model 106 on test data and to validate it on a validation sample. Testing the trained first machine learning model uses characteristics of files created or modified by at least one known process associated with at least one known encryptor, where the tested files are not in the training sample. Similarly, validation uses characteristics of files created or modified by at least one known process associated with at least one known encryptor, where the validation sample is not in the training sample.
In an aspect, for each identified file 102, the first machine learning model 106 also determines a probability that the modification of the file belongs to one of the modification categories.
In one aspect, the analyzer 104 determines, among the identified files 102, the number of files for which the probability of an encryptor modification exceeds a first threshold (e.g., 0.5). Then, when the determined number of files exceeds a second threshold (e.g., 3), the analyzer 104 identifies the suspicious process 101 as being associated with an encryptor. For example, if the number of files for which the probability of an encryptor modification exceeds 0.5 is greater than 3, the suspicious process 101 will be identified as being associated with the encryptor.
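The two-threshold decision described above can be sketched as follows. The function name is hypothetical, and 0.5 and 3 are the example threshold values from the text:

```python
def process_is_encryptor(file_probabilities, prob_threshold=0.5, count_threshold=3):
    """Count the files whose per-file probability of an encryptor
    modification (output of the first model) exceeds prob_threshold, and
    flag the process when that count exceeds count_threshold."""
    suspicious_files = sum(1 for p in file_probabilities if p > prob_threshold)
    return suspicious_files > count_threshold
```

For example, a process that modified four files with probabilities above 0.5 is flagged, while a process with only two such files is not.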
The above approach can reduce the number of false positives compared to signature and heuristic approaches that examine the modification of a file according to specified rules. When using signatures and heuristics, if a file is modified by legitimate software (e.g., an archiver) and exhibits the modification characteristics of an encryptor, the suspicious process 101 may be identified as an encryptor even though it is actually a legitimate process.
In one aspect, as described above, the present invention identifies encryptors using unique parameters and characteristics that may be used by analyzer 104. Some examples of various unique parameters and characteristics that are characteristic of the file created or modified by the encryptor are given below.
In an aspect, the file parameters and characteristics mentioned below may also be used as file characteristics to be used by the analyzer 104.
One file characteristic is entropy (information entropy). The entropy of information can be calculated by any formula known in the relevant art, in particular as follows:
H = −Σᵢ p(i) log₂ p(i)
where H is the information entropy and p (i) is the probability of occurrence of a symbol of value i.
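A straightforward computation of this quantity over the byte distribution of a file might look as follows (an illustrative sketch; the function name is not from the patent):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Information entropy H = -sum p(i) * log2 p(i), in bits per byte."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())
```

A constant file yields 0 bits per byte, while uniformly distributed bytes reach the maximum of 8; encrypted data typically lies near that maximum.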
The encryptor operates using an encryption algorithm such as AES, RSA, RC4, etc. There are various criteria for evaluating the quality of an encryption algorithm, such as a comparison of entropy values. Entropy values can be used for the identification of encrypted or random data, and particular entropy values are characteristic of various file formats. However, identifying the file modified by the encryptor based only on the file entropy value will result in class I errors. This is because the archive formats "rar", "zip", "7z", etc. have entropy similar (at similar compression levels) to that of files created by a harmful encryptor, but are distinguished by the presence of a particular file structure through which the files can be decompressed. Special attention must be paid to formats such as "docx", because they are themselves data archives, are widely used, and may legitimately be encrypted. It must also be remembered that there are a large number of well-known file formats and that companies may create special formats for internal use. Therefore, comparing a program-created file against a set of known formats is ineffective as a means of identifying unwanted activity and encryptors.
Additional difficulties are caused by encryptors that do not modify the entire file but encrypt only a portion of it. For this case, the entropy of various parts of the file may be calculated. For example, a file may be divided into a plurality of parts of equal size, and the entropy of each part of the file (hereinafter referred to as the segment entropy) may be calculated. In many data archives, a portion of the file preserves the original file names and therefore has low entropy, but this rarely occurs with an encryptor. Thus, the minimum segment entropy can be used as another file characteristic.
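The segment-entropy characteristic can be sketched as follows (an illustrative sketch; the segment count of 8 and the helper names are assumptions):

```python
import math
from collections import Counter

def _entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def segment_entropies(data: bytes, segments: int = 8) -> list:
    """Divide the data into equal-sized parts and compute the entropy of
    each part. The minimum segment entropy is the candidate characteristic:
    archives often keep a low-entropy region (e.g. headers storing original
    file names), while encryptor output tends to be uniformly high-entropy."""
    size = max(1, len(data) // segments)
    parts = [data[i:i + size] for i in range(0, len(data), size)]
    return [_entropy(p) for p in parts]

# A synthetic file: a low-entropy header followed by a varied tail.
sample = bytes(64) + bytes(range(192))
min_segment_entropy = min(segment_entropies(sample))
```

For this synthetic file the zero-filled header drives the minimum segment entropy to 0, even though the tail segments have high entropy.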
The entropy value is affected by the file size; thus, the file size may also be used as a file characteristic.
In many cases, the characteristic feature of the file created by the encryptor is the particular name of the file being encrypted. Thus, the file name may also be one of the file characteristics.
FIG. 2a shows a first example of a computer directory after being operated on by an encryptor. In some cases, the e-mail address of the malicious party is present in the file name. For example, FIG. 2a shows an example of a directory after operation of a Trojan-Ransom.Win32.Crysis encryptor. Therefore, information on whether the file name contains a string conforming to the RFC 5322 mailbox format can be used as another file property.
FIG. 2b shows a second example of a computer directory after being operated on by an encryptor. In FIG. 2b, the encryptor has modified the file names, each modified name consisting of a string of hexadecimal characters. Thus, conformance or non-conformance to such a format may be used as input to the analysis portion of the system. In another aspect, parameters such as symbol ranges, the presence of a file extension or period symbol, and the like may be used. These features are not typically characteristic of legitimate files but are encountered in files encrypted by an encryptor, so files with such atypical features can be identified and placed in separate groups. It is also possible to check for special symbols, such as "[", "]", "{", "}", "@", arithmetic operation signs, etc., which are rarely encountered in legitimate file names but are present in encrypted file names. FIG. 2c shows a third example of a computer directory in which special symbols are encountered after an encryptor operation. All of the above parameters and characteristics may be used by the analyzer 104 as file characteristics.
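The file-name properties above (an RFC 5322-style address, an all-hexadecimal name, special symbols) can be sketched as simple checks. The regular expression is a deliberately simplified stand-in for full RFC 5322 mailbox parsing, and all names here are illustrative:

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")  # simplified mailbox pattern
SPECIAL_SYMBOLS = set("[]{}@+-*/=")

def filename_features(name: str) -> dict:
    """Boolean file-name characteristics of the kinds discussed above."""
    stem = name.rsplit(".", 1)[0] if "." in name else name
    return {
        "has_email": bool(EMAIL_RE.search(name)),
        "is_hex_name": bool(re.fullmatch(r"[0-9A-Fa-f]+", stem)),
        "has_special": any(ch in SPECIAL_SYMBOLS for ch in name),
        "has_extension": "." in name,
    }
```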
In executable files, data archives, and formats such as "docx" and "pdf", there are sequences of bytes that can be interpreted as character strings of a given length. In encrypted files, however, the strings are extremely short and take the form of random sequences of symbols, which are rarely encountered in legitimate files. The number of character strings, the longest string length, and the average string length may be selected as file characteristics.
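A sketch of the character-string characteristics just described, extracting printable-ASCII runs in the manner of the `strings` utility; the function name and minimum length are illustrative:

```python
import re

def string_features(data: bytes, min_len: int = 4) -> dict:
    """Find printable-ASCII runs of at least `min_len` bytes and summarize their lengths."""
    runs = re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)
    lengths = [len(r) for r in runs]
    return {
        "count": len(runs),
        "max_len": max(lengths, default=0),
        "avg_len": sum(lengths) / len(lengths) if lengths else 0.0,
    }
```

Encrypted data produces few, short runs; structured formats produce many long ones.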
In most cases, the encryptor replaces the file extension with a non-existent one, and therefore the property that the file extension corresponds to a non-existent format can also be used as a file characteristic. FIG. 2d shows a fourth example of a computer directory after being operated on by an encryptor (e.g., a fast encryptor). In this case, the file extension is long, which is not a typical characteristic of a legitimate file.
Yet another distinctive attribute of an encrypted file is the presence of a particular word, such as "lol", "fox", "ransom", and the like. When using signature-based detection methods, it is necessary to create a "white list" of allowed file extensions or a "black list" of prohibited extensions; that is, software that has created a large number of files with prohibited extensions would be considered harmful. However, creating such whitelists and blacklists is an extremely laborious task, and the lists would have to contain up-to-date information about all extensions used in both legitimate and unwanted software, which is not possible. Furthermore, there are situations in which unwanted software mimics legitimate software, in which case the encryptor will not be detected. Thus, this method is not complete in itself. However, this type of list may be added as a file characteristic to the first machine learning model 106. In this case, the file extension is compared against a scale: for example, zero represents a trusted extension, and greater values are assigned to groups with more suspect extensions. By taking other file characteristics and parameters of the first machine learning model 106 into account, the level of encryptor detection may be increased.
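The extension-scale idea can be sketched as below; the specific lists and score values are illustrative assumptions, not taken from the patent:

```python
TRUSTED_EXTENSIONS = {"txt", "docx", "pdf", "jpg", "zip"}         # illustrative "white list"
RANSOM_EXTENSIONS = {"lol", "fox", "ransom", "crysis", "locked"}  # illustrative "black list"

def extension_score(name: str) -> int:
    """0 = trusted extension; 1 = unknown; 2 = known-bad, missing, or implausibly long."""
    ext = name.rsplit(".", 1)[-1].lower() if "." in name else ""
    if not ext or len(ext) > 10 or ext in RANSOM_EXTENSIONS:
        return 2
    if ext in TRUSTED_EXTENSIONS:
        return 0
    return 1
```

This score would be only one input alongside the other file characteristics of the first machine learning model 106.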
In one aspect, if the number of files classified as modified by the encryptor exceeds a second threshold, analyzer 104 identifies suspicious process 101 as being associated with the encryptor (that is, the category of modification corresponds to the modification made by the encryptor). In a preferred aspect, when analyzer 104 identifies a modification category for each identified file 102, it is not necessary to determine a probability of belonging to a modification category.
In an aspect, the analyzer 104 is further operable to identify suspicious processes as being associated with the encryptor by using the trained second machine learning model 107. The second machine learning model 107 receives as its input data the identified modification category of each identified file 102; that is, the second machine learning model receives the results of the use of the first machine learning model 106.
In yet another aspect, file processor 103 also identifies:
a) characteristics of the suspicious process 101, in particular an identifier of the suspicious process and a context of the suspicious process;
b) events in the protector 108 (e.g., antivirus system) associated with the process under inspection, in particular, decisions of the antivirus program and various modules of the antivirus system, changes to auto-launch lists, internet access, etc.; and
c) information about the system.
In one aspect, the second machine learning model 107 also receives as input data the identified characteristics of the suspicious process, the antivirus events and system information described above. The above-mentioned anti-virus events and system information can be determined by the file processor 103 itself or by means of the protector 108.
In an aspect, the second machine learning model 107 comprises a machine learning model trained based on at least one of:
a) a neural network;
b) a decision tree;
c) a random forest;
d) a support vector machine;
e) the k-nearest neighbor method;
f) logistic regression;
g) linear regression;
h) Bayesian classification; and
i) gradient boosting.
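As a toy sketch of such a second model, here is a hand-rolled k-nearest-neighbor classifier (one of the model families listed above) over hypothetical per-process category fractions; the feature layout, training points, and labels are illustrative assumptions:

```python
from collections import Counter

def knn_predict(train, x, k=3):
    """train: list of (feature_vector, label) pairs. Returns the majority label of
    the k training points nearest to x (squared Euclidean distance)."""
    nearest = sorted(train, key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], x)))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Each process is summarized by the fraction of its files falling into each
# modification category: [encryptor-like, archive-like, other-legitimate].
train = [
    ([0.90, 0.05, 0.05], "encryptor"),
    ([0.80, 0.10, 0.10], "encryptor"),
    ([0.10, 0.20, 0.70], "legitimate"),
    ([0.00, 0.50, 0.50], "legitimate"),
]
```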
In yet another aspect, the trainer 105 is designed to train the second machine learning model 107 on data from a second training sample that includes a modification category of a file created or modified by at least one known process associated with at least one known encryptor.
In an aspect, the second training sample includes a modification category of a file created or modified by at least one known legitimate process associated with at least one known legitimate software fragment.
In another aspect, the trainer 105 is further designed to test and validate the trained second machine learning model 107. In an aspect, the testing is performed on a second test sample that includes characteristics of files created or modified by the at least one known process associated with the at least one known encryptor, wherein the files of the second test sample are not in the second training sample. Similarly, the validation is performed on a second validation sample that includes characteristics of files created or modified by the at least one known process associated with the at least one known encryptor, wherein the second validation sample is not in the second training sample. It should also be noted that the other specific implementations described above with respect to the first machine learning model 106 may also be used for the second machine learning model 107.
FIG. 3 illustrates an exemplary protector of a computer system, such as protector 108 of computer 20. The protector 108 (an antivirus program or another form of device protection) may contain modules designed to ensure the security of the device. In one example, the protector 108 includes at least one of: an on-access scanner, an on-demand scanner, e-mail antivirus software, web antivirus software, an active protection module, a HIPS (host intrusion prevention system) module, a DLP (data loss prevention) module, a vulnerability scanner, a simulator, a firewall, and the like. In one aspect, the modules may be an integral part of the protector 108. In another aspect, the modules may be implemented in the form of separate software components.
The on-access scanner contains functionality for identifying harmful activity in all files being opened, executed, and stored on a user's computer system.
An on-demand scanner differs from an on-access scanner in that it scans user-specified files and directories, for example, at the request of the user.
Email antivirus software is essential to control incoming and outgoing emails. The e-mail antivirus software may examine incoming and outgoing e-mails to determine if the e-mail contains harmful software.
Web antivirus software is used to prevent the execution of harmful code that may be present on a website accessed by a user, and to prevent the opening of such websites.
The HIPS module is used to identify any undesirable and harmful activity of a program and to block the program at the time of execution.
DLP modules are used to identify and prevent confidential data from leaking outside the boundaries of a computer or network.
The vulnerability scanner is necessary to identify vulnerabilities in the device (e.g., if certain components of the protector 108 are disabled, if the virus databases are not up to date, if a network port is closed, etc.).
Firewalls are used to control and filter network traffic according to specified rules.
The simulator operates by reproducing the guest system while executing the file's instructions, and obtains results that can then be examined in detail.
The active protection module uses behavior signatures to identify the behavior of executable files and to classify them by level of trust. It should be noted that the protector 108 may also include other modules for implementing the functionality described in FIG. 1.
Fig. 4 illustrates an exemplary method 400 for identifying an encryptor. For example, the method 400 identifies a process associated with unwanted software (e.g., an encryptor) that encodes files of a computer system. When an unknown file is executed, a new process is created (e.g., via the WinAPI CreateProcess function) with a new process identifier (PID). The method of the present invention examines the new process as a suspicious process. It should also be noted that the method also allows a process to be identified as being associated with an encryptor when the process is trusted (e.g., a system process) but creates or modifies files in the same manner as an encryptor. This situation may occur when fileless malware is used. For example, such unwanted software may use the trusted PowerShell software to execute unwanted commands. The harmful code is not stored on disk in the form of a file but is contained only in random-access memory, and thus cannot be detected by scanning the files of the computer system.
Thus, in one aspect, the method of the present invention detects fileless unwanted software by examining all processes of the computer system as suspicious processes. In another aspect, rather than examining all processes as suspicious, the method examines only those processes on a specified updatable list. The updatable list may include suspicious system processes and those processes most commonly used by encryptors. In an aspect, the method 400 for identifying an encryptor is implemented in a computing device (e.g., as shown in FIG. 5). The updatable list may be updated manually or automatically when new suspicious processes or new processes frequently used by encryptors are detected.
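The updatable-list aspect can be sketched as a simple membership check; the process names and list contents are illustrative assumptions:

```python
SUSPICIOUS_PROCESS_LIST = {"powershell.exe", "wscript.exe", "cmd.exe"}  # illustrative

def processes_to_examine(running, examine_all=False):
    """running: iterable of process names. Returns those to treat as suspicious.
    examine_all=True corresponds to the aspect that checks every process
    (useful against fileless malware)."""
    if examine_all:
        return list(running)
    return [p for p in running if p.lower() in SUSPICIOUS_PROCESS_LIST]
```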
In step 401, method 400 identifies, by file handler 103, one or more files 102 for which data input is performed by suspicious process 101. The data entry includes at least one of creating a new file and modifying an existing file. In an aspect, at least one file 102 is identified based on the processing of a system call to a file operation used by suspicious process 101.
In step 402, method 400 determines, by file processor 103, characteristics of the identified files 102 for each identified file 102 of the one or more files.
In step 403, the method 400 identifies, by the analyzer 104, a category of file modification for each identified file. The identification of file modification categories is performed using the trained first machine learning model 106 and the corresponding characteristics of the identified files. Thus, in one aspect, the first machine learning model 106 receives the above-described characteristics of the identified files 102 as its input data.
In one aspect, the categories of file modifications include at least one category of file modifications made by the encryptor and at least one other category of file modifications made by legitimate software.
In step 404, method 400 identifies, by analyzer 104, suspicious process 101 as being associated with an encryptor based on the identified category of file modification of each file 102.
In step 405, method 400 protects the computer system from the encryptor through protector 108. Thus, after suspicious process 101 is identified as being associated with an encryptor, protector 108 protects the computer system from the actions of the encryptor.
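Steps 401 to 405 can be tied together in a short sketch; `classify` and `protect` stand in for the first machine learning model 106 and the protector 108, and the threshold values are illustrative:

```python
def method_400(files_by_process, classify, protect,
               first_threshold=0.8, second_threshold=10):
    """files_by_process: {process_id: [file, ...]} for files written by suspicious processes.
    classify(file) -> probability that the file's modification was made by an encryptor."""
    for pid, files in files_by_process.items():        # step 401: identify files touched
        probs = [classify(f) for f in files]           # steps 402-403: characteristics -> category
        flagged = sum(1 for p in probs if p > first_threshold)
        if flagged > second_threshold:                 # step 404: process is encryptor-associated
            protect(pid)                               # step 405: protect the computer system
```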
In an aspect, the protection of the computer system includes at least one of:
a) stopping the suspicious process and all threads and other processes associated with the suspicious process;
b) deleting or quarantining the file that started the suspicious process;
c) restoring, from a backup copy, the one or more files for which data entry by the suspicious process was performed, wherein the backup copy of the one or more files was created and stored before data entry to the one or more files by the suspicious process occurred; and
d) updating the antivirus databases and starting the antivirus software to perform an on-demand scan.
In one aspect, for each identified file, the machine learning model determines a probability that the modification of the file belongs to one of the file modification categories.
In one aspect, the method further comprises: determining, by the analyzer, the number of the one or more files for which the probability of the encryptor having modified the file exceeds a first threshold; and identifying the suspicious process as being associated with the encryptor when the determined number of files exceeding the first threshold is greater than a second threshold.
The specific implementation examples described for the system of FIG. 1 also apply to the method of FIG. 4. Thus, the claimed system and method are capable of identifying processes associated with unwanted software (e.g., software that encrypts the files of a computer system). The level of protection of computer system files from encryptors is increased by identifying a suspicious process as being associated with an encryptor using a trained machine learning model that receives as its input the characteristics of files created or altered by the suspicious process. In addition, a reduction in class I and class II errors in identifying suspicious processes associated with encryptors is achieved, likewise through the use of a trained machine learning model that receives as its input data the characteristics of files created or modified by the suspicious process.
FIG. 5 is a block diagram illustrating a computer system 20 upon which aspects of the systems and methods for identifying encryptors encoding files of the computer system may be implemented. The computer system 20 may be in the form of multiple computing devices or may be in the form of a single computing device, such as a desktop computer, a laptop computer, a handheld computer, a mobile computing device, a smartphone, a tablet, a server, a mainframe, an embedded device, and other forms of computing devices.
As shown, computer system 20 includes a central processing unit (CPU) 21, a system memory 22, and a system bus 23 connecting the various system components, including the memory associated with CPU 21. The system bus 23 may include a bus memory or bus memory controller, a peripheral bus, and a local bus capable of interacting with any other bus architecture. Examples of the bus may include PCI, ISA, serial bus (PCI-Express), HyperTransport™, InfiniBand™, Serial ATA, I2C, and other suitable interconnects. Central processing unit 21 (also referred to as a processor) may include a single set or multiple sets of processors having a single core or multiple cores. The processor 21 may execute one or more sets of computer-executable code implementing the techniques of the present invention. The system memory 22 may be any memory for storing data used herein and/or computer programs executable by the processor 21. The system memory 22 may include volatile memory, such as Random Access Memory (RAM) 25, and non-volatile memory, such as Read-Only Memory (ROM) 24 or flash memory, or any combination thereof. A Basic Input/Output System (BIOS) 26 may store basic routines used to transfer information between elements within the computer system 20, such as those used when the operating system is loaded using ROM 24.
The computer system 20 may include one or more storage devices, such as one or more removable storage devices 27, one or more non-removable storage devices 28, or a combination thereof. The one or more removable storage devices 27 and one or more non-removable storage devices 28 are connected to the system bus 23 by way of a memory interface 32. In one aspect, the storage devices and corresponding computer-readable storage media are non-volatile modules that store computer instructions, data structures, program modules, and other data for computer system 20. A wide variety of computer-readable storage media may be used for system memory 22, removable storage devices 27, and non-removable storage devices 28. Examples of computer-readable storage media include: machine memories such as cache, SRAM, DRAM, zero capacitor RAM, twin transistor RAM, eDRAM, EDO RAM, DDR RAM, EEPROM, NRAM, RRAM, SONOS, PRAM; flash memory or other storage technologies, such as in Solid State Drives (SSDs) or flash drives; magnetic tape cartridges, magnetic tape, and magnetic disk storage, such as in a hard drive or floppy disk; optical storage, such as in a compact disc (CD-ROM) or Digital Versatile Disc (DVD); and any other medium which can be used to store the desired data and which can be accessed by computer system 20.
System memory 22, removable storage devices 27, and non-removable storage devices 28 of computer system 20 may be used to store an operating system 35, additional application programs 37, other program modules 38, and program data 39. Computer system 20 may include a peripheral interface 46 for communicating data from input devices 40, such as a keyboard, mouse, stylus, game controller, voice input device, or touch input device, or from peripheral devices such as a printer or scanner, via one or more I/O ports, such as a serial port, a parallel port, a Universal Serial Bus (USB), or another peripheral interface. A display device 47, such as one or more monitors, projectors, or an integrated display, may also be connected to the system bus 23 via an output interface 48, such as a video adapter. In addition to the display device 47, the computer system 20 may be equipped with other peripheral output devices (not shown), such as speakers and other audiovisual devices.
The computer system 20 may operate in a networked environment using network connections to one or more remote computers 49. The one or more remote computers 49 may be local computer workstations or servers including most or all of the elements described above in describing the nature of the computer system 20. Other devices may also exist in a computer network such as, but not limited to, routers, network workstations, peer devices or other network nodes. Computer system 20 may include one or more Network interfaces 51 or Network adapters for communicating with remote computers 49 via one or more networks, such as a computer Local-Area Network (LAN) 50, a computer Wide-Area Network (WAN), an intranet, and the internet. Examples of the network interface 51 may include an ethernet interface, a frame relay interface, a SONET (synchronous optical network) interface, and a wireless interface.
Aspects of the present invention may be systems, methods and/or computer program products. The computer program product may include one or more computer-readable storage media having computer-readable program instructions embodied thereon for causing a processor to perform various aspects of the present invention.
A computer readable storage medium may be a tangible device that can hold and store program code in the form of instructions or data structures that can be accessed by a processor of a computing device, such as computer system 20. The computer readable storage medium may be an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination thereof. By way of example, such computer-readable storage media may include Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), portable compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD), flash memory, hard disks, laptop disks, memory sticks, floppy disks, or even mechanical coding devices such as punch cards or raised structures in grooves on which instructions are recorded. As used herein, a computer-readable storage medium should not be considered as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or transmission medium, or an electrical signal transmitted through an electrical wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a corresponding computing device, or to an external computer or external storage device via a network (e.g., the internet, a local area network, a wide area network, and/or a wireless network). The network may include copper transmission cables, optical transmission fibers, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. A network interface in each computing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing device.
The computer-readable program instructions for carrying out operations of the present invention may be assembly instructions, Instruction-Set-Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language and a conventional procedural programming language. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a LAN or a WAN, or the connection may be made to an external computer (for example, through the Internet). In some aspects, an electronic circuit, including, for example, a Programmable Logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), can perform aspects of the invention by executing computer-readable program instructions with state information of the computer-readable program instructions to personalize the electronic circuit.
In various aspects, the systems and methods described in this disclosure may be processed in modules. The term "module" as used herein refers to, for example, a real-world device, component, or arrangement of components implemented using hardware, for example, through an Application Specific Integrated Circuit (ASIC) or FPGA, or to a combination of hardware and software, for example, implemented by a microprocessor system and a set of instructions implementing the functionality of the module, which when executed convert the microprocessor system into a dedicated device. A module may also be implemented as a combination of both, with some functions being implemented solely in hardware and other functions being implemented by a combination of hardware and software. In some implementations, at least a portion of the modules (and in some cases all of the modules) may run on a processor of a computer system, such as the computer system described in more detail in fig. 5 above. Thus, each module may be implemented in various suitable configurations and should not be limited to any particular implementation illustrated herein.
In the interest of clarity, not all of the routine features of the various aspects are disclosed herein. It will of course be appreciated that in the development of any such actual implementation of the invention, numerous implementation-specific decisions must be made in order to achieve the developer's specific goals, and that these specific goals will vary from one implementation to another and from one developer to another. It will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking of engineering for those of ordinary skill in the art having the benefit of this disclosure.
Further, it is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance presented herein, in combination with the knowledge of one skilled in the relevant art(s). Furthermore, it is not intended that any term in this specification or claims be ascribed an uncommon or special meaning unless explicitly set forth as such.
Various aspects disclosed herein include present and future known equivalents to the known modules referred to herein by way of illustration. Further, while various aspects and applications have been shown and described, it will be apparent to those skilled in the art having the benefit of this disclosure that many more modifications than mentioned above are possible without departing from the inventive concepts disclosed herein.

Claims (20)

1. A method for identifying an encryptor encoding a file of a computer system, the method comprising:
identifying, by the file processor, one or more files for which data entry was performed by the suspicious process;
for each identified file, determining, by the file processor, characteristics of the identified file;
for each identified file, identifying, by an analyzer, a category of file modification using a trained machine learning model and corresponding characteristics of the identified file;
for each identified file, identifying, by the analyzer, a suspicious process as being associated with the encryptor based on the identified category of file modification of the file; and
protecting the computer system from the encrypter.
2. The method of claim 1, further comprising:
detecting unwanted software by sequentially inspecting all processes of the computer system, wherein the sequential inspection includes identifying the inspected process as the suspicious process.
3. The method of claim 1, wherein the suspicious process is associated with an updatable list of predetermined processes.
4. The method of claim 1, wherein the categories of file modifications include at least one category of file modifications made by encryptors and at least one other category of file modifications made by legitimate software.
5. The method of claim 1, wherein the protection of the computer system comprises at least one of:
stopping the suspicious process and all threads and other processes associated with the suspicious process;
deleting or quarantining the file that started the suspicious process;
restoring the one or more files for which data entry was performed by the suspicious process from a backup copy, wherein the backup copy of the one or more files was created and stored prior to occurrence of data entry to the one or more files by the suspicious process; and
updating the antivirus databases and starting the antivirus software to perform an on-demand scan.
6. The method of claim 1, wherein for each identified file, the machine learning model determines a probability that the modification of the file belongs to one of the categories of file modification.
7. The method of claim 6, further comprising:
determining, by the analyzer, a number of the one or more files for which the probability of the encrypter modifying a file exceeds a first threshold; and
identifying the suspicious process as being associated with the encrypter when the determined number of the one or more files for which the probability of modifying a file exceeds the first threshold is greater than a second threshold.
8. The method of claim 7, wherein identifying the suspicious process as being associated with the encrypter is performed using a trained second machine learning model that receives the identified class of file modifications of the file as input data.
9. The method of claim 8, wherein the trained second machine learning model further receives as input data the identified characteristics of the suspicious process.
10. The method of claim 8, wherein the trained second machine learning model comprises a machine learning model trained based on at least one of: neural networks, decision trees, random forests, support vector machines, k-nearest neighbor methods, logistic regression methods, linear regression methods, bayesian classification methods, and gradient boosting methods.
11. The method of claim 1, wherein identifying the one or more files for which data input was performed is based on processing of system calls to operations using streams and/or write streams.
12. The method of claim 1, wherein identifying the one or more files comprises identifying characteristics of each identified file, the characteristics comprising at least one of: entropy of at least a portion of the file, metadata of the at least a portion of the file, information about an application or process that has entered data into the file.
13. The method of claim 1, wherein the trained machine learning model used to identify the category of file modification comprises a first machine learning model based on at least one of: neural networks, decision trees, random forests, support vector machines, k-nearest neighbor methods, logistic regression methods, linear regression methods, bayesian classification methods, and gradient boosting methods.
14. The method of claim 1, wherein identifying the suspicious process as being associated with the encrypter further comprises:
identifying characteristics of the suspicious process, the characteristics including at least an identifier and a context of the suspicious process; and
identifying an event associated with the suspicious process, the event comprising one or more of: determination of an antivirus program, modification of an auto-launch list, internet access, and information about the system.
15. A system for identifying an encryptor encoding a file of a computer system, comprising:
at least one processor configured to:
identifying, by a file processor, one or more files for which data entry was performed by a suspicious process;
for each identified file, determining, by the file processor, characteristics of the identified file;
for each identified file, identifying, by an analyzer, a category of file modification using a trained machine learning model and corresponding characteristics of the identified file;
for each identified file, identifying, by the analyzer, the suspicious process as being associated with the encrypter based on the identified category of file modification of the file; and
protecting the computer system from the encrypter.
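Assembled end to end, the steps of claim 15 (collect written files, extract characteristics, classify each modification, flag the process) reduce to the sketch below. The trained model is replaced by a bare entropy threshold and the two-file decision rule is invented, so this is a minimal illustration of the flow, not the patented implementation.

```python
import math


def entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte."""
    if not data:
        return 0.0
    n = len(data)
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    return -sum((c / n) * math.log2(c / n) for c in counts if c)


def is_encrypting_modification(portion: bytes, threshold: float = 7.5) -> bool:
    """Stand-in for the trained model: classify one file modification."""
    return entropy(portion) > threshold


def process_is_cryptor(written_portions: list[bytes], min_hits: int = 2) -> bool:
    """Flag the suspicious process once enough of the files it wrote
    are classified as encrypting modifications."""
    hits = sum(1 for p in written_portions if is_encrypting_modification(p))
    return hits >= min_hits
```

Once `process_is_cryptor` returns true, the final claimed step — protecting the computer system (e.g. suspending the process and restoring files) — would be triggered.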
16. The system of claim 15, wherein the processor is further configured to:
detecting unwanted software by sequentially inspecting all processes of the computer system, wherein the sequential inspection includes identifying the inspected process as the suspicious process.
17. The system of claim 15, wherein the suspicious process is associated with an updatable list of predetermined processes.
18. A non-transitory computer-readable medium having stored thereon computer-executable instructions for identifying an encryptor encoding a file of a computer system, the computer-executable instructions comprising instructions for:
identifying, by a file processor, one or more files for which data entry was performed by a suspicious process;
for each identified file, determining, by the file processor, characteristics of the identified file;
for each identified file, identifying, by an analyzer, a category of file modification using a trained machine learning model and corresponding characteristics of the identified file;
for each identified file, identifying, by the analyzer, the suspicious process as being associated with the encrypter based on the identified category of file modification of the file; and
protecting the computer system from the encrypter.
19. The non-transitory computer-readable medium of claim 18, wherein the computer-executable instructions further comprise instructions to:
detecting unwanted software by sequentially inspecting all processes of the computer system, wherein the sequential inspection includes identifying the inspected process as the suspicious process.
20. The non-transitory computer-readable medium of claim 18, wherein the suspicious process is associated with an updatable list of predetermined processes.
CN202110685763.4A 2020-08-24 2021-06-21 System and method for identifying encryptor encoding files of computer system Pending CN114091046A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
RU2020128090A RU2770570C2 (en) 2020-08-24 2020-08-24 System and method for determining process associated with malware encrypting computer system files
RU2020128090 2020-08-24
US17/320,362 US20220058261A1 (en) 2020-08-24 2021-05-14 System and method for identifying a cryptor that encodes files of a computer system
US17/320,362 2021-05-14

Publications (1)

Publication Number Publication Date
CN114091046A true CN114091046A (en) 2022-02-25

Family

ID=76305827

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110685763.4A Pending CN114091046A (en) 2020-08-24 2021-06-21 System and method for identifying encryptor encoding files of computer system

Country Status (2)

Country Link
EP (1) EP3961449B1 (en)
CN (1) CN114091046A (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180248896A1 (en) * 2017-02-24 2018-08-30 Zitovault Software, Inc. System and method to prevent, detect, thwart, and recover automatically from ransomware cyber attacks, using behavioral analysis and machine learning
US11494491B2 (en) * 2018-03-16 2022-11-08 Acronis International Gmbh Systems and methods for protecting against malware code injections in trusted processes by a multi-target injector
US10795994B2 (en) * 2018-09-26 2020-10-06 Mcafee, Llc Detecting ransomware

Also Published As

Publication number Publication date
EP3961449A1 (en) 2022-03-02
EP3961449B1 (en) 2023-08-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination