NZ767245A - System and method for detecting data anomalies by analysing morphologies of known and/or unknown cybersecurity threats - Google Patents


Info

Publication number
NZ767245A
NZ767245A
Authority
NZ
New Zealand
Prior art keywords
data
data object
value
model
block
Prior art date
Application number
NZ767245A
Other versions
NZ767245B2 (en)
Inventor
Bouguerra Nizar
Ling Chan Mei
Original Assignee
Flexxon Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from SG10202002125QA
Application filed by Flexxon Pte Ltd filed Critical Flexxon Pte Ltd
Publication of NZ767245A
Publication of NZ767245B2

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
                    • G06F16/90 Details of database functions independent of the retrieved data types
                        • G06F16/901 Indexing; Data structures therefor; Storage structures
                            • G06F16/9017 Indexing; Data structures therefor; Storage structures using directory or table look-up
                • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
                    • G06F21/50 Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
                        • G06F21/55 Detecting local intrusion or implementing counter-measures
                            • G06F21/56 Computer malware detection or handling, e.g. anti-virus arrangements
                                • G06F21/562 Static detection
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N20/00 Machine learning
                • G06N3/00 Computing arrangements based on biological models
                    • G06N3/02 Neural networks
                        • G06N3/04 Architecture, e.g. interconnection topology
                            • G06N3/044 Recurrent networks, e.g. Hopfield networks
                            • G06N3/045 Combinations of networks
                        • G06N3/08 Learning methods
    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
                • H04L63/00 Network architectures or network communication protocols for network security
                    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
                        • H04L63/1408 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
                            • H04L63/1416 Event detection, e.g. attack signature detection
                            • H04L63/1425 Traffic logging, e.g. anomaly detection
                • H04L9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
                    • H04L9/32 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
                        • H04L9/3247 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials involving digital signatures

Abstract

This document describes a system and method for detecting anomalous data files and preventing detected anomalous data files from being stored in a data storage. In particular, the system and method detect anomalous data files by dividing each data file into blocks of data whereby entropy values are obtained for each block of data and this information is collated and subsequently used in a machine learning model to ascertain the security level of the data file.

Description

SYSTEM AND METHOD FOR DETECTING DATA ANOMALIES BY ANALYSING MORPHOLOGIES OF KNOWN AND/OR UNKNOWN CYBERSECURITY THREATS

Field of the Invention

This invention relates to a system and method for detecting anomalous data files and preventing detected anomalous data files from being stored in a data storage. In particular, the system and method detect anomalous data files by dividing each data file into blocks of data whereby entropy values are obtained for each block of data and this information is collated and subsequently used in a machine learning model to ascertain the security level of the data file.
Summary of Prior Art

There exist many forms and types of malicious cyber-attacks. The aim of these attacks is to illicitly gain access to a computer system. Malicious software (malware) may be installed in a computer system via a network (e.g. email or website), through a CD-ROM inserted into the system or through an external storage device connected to the system. Once the malware has gained access to the system, it may carry out malicious activities such as creating backdoors, accessing sensitive information, or deleting crucial files thereby causing the system to fail.
It is generally agreed that once malware has been installed, it becomes much harder to detect and this allows the computer system to be easily compromised by the attacker.
To address this issue, those skilled in the art have proposed that such malware or data be identified before it is allowed to infect a computer system. Once identified, the malware may then be classified so that the extent of damage that may be caused by the malware may be better understood and prevented should it occur again. The various techniques that have been proposed to identify malware include the temporal analysis and live update approaches, which are used to update a database so that the database may be used to filter known malicious entities from affecting protected computer systems.
Initially, the most obvious way would be for the system administrator to manually analyse a suspicious program as the program is running. The administrator then observes the results to determine whether the program is to be treated as a malware or trusted software.
During the analysis of the program, the administrator may decompile the program to investigate specific lines of code or pay special attention to application program interface (API) calls that interact with the computer system and/or external contacts to determine whether these calls indicate malicious behaviour. While such an approach may be thorough and detailed, it is extremely time consuming and inefficient. Hence, those skilled in the art have proposed alternative automated methods.
In the temporal analysis approach, all activities in an affected system are sorted and reviewed according to time so that suspicious events occurring within a particular time period may be closely examined. Such events may include files accessed/ installed/ deleted/ modified; logs of user entries; processes (including background processes) that were initiated or terminated; network ports that were remotely accessed, and so on, during the time period. Once the event that allowed the malware to be installed has been detected, the computer system's threat classification system may then be updated accordingly to prevent the reoccurrence of such an event.
An alternative to reviewing static historical data such as files and event logs is the live update method which examines live programs, system memory contents as the programs are running, current network port activity, and other types of metadata while the computer system is in use in order to identify how it may have been modified by an attacker. The information obtained from this method may then be used to update the threat classification system.
The updated threat classification system may then be used to review new files that are to be introduced to the system. This is done by comparing the characteristics of each new file against a database of known, previously encountered files. Such comparisons are typically done by cryptographically hashing the data that is to be compared, i.e. by applying a mathematical function to convert the data into compact numerical representations. It is then assumed that if two hashes generated using the same algorithm are different, this implies that the new file may have been compromised.
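As an illustration of the hash-comparison step described above, the following minimal sketch uses SHA-256; the text does not name a particular hash function, so this choice is an assumption.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # SHA-256 maps arbitrary data to a compact 64-hex-digit representation.
    return hashlib.sha256(data).hexdigest()

known_file = b"original file contents"
new_file = b"original file contents (slightly modified)"

# A hash mismatch implies the new file is not byte-identical to the known file.
print(fingerprint(known_file) == fingerprint(new_file))  # → False
```

Note that this comparison only detects that the file changed; as the passage below explains, it cannot say anything about files it has never seen.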
The downside of the approaches proposed above is that they do not prevent zero-day types of malware from affecting computer systems and are only useful in preventing reoccurrences of malware that has been previously detected. In other words, if slight modifications are made to such malware, there is a strong likelihood that the malware may slip through undetected.
Other techniques that have been proposed to identify suspicious activity on a potentially compromised computer system often generate large amounts of data, all of which must be reviewed and interpreted before they may be used to update threat classification systems. As a further complication, malware is constantly evolving, developing new ways to circumvent existing detection methodologies by employing various methods to camouflage its presence, making the job of computer security systems much more difficult. Some of these camouflage methods involve the manipulation of file entries, file modification/access dates, and system processes. In addition to the above, the identity of the malware itself may be obfuscated by changing its name or execution profile such that it appears to be something benign, thereby effectively camouflaging the malware.
However, when data is encrypted, compressed, or obfuscated (depending on the method of obfuscation), its entropy value, or its measure of randomness, tends to be higher. In contrast, ordinary programs generally tend to be structured in an organized manner for ease of debugging, while encrypted data tends to have a significant degree of entropy.
It is accepted that a measure of entropy is not a guaranteed method for identifying malware or an attacker's hidden data store. A valid program may have encrypted, or more commonly, compressed, information stored on a computer system. However, at the very basic level, the examination of entropy does provide an excellent initial filter for identifying potentially problematic programs, and doing so greatly reduces the amount of data that needs to be analysed in great detail.
However, due to the way an entropy value is generated for a block of data, there is the possibility that a data block may return a low entropy value when in fact certain sections of that data block may contain small obfuscated blocks of malware. Such a scenario could occur when an attacker has placed encrypted malware in a data block with relatively low entropy thereby effectively masking the presence of the malware.
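To make the entropy discussion concrete, here is a minimal sketch of per-block Shannon entropy; the 256-byte block size is an illustrative choice, not taken from the text.

```python
import math
import os
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    """Shannon entropy in bits per byte: 0.0 for constant data, up to 8.0
    for data in which all 256 byte values are equally likely."""
    if not block:
        return 0.0
    n = len(block)
    return -sum(c / n * math.log2(c / n) for c in Counter(block).values())

def block_entropies(data: bytes, block_size: int = 256) -> list:
    return [shannon_entropy(data[i:i + block_size])
            for i in range(0, len(data), block_size)]

# A mostly repetitive file hiding a small encrypted-looking (random) insert:
data = b"A" * 1024 + os.urandom(64) + b"A" * 1024
print([round(s, 2) for s in block_entropies(data)])
```

With this block size, the block containing the 64-byte random insert scores only around 2 to 3 bits, far below the 8-bit ceiling, illustrating the masking problem described above: a small obfuscated payload inside a mostly low-entropy block barely raises that block's score.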
In view of the above, it is most desirous for a technique to derive a robust measurement of entropy in order to detect the presence of malware in a computer system while reducing the number of false positives generated during the detection process. For the above reasons, those skilled in the art are constantly striving to come up with a system and method that is capable of generating a suitable entropy value for data files, whereby these entropic values and other information about the data file are provided to a supervised machine learning model to detect and identify anomalous data files before such files are stored in a data storage.

Summary of the Invention

The above and other problems are solved and an advance in the art is made by systems and methods provided by embodiments in accordance with the invention.
A first advantage of embodiments of systems and methods in accordance with the invention is that zero-day type anomalous files may be effectively and efficiently identified.
A second advantage of embodiments of systems and methods in accordance with the invention is that anomalous files that have yet to be labelled or identified as threats may nevertheless be detected, as may evolutions of similar malware.
A third advantage of embodiments of systems and methods in accordance with the invention is that regardless of the type of file introduced into the system, the file will be analysed to determine its threat value.
A fourth advantage of embodiments of systems and methods in accordance with the invention is that regardless of the type and/or size of the file introduced into the system (and which may not contain any data files), any Read/ Write/ Overwrite commands initiated by the file will be analysed, as a front-end manager of a data flash controller will be configured to constantly sample commands executed by the file. The sampling period may vary between a few hundredths of a millisecond and a few tens of seconds, and by doing so, this protects the system from ransomware attacks.
The above advantages are provided by embodiments of a method in accordance with the invention operating in the following manner.
According to a first aspect of the invention, a system for detecting data anomalies in a received data object is disclosed, the system comprising: a processing unit; and a non-transitory media readable by the processing unit, the media storing instructions that when executed by the processing unit, cause the processing unit to: determine a security posture of the data object based on a digital signature and a file type of the data object; generate a type-security-platform (TSP) lookup table based on the security posture and the features of the data object associated with the security posture, and generate an obfuscation value and a forensic value for the received data object based on the TSP lookup table; generate a disassembled value or an interpreted value for the data object; compute a result value for each block of the received data object, whereby the result value for each of the blocks is generated based on the disassembled or interpreted value, an obfuscation value and a forensic value associated with the block of the received data object; create a data model based on all the result values of the data object; and process the data model using an artificial intelligence (AI) algorithm to determine if the data object contains data anomalies.
With reference to the first aspect, the instructions to generate an obfuscation value for the received data object comprises instructions for directing the processing unit to: divide the data object into blocks of data; and calculate a Shannon Entropy value for each block of data.
With reference to the first aspect, the instructions to generate a forensic value for the received data object comprises instructions for directing the processing unit to: divide the data object into blocks of data; and calculate a similarity score for each block of data using a Frequency-Based Similarity hashing scheme.
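The text names a Frequency-Based Similarity hashing scheme without detailing its construction here. As a hedged stand-in that conveys the idea, the sketch below compares byte-frequency histograms by cosine similarity; the actual scheme may differ.

```python
import math
from collections import Counter

def byte_histogram(block: bytes) -> list:
    # Relative frequency of each of the 256 possible byte values.
    n = len(block) or 1
    counts = Counter(block)
    return [counts.get(v, 0) / n for v in range(256)]

def similarity_score(block: bytes, reference: bytes) -> float:
    """Cosine similarity of byte-frequency histograms: 1.0 means identical
    byte distributions, 0.0 means no byte values in common."""
    a, b = byte_histogram(block), byte_histogram(reference)
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Same byte frequencies in a different order still score 1.0:
print(round(similarity_score(b"abcabc", b"aabbcc"), 3))  # → 1.0
```

A frequency-based score of this kind is insensitive to byte reordering, which is why it can flag blocks that resemble known malicious blocks even after light rearrangement.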
With reference to the first aspect, the instructions to generate result values for each block of the received data object comprises instructions for directing the processing unit to: generate a result value comprising three bytes for each block of the received data, whereby for each block, the instructions direct the processing unit to: set a most significant bit (MSB) and a second MSB of a first byte of the result value based on the disassembled or interpreted value of the data object; parse a remainder of the bits of the first byte with a second byte of the result value, and set the parsed result based on the obfuscation value associated with the block; and set a value of a third byte based on the forensic value associated with the block.
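One plausible reading of the three-byte result value is sketched below; the exact bit layout implied by "parsing the remainder of the bits of the first byte with the second byte" is not fully specified, so the 2-bit flag plus 14-bit obfuscation field is an assumption.

```python
def pack_result_value(dis_flag: int, obfuscation: int, forensic: int) -> bytes:
    """Hypothetical packing of one block's result value into three bytes:
    the top two bits of byte 0 carry the disassembled/interpreted flag,
    the remaining 14 bits spanning bytes 0-1 carry the obfuscation value,
    and byte 2 carries the forensic value."""
    assert 0 <= dis_flag <= 3
    assert 0 <= obfuscation < (1 << 14)
    assert 0 <= forensic < 256
    word = (dis_flag << 14) | obfuscation   # 16 bits across bytes 0 and 1
    return bytes([word >> 8, word & 0xFF, forensic])

print(pack_result_value(2, 0x1ABC, 0x7F).hex())  # → '9abc7f'
```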
With reference to the first aspect, the instructions to generate a data model based on all the result values of the data object comprises instructions for directing the processing unit to: generate a data image model whereby each pixel in the data image model is associated with a unique result value, wherein each unique result value is represented in the data image model by a unique image.
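A minimal sketch of laying result values out as image pixels follows; the 512-pixel width follows the 512 x 512 plot of Figure 7A, and representing each three-byte result value as one RGB tuple is an illustrative simplification of the "unique image per unique result value" scheme.

```python
WIDTH = 512  # Figure 7A uses a 512 x 512 pixel plot

def result_values_to_image(values, width=WIDTH):
    """Lay three-byte result values out as RGB pixels, row by row; trailing
    pixels in the final row are padded with black."""
    pixels = [tuple(v) for v in values]           # b'\x01\x02\x03' -> (1, 2, 3)
    pixels += [(0, 0, 0)] * ((-len(pixels)) % width)
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

rows = result_values_to_image([b"\x10\x20\x30", b"\x40\x50\x60"], width=4)
print(rows)  # one row of four pixels, the last two padded black
```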
With reference to the first aspect, the AI algorithm used to process the data model comprises: a convolutional neural network (CNN) model, a deep neural network (DNN) model or a recurrent neural network (RNN) model.
With reference to the first aspect, the instructions to process the data model using the artificial intelligence (AI) algorithm comprises instructions for directing the processing unit to: compare the data model with data models contained within a database, wherein the comparison is performed using machine learning algorithms.
With reference to the first aspect, the media further comprises instructions for directing the processing unit to: provide a virtual file system that is configured to receive and store the data object, whereby the virtual file system causes the processing unit to perform all the steps within the virtual file system.
With reference to the first aspect, the digital signature comprises a magic number associated with the data object.
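A magic number is the short signature at the start of a file that identifies its true format independently of its extension. The toy lookup below uses a few well-known signatures; the table an analyser would actually consult is far larger, and this table itself is only illustrative.

```python
# Well-known file signatures ("magic numbers"); real tables are much larger.
MAGIC_NUMBERS = {
    b"MZ": "Windows PE executable (MZ)",
    b"\x7fELF": "ELF executable",
    b"%PDF": "PDF document",
    b"\x89PNG": "PNG image",
    b"PK\x03\x04": "ZIP archive (also DOCX/XLSX/JAR)",
}

def identify_by_magic(data: bytes) -> str:
    for sig, name in MAGIC_NUMBERS.items():
        if data.startswith(sig):
            return name
    return "unknown"

print(identify_by_magic(b"%PDF-1.7 ..."))  # → 'PDF document'
```

A file whose declared extension disagrees with its magic number is an immediate red flag, which is one reason the digital signature feeds into the security posture.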
With reference to the first aspect, the features of the data object associated with the security posture comprises a platform type and a file type of the data object.
According to a second aspect of the invention, a method for detecting data anomalies in a received data object using an Artificial Intelligence (AI) module is disclosed, the method comprising the steps of: determining, using an analyser module provided within the AI module, a security posture of the data object based on a digital signature and a file type of the data object; generating, using the analyser module and a detector module provided within the AI module, a type-security-platform (TSP) lookup table based on the security posture and the features of the data object associated with the security posture, and generate an obfuscation value and a forensic value for the received data object based on the TSP lookup table; generating, using a disassembling and interpreting module provided within the AI module, a disassembled value or an interpreted value for the data object; computing, using a block structuring module provided within the AI module, a result value for each block of the received data object, whereby the result value for each of the blocks is generated based on the disassembled or interpreted value, an obfuscation value and a forensic value associated with the block of the received data object; creating, using a model generator module provided within the AI module, a data model based on all the result values of the data object; and processing, using an AI threat module provided within the AI module, the data model using an artificial intelligence (AI) algorithm to determine if the data object contains data anomalies.
With reference to the second aspect, the step of generating the obfuscation value for the received data object comprises the steps of: dividing the data object into blocks of data; and calculating a Shannon Entropy value for each block of data.
With reference to the second aspect, the step of generating a forensic value for the received data object comprises the steps of: dividing the data object into blocks of data; and calculating a similarity score for each block of data using a Frequency-Based Similarity hashing scheme.
With reference to the second aspect, the step of generating result values for each block of the received data object comprises the steps of: generating a result value comprising three bytes for each block of the received data, whereby for each block, the method: sets a most significant bit (MSB) and a second MSB of a first byte of the result value based on the disassembled or interpreted value of the data object; parses a remainder of the bits of the first byte with a second byte of the result value, and set the parsed result based on the obfuscation value associated with the block; and sets a value of a third byte based on the forensic value associated with the block.
With reference to the second aspect, the step of creating a data model based on all the result values of the data object comprises the steps of: generating a data image model whereby each pixel in the data image model is associated with a unique result value, wherein each unique result value is represented in the data image model by a unique image.
With reference to the second aspect, the AI algorithm used to process the data model comprises: a convolutional neural network (CNN) model, a deep neural network (DNN) model or a recurrent neural network (RNN) model.
With reference to the second aspect, the step of processing the data model using the artificial intelligence (AI) algorithm comprises the step of: comparing the data model with data models contained within a database, wherein the comparison is performed using machine learning algorithms.
With reference to the second aspect, wherein before the step of determining the security posture of the data object based on the digital signature and the file type of the data object; the method further comprises the step of: providing, using the analyser module, a virtual file system to receive and store the data object, whereby the virtual file system causes all the steps of the method to be executed within the virtual file system.
With reference to the second aspect, the digital signature comprises a magic number associated with the data object.
With reference to the second aspect, the features of the data object associated with the security posture comprises a platform type and a file type of the data object.
Brief Description of the Drawings

The above and other problems are solved by features and advantages of a system and method in accordance with the present invention described in the detailed description and shown in the following drawings.
Figure 1 illustrating a block diagram of modules that may be used to implement the method for detecting and analysing anomalies in accordance with embodiments of the invention;
Figure 2 illustrating a block diagram representative of processing systems providing embodiments in accordance with embodiments of the invention;
Figure 3 illustrating a process or method for detecting anomalous data files in accordance with embodiments of the invention;
Figure 4 illustrating a process or method for utilizing a 32-bit security lookup table to generate a data model in accordance with embodiments of the invention;
Figure 5 illustrating a diagram showing the entropy values being plotted against the sampling rate obtained for an exemplary data object in accordance with embodiments of the invention;
Figure 6 illustrating a plot having an x axis that represents the frequency of a type of operation while the y axis represents a type of operation after a data object has been disassembled;
Figure 7A illustrating a 512 x 512 pixels plot for generating an image of a data model in accordance with embodiments of the invention;
Figure 7B illustrating a generated image of a data model in accordance with embodiments of the invention;
Figure 7C illustrating a comparison between a generated data image model and images that were previously used for pre-trained dataset models in accordance with embodiments of the invention;
Figure 8 illustrating a process or method for scoring a dataset model generated by the process illustrated in Figure 4 based on its features using a machine learning model, whereby the scores are used to determine the veracity of the data file in accordance with embodiments of the invention.
Detailed Description

This invention relates to a system and method for detecting anomalous data files and preventing detected anomalous data files from being stored in a data storage. In particular, the system and method divide each data file into blocks of data whereby entropy values are obtained for each block of data and this information is collated and subsequently used in a machine learning model to ascertain the security level of the data file. Files that are found to be anomalous are then quarantined, while files that are deemed to be safe will be allowed to progress to the next step, whereby they will be analysed for any malware and/or ransomware commands that may be running in the background (even if the file does not contain any data sections).
The present invention will now be described in detail with reference to several embodiments thereof as illustrated in the accompanying drawings. In the following description, numerous specific features are set forth in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art, that embodiments may be realised without some or all of the specific features. Such embodiments should also fall within the scope of the current invention. Further, certain process steps and/or structures in the following may not be described in detail and the reader will be referred to a corresponding citation so as to not obscure the present invention unnecessarily.
Further, one skilled in the art will recognize that many functional units in this description have been labelled as modules throughout the specification. The person skilled in the art will also recognize that a module may be implemented as circuits, logic chips or any sort of discrete component. Still further, one skilled in the art will also recognize that a module may be implemented in software which may then be executed by a variety of processor architectures.
In embodiments of the invention, a module may also comprise computer instructions or executable code that may instruct a computer processor to carry out a sequence of events based on instructions received. The choice of the implementation of the modules is left as a design choice to a person skilled in the art and does not limit the scope of this invention in any way.
An exemplary process or method for detecting, analysing and identifying an anomalous data file by an AI core processor in accordance with embodiments of the invention is set out in the steps below. The steps of the process or method are as follows:
Step 1: determine a security posture of the data object based on a digital signature and a file type of the data object;
Step 2: generate a type-security-platform (TSP) lookup table based on the security posture and the features of the data object associated with the security posture, and generate an obfuscation value and a forensic value for the received data object;
Step 3: generate a disassembled value or an interpreted value for the data object;
Step 4: compute a result value for each block of the received data object, whereby the result value for each of the blocks is generated based on the disassembled or interpreted value, an obfuscation value and a forensic value associated with the block of the received data object;
Step 5: create a data model based on all the result values of the data object; and
Step 6: process the data model using an artificial intelligence (AI) algorithm to determine if the data object contains data anomalies.
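The six steps above can be sketched as a pipeline skeleton. Every helper below is a hypothetical stand-in: simple heuristics take the place of the actual analyser, platform detector, disassembler/interpreter, block structuring, model generator, and trained AI threat modules.

```python
BLOCK = 256  # illustrative block size

def security_posture(obj: bytes) -> str:           # Step 1 (stub)
    return "recognised" if obj[:4] in (b"%PDF", b"\x89PNG") else "unknown"

def tsp_lookup(posture: str, obj: bytes) -> dict:  # Step 2 (stub)
    return {"posture": posture, "file_type": obj[:4], "platform": "generic"}

def disassembled_value(obj: bytes) -> int:         # Step 3 (stub)
    return 1 if b"\x90\x90" in obj else 0          # crude flag, e.g. NOP runs

def block_result_values(obj: bytes, dis: int):     # Step 4 (stub)
    blocks = [obj[i:i + BLOCK] for i in range(0, len(obj), BLOCK)]
    # (disassembled flag, obfuscation proxy, forensic proxy) per block
    return [(dis, len(set(b)), sum(b) % 256) for b in blocks]

def build_data_model(results):                     # Step 5 (stub)
    return list(results)

def ai_decision(model) -> bool:                    # Step 6 (stub)
    # Threshold rule standing in for a trained CNN/DNN/RNN classifier.
    return any(obf > 200 for _dis, obf, _forensic in model)

def detect_anomalies(obj: bytes) -> bool:
    posture = security_posture(obj)
    _tsp = tsp_lookup(posture, obj)  # consulted by later steps in practice
    dis = disassembled_value(obj)
    model = build_data_model(block_result_values(obj, dis))
    return ai_decision(model)

print(detect_anomalies(b"A" * 512))             # → False: uniform blocks
print(detect_anomalies(bytes(range(256)) * 2))  # → True: high byte variety
```

The point of the skeleton is the data flow, not the stub logic: each step consumes the previous step's output, ending with a model handed to a classifier.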
In accordance with embodiments of the invention, the steps set out above may be performed by modules contained within artificial intelligence (AI) core module 105, as illustrated in Figure 1, whereby AI core module 105 comprises synthesis module 110, AI advanced analyser module 106, AI threat module 107, validation module 140 and storage 150.
In turn, synthesis module 110 comprises sub-modules: analyser 111; platform detector 112; disassembling/ interpreting 113; and block structuring 114 while AI advanced analyser module 106 comprises sub-modules: output structure block 120; feature extractor 121; and model generator 122. As for AI threat module 107, this module comprises sub-modules AI search 130; database 131; and trained database 132.
In embodiments of the invention, data 101 may comprise all types and formats of data objects and the content of these data may include, but are not limited to, video files, documents, images, spreadsheets, application programming interfaces, executable files and all other types of files with various file extensions and/or dummy command bytes such as Read/Write (erase) commands. In general, data 101 may be divided into a plurality of sub-objects 102, which when combined, make up data 101 as a whole. In embodiments of the invention, data 101 are categorized into smaller sub-objects 102 so that it would be easier for AI core module 105 to process each of these smaller sub-objects efficiently and effectively.
Data 101 or objects 102 (sub-divided data 101) are then received by synthesis module 110. From here on, although reference is made to data 101, one skilled in the art will recognize that the subsequently discussed steps and processes may also be applied to all sorts of data objects and even sub-objects 102 without departing from the invention. Upon receiving data 101, analyser module 111 analyses the data structure of data 101 to obtain more details and information about the structure and make-up of data 101. In an embodiment of the invention, analyser module 111 is configured to determine, through a digital signature lookup table, a file lookup table or a magic number lookup table, whether data 101 is an anomalous file. In embodiments of the invention, analyser module 111 creates an optional virtual file system that is configured to read the file in a sandbox environment. Analyser module 111 then generates an appropriate security flag for data 101 which is then used in the generation of a unique type-security-platform (TSP) lookup table associated with data 101. This TSP lookup table is then used by the other modules in synthesis module 110 to analyse data 101 in greater detail.
Platform detector module 112 is then configured to populate the remainder of the received TSP lookup table based on certain key features about data 101 such as, but not limited to, its file type, its operating platform and computer architectures that may be affected and/or altered by data 101 and its security posture. The result of the analysis performed by detector module 112 is then stored at GV module 112a and is also provided to disassembling/ interpreting module 113.
At disassembling/ interpreting module 113, the result generated by detector module 112 is then either disassembled to generate a disassembled dataset or value, or is sent to a de-scripting source to be interpreted for recognizable ASCII terms and the interpreted data object and its corresponding value is then returned back to module 113 to be further processed.
The dataset model is then stored at GV module 113a and provided to the next module, the block structuring module 114.
At block structuring module 114, obfuscated data and/or other relevant features may be extracted from the reduced dataset. In embodiments of the invention, structuring module 114 identifies functions, external call functions and objects that communicate with input/output ports, and in turn adds this information to the dataset model. This dataset model may be presented as a complex table of bytes, i.e. output structure block 120, to AI advanced analyser module 106. In embodiments of the invention, the dataset model will be generated by block structuring module 114 based on the outputs of modules 111, 112 and 113. Module 114 will parse all the results obtained from the entropy/forensic/disassembler processes into a dataset model which possesses an appropriate data structure to unify all the results together. In embodiments of the invention, parsed results contained within this structure will be named with a prefix indicating their source, e.g. the results of a forensic table may be named P.F.F.1, P.F.F.2, P.F.F.3 and so on.
Once this information is provided to module 106, feature extractor module 121 will collate information contained within the output structure block 120 together with the outputs stored at GV modules 111a, 112a and 113a through GV call-back module 117, and data 101 and provide this collated information to model generator module 122.
Model generator module 122 then in turn uses the received information to generate an appropriate model. The model is then provided to and stored in database 131 and is used in combination with trained database 132 by AI search module 130 to determine a threat score for the model.
In embodiments of the invention, features obtained from extractor module 121 may be used by the model generator module 122 to generate a data image model having a matrix comprising 512 x 512 pixels, whereby each pixel represents a block of data 101 and is made up of 3 bytes of data. The generated data image model and/or the dataset model will then be temporarily stored inside database 131. AI search module 130 will then run a matching algorithm to match the model stored within database 131 with datasets stored in trained database 132. The outcome will then be normalized to a score out of 100 and the normalized score will then be passed to validation module 140. Within this module, various scoring settings may be defined by the user of the system, such as, but not limited to: scores between 0 and 50 may imply that the data object is safe, scores between 51 and 85 may imply that the data object is suspicious, and scores between 86 and 99 may imply that the data object constitutes a threat. In embodiments of the invention, AI search module 130 utilizes a supervised machine learning algorithm to assign the threat scores to the dataset model. In embodiments of the invention, supervised machine learning techniques such as, but not limited to, linear regression, random forest and/or support vector machines may be used.
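The normalization and the user-defined score bands described above might be sketched in Python as follows; the band edges 50 and 85 are the illustrative settings from the text and are assumed to be configurable in the described system.

```python
def classify_score(match_score: float, max_score: float) -> str:
    """Normalize a matching score to the range 0-100 and map it to a
    verdict using the illustrative, user-configurable bands:
    0-50 safe, 51-85 suspicious, 86-100 threat."""
    normalized = round(100 * match_score / max_score)
    if normalized <= 50:
        return "safe"
    if normalized <= 85:
        return "suspicious"
    return "threat"
```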
Once the threat scores have been assigned, these scores are then provided with data 101 to validation module 140. If validation module 140 ascertains based on the scores that data 101 is a threat, data 101 will be blocked from being installed in storage 150 and instead, will be provided to an output 160 whereby data 101 will either be quarantined or further analysed.
Conversely, if it is decided that data 101 does not constitute a threat, it will be allowed to be installed in storage 150. For completeness, storage 150 may comprise, but is not limited to, any type or form of data storage such as volatile memory (e.g. RAM, SRAM/DRAM) and non-volatile memory (e.g. hard drives or flash drives).
In accordance with embodiments of the invention, a block diagram representative of components that may be provided within AI core 105 (as illustrated in Figure 1) for implementing embodiments in accordance with embodiments of the invention is illustrated in Figure 2. One skilled in the art will recognize that the exact configuration of AI core 105 is not fixed and may vary, and Figure 2 is provided by way of example only. Processor 205 processes instructions and data and may include: a microprocessor, microcontroller, programmable logic device or other computational device. That is, processor 205 may be provided by any suitable logic circuitry for receiving inputs, processing them in accordance with instructions stored in memory and generating outputs (i.e. to the memory components, security management module 280, AI coprocessor 285, sensors 290 and/or the PCIe bus, etc.). In this embodiment, processor 205 may be a single core or multi-core processor with memory addressable space. In one example, processor 205 may be multi-core, comprising for example an 8-core CPU.
In embodiments of the invention, processor 205 may comprise flash controller 252 that is configured to control any type of non-volatile memory storage medium that may be electrically erased and programmed. An example of such a non-volatile memory storage would be NAND or NOR type flash memory or non-flash EEPROM flash memory. In Figure 2, although flash controller 252 is arranged to control NAND flash 245 which stores the secure boot, firmware, AI trained data, signature data, hash table, threshold table, reserved area and user area, one skilled in the art will recognize that other types of data and information may be stored in NAND flash 245 without departing from the invention.
Processor 205 also comprises DDR controller 268 that is configured to control any type of volatile random access memory RAM 223 such as static random access memory (SRAM) or dynamic random access memory (DRAM). A read only memory (ROM) module ROM 270 is also provided within processor 205 together with I-D 254, A-D 256, architecture core 258, AXI4 262, and NVME core 276. In particular, I-D 254 comprises an instruction decoder configured to decode instructions for processor 205, A-D 256 comprises an address decoder configured to decode addresses for each physical peripheral used within the chipset such that all address buses are managed within processor 205, and architecture core 258 comprises the architecture core of processor 205 such as, but not limited to, an ARM architecture, a MIPS architecture, a RISC-V architecture and so on, whereby the architecture type depends on the number of instructions that processor 205 is handling, the amount of power consumed, and so on.
As for AXI4 262, this component comprises an interconnect bus connection identified as AXI4. AXI4 262 is a connection that is frequently used by many ASIC manufacturers and it broadly comprises a SLAVE/MASTER bus and a high speed internal communication bus connecting processor 205 with the other components contained within the chipset, whereby each component linked to this interconnection bus will have its own address.
Non-volatile Memory Express (NVME) core 276 comprises a component that handles READ/WRITE operations, and this is done through a direct connection to PCIe bus 295 through AXI4 262. This means that each time a data object is received from a host or is sent to the host, this transmission will be controlled by NVME core 276. It should be noted that this component may be configured to operate independently from processor 205 and may be used to monitor all NVME commands executed within a predefined time frame. Additionally, NVME core 276 may be configured to sync the speed of the data rate between DDR controller 268 and flash controller 252, whereby such an operation is known as the Flash Translation Layer.
Cache 260 is also provided within processor 205 and is used by the processor to reduce the average cost (time or energy) to access data from the various types of memories.
One skilled in the art will recognize that the various memory components described above comprise non-transitory computer-readable media and shall be taken to comprise all computer-readable media except for a transitory, propagating signal. Typically, the instructions are stored as program code in the memory components but can also be hardwired.
Peripheral Component Interconnect Express (officially abbreviated as PCIe) controller 274 is a controller for controlling data exchanges that take place on the high-speed serial computer expansion bus between processor 205 and various host computers that are able to support the PCIe bus protocol. Input/output (I/O) interface 272 is also provided for communicating with various types of user interfaces, communications interfaces and sensors 290. Sensors 290 may include, but are not limited to, motion sensors, temperature sensors and impact sensors, and these sensors may be configured to transmit/receive data to/from external sources via a wired or wireless network to other processing devices or to receive data via the wired or wireless network. Wireless networks that may be utilized include, but are not limited to, Wireless-Fidelity (Wi-Fi), Bluetooth, Near Field Communication (NFC), cellular networks, satellite networks, telecommunication networks, Wide Area Networks (WAN) and so on.
AI coprocessor 285 is connected to processor 205 via a dedicated bus (not shown) and comprises a specialized hardware accelerator that is designed to accelerate artificial intelligence applications, especially machine learning algorithms and is used to accelerate specific computing tasks to lighten the load on processor 205.
Figure 3 illustrates process 300 for analysing a received data object, whereby process 300 may be implemented by the modules in the AI core in accordance with embodiments of the invention. Process 300 begins at step 305 by receiving a data object that is to be analysed.
As mentioned in the previous section, the data object may comprise a document, image file, an executable file or any other types of files. In embodiments of the invention, the data object may comprise blocks of bytes whereby these blocks of bytes will be stored in a storage medium linked to the AI core, such as, but not limited to, a flash drive chipset where it will be assigned an area to record its size, flash location sectors and Start-of-File and End-of-File.
Process 300 then proceeds to step 310 whereby optionally, a virtual file system is initialized once the entirety of the data object has been completely received. At this step, a virtual file system that is able to read the data object in the same manner as a host drive is created by process 300. However, unlike conventional drives or file systems, the virtual file system has no direct link to drives accessible by the host drive as the virtual file system is configured to be completely separate. Further, as the virtual file system is configured to run inside an ASIC chipset, it requires fewer hardware resources and is able to be executed in parallel with other tasks. Hence, as new objects are received at step 305, process 300 will add these new data objects into an internal list in the virtual file system to be queued for further processing. In embodiments of the invention, process 300 will continuously monitor this internal list to ensure that the data object will not be called back until it has been analysed and secured. In order to do this, the internal list may be contained in a temporary buffer that may be isolated from the host file system and may be temporarily locked. In other words, in one embodiment of the invention, it can be said that the entirety of processes 300 and 400 take place within the virtual file system while in another embodiment of the invention, processes 300 and 400 may take place outside the virtual file system. This means that if processes 300 and 400 take place outside the virtual file system, process 300 will proceed from step 305 to step 315.
At step 315, either from within the virtual file system or within the AI core, process 300 identifies the extension of the data object. For example, if the data object comprises a *.pdf file, process 300 stores this information in a temporary variable which may be called extension[n-1]=pdf.
Information from step 315 is then passed to step 320. At this step, process 300 generates a magic number or a special header frame that is unique to the data object and this magic number may then be embedded inside the data object, whereby it may then be used in subsequent steps to identify the data object. Process 300 then loads a preloaded magic number lookup table at step 325. As known to one skilled in the art, magic numbers refer to constant numerical values that may be used to identify a particular file format or protocol, or may also refer to distinctive unique values that are unlikely to be mistaken for other meanings. Each magic number in the preloaded magic number lookup table refers to a particular file format or type and this lookup table may be updated periodically as required, or whenever a new file type is discovered.
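A minimal magic-number lookup can be sketched as below. The signatures shown are well-known file-format prefixes; a deployed table would be far larger and, as the text notes, periodically updated.

```python
# Well-known magic numbers (file-header byte prefixes) and the file
# types they identify; a real lookup table would contain many more.
MAGIC_NUMBERS = {
    b"%PDF": "pdf",
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
    b"PK\x03\x04": "zip",
    b"MZ": "exe",  # DOS/Windows executable header
}

def identify_file_type(data: bytes):
    """Return the file type whose magic number prefixes the data, or
    None when no entry matches (the security flag would then be set
    to a high level, as in step 335)."""
    for magic, file_type in MAGIC_NUMBERS.items():
        if data.startswith(magic):
            return file_type
    return None
```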
Process 300 then determines at step 330 if the generated magic number matches with any of the magic numbers in the preloaded magic number lookup table.
If process 300 determines that a match does not exist, process 300 proceeds to step 335. At this step, a security flag associated with a soon to be generated type-security-platform (TSP) lookup table will be generated and set to a high level. Process 300 then proceeds to step 340 whereby a type-security-platform (TSP) lookup table is generated.
Conversely, if process 300 determines at step 330 that the generated magic number matches with a magic number in the preloaded magic number lookup table, this would imply that the data object comprises a known file type.
Process 300 then loads a lookup table of potential anomalous file types (e.g. stored as a data array) at step 345 whereby each known anomalous file type in this table is associated with a magic number. In this embodiment of the invention, an anomalous file type represents a file that is a potential threat and may be executed on different platforms without having to be compiled with a specific tool chain compiler (for example, a JavaScript object, a pdf file, or a jpeg file with an embedded exe file that is extracted and called later).
Process 300 then determines at step 350 whether the file type of the received data object matches with an anomalous file type in the lookup table of potential anomalous file types loaded at step 345.
If process 300 determines that there is a match, process 300 proceeds to step 355. At this step, a security flag associated with a soon to be generated type-security-platform (TSP) lookup table will be generated and the flag will be set to a high level. Process 300 then proceeds to step 340 whereby a TSP lookup table is then generated.
Conversely, if process 300 determines at step 350 that the file type of the received data object is not a match with an anomalous file type, process 300 proceeds to step 360 whereby a security flag associated with a soon to be generated type-security-platform (TSP) lookup table will be generated and set to a normal level indicating that the data object may not be an anomalous file. Process 300 then proceeds to step 340 whereby a TSP lookup table is then generated.
In accordance with embodiments of the invention, a TSP lookup comprises a 32-bit variable whereby the first 16 most significant bits (MSB) represent the various types of data objects (i.e. bits 31(MSB) to 16), the subsequent 8 bits represent the security posture of the data object (i.e. bits 15 to 8) and the final 8 bits represent the platform usage of the data object (i.e. bits 7 to 0 - the least significant bit (LSB)). An exemplary TSP lookup table is set out below at Tables 1-3.
Bits:    31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16
*.pdf     1  1  0  0  0  0  0  0  0  0  0  0  0  0  0  0
*.jpeg    1  0  0  0  0  0  0  0  0  0  0  0  0  0  0  1
*.bmp     0  0  0  0  0  0  0  0  0  0  0  0  1  0  0  0
*.exe     1  0  0  0  0  0  1  0  0  0  0  0  0  0  0  0
*any      0  0  0  1  0  0  0  0  0  0  0  0  0  0  0  0
Table 1

Bits:      15 14 13 12 11 10  9  8
Status_1    1  1  0  0  0  0  0  1
Status_2    1  0  0  0  1  0  0  0
Status_3    0  0  1  0  0  0  0  0
Status_4    0  0  0  1  0  0  0  1
Status_n    0  1  0  1  0  0  1  0
Table 2

Bits:   7  6  5  4  3  2  1  0 (LSB)
X86     1  1  0  0  0  1  0  B
X64     1  0  0  1  0  0  0  B
AMD     1  1  0  0  0  0  0  B
ARM     1  0  1  0  1  0  0  B
Etc.    1  0  0  1  0  1  1  B
Table 3

Table 1 sets out an example of how the various file types may be represented by bits 31 (MSB) to 16 in the TSP lookup table, Table 2 sets out an example of how the security posture of a data object may be represented by bits 15 to 8 and Table 3 sets out an example of how the various types of platforms used by the data object may be represented by bits 7 to 0.
In the example illustrated in Table 2, Status_1 may indicate that the data object's digital signature could not be verified, Status_2 may indicate that the data object's signature matched with a magic number in the magic number lookup table but the file type of the data object matched with a known anomalous file type, while Status_3 may indicate that the data object's signature matched with a magic number in the magic number lookup table and its file type does not match with any known anomalous file type. One skilled in the art will recognize that various types of Status levels may be used to represent other types and variations of security postures without departing from this invention.
In embodiments of the invention, with reference to Table 3, bits 7 to 4 may be used to indicate the hardware platform of the data object, e.g. a microcontroller unit (MCU) or another system/hardware platform, and the lowest 4 bits (bits 3-0) may indicate the possible operating system of the data object, whereby if the LSB is set as B (or any other equivalent indicator) it means that the data object may run on any type of operating system.
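The 32-bit layout described above (bits 31-16 for the file type, bits 15-8 for the security posture, bits 7-0 for the platform usage) can be packed and unpacked with simple bit shifts, as sketched below.

```python
def pack_tsp(file_type: int, security: int, platform: int) -> int:
    """Pack the three TSP fields into one 32-bit value: bits 31-16
    carry the file type, bits 15-8 the security posture and bits 7-0
    the platform usage."""
    assert 0 <= file_type < (1 << 16)
    assert 0 <= security < (1 << 8)
    assert 0 <= platform < (1 << 8)
    return (file_type << 16) | (security << 8) | platform

def unpack_tsp(tsp: int):
    """Recover (file_type, security, platform) from a 32-bit TSP value."""
    return (tsp >> 16) & 0xFFFF, (tsp >> 8) & 0xFF, tsp & 0xFF
```

For example, the *.pdf row of Table 1 (binary 1100 0000 0000 0000) corresponds to a file-type field of 0xC000.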
Once the TSP lookup table has been generated for a data object, process 400 as illustrated in Figure 4 is then initiated. Process 400 begins at step 402 by retrieving the location of the data object as stored within the virtual file system (if process 400 is taking place within the virtual file system) or as stored in the AI core. Process 400 then retrieves the TSP lookup table associated with the data object at the next step, step 405. Process 400, at step 410, then parses the information contained in the TSP lookup table to their relevant groups. In particular, the bits in the TSP lookup table relating to the security status of the data object will be parsed to a first group, bits in the TSP lookup table relating to the type of the data object will be parsed to a second group and bits in the TSP lookup table relating to the target platform of the data object will be parsed to a third group.
Process 400 then proceeds to step 430, where process 400 will determine from the first group of parsed bits whether the security level has been set to a normal or high level.
If process 400 determines that the security level has been set to a high level, it will proceed to step 435 whereby it will retrieve information relating to the type of the data object from the TSP and then to step 440 whereby it will retrieve information relating to the operating platform associated with the data object from the TSP. This information would be contained in the second and third group of parsed bits in the TSP. Information from steps 435 and 440 is then provided to step 447 and process 400 proceeds to step 445. The aim of providing information from steps 435 and 440 to step 447 is so that the forensic analysis may then be performed based on information from these two steps.
At step 445, process 400 proceeds to calculate an obfuscation value for the whole data object, i.e. for the whole file size, based on the information received. Conversely, if process 400 determines that the security level has been set to a normal level, process 400 proceeds directly to step 445 to calculate an obfuscation value for the data object as a whole.
As an example, the range of the obfuscation values may be between 0-255 whereby a higher obfuscation value implies a higher risk of malware or ransomware.
In embodiments of the invention, the obfuscation values of the data object comprise the entropy values for the data object. In this embodiment, process 400 will proceed to compute an entropy value for the data object by first normalizing the size of the data object.
For example, if the data object comprises 1 Mbyte of data, it will be divided into blocks of 256 bytes each, producing 4,096 blocks. The Shannon Entropy value for each block is then obtained as follows. There are several mathematical methods for generating an entropy value for a block of data. In an embodiment of the invention, the entropy value is calculated using a method pioneered by Claude Shannon and is known by those skilled in the art as the Shannon Entropy equation:

H(X) = - Σi P(xi) log2 P(xi)

where P(xi) is the probability of xi given the discrete random variable X. Since X is discrete, data represented by binary digital data organized in bytes (or 8-bit blocks) may be used in place of X. In order for the equation to work properly, X must comprise a minimum block of data which is at least 256 bytes in length. The value obtained is then normalized such that H ∈ [0.0 … 1.0], i.e. H is divided by 8, the maximum entropy of byte-valued data. In short, the entropy value calculated based on the equations above will comprise a numerical value between 0 and 1, where values closer to 1 indicate higher degrees of entropy in a given block of data. For a more thorough discussion of Shannon Entropy, reference is made to Shannon, C. E., "A Mathematical Theory of Communication", The Bell System Technical Journal, Vol. 27, pp. 379-423 and 623-656, July and October 1948, which is incorporated herein by reference. One skilled in the art will recognize that any other types of entropy calculation methods could be used without departing from this invention.
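A per-block Shannon entropy computation along these lines can be sketched as follows; the division by 8 reflects the maximum entropy of byte-valued data and normalizes the result to [0.0, 1.0].

```python
import math
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    """Normalized Shannon entropy of a block of bytes, in [0.0, 1.0].

    H(X) = -sum(P(x) * log2(P(x))); dividing by 8 (the maximum
    entropy of byte-valued data) normalizes the result.
    """
    counts = Counter(block)
    n = len(block)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / 8.0

def block_entropies(data: bytes, block_size: int = 256) -> list:
    """Entropy value for each 256-byte block of the data object."""
    return [shannon_entropy(data[i:i + block_size])
            for i in range(0, len(data), block_size)]
```

A block of identical bytes yields an entropy of 0.0, while a block containing every byte value exactly once yields 1.0.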
The entropy values obtained for each of the blocks may then be set out as illustrated in exemplary Table 4 below.
Table 4 (an exemplary plot of entropy value against block number, Block0 to Blockn)
Table 4 shows an exemplary plot for a malware data object. As can be seen, most of the blocks exhibit high entropy values, hence indicating that this data object is highly likely an anomalous data object. The results obtained from the entropy computations may also be plotted as illustrated in Figure 5 whereby plots 505 and 510 represent entropy lines that may be associated with anomalous data objects, e.g. malware, while plot 515 which has lower entropy values may represent a non-anomalous data object. In embodiments of the invention, these patterns or plots may be used as digital fingerprints of the data object. This result will then be passed to the next module whereby it will be utilized to select the most suitable predefined forensic algorithm that is to be applied by process 400.
Process 400 then proceeds to step 447 whereby process 400 calls a forensic analysis function to perform a deeper investigation of the data object, e.g. when the forensic analysis values and/or the entropy patterns (e.g. Table 4) show that there is the possibility that the data object may contain malware.
If it was previously determined at step 430 that the security level of the data object has been set to a normal level, the forensic analysis will be performed based on solely the blocks of the data object. However, if it was previously determined at step 430 that the security level of the data object was set to a high level, the forensic analysis will be performed based on the blocks of the data object together with the type and platform information retrieved at step 435 and 440 for each of the blocks.
The forensic module will generate forensic values that represent both the investigation method adopted and a similarity output that is between 1 and 10, whereby a lower value represents a lower risk (lesser match) and a higher value represents a higher risk (higher match). One skilled in the art will recognize that existing forensic analysis functions may be used without departing from this invention. By way of an example, the exemplary (but not limiting) pseudocode set out below may be used for the forensic analysis.
Pseudocode for Forensic Analysis

Forensic_t[0][Entropy_method][result]
Forensic_t[1][FbHash_method][result]
Forensic_t[n-1][until the available method][result]

Enum {METHOD1, METHOD2, METHOD3, ..., METHODn-1} LIST;
Typedef ARRAY[x][y][z] Forensic;
Declare pointer pDATA = Data Object;
Declare variable Result[];
I = array{LIST};

For X = 0 TO X < COUNT(LIST)
    FUNCTION_FORENSIC(pDATA, X, LIST{X}, Result[X])
    (the objective of this function is to perform a call back to an enumerated method in a list whereby the result will be matched to the method used)
    Forensic[X], [LIST{X}], [RESULT[X]];
    (this buffer value will be used to track the method used, the method's name and its own result which is to be used later for the feature extract function)
    X = X + 1;

FUNCTION_FORENSIC(pDATA, X, LIST{X}, Result[X])
    Declare local = sizeof(pDATA);
    Declare LResult = 0;
    Switch (LIST{X})
        Case METHOD1:
            Analyse_M1(local, pDATA); (each function will be called based on the case statement)
            Result[X] = LResult
        Case METHOD2:
            Analyse_M2(local, pDATA);
            Result[X] = LResult
        Case METHODn-1:
            Analyse_Mn-1(local, pDATA);
            Result[X] = LResult

In embodiments of the invention, other types of forensic analysis may be carried out and this may comprise the step of hashing the data object using hashing methods known in digital forensics, such as, but not limited to, the Frequency-Based Similarity hashing scheme (FbHash).
One skilled in the art will recognize that other types of investigative methods could be used without departing from this invention and the choice is left to one skilled in the art so that process 400 has the ability to update its repertoire of methods when new methods are discovered and added. An example of the FbHash scheme is set out below. A similarity score may be calculated whereby D1 is the data object and D2 is a dataset of known malware. Values for D1 and D2 may be obtained as follows:

Digest(D1) = W^D1_ch0, W^D1_ch1, W^D1_ch2, …, W^D1_ch(n-1)
Digest(D2) = W^D2_ch0, W^D2_ch1, W^D2_ch2, …, W^D2_ch(n-1)

where the function Digest() is a storage array, W_ch(n-1) is a chunk score generated by a FbHashing algorithm, n represents the number of chunk scores used (or the number of blocks of the data object) and a FbHashing algorithm represents one of many methods for digital investigation. Once values have been obtained for D1 and D2, a final similarity score, Similarity(D1, D2), may be computed using a cosine similarity method as follows:

Similarity(D1, D2) = (Σi W^D1_chi × W^D2_chi) / (√(Σi (W^D1_chi)²) × √(Σi (W^D2_chi)²)) × 100

Final similarity scores will range between 0 and 100 whereby a score of 100 implies that D1 is identical with D2 while a score of 0 implies that D1 does not match with D2 at all.
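The cosine-similarity computation can be sketched in Python as follows, with each list holding the chunk scores of a digest:

```python
import math

def similarity(digest1, digest2):
    """Cosine similarity of two chunk-score digests, scaled to 0-100:
    100 means the digests point in the same direction (a full match),
    0 means they share no common component at all."""
    dot = sum(a * b for a, b in zip(digest1, digest2))
    norm1 = math.sqrt(sum(a * a for a in digest1))
    norm2 = math.sqrt(sum(b * b for b in digest2))
    return 100.0 * dot / (norm1 * norm2)
```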
At step 450, process 400 then determines whether the data object is to be disassembled or interpreted. If process 400 determines that the data object comprises substantial amounts of ASCII, process 400 will then cause the data object to be interpreted (steps 460, 482, 484, 486) while if process 400 determines that the data object comprises substantial amounts of machine language, process 400 will cause the data object to be disassembled (steps 455, 465, 470, 475).
If process 400 determines that the data object is to be disassembled, it will proceed to step 455. At this step, process 400 performs a listing process on the TSP lookup table of the data object. Based on the listed platform type, process 400 will decompile the file and list all internal and external calls at step 470, list all call functions at step 465, and list all used third-party library files at step 475. By listing out the common malware behaviours, which typically comprise commands that require access to the file system, Ethernet I/O or any network access, process 400 is able to identify portions of the data object that may possibly constitute malware.
Process 400 provides all this information to a table array which comprises predefined settings to identify commands that comprise normal behaviour or commands which may be threats. For example, commands that perform DNS requests may be flagged as potential threats. Based on this data, process 400 may proceed to call the selected function and if its behaviour is abnormal, the data object will be prevented from executing the function, the data object area will be locked and the user will be alerted.
Process 400 then triggers a numeral counter in Counter 1 indicating that the data object has been disassembled while at the same time resetting a numeral counter in Counter 2.
In other words, during the disassembling step, the data object is decompiled so that its behaviour may be analysed in detail. In embodiments of the invention, the raw binary of the data object may be obtained so that malware embedded inside the data object may be discovered and such an exemplary plot of the decompiled data is illustrated in Figure 6.
Figure 6 shows that different types of operations may be marked according to operation type. In this exemplary plot, the x axis represents the frequency of a type of operation while the y axis represents the type of operation and may be plotted using the exemplary pseudocode below:

Diss_table[0][x1_criterianame][operationtype][0 or 1]
to
Diss_table[n-1][xn-1_criterianame][operationtypen-1][0 or 1]

Returning to Figure 4, if process 400 determines that the data object is to be interpreted, process 400 proceeds to step 460. The data object is de-scripted at this step and divided into various parts for further analysis. Obfuscated strings/encrypted blocks are provided to step 482, call functions by third parties are provided to step 484 and references to external links are provided to step 486. The outcome is then stored at step 488 whereby a numeral counter in Counter 2 is triggered indicating that the data object has been interpreted while at the same time resetting a numeral counter in Counter 1.
The outcome from steps 480 and 488 are then provided to data model generator at step 490 which may then use this information together with all the information generated by process 400 thus far to compute a dataset model A.
In accordance with embodiments of the invention, process 400 uses the information generated thus far to compute a result value for each block of the data object, whereby each result value comprises 3 bytes so that a data image model may be generated.
The most significant bit (MSB) and the second MSB of the first byte of the result value is used to indicate whether the block of the data object was disassembled or interpreted, while the remaining 6 bits of the first byte which are parsed with the second byte (producing 14 bits) are used to represent the obfuscation value of the block of the data object (as computed at step 445). The third byte of the result value is then used to represent the forensic analysis value (as computed at step 447).
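The 3-byte result-value layout just described can be sketched as follows; the exact bit patterns used for the disassembled/interpreted flags are an assumption made for illustration.

```python
def pack_result(disassembled: bool, obfuscation: int, forensic: int) -> bytes:
    """Pack one block's result value into 3 bytes: 2 flag bits
    (here 10 = disassembled, 01 = interpreted - an assumed encoding),
    a 14-bit obfuscation value, then an 8-bit forensic value."""
    assert 0 <= obfuscation < (1 << 14)
    assert 0 <= forensic < (1 << 8)
    flags = 0b10 if disassembled else 0b01
    packed = (flags << 22) | (obfuscation << 8) | forensic
    return packed.to_bytes(3, "big")
```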
In accordance with other embodiments of the invention, for hardware anomalies, the first 12 bits including the MSB (the 1st byte plus part of the 2nd byte) will represent the power monitor (current/voltage that will be converted by an ADC), the next 10 bits will represent the temperature values, and the last 2 bits will indicate CPU loads (e.g. the value 01 represents a low load, 10 represents a medium load and 11 represents a high load).
Additional information relating to the NAND flash may also be represented, such as its page/sector/plane address, the status of the file (locked or not locked) and its digital signature.
Process 400 will then generate the data model A based on the appropriate features that were extracted above.
In embodiments of the invention, the data model A may be represented as an image comprising 512 x 512 pixels, whereby each pixel 701 represents a result value (which comprises 3 bytes). This is illustrated in Figure 7A where each line in the image represents a line of pixels.
Hence, once process 400 has produced the result values for the blocks of the data object, these result values are then plotted onto the 512 x 512 canvas to produce a data image model 702 as illustrated in Figure 7B. In this embodiment of the invention, each unique result value is represented by a unique image representation such as, but not limited to, a corresponding shaded/coloured pixel on image 702.
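The plotting of result values onto the 512 x 512 canvas can be sketched as follows (a simplified illustration; the shading/colouring scheme is left abstract here, and each 3-byte result value is treated directly as a 24-bit pixel value):

```python
def build_data_image(result_values, size=512):
    """Plot 3-byte result values row by row onto a size x size canvas.

    Each unique result value maps to a unique pixel value (the 24-bit
    integer itself, which an image library could render as an RGB
    shade/colour as in image 702)."""
    canvas = [[0] * size for _ in range(size)]
    for i, rv in enumerate(result_values):
        if i >= size * size:
            break  # canvas full; any remaining blocks are not plotted
        colour = int.from_bytes(rv, "big")  # 24-bit pixel value
        canvas[i // size][i % size] = colour
    return canvas
```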
Once image 702 has been generated, image 702 will then be provided to process 703 as illustrated in Figure 7C. At step 705, the target image (i.e. image 702) is provided to an image classification model such as, but not limited to, a convolutional neural network (CNN) model, a deep neural network (DNN) model or a recurrent neural network (RNN) model. The detailed workings of these models are omitted for brevity. The image classification model then generates feature maps and rectified feature maps for image 702 at steps 710 and 715, and then proceeds to generate an output at step 720. This output is then compared with a database of outputs that represent known malware and if process 703 determines that the output at step 720 matches an output of known malware, it is determined that the data object is malicious.
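A single feature-map pass of the kind performed at steps 710 and 715 can be illustrated as follows (a toy convolution with ReLU rectification; an actual CNN/DNN/RNN would apply many learned kernels and further layers, which are omitted here as in the description above):

```python
def feature_map(image, kernel):
    """Slide a kernel over a 2-D image (valid convolution, step 710),
    then apply ReLU to obtain the rectified feature map (step 715)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(max(0, s))  # ReLU rectification
        out.append(row)
    return out
```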
In another embodiment of the invention, dataset model A may comprise a collection of the result values for all the blocks of the data object. With reference to Figure 8, process 800 then searches through a preloaded database of abnormal data objects to identify a database model that is similar to that of dataset model A. This takes place at step 805.
As it is unlikely that a perfect match may be found, process 800 then performs a machine learning analysis to determine a matching score that is to be assigned to dataset model A based on the pre-trained dataset model that is most similar to it. This takes place at step 810. In embodiments of the invention, the machine learning algorithm adopted for this step may comprise a supervised machine learning algorithm that has been trained using a set of pre-trained dataset models. In embodiments of the invention, the pre-trained dataset models may be prepared using external processing devices and stored within an SSD storage device accessible by process 800. To facilitate ease of access, an indexed file stored within the storage device may be accessible by process 800 whereby this indexed file contains information about the database list that may be called.
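The matching-score step can be sketched as a nearest-model search (an illustration only; cosine similarity stands in for the trained supervised model, and the dictionary of pre-trained models with its names is hypothetical):

```python
import math

def matching_score(model_a, pretrained_models):
    """Return (score, name) of the pre-trained dataset model most
    similar to dataset model A, using cosine similarity over the
    collections of result values as a stand-in for the trained model."""
    def cosine(u, v):
        dot = sum(x * y for x, y in zip(u, v))
        nu = math.sqrt(sum(x * x for x in u))
        nv = math.sqrt(sum(y * y for y in v))
        return dot / (nu * nv) if nu and nv else 0.0
    # max over (score, name) tuples picks the highest-scoring model
    return max((cosine(model_a, m), name)
               for name, m in pretrained_models.items())
```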
Process 800 then determines at step 815 if the generated score exceeds a predetermined threat threshold. If process 800 determines that the generated score implies that the dataset model A is unsafe, process 800 will proceed to step 820 which blocks the data object from being installed onto the system, creates a log map showing where the data object resides in quarantine, and process 800 then ends. Alternatively, if process 800 determines from the generated score that the data object is safe, process 800 allows the data object to pass through at step 820 to be installed in the data storage of the system and process 800 then ends.
In other embodiments of the invention, both processes 703 and 800 may be carried out either sequentially or concurrently in order to put the data object through a more stringent screening process.
Numerous other changes, substitutions, variations and modifications may be ascertained by those skilled in the art and it is intended that the present invention encompass all such changes, substitutions, variations and modifications as falling within the scope of the appended claims.

Claims (16)

CLAIMS
1. A system for detecting data anomalies in a received data object, the system comprising: a processing unit; and a non-transitory media readable by the processing unit, the media storing instructions that when executed by the processing unit, cause the processing unit to: determine a security posture of the data object based on a digital signature and a file type of the data object; generate a type-security-platform (TSP) lookup table based on the security posture and the features of the data object associated with the security posture, and generate an obfuscation value and a forensic value for the received data object based on the TSP lookup table; generate disassembled values or interpreted values for the data object; compute a result value for each block of the received data object, whereby the result value for each of the blocks is generated based on the disassembled or interpreted values, an obfuscation value and a forensic value associated with the block of the received data object; create a data model based on all the result values of the data object; and process the data model using an artificial intelligence (AI) algorithm to determine if the data object contains data anomalies and wherein the instructions to generate an obfuscation value for the received data object comprises instructions for directing the processing unit to: divide the data object into blocks of data; and calculate a Shannon Entropy value for each block of data; and wherein the instructions to generate a forensic value for the received data object comprises instructions for directing the processing unit to: divide the data object into blocks of data; and calculate a similarity score for each block of data using a Frequency-Based Similarity hashing scheme.
2. The system according to claim 1 wherein the instructions to generate the result value for each block of the received data object comprises instructions for directing the processing unit to generate a result value comprising three bytes for each block of the received data, whereby for each block, the instructions direct the processing unit to: set a most significant bit (MSB) and a second MSB of a first byte of the result value based on the disassembled or interpreted value of the data object; parse a remainder of the bits of the first byte with a second byte of the result value, and set the parsed result based on the obfuscation value associated with the block; and set a value of a third byte based on the forensic value associated with the block.
3. The system according to claim 1 wherein the instructions to create a data model based on all the result values of the data object comprises instructions for directing the processing unit to generate a data image model whereby each pixel in the data image model is associated with a unique result value, wherein each unique result value is represented in the data image model by a unique image.
4. The system according to claim 3 wherein the AI algorithm used to process the data model comprises: a convolutional neural network (CNN) model, a deep neural network (DNN) model or a recurrent neural network (RNN) model.
5. The system according to claim 1 wherein the instructions to process the data model using the artificial intelligence (AI) algorithm comprises instructions for directing the processing unit to compare the data model with data models contained within a database, wherein the comparison is performed using machine learning algorithms.
6. The system according to claim 1 wherein the media further comprises instructions for directing the processing unit to: provide a virtual file system that is configured to receive and store the data object, whereby the virtual file system causes the processing unit to perform all the steps within the virtual file system.
7. The system according to claim 1 wherein the digital signature comprises a magic number associated with the data object.
8. The system according to claim 1 wherein the features of the data object associated with the security posture comprises a platform type and a file type of the data object.
9. A method for detecting data anomalies in a received data object using an Artificial Intelligence (AI) module, the method comprising: determining, using an analyser module provided within the AI module, a security posture of the data object based on a digital signature and a file type of the data object; generating, using the analyser module and a detector module provided within the AI module, a type-security-platform (TSP) lookup table based on the security posture and the features of the data object associated with the security posture, and generate an obfuscation value and a forensic value for the received data object based on the TSP lookup table; generating, using a disassembling and interpreting module provided within the AI module, disassembled values or interpreted values for the data object; computing, using a block structuring module provided within the AI module, a result value for each block of the received data object, whereby the result value for each of the blocks is generated based on the disassembled or interpreted values, an obfuscation value and a forensic value associated with the block of the received data object; creating, using a model generator module provided within the AI module, a data model based on all the result values of the data object; and processing, using an AI threat module provided within the AI module, the data model using an artificial intelligence (AI) algorithm to determine if the data object contains data anomalies and wherein the step of generating the obfuscation value for the received data object comprises the steps of: dividing the data object into blocks of data; and calculating a Shannon Entropy value for each block of data; and wherein the step of generating a forensic value for the received data object comprises the steps of: dividing the data object into blocks of data; and calculating a similarity score for each block of data using a Frequency-Based Similarity hashing scheme.
10. The method according to claim 9 wherein the step of generating the result value for each block of the received data object comprises the steps of: generating a result value comprising three bytes for each block of the received data, whereby for each block, the method: sets a most significant bit (MSB) and a second MSB of a first byte of the result value based on the disassembled or interpreted value of the data object; parses a remainder of the bits of the first byte with a second byte of the result value, and set the parsed result based on the obfuscation value associated with the block; and sets a value of a third byte based on the forensic value associated with the block.
11. The method according to claim 9 wherein the step of creating a data model based on all the result values of the data object comprises the steps of: generating a data image model whereby each pixel in the data image model is associated with a unique result value, wherein each unique result value is represented in the data image model by a unique image.
12. The method according to claim 11 wherein the AI algorithm used to process the data model comprises: a convolutional neural network (CNN) model, a deep neural network (DNN) model or a recurrent neural network (RNN) model.
13. The method according to claim 9 wherein the step of processing the data model using the artificial intelligence (AI) algorithm comprises the step of: comparing the data model with data models contained within a database, wherein the comparison is performed using machine learning algorithms.
14. The method according to claim 9 wherein before the step of determining the security posture of the data object based on the digital signature and the file type of the data object; the method further comprises the step of: providing, using the analyser module, a virtual file system to receive and store the data object, whereby the virtual file system causes all the steps of the method to be executed within the virtual file system.
15. The method according to claim 9 wherein the digital signature comprises a magic number associated with the data object.
16. The method according to claim 9 wherein the features of the data object associated with the security posture comprises a platform type and a file type of the data object.

(Front-page drawing: AI Core comprising Data Analyser 111, Platform Detector 112, Disassembling/Interpreting 113, Block Structuring, Feature Extractor 131, Model Generator 132, AI Threat 107 and AI Advanced Analyser 106 modules.)
NZ767245A 2020-03-09 2020-08-20 System and method for detecting data anomalies by analysing morphologies of known and/or unknown cybersecurity threats NZ767245B2 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
SG10202002125 2020-03-09
SG10202002125QA SG10202002125QA (en) 2020-03-09 2020-03-09 System and method for detecting data anomalies by analysing morphologies of known and/or unknown cybersecurity threats
US16/946,245 US11082441B1 (en) 2020-03-09 2020-06-11 Systems and methods for detecting data anomalies by analysing morphologies of known and/or unknown cybersecurity threats
US16/946,245 2020-06-11
PCT/SG2020/050441 WO2021183043A1 (en) 2020-03-09 2020-07-30 System and method for detecting data anomalies by analysing morphologies of known and/or unknown cybersecurity threats
SGPCT/SG2020/050441 2020-07-30

Publications (2)

Publication Number Publication Date
NZ767245A true NZ767245A (en) 2020-10-30
NZ767245B2 NZ767245B2 (en) 2021-02-02


Also Published As

Publication number Publication date
JP7092939B2 (en) 2022-06-28
DK3899770T3 (en) 2022-10-24
IL289367B (en) 2022-06-01
IL289367A (en) 2022-02-01
JP2022522383A (en) 2022-04-19

Similar Documents

Publication Publication Date Title
AU2020223632B2 (en) System and method for detecting data anomalies by analysing morphologies of known and/or unknown cybersecurity threats
US20210256127A1 (en) System and method for automated machine-learning, zero-day malware detection
Mosli et al. Automated malware detection using artifacts in forensic memory images
EP2472425B1 (en) System and method for detecting unknown malware
US8312546B2 (en) Systems, apparatus, and methods for detecting malware
US10372909B2 (en) Determining whether process is infected with malware
CN106557697B (en) System and method for generating a set of disinfection records
US10878087B2 (en) System and method for detecting malicious files using two-stage file classification
US11475133B2 (en) Method for machine learning of malicious code detecting model and method for detecting malicious code using the same
JP2019003598A (en) System and method for detecting abnormal events
Yücel et al. Imaging and evaluating the memory access for malware
Eskandari et al. To incorporate sequential dynamic features in malware detection engines
US11916937B2 (en) System and method for information gain for malware detection
CA3125101A1 (en) System and method for detecting data anomalies by analysing morphologies of known and/or unknown cybersecurity threats
Darus et al. Android malware classification using XGBoost on data image pattern
O'Kane et al. N-gram density based malware detection
US20160197730A1 (en) Membership query method
US11487876B1 (en) Robust whitelisting of legitimate files using similarity score and suspiciousness score
CN108319853B (en) Virus characteristic code processing method and device
NZ767245A (en) System and method for detecting data anomalies by analysing morphologies of known and/or unknown cybersecurity threats
NZ767245B2 (en) System and method for detecting data anomalies by analysing morphologies of known and/or unknown cybersecurity threats
CN114925369A (en) Static analysis method and system for business system container safety
CN111310162B (en) Trusted computing-based equipment access control method, device, product and medium
CN112347477A (en) Family variant malicious file mining method and device
EP3588349B1 (en) System and method for detecting malicious files using two-stage file classification

Legal Events

Date Code Title Description
PSEA Patent sealed