CN113312621B - Mimicry-based Android malware dynamic detection method using enhanced deep learning - Google Patents

Mimicry-based Android malware dynamic detection method using enhanced deep learning

Info

Publication number
CN113312621B
CN113312621B CN202110612058.1A CN202110612058A CN113312621B CN 113312621 B CN113312621 B CN 113312621B CN 202110612058 A CN202110612058 A CN 202110612058A CN 113312621 B CN113312621 B CN 113312621B
Authority
CN
China
Prior art keywords
enhanced, time step, model, lstm, value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110612058.1A
Other languages
Chinese (zh)
Other versions
CN113312621A (en)
Inventor
郭薇
张国栋
周翰逊
陈晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Diti Information Technology Co ltd
Original Assignee
Shenzhen Morning Intellectual Property Operations Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Morning Intellectual Property Operations Co ltd
Priority to CN202110612058.1A
Publication of CN113312621A
Application granted
Publication of CN113312621B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50 Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55 Detecting local intrusion or implementing counter-measures
    • G06F21/56 Computer malware detection or handling, e.g. anti-virus arrangements
    • G06F21/566 Dynamic detection, i.e. detection performed at run-time, e.g. emulation, suspicious activities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The invention discloses a mimicry-based dynamic Android malware detection method built on enhanced deep learning, in which a mimicry-based dynamic Android malware detection model using enhanced deep learning is constructed. Based on the model, the method comprises: preprocessing the input data and then feeding the preprocessed data into a heterogeneous redundancy model structure, which comprises three functionally equivalent heterogeneous redundancies: an enhanced Long Short-Term Memory (LSTM) model, an enhanced Gated Recurrent Unit (GRU) model, and an enhanced capsule network model. In the mimicry-architecture-based dynamic Android malware detection model, the method uses the mimicry architecture and the mimicry defense principle so that the model can autonomously defend against network attacks, which strengthens the model's defensive performance.

Description

Mimicry-based Android malware dynamic detection method using enhanced deep learning
Technical Field
The invention relates to the technical field of mobile internet security, and in particular to a mimicry-based Android malware dynamic detection method using enhanced deep learning.
Background
In recent years, the Android operating system has gained popularity thanks to the openness of the Android framework, and with its wide use the number of Android applications has also grown greatly. This rapid growth has attracted criminals to the Android software market, who use Android malware to steal the private data of Android users and thereby obtain illegal profits. The amount of Android malware increases year by year alongside the amount of legitimate Android software.
While the amount of Android malware has grown dramatically, its evasion techniques have also advanced rapidly in recent years. In 2020, Google removed 17 Android applications containing the Joker malware family. Applications carrying Joker masquerade as legitimate software and can partly provide normal functionality to the user, and 64 new Joker variants appeared within just a few weeks.
Although various Android malware detection methods have been proposed in recent years, the mobile network security situation remains serious as the amount of Android malware keeps increasing and attack techniques keep evolving.
Therefore, how to detect Android malware efficiently and accurately remains a problem to be solved.
Disclosure of Invention
In view of the above, the present disclosure provides a mimicry-based dynamic Android malware detection method using enhanced deep learning to solve the above technical problems.
The technical solution provided by the invention is, specifically, the following:
the method constructs a mimicry-based dynamic Android malware detection model built on enhanced deep learning;
based on the model, the method comprises:
preprocessing input data and then feeding the preprocessed data into a heterogeneous redundancy model structure, wherein the heterogeneous redundancy model structure comprises three functionally equivalent heterogeneous redundancies: an enhanced LSTM model, an enhanced GRU model, and an enhanced capsule network model;
and randomly dispatching the preprocessed input data through the dynamic detection model, randomly selecting the enhanced LSTM model, the enhanced GRU model, or the enhanced capsule network model for training, and obtaining output data, thereby completing dynamic detection of the Android malware.
The defensive performance of the dynamic Android malware detection model is characterized as follows:
executive A1 is the enhanced LSTM model, executive A2 is the enhanced GRU model, and executive A3 is the enhanced capsule network model;
assuming that the probability that an attacker successfully attacks the enhanced LSTM model on its own is P_LSTM, the probability that the attacker successfully attacks the enhanced GRU model on its own is P_GRU, and the probability that the attacker successfully attacks the enhanced capsule network model on its own is P_Capsule, the probability of a successful attack on the mimicry-based dynamic Android malware detection model can be calculated as follows:
P = P_LSTM * V_i + P_GRU * V_i + P_Capsule * V_i   (1)
where V_i denotes the probability that one of the three executives A1, A2, A3 is randomly selected as the training/learning model; V_i takes the following value:
V_i = 1/3   (2)
since the probabilities P_LSTM, P_GRU, and P_Capsule that the three executives A1, A2, and A3 are each successfully attacked on their own all lie in the interval [0,1], the probability P of a successful attack on the whole mimicry-architecture-based dynamic Android malware detection model satisfies the following inequality:
min{P_LSTM, P_GRU, P_Capsule} ≤ P ≤ max{P_LSTM, P_GRU, P_Capsule}   (3)
the enhanced LSTM model is obtained by inputting x t Enhancement processing x t =x t +x t-1 For cell state c t Enhancement treatment c t =c t +c t-1 Capturing input x using enhanced LSTM model t And cell state c t More history of API call sequence information;
when the time step is t, inputting x to the enhanced LSTM hidden unit t =x t +x t-1 This allows the input at each time step t to include the input x of the last time step t-1 t-1 C is increased t =c t +c t-1 The method comprises the steps of carrying out a first treatment on the surface of the The enhanced LSTM hidden unit further comprises a hidden state h t-1 Cell state c at time step t-1 t-1 The enhanced LSTM hidden unit outputs a hidden state h comprising a time step t t And cell state c of time step t t
The information flow of the enhanced LSTM model is:
1): updating an input value x of an enhanced LSTM hidden unit at time step t t =x t +x t-1 I.e. the input value x of the LSTM hidden unit enhanced at time step t t And the input value x of the enhanced LSTM hidden unit at time step t-1 t-1 Adding to obtain updated x t X is updated t As input data for the enhanced LSTM hidden unit at time step t;
x t =x t +x t-1 (4)
2): calculating forgetting value f of enhanced LSTM hidden unit at time step t t The method comprises the steps of carrying out a first treatment on the surface of the Input x of LSTM hidden unit enhanced at time step t t Enhanced hidden state h of LSTM hidden unit at time step t-1 t-1 And the cell status value c of the enhanced LSTM hidden unit at time step t-1 t-1 Leading in a sigmoid activation function to obtain the forgetting value f of the enhanced LSTM hidden unit at the time step t t
The sigmoid activation function expression is:
f t =sigmoid(x t W xf +h t-1 W hf +c t-1 W cf +b f ) (6)
wherein W is xf ,W hf ,W cf Is to calculate the forgetting value f t Weight matrix, b, as needed f Is a bias matrix;
3): will beInput x in enhanced LSTM hidden unit at time step t t Enhanced hidden state h of LSTM hidden unit at time step t-1 t-1 And the cell status value c of the enhanced LSTM hidden unit at time step t-1 t-1 Leading into a sigmoid activation function to obtain an input value i of an enhanced LSTM hidden unit at time step t t
i t =sigmoid(x t W xi +h t-1 W hi +c t-1 W ci +b i ) (7)
W xi ,W hi ,W ci Are respectively with x t ,h t-1 ,c t-1 Corresponding weight matrix, b i Is a bias matrix;
4): input x of LSTM hidden unit enhanced at time step t t And the hidden state h of the enhanced LSTM hidden unit at time step t-1 t-1 Leading into tanh activation function, and obtaining candidate cell state value of enhanced LSTM hidden unit at time step t
W xc ,W hc Are respectively with x t ,h t-1 Corresponding weight matrix, b c Is a bias matrix;
5): cell status value c of enhanced LSTM hidden unit at time step t-1 t-1 And the forgetting value f of the enhanced LSTM hidden unit at time step t t Candidate cell state values for enhanced LSTM hidden units at time step t by Hadamard productAnd the input value i of the enhanced LSTM hidden unit at time step t t Performing Hadamard product, and adding the obtained two Hadamard product results to obtain the cell state value c of the enhanced LSTM hidden unit at time step t t
c t =c t +c t-1 (10)
6): cell state value c of enhanced LSTM hidden unit at time step t t =c t +c t-1 Cell state value c of enhanced LSTM hidden unit at time step t t And the cell status value c of the enhanced LSTM hidden unit at time step t-1 t-1 Added updated c t
7): input x of enhanced LSTM hidden unit at time step t-1 t Enhanced hidden state h of LSTM hidden unit at time step t-1 t-1 And a cell state value c of the enhanced LSTM hidden unit at time step t t Leading into sigmoid activation function, and obtaining output value o of enhanced LSTM hidden unit at time step t t
o t =sigmoid(x t W xo +h t-1 W ho +c t W co +b o ) (11)
W xo ,W ho ,W co Are respectively with x t ,h t-1 ,c t Corresponding weight matrix, b o Is a bias matrix;
8): cell state value c of enhanced LSTM hidden unit at time step t t Leading in the tanh activation function to obtain a result and an output value o of the enhanced LSTM hidden unit at the time step t t And (3) carrying out Hadamard product to finally obtain a hidden state value h of the enhanced LSTM hidden unit at the time of the time step t t
h t =o t *tanh(c t ) (12)
The enhanced GRU model applies an enhancement to the input x_t, namely x_t = x_t + x_{t-1}, so that the input data at time step t contains both the input of the current time step and the input information of the previous time step t-1.
The information flow of the enhanced GRU model is as follows:
1): update the input value of the enhanced GRU model at time step t as x_t = x_t + x_{t-1}, i.e. add the input value x_t of the enhanced GRU model at time step t and the input value x_{t-1} of the enhanced GRU model at time step t-1 to obtain the updated x_t, and use the updated x_t as the input data of the enhanced GRU model at time step t;
x_t = x_t + x_{t-1}   (13)
2): calculate the reset gate r_t of the enhanced GRU model at time step t; feed the input x_t of the enhanced GRU model at time step t and the hidden state h_{t-1} of the enhanced GRU model at time step t-1 into a sigmoid activation function to obtain the reset value r_t of the enhanced GRU model at time step t;
r_t = sigmoid(W_r * [h_{t-1}, x_t])   (14)
where W_r is the corresponding weight matrix;
3): calculate the update gate u_t of the enhanced GRU model at time step t; feed the input x_t of the enhanced GRU model at time step t and the hidden state h_{t-1} of the enhanced GRU model at time step t-1 into a sigmoid activation function to obtain the update value u_t of the enhanced GRU model at time step t;
u_t = sigmoid(W_z * [h_{t-1}, x_t])   (15)
4): take the Hadamard product of the reset gate r_t of the enhanced GRU model at time step t and the hidden state h_{t-1} of the enhanced GRU model at time step t-1; then feed the obtained result, together with the input value x_t of the enhanced GRU model at time step t, into a tanh activation function to obtain the candidate hidden state value h̃_t of the enhanced GRU model at time step t;
h̃_t = tanh(W * [r_t * h_{t-1}, x_t])   (16)
the tanh activation function expression is:
tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))   (17)
5): take the Hadamard product of (1 - u_t) and h̃_t, take the Hadamard product of the update gate u_t of the enhanced GRU model at time step t and h_{t-1}, and combine the two Hadamard product results to obtain the hidden state h_t of the enhanced GRU model at time step t;
h_t = (1 - u_t) * h̃_t + u_t * h_{t-1}   (18)
here x_t denotes the input value of the GRU unit after the enhancement of the input x_t at time step t, h_{t-1} is the hidden state value at time step t-1, h_t is the hidden state value at time step t, σ denotes the corresponding activation function, r_t is the reset gate at time step t, u_t is the update gate at time step t, and h̃_t is the candidate hidden state value at time step t.
The enhanced capsule network model uses s_j + s_{j-1} to update the s_j of the current capsule unit, so that the dynamic changes of the API call sequence can be learned fully.
The information flow of the enhanced capsule network model is as follows:
the required inputs include the number of routing iterations r and the prediction vectors û_{j|i} of the layer-L capsule units;
1): initialize the vector b_ij; in the first iteration the initial value of b_ij is zero, and b_ij is dynamically updated as the number of iterations r changes;
b_ij = 0   (19)
2): for all layer-L capsule units i, apply the softmax operation to the vector b_ij to obtain the value of the vector c_i;
c_i = softmax(b_ij)   (20)
3): after the coupling coefficients c_ij of all layer-L capsule units i have been obtained, the information flows to the capsule units of the upper layer, i.e. layer L+1; the input vector s_j of each capsule unit is the weighted sum of all possible incoming units, i.e. the sum of the products of the coupling coefficients c_ij and all possible prediction vectors û_{j|i};
s_j = Σ_i c_ij û_{j|i}   (21)
4): between different neurons of the same layer, the input of the latter neuron is s_j = s_j + s_{j-1}, which strengthens the degree of connection between the preceding and following capsule units;
s_j = s_j + s_{j-1}   (22)
5): apply the squash nonlinear compression operation to all the enhanced vectors s_j; after the compression operation, the vector s_j becomes the vector v_j transmitted to the capsule unit of the upper layer;
v_j = squash(s_j)   (23)
6): dynamically update the weight b_ij; the weight update is performed each time the data in the capsule network completes one forward pass, i.e. the dot product of the output vector v_j of the layer-(L+1) capsule unit and the prediction vector û_{j|i} obtained from the layer-L capsule units is added to the original weight to obtain the new weight, realizing the dynamic update of the weight;
b_ij = b_ij + û_{j|i} · v_j   (24)
after step 6) is finished, jump back to step 3) to restart the process, and repeat r times;
the invention has the beneficial effects that:
the invention provides a simulated android malicious software dynamic detection method based on enhanced deep learning. In the android malicious software dynamic detection model based on the mimicry architecture, the mimicry architecture and the mimicry defense principle are utilized, so that the model can autonomously defend against network attacks, and the defense performance of the model is enhanced.
Therefore, the dynamic android malicious software detection model based on the mimicry architecture can detect the android malicious software and simultaneously ensure that the android malicious software is not bothered by network security, so that the attack resistance of the detection model is enhanced, and the android malicious software detection can be efficiently and accurately carried out by the detection model.
The dynamic android malicious software detection model based on the mimicry architecture not only can ensure the capability of detecting the android malicious software, but also can enhance the anti-attack performance of the model and improve the defending performance of the model.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic structural diagram of the mimicry-architecture-based Android malware dynamic detection model provided in an embodiment of the present disclosure;
Fig. 2 is an attack diagram of the mimicry-architecture-based Android malware dynamic detection model provided in an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of the hidden unit structure of the enhanced LSTM model provided by an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of the hidden unit structure of the enhanced GRU model provided by an embodiment of the disclosure;
Fig. 5 is a schematic structural diagram of the enhanced capsule network provided by an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the invention. Rather, they are merely examples of systems consistent with aspects of the invention as detailed in the accompanying claims.
In the prior art, common Android malware detection methods include static detection, in which the Android software does not need to be actually run and, in the normal case, only static features are collected as input features of the detection model; dynamic detection, in which runtime features of the Android software are required as input features of the detection model; and hybrid detection, which is a fusion of the static and dynamic detection methods.
in addition, as security issues in network space are emphasized, mimicry architectures are widely used as mimicry defenses in terms of network security. Mimicry Defense (MD) is an active Defense, and utilizes a non-similar redundancy architecture to implement a multidimensional dynamic reconfiguration mechanism, and under the condition of guaranteeing functional equivalence, utilizes uncertainty of the mimicry architecture to resist the threat of network space. The mimicry architecture plays a great role in network space security.
This embodiment provides a mimicry-based dynamic Android malware detection method using enhanced deep learning, which combines the mimicry dissimilar redundancy construction principle and builds a mimicry-based dynamic Android malware detection model on top of enhanced deep learning. The input data are first preprocessed and then fed into the heterogeneous redundancy model structure. As shown in Fig. 1, the mimicry-architecture-based dynamic Android malware detection model contains three functionally equivalent heterogeneous redundancies: an enhanced LSTM model, an enhanced GRU model, and an enhanced capsule network model. The preprocessed input data are randomly dispatched by the model, one of the enhanced LSTM model, enhanced GRU model, or enhanced capsule network model is randomly selected for learning and prediction, the input data are used to train the selected model, and the output data are obtained (a dispatch sketch follows this paragraph).
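As an illustration only, the following minimal Python sketch shows one way the random dispatch described above could be organized. The class name, the train_on_batch/predict interface of the wrapped models, and the uniform selection are assumptions of this sketch, not details taken from the patent.

```python
import random

class MimicryDetector:
    """Heterogeneous-redundancy wrapper around three functionally equivalent
    executives (enhanced LSTM, enhanced GRU, enhanced capsule network)."""

    def __init__(self, lstm_model, gru_model, capsule_model):
        # The three executives A1, A2, A3 described in the text.
        self.executives = [lstm_model, gru_model, capsule_model]

    def train_step(self, x_batch, y_batch):
        # Randomly select one executive per batch (uniform probability 1/3)
        # and train only that executive on the preprocessed input.
        executive = random.choice(self.executives)
        executive.train_on_batch(x_batch, y_batch)
        return executive

    def predict(self, x_batch):
        # Prediction is likewise routed to a randomly chosen executive,
        # so an attacker cannot know which model will process the input.
        executive = random.choice(self.executives)
        return executive.predict(x_batch)
```

Because each batch is routed to a uniformly chosen executive, an attacker cannot predict which of the three functionally equivalent models will process a given input, which is the property the defensive analysis below relies on.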
In the mimicry-architecture-based dynamic Android malware detection model, the enhanced LSTM model, the enhanced GRU model, and the enhanced capsule network model improve malware detection and raise the accuracy of malware detection. By following the mimicry dissimilar redundancy construction principle, the mimicry-architecture-based dynamic Android malware detection model also has a degree of defensive capability and higher security.
In the mimicry-architecture-based dynamic Android malware detection model, the mimicry architecture and the mimicry defense principle enable the model to autonomously defend against network attacks and strengthen its defensive performance. The model can therefore detect Android malware while itself remaining undisturbed by network attacks, which enhances its attack resistance and allows it to detect Android malware efficiently and accurately. It not only preserves the ability to detect Android malware, but also enhances the model's attack resistance and improves its defensive performance. This embodiment therefore examines the defensive performance of the mimicry-architecture-based dynamic Android malware detection model;
first, consider the defensive performance of the dissimilar redundancy structure: if the executive set of a dissimilar redundancy structure contains 4 functionally equivalent executives, then each time an attacker wants to attack the dissimilar redundancy structure, the attacker needs to supply an interference (adversarial) sample that perturbs the input data set in order to attack the structure.
Since the executives are functionally equivalent but different algorithms, and the properties of different algorithms differ, the success rate of an attack differs across the executives in the set. Assume the four functionally equivalent executives in the executive set are executive A1, executive A2, executive A3, and executive A4, and that the probabilities that A1, A2, A3, and A4 are each successfully attacked on their own by an attacker are P_A1, P_A2, P_A3, and P_A4, respectively. The probability P of a successful attack on the entire dissimilar redundancy construct can be calculated by the following formula:
P = P_A1 * V_i + P_A2 * V_i + P_A3 * V_i + P_A4 * V_i   (25)
where V_i denotes the probability that one of the four executives A1, A2, A3, A4 is randomly selected as the training/learning model, and n denotes the number of executives in the executive set:
V_i = 1/n   (26)
the probabilities P_A1, P_A2, P_A3, and P_A4 that the four executives A1, A2, A3, and A4 are each successfully attacked on their own all lie in the interval [0,1], and the probability P of a successful attack on the whole dynamic redundancy architecture satisfies the following inequality:
min{P_A1, P_A2, P_A3, P_A4} ≤ P ≤ max{P_A1, P_A2, P_A3, P_A4}   (27)
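The relation between formulas (25)-(27) can be checked numerically. The sketch below is illustrative only; the per-executive attack success probabilities are hypothetical values chosen for the example, not measurements from the patent.

```python
def attack_success_probability(per_executive_probs):
    """Probability of a successful attack on a dissimilar-redundancy structure
    whose executive is picked uniformly at random:
    P = sum_i P_Ai * V_i with V_i = 1/n (formulas (25)-(26))."""
    n = len(per_executive_probs)
    v_i = 1.0 / n
    return sum(p * v_i for p in per_executive_probs)

# Hypothetical single-attack success probabilities P_A1..P_A4.
probs = [0.10, 0.25, 0.40, 0.05]
p = attack_success_probability(probs)
assert min(probs) <= p <= max(probs)   # inequality (27)
print(f"P = {p:.3f}")                  # 0.200 for these example values
```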
Defensive performance of the mimicry-architecture-based dynamic Android malware detection model:
when the mimicry-architecture-based dynamic Android malware detection model is attacked, the attacker first needs to produce Android malware adversarial samples. The generated adversarial samples are injected into the Android malware detection data set so that the input data of the mimicry-architecture-based dynamic Android malware detection model are perturbed, thereby attacking the model; analysing the experimental results then illustrates the defensive performance of the mimicry-architecture-based dynamic Android malware detection model.
As shown in Fig. 2, there are three executives in the mimicry-architecture-based dynamic Android malware detection model, where executive A1 is the enhanced LSTM model, executive A2 is the enhanced GRU model, and executive A3 is the enhanced capsule network model.
Assume that the probability that an attacker successfully attacks the enhanced LSTM model on its own is P_LSTM, the probability that the attacker successfully attacks the enhanced GRU model on its own is P_GRU, and the probability that the attacker successfully attacks the enhanced capsule network model on its own is P_Capsule. The probability of a successful attack on the mimicry-architecture-based dynamic Android malware detection model can be calculated as follows:
P = P_LSTM * V_i + P_GRU * V_i + P_Capsule * V_i   (1)
where V_i denotes the probability that one of the three executives A1, A2, A3 is randomly selected as the training/learning model; V_i takes the following value:
V_i = 1/3   (2)
the probabilities P_LSTM, P_GRU, and P_Capsule that the three executives A1, A2, and A3 are each successfully attacked on their own all lie in the interval [0,1], and the probability P of a successful attack on the whole mimicry-architecture-based dynamic Android malware detection model satisfies the following inequality:
min{P_LSTM, P_GRU, P_Capsule} ≤ P ≤ max{P_LSTM, P_GRU, P_Capsule}   (3)
in most cases, android software provides its functionality by representing basic behavior using API call sequences, permissions, intents, and the like. However, analysis of Android malware shows that these basic behaviors in applications are likely to be accepted as part of malicious functionality. For example, consider a spyware, regardless of how much malicious functionality it hides, there is still the basic behavior necessary to access private information from these devices. Thus, semantic representations of basic behavior, such as sequence-based features, will help provide potentially malicious information that is different from traditional grammatical features. Android requires calling different API sequences to implement different functions, and Android malware executes malicious behavior by calling sensitive APIs.
An Android application can be regarded as a series of API method calls, because different API sequences are invoked while the Android software runs. None of the API calls are filtered out: the call order is one part of the information used to identify Android malware, since it represents the temporal relationship between two API method calls and defines the intended subtasks of the application; the difference in the number of calls of each API is another part of the information identifying Android malware.
When normal Android software runs, the types, order, and counts of the API calls made on the Android device differ from those observed when Android malware runs. Android malware usually calls more sensitive APIs during execution, and its number of sensitive API calls is larger than that of normal Android software.
In summary, when Android malware is dynamically detected using the Android API call sequence as the input feature, the input data should preserve the order and the call counts of the API call sequence as far as possible. An overly long input sequence, however, is unfavorable for dynamic detection by a deep learning model, because deep learning models have limited ability to learn long-range sequential data. A preprocessing sketch follows this paragraph.
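For illustration, a minimal preprocessing sketch that turns a recorded API call trace into a fixed-length integer sequence while preserving both the call order and the repeated calls. The vocabulary, the unknown-API index, the maximum length, and the padding scheme are assumptions of this sketch rather than parameters specified by the patent.

```python
def encode_api_trace(trace, api_to_index, max_len=200, pad_value=0):
    """Map an ordered list of API names to integer ids, keeping both the
    order and the repetition count of each call, then pad/truncate."""
    ids = [api_to_index.get(api, 1) for api in trace]   # 1 = unknown API
    ids = ids[:max_len]                                  # truncate overlong traces
    ids += [pad_value] * (max_len - len(ids))            # right-pad short traces
    return ids

# Hypothetical vocabulary and trace, for illustration only.
vocab = {"sendTextMessage": 2, "getDeviceId": 3, "openConnection": 4}
trace = ["getDeviceId", "openConnection", "sendTextMessage", "sendTextMessage"]
print(encode_api_trace(trace, vocab, max_len=8))
# -> [3, 4, 2, 2, 0, 0, 0, 0]
```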
To solve the above problems, this embodiment proposes an Android malware dynamic detection method based on an enhanced LSTM model. To enable the LSTM model to better learn the historical dynamic changes of Android malware at runtime, an enhanced deep learning model is proposed, which improves the model's efficiency in detecting Android malware.
The principle of the enhanced LSTM model is as follows: the enhanced LSTM model applies an enhancement to the input x_t, namely x_t = x_t + x_{t-1}, and an enhancement to the cell state c_t, namely c_t = c_t + c_{t-1}, so that the input data and the cell state value carry more historical API call sequence information rather than only the API call sequence information of the current time point. Using the enhanced LSTM model to capture, through the input x_t and the cell state c_t, more of the history of the API call sequence improves the enhanced LSTM model's ability to detect Android malware.
When the time step is t, the input of the enhanced LSTM hidden unit is x_t = x_t + x_{t-1}, so that the input at each time step t also contains the input x_{t-1} of the previous time step t-1; the enhancement c_t = c_t + c_{t-1} likewise lets more historical data flow through the network and strengthens the model's ability to learn from historical data. In addition, the enhanced LSTM hidden unit receives the hidden state h_{t-1} and the cell state c_{t-1} of time step t-1, and its output comprises the hidden state h_t of time step t and the cell state c_t of time step t.
As shown in Fig. 3, the information flow of the enhanced LSTM model is as follows:
Step 1: update the input value of the enhanced LSTM hidden unit at time step t as x_t = x_t + x_{t-1}, i.e. add the input value x_t at time step t and the input value x_{t-1} at time step t-1 to obtain the updated x_t, and use the updated x_t as the input data of the enhanced LSTM hidden unit at time step t.
Step 2: calculate the forget value f_t of the enhanced LSTM hidden unit at time step t. Feed the input x_t at time step t, the hidden state h_{t-1} at time step t-1, and the cell state value c_{t-1} at time step t-1 into a sigmoid activation function to obtain the forget value f_t. Here W_xf, W_hf, W_cf are the weight matrices needed to calculate the forget value f_t, and b_f is a bias matrix.
Step 3: feed the input x_t at time step t, the hidden state h_{t-1} at time step t-1, and the cell state value c_{t-1} at time step t-1 into a sigmoid activation function to obtain the input value i_t of the enhanced LSTM hidden unit at time step t. In computing i_t, W_xi, W_hi, W_ci are the weight matrices corresponding to x_t, h_{t-1}, c_{t-1}, and b_i is a bias matrix.
Step 4: feed the input x_t at time step t and the hidden state h_{t-1} at time step t-1 into a tanh activation function to obtain the candidate cell state value c̃_t of the enhanced LSTM hidden unit at time step t. In computing c̃_t, W_xc, W_hc are the weight matrices corresponding to x_t, h_{t-1}, and b_c is a bias matrix.
Step 5: take the Hadamard product of the cell state value c_{t-1} at time step t-1 and the forget value f_t at time step t, take the Hadamard product of the candidate cell state value c̃_t and the input value i_t at time step t, and add the two Hadamard product results to obtain the cell state value c_t of the enhanced LSTM hidden unit at time step t.
Step 6: enhance the cell state value at time step t as c_t = c_t + c_{t-1}, i.e. add the cell state value c_t at time step t and the cell state value c_{t-1} at time step t-1 to obtain the updated c_t.
Step 7: feed the input x_t at time step t, the hidden state h_{t-1} at time step t-1, and the cell state value c_t at time step t into a sigmoid activation function to obtain the output value o_t of the enhanced LSTM hidden unit at time step t. In computing o_t, W_xo, W_ho, W_co are the weight matrices corresponding to x_t, h_{t-1}, c_t, and b_o is a bias matrix.
Step 8: feed the cell state value c_t at time step t into a tanh activation function, and take the Hadamard product of the result and the output value o_t at time step t to finally obtain the hidden state value h_t of the enhanced LSTM hidden unit at time step t.
Here the sigmoid activation function is:
sigmoid(x) = 1 / (1 + e^(-x))
and the tanh activation function is:
tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))
The formulas for the hidden unit of the enhanced LSTM model at time step t are as follows:
x_t = x_t + x_{t-1}   (4)
f_t = sigmoid(x_t W_xf + h_{t-1} W_hf + c_{t-1} W_cf + b_f)   (6)
i_t = sigmoid(x_t W_xi + h_{t-1} W_hi + c_{t-1} W_ci + b_i)   (7)
c̃_t = tanh(x_t W_xc + h_{t-1} W_hc + b_c)   (8)
c_t = f_t * c_{t-1} + i_t * c̃_t   (9)
c_t = c_t + c_{t-1}   (10)
o_t = sigmoid(x_t W_xo + h_{t-1} W_ho + c_t W_co + b_o)   (11)
h_t = o_t * tanh(c_t)   (12)
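A minimal NumPy sketch of one enhanced-LSTM hidden-unit update following formulas (4)-(12) is given below; the weight and bias containers, their shapes, and the function signature are assumptions made for illustration, and the only differences from a standard (peephole) LSTM cell are the two additive enhancements x_t = x_t + x_{t-1} and c_t = c_t + c_{t-1}.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def enhanced_lstm_step(x_t, x_prev, h_prev, c_prev, W, b):
    """One enhanced-LSTM hidden-unit update at time step t.
    W and b are dicts of weight matrices / bias vectors keyed as in the
    formulas above (e.g. W["xf"], b["f"])."""
    x_t = x_t + x_prev                                                    # (4) input enhancement
    f_t = sigmoid(x_t @ W["xf"] + h_prev @ W["hf"] + c_prev @ W["cf"] + b["f"])  # (6) forget value
    i_t = sigmoid(x_t @ W["xi"] + h_prev @ W["hi"] + c_prev @ W["ci"] + b["i"])  # (7) input value
    c_hat = np.tanh(x_t @ W["xc"] + h_prev @ W["hc"] + b["c"])            # (8) candidate cell state
    c_t = f_t * c_prev + i_t * c_hat                                      # (9) Hadamard products
    c_t = c_t + c_prev                                                    # (10) cell-state enhancement
    o_t = sigmoid(x_t @ W["xo"] + h_prev @ W["ho"] + c_t @ W["co"] + b["o"])     # (11) output value
    h_t = o_t * np.tanh(c_t)                                              # (12) hidden state
    return h_t, c_t, x_t
```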
The training of the enhanced LSTM model for dynamic malware detection proceeds as follows: the enhanced-deep-learning-based dynamic Android malware detection method detects Android malware by learning the dynamic change information of the API call sequences of Android software. The enhanced LSTM model uses x_t = x_t + x_{t-1} as its input, so that each input better incorporates the previously received historical data, and the step c_t = c_t + c_{t-1} ensures that the cell state value fed into the model at each time step includes the cell state value of the previous time step, which improves the model's ability to learn dynamically changing data.
The principle of the enhanced GRU model is to apply an enhancement to the input x_t, namely x_t = x_t + x_{t-1}, so that the input data at time step t contain not only the input of the current time step but also the input information of the previous time step t-1. In the enhanced GRU model, the input data can therefore carry more historical API call sequence information. During training and testing, the enhanced GRU model can better learn the historical information of the input data, which improves the accuracy of Android malware detection.
The unit structure of the enhanced GRU is shown in Fig. 4. In the model diagram, x_t denotes the input value of the GRU unit after the input enhancement at time step t, h_{t-1} is the hidden state value at time step t-1, h_t is the hidden state value at time step t, σ denotes the corresponding activation function, r_t is the reset gate at time step t, u_t is the update gate at time step t, and h̃_t is the candidate hidden state value at time step t.
As shown in Fig. 4, the information flow of the enhanced GRU model is as follows:
Step 1: update the input value of the enhanced GRU model at time step t as x_t = x_t + x_{t-1}, i.e. add the input value x_t at time step t and the input value x_{t-1} at time step t-1 to obtain the updated x_t, and use the updated x_t as the input data of the enhanced GRU model at time step t.
Step 2: calculate the reset gate r_t of the enhanced GRU model at time step t. Feed the input x_t at time step t and the hidden state h_{t-1} at time step t-1 into a sigmoid activation function to obtain the reset value r_t, where W_r is the corresponding weight matrix.
Step 3: calculate the update gate u_t of the enhanced GRU model at time step t. Feed the input x_t at time step t and the hidden state h_{t-1} at time step t-1 into a sigmoid activation function to obtain the update value u_t.
Step 4: take the Hadamard product of the reset gate r_t at time step t and the hidden state h_{t-1} at time step t-1; then feed the obtained result, together with the input value x_t at time step t, into a tanh activation function to obtain the candidate hidden state value h̃_t of the enhanced GRU model at time step t.
Step 5: take the Hadamard product of (1 - u_t) and h̃_t, take the Hadamard product of the update gate u_t at time step t and h_{t-1}, and combine the two Hadamard product results to obtain the hidden state h_t of the enhanced GRU model at time step t.
The formulas for the hidden unit of the enhanced GRU model at time step t are as follows:
x_t = x_t + x_{t-1}   (13)
r_t = sigmoid(W_r * [h_{t-1}, x_t])   (14)
u_t = sigmoid(W_z * [h_{t-1}, x_t])   (15)
h̃_t = tanh(W * [r_t * h_{t-1}, x_t])   (16)
tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))   (17)
h_t = (1 - u_t) * h̃_t + u_t * h_{t-1}   (18)
In the enhanced GRU model, input data are fed to the input layer in batches, flow through the recurrent layer, and finally produce a classification result at the output layer. First, the runtime API call sequences of the Android applications are preprocessed to obtain serialized data suitable for training and learning, and the experimental data set is split proportionally into corresponding training, test, and validation sets.
The data enter the enhanced GRU model through the input layer, and the recurrent layer then learns the dynamic change information of the API call sequence produced while the Android software runs. The neurons in the recurrent layer of the enhanced GRU model are connected end to end, so the input of each neuron is the output of the previous one. Finally, the Android malware detection result is output by the output layer. A sketch of one enhanced GRU step follows this paragraph.
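For illustration, a minimal NumPy sketch of one enhanced-GRU update following formulas (13)-(18); the weight shapes, the concatenation layout, and the omission of bias terms mirror the formulas above but are otherwise assumptions of this sketch, and only the input enhancement x_t = x_t + x_{t-1} distinguishes it from a standard GRU cell.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def enhanced_gru_step(x_t, x_prev, h_prev, W_r, W_z, W_h):
    """One enhanced-GRU update at time step t.
    Each weight matrix acts on the concatenation [h_{t-1}, x_t]."""
    x_t = x_t + x_prev                                            # (13) input enhancement
    hx = np.concatenate([h_prev, x_t])
    r_t = sigmoid(W_r @ hx)                                       # (14) reset gate
    u_t = sigmoid(W_z @ hx)                                       # (15) update gate
    h_cand = np.tanh(W_h @ np.concatenate([r_t * h_prev, x_t]))   # (16) candidate hidden state
    h_t = (1.0 - u_t) * h_cand + u_t * h_prev                     # (18) new hidden state
    return h_t, x_t
```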
The training algorithm for malware dynamic detection based on the enhanced GRU model is as follows:
In the enhanced capsule network model, the dynamic routing algorithm is enhanced by adding s_j = s_j + s_{j-1} to the original dynamic routing formulas. This lets s_j in the capsule network carry the information of the previous capsule unit and strengthens the capsule network's ability to learn historical information. Longer API call sequences were intercepted in the experiments so that the information of the API call sequences could be learned better. The enhanced capsule network model uses s_j + s_{j-1} to update the s_j of the current capsule unit, so the dynamic changes of the API call sequence can be learned fully, which improves the efficiency of malware detection.
The operational flow of the enhanced capsule network model is roughly as follows:
First, in layer L of the capsule network, u_1 and u_2 are capsule units containing groups of neurons, all of which are vectors. The vector u_1 is multiplied by the weights W_11, W_12, W_13, W_14 to obtain its prediction vectors û_{j|1}, and the vector u_2 is likewise multiplied by its weights to obtain its prediction vectors û_{j|2}. The vector s_j is obtained as the sum of the products of the prediction vectors û_{j|i} and the coupling coefficients c_ij. Between different neurons of the same layer, the input of the latter neuron is s_j = s_j + s_{j-1}, which strengthens the degree of association between the preceding and following neurons. Finally, the squash nonlinear compression operation is applied to all the enhanced vectors s_j to obtain the output vectors v_j. The dot product of the output vector v_j of a layer-(L+1) capsule unit and the prediction vector û_{j|i} obtained from the layer-L capsule units is added to the original weight to update it; this is the information flow from the capsule units of layer L to the capsule units of layer L+1.
The dynamic detection method of android malicious software based on the enhanced capsule network model has the following formula:
b ij =0 (19)
c i =softmax(b ij ) (20)
s j =s j +s j-1 (22)
v j =squash(s j ) (23)
the detailed procedure for training based on the enhanced capsule network model is as follows:
First, the required inputs include the number of routing iterations r and the prediction vectors û_{j|i} of the layer-L capsule units.
Step 1: initialize the vector b_ij. In the first iteration, b_ij is zero; as the number of iterations r changes, the vector b_ij is dynamically updated.
Step 2: for all layer-L capsule units i, apply the softmax operation to the vector b_ij to obtain the value of the vector c_i. The softmax activation function ensures that the values c_ij are non-negative and that the c_ij of the capsule units in the same layer sum to 1. Because the initial values of b_ij are all zero in the first iteration, the coupling coefficients c_ij of the different capsule units in the same layer are all equal after the first iteration.
Step 3: after the coupling coefficients c_ij of all layer-L capsule units i have been obtained, the information flows to the capsule units of the upper layer, i.e. layer L+1. In this step, the input vector s_j of each capsule unit is the weighted sum of all possible incoming units, i.e. the sum of the products of the coupling coefficients c_ij and all possible prediction vectors û_{j|i}.
Step 4: between different neurons of the same layer, the input of the latter neuron is s_j = s_j + s_{j-1}, which strengthens the degree of connection between the preceding and following capsule units. For example, the input of the second capsule unit in layer L+1 is the current input s_2 plus the input s_1 of the previous capsule unit, i.e. the input of the second capsule unit is s_2 = s_2 + s_1. The second capsule unit of layer L+1 can thus better learn the information passed on by the previous capsule unit, which strengthens the model's ability to learn from historical input information.
Step 5: apply the squash nonlinear compression operation to all the enhanced vectors s_j. Under the squash compression function, the vector s_j keeps its original direction; the compression only changes the vector's length, compressing the length of s_j to at most 1. After the compression operation, the vector s_j becomes the vector v_j transmitted to the upper capsule unit.
Step 6: dynamically update the weight b_ij. The weight update is performed each time the data in the capsule network completes one forward pass, which is the key of the dynamic routing algorithm. In this step, the dot product of the output vector v_j of the layer-(L+1) capsule unit and the prediction vector û_{j|i} obtained from the layer-L capsule units is added to the original weight to obtain the new weight, realizing the dynamic update of the weights. The dot product computes the similarity between the prediction vector û_{j|i} and the output vector v_j, and the weight is updated according to this similarity. After Step 6 is finished, the algorithm jumps back to Step 3 to restart the process, and this is repeated r times. A sketch of this enhanced routing procedure is given after this step list.
The training algorithm of the enhanced-capsule-network-based Android malware dynamic detection method follows the routing procedure described above.
other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims (2)

1. A mimicry-based dynamic Android malware detection method using enhanced deep learning, characterized in that a mimicry-based dynamic Android malware detection model built on enhanced deep learning is constructed in the method;
based on the model, the method comprises:
preprocessing input data and then feeding the preprocessed data into a heterogeneous redundancy model structure, wherein the heterogeneous redundancy model structure comprises three functionally equivalent heterogeneous redundancies: an enhanced LSTM model, an enhanced GRU model, and an enhanced capsule network model;
randomly dispatching the preprocessed input data through the dynamic detection model, randomly selecting the enhanced LSTM model, the enhanced GRU model, or the enhanced capsule network model for training, and obtaining output data, thereby completing dynamic detection of the Android malware;
the enhanced LSTM model is obtained by inputting x t Enhancement processing x t =x t +x t-1 For cell state c t Enhancement treatment c t =c t +c t-1 Capturing input x using enhanced LSTM model t And cell state c t More history of API call sequence information;
when the time step is t, inputting x to the enhanced LSTM hidden unit t =x t +x t-1 This allows the input at each time step t to include the input x of the last time step t-1 t-1 C is increased t =c t +c t-1 The method comprises the steps of carrying out a first treatment on the surface of the The enhanced LSTM hidden unit further comprises a hidden state h t-1 Cell state c at time step t-1 t-1 The enhanced LSTM hidden unit outputs a hidden state h comprising a time step t t And cell state c of time step t t
The information flow of the enhanced LSTM model is:
1): updating an input value x of an enhanced LSTM hidden unit at time step t t =x t +x t-1 I.e. L enhanced at time step tInput value x of STM hidden unit t And the input value x of the enhanced LSTM hidden unit at time step t-1 t-1 Adding to obtain updated x t X is updated t As input data for the enhanced LSTM hidden unit at time step t;
x t =x t +x t-1 (4)
2): calculating forgetting value f of enhanced LSTM hidden unit at time step t t The method comprises the steps of carrying out a first treatment on the surface of the Input x of LSTM hidden unit enhanced at time step t t Enhanced hidden state h of LSTM hidden unit at time step t-1 t-1 And the cell status value c of the enhanced LSTM hidden unit at time step t-1 t-1 Leading in a sigmoid activation function to obtain the forgetting value f of the enhanced LSTM hidden unit at the time step t t
The sigmoid activation function expression is:
f t =sigmoid(x t W xf +h t-1 W hf +c t-1 W cf +b f ) (6)
wherein W is xf ,W hf ,W cf Is to calculate the forgetting value f t Weight matrix, b, as needed f Is a bias matrix;
3): inputting x into LSTM hidden unit enhanced at time step t t Enhanced hidden state h of LSTM hidden unit at time step t-1 t-1 And the cell status value c of the enhanced LSTM hidden unit at time step t-1 t-1 Leading into a sigmoid activation function to obtain an input value i of an enhanced LSTM hidden unit at time step t t
i t =sigmoid(x t W xi +h t-1 W hi +c t-1 W ci +b i ) (7)
W xi ,W hi ,W ci Are respectively with x t ,h t-1 ,c t-1 Corresponding weight matrix, b i Is a bias matrix;
4): input x of LSTM hidden unit enhanced at time step t t And the hidden state h of the enhanced LSTM hidden unit at time step t-1 t-1 Leading into tanh activation function, and obtaining candidate cell state value of enhanced LSTM hidden unit at time step t
W xc ,W hc Are respectively with x t ,h t-1 Corresponding weight matrix, b c Is a bias matrix;
5): cell status value c of enhanced LSTM hidden unit at time step t-1 t-1 And the forgetting value f of the enhanced LSTM hidden unit at time step t t Candidate cell state values for enhanced LSTM hidden units at time step t by Hadamard productAnd the input value i of the enhanced LSTM hidden unit at time step t t Performing Hadamard product, and adding the obtained two Hadamard product results to obtain the cell state value c of the enhanced LSTM hidden unit at time step t t
c t =c t +c t-1 (10)
6): cell state value c of enhanced LSTM hidden unit at time step t t =c t +c t-1 Cell state value c of enhanced LSTM hidden unit at time step t t And the cell status value c of the enhanced LSTM hidden unit at time step t-1 t-1 Added updated c t
7): input x of enhanced LSTM hidden unit at time step t-1 t Enhanced hidden state h of LSTM hidden unit at time step t-1 t-1 And a cell state value c of the enhanced LSTM hidden unit at time step t t Leading into sigmoid activation function, and obtaining output value o of enhanced LSTM hidden unit at time step t t
o t =sigmoid(x t W xo +h t-1 W ho +c t W co +b o ) (11)
W xo ,W ho ,W co Are respectively with x t ,h t-1 ,c t Corresponding weight matrix, b o Is a bias matrix;
8): cell state value c of enhanced LSTM hidden unit at time step t t Leading in the tanh activation function to obtain a result and an output value o of the enhanced LSTM hidden unit at the time step t t And (3) carrying out Hadamard product to finally obtain a hidden state value h of the enhanced LSTM hidden unit at the time of the time step t t
h t =o t *tanh(c t ) (12);
the enhanced GRU model applies an enhancement to the input x_t, namely x_t = x_t + x_{t-1}, so that the input data at time step t comprises the input of the current time step and the input information of the previous time step t-1;
the information flow of the enhanced GRU model is as follows:
1): update the input value of the enhanced GRU model at time step t as x_t = x_t + x_{t-1}, i.e. add the input value x_t of the enhanced GRU model at time step t and the input value x_{t-1} of the enhanced GRU model at time step t-1 to obtain the updated x_t, and use the updated x_t as the input data of the enhanced GRU model at time step t;
x_t = x_t + x_{t-1}   (13)
2): calculate the reset gate r_t of the enhanced GRU model at time step t; feed the input x_t of the enhanced GRU model at time step t and the hidden state h_{t-1} of the enhanced GRU model at time step t-1 into a sigmoid activation function to obtain the reset value r_t of the enhanced GRU model at time step t;
r_t = sigmoid(W_r * [h_{t-1}, x_t])   (14)
where W_r is the corresponding weight matrix;
3): calculate the update gate u_t of the enhanced GRU model at time step t; feed the input x_t of the enhanced GRU model at time step t and the hidden state h_{t-1} of the enhanced GRU model at time step t-1 into a sigmoid activation function to obtain the update value u_t of the enhanced GRU model at time step t;
u_t = sigmoid(W_z * [h_{t-1}, x_t])   (15)
4): take the Hadamard product of the reset gate r_t of the enhanced GRU model at time step t and the hidden state h_{t-1} of the enhanced GRU model at time step t-1; then feed the obtained result, together with the input value x_t of the enhanced GRU model at time step t, into a tanh activation function to obtain the candidate hidden state value h̃_t of the enhanced GRU model at time step t;
h̃_t = tanh(W * [r_t * h_{t-1}, x_t])   (16)
the tanh activation function expression is:
tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))   (17)
5): take the Hadamard product of (1 - u_t) and h̃_t, take the Hadamard product of the update gate u_t of the enhanced GRU model at time step t and h_{t-1}, and combine the two Hadamard product results to obtain the hidden state h_t of the enhanced GRU model at time step t;
h_t = (1 - u_t) * h̃_t + u_t * h_{t-1}   (18)
here x_t denotes the input value of the GRU unit after the enhancement of the input x_t at time step t, h_{t-1} is the hidden state value at time step t-1, h_t is the hidden state value at time step t, σ denotes the corresponding activation function, r_t is the reset gate at time step t, u_t is the update gate at time step t, and h̃_t is the candidate hidden state value at time step t;
s for the enhanced capsule network model j +s j-1 To update s of the current capsule unit j Dynamic change conditions of the API call sequence can be fully learned;
the information flow of the enhanced capsule network model is as follows:
the required inputs include a predictive vector indicating the number of route iterations r, L-layer capsule units
1): initializing vector b ij In the first iteration, b ij The initial value of the vector b is zero with the change of the iteration number r ij Dynamically updating;
b ij =0 (19)
2): for all L-layer capsule units i, vector b ij Performing softmax operation to obtain vector c i Is a value of (2);
c i =softmax(b ij ) (20)
3): after the coupling coefficient c of all the L-layer capsule units i is obtained ij Then, the information flow will flow to the capsule unit of the upper layer, i.e. L+1 layer, the input vector s of different capsule units j Is the weighted sum of all possible incoming units, i.e. the coupling coefficient c ij Product sum with all possible prediction vectors;
4): between different neurons of the same layer, the input of the latter neuron is updated as s_j = s_j + s_{j-1}, which strengthens the degree of connection between the front and rear capsule units;

s_j = s_j + s_{j-1} (22)
5): for all vectors s subjected to enhancement processing j Performing the compression operation of the square nonlinear function; vector s j The vector v transmitted to the upper capsule unit is obtained after the compression operation j
v j =squash(s j ) (23)
6): weight b ij Carrying out dynamic updating operation, and carrying out weight updating operation after the data in the capsule network completes one unidirectional flow process each time; i.e. output vector v of capsule unit using L+1 layer j And the prediction vector obtained from the L-layer capsule unitPerforming dot product, adding the original weight to update to obtain new weight, and realizing dynamic update of the weight; after the step 6) is finished, jumping to the step 3) to restart the process, and repeating r times;
2. The dynamic detection method of simulated android malicious software based on enhanced deep learning as claimed in claim 1, wherein the defensive performance of the android malicious software dynamic detection model is represented as follows:

the executive A1 is the enhanced LSTM model, the executive A2 is the enhanced GRU model, and the executive A3 is the enhanced capsule network model;

assuming that the probability that an attacker successfully attacks the enhanced LSTM model among the executives in a single attempt is P_LSTM, the probability of a single successful attack on the enhanced GRU model is P_GRU, and the probability of a single successful attack on the enhanced capsule network model is P_Capsule, the probability that the mimicry-based android malicious software dynamic detection model is attacked successfully is calculated as follows:

P = P_LSTM * V_1 + P_GRU * V_2 + P_Capsule * V_3 (1)
where V_i represents the probability that the executive Ai (i = 1, 2, 3) is randomly selected as the training and learning model; the values of V_i satisfy:

0 ≤ V_i ≤ 1, V_1 + V_2 + V_3 = 1 (2)
the probabilities of the three executives A1, A2 and A3 being attacked successfully independently are PLSTM, PGRU, PCapsule and PLSTM, PGRU, PCapsule respectively, all belong to the [0,1] interval, and the probability P of the successful attack of the whole dynamic android malicious software detection model based on the mimicry architecture meets the following inequality:
min{P LSTM ,P GRU ,P Capsule }≤P≤max{P LSTM ,P GRU ,P Capsule } (3)。
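As a worked illustration of equations (1)-(3) only, the following sketch evaluates the overall attack probability for hypothetical values of P_LSTM, P_GRU, P_Capsule and of the selection probabilities V_1, V_2, V_3 (none of these numbers come from the patent) and checks the bound of inequality (3).

```python
# Hypothetical single-attack success probabilities of the three executives
p_lstm, p_gru, p_capsule = 0.30, 0.20, 0.10

# Hypothetical selection probabilities V_1, V_2, V_3 (equation (2): they sum to 1)
v1, v2, v3 = 0.4, 0.35, 0.25

# Equation (1): probability that the mimicry detection model is attacked successfully
p = p_lstm * v1 + p_gru * v2 + p_capsule * v3      # 0.12 + 0.07 + 0.025 = 0.215

# Inequality (3): P lies between the worst and the best single executive
assert min(p_lstm, p_gru, p_capsule) <= p <= max(p_lstm, p_gru, p_capsule)
print(f"P = {p:.3f}")                              # prints P = 0.215
```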
CN202110612058.1A 2021-06-02 2021-06-02 Simulated android malicious software dynamic detection method based on enhanced deep learning Active CN113312621B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110612058.1A CN113312621B (en) 2021-06-02 2021-06-02 Simulated android malicious software dynamic detection method based on enhanced deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110612058.1A CN113312621B (en) 2021-06-02 2021-06-02 Simulated android malicious software dynamic detection method based on enhanced deep learning

Publications (2)

Publication Number Publication Date
CN113312621A CN113312621A (en) 2021-08-27
CN113312621B true CN113312621B (en) 2024-03-26

Family

ID=77377120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110612058.1A Active CN113312621B (en) 2021-06-02 2021-06-02 Simulated android malicious software dynamic detection method based on enhanced deep learning

Country Status (1)

Country Link
CN (1) CN113312621B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113973008B (en) * 2021-09-28 2023-06-02 佳源科技股份有限公司 Detection system, method, equipment and medium based on mimicry technology and machine learning
CN116401667B (en) * 2023-04-13 2024-04-19 湖南工商大学 Android malicious software detection method and device based on CNN-GRU

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108595955A (en) * 2018-04-25 2018-09-28 东北大学 A kind of Android mobile phone malicious application detecting system and method
WO2019075338A1 (en) * 2017-10-12 2019-04-18 Charles River Analytics, Inc. Cyber vaccine and predictive-malware-defense methods and systems
CN110048992A (en) * 2018-01-17 2019-07-23 北京中科晶上超媒体信息技术有限公司 A method of constructing dynamic heterogeneous redundancy structure

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019075338A1 (en) * 2017-10-12 2019-04-18 Charles River Analytics, Inc. Cyber vaccine and predictive-malware-defense methods and systems
CN110048992A (en) * 2018-01-17 2019-07-23 北京中科晶上超媒体信息技术有限公司 A method of constructing dynamic heterogeneous redundancy structure
CN108595955A (en) * 2018-04-25 2018-09-28 东北大学 A kind of Android mobile phone malicious application detecting system and method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on Web Threat Perception Technology Based on Dynamic Heterogeneous Redundancy; Li Weichao et al.; Intelligent Computer and Applications (Issue 04); full text *
Research on LDoS Attack Defense Based on Mimic Security Defense; Chen Jing; Fujian Computer (Issue 02); full text *
Research on Cyberspace Mimic Defense; Wu Jiangxing; Journal of Cyber Security (Issue 04); full text *

Also Published As

Publication number Publication date
CN113312621A (en) 2021-08-27

Similar Documents

Publication Publication Date Title
CN110647918B (en) Mimicry defense method for resisting attack by deep learning model
Yuan et al. Adversarial examples: Attacks and defenses for deep learning
CN108549940B (en) Intelligent defense algorithm recommendation method and system based on multiple counterexample attacks
CN113312621B (en) Simulated android malicious software dynamic detection method based on enhanced deep learning
Chen et al. Stealing deep reinforcement learning models for fun and profit
Liu et al. SIN 2: Stealth infection on neural network—A low-cost agile neural Trojan attack methodology
Khalid et al. Fademl: Understanding the impact of pre-processing noise filtering on adversarial machine learning
CN114139155A (en) Malicious software detection model and generation method of enhanced countermeasure sample thereof
Li et al. Hashtran-dnn: A framework for enhancing robustness of deep neural networks against adversarial malware samples
Wei et al. Cross-layer strategic ensemble defense against adversarial examples
Dziedzic et al. On the difficulty of defending self-supervised learning against model extraction
Agrawal et al. Robust neural malware detection models for emulation sequence learning
KR20220025455A (en) Method for depending adversarial attack and apparatus thereof
Habibi et al. Performance evaluation of CNN and pre-trained models for malware classification
Wang et al. Intelligent Security Detection and Defense in Operating Systems Based on Deep Learning
Chen et al. STPD: Defending against ℓ0-norm attacks with space transformation
CN115719085A (en) Deep neural network model inversion attack defense method and equipment
Darem A Novel Framework for Windows Malware Detection Using a Deep Learning Approach.
Kim et al. When George Clooney Is Not George Clooney: Using GenAttack to Deceive Amazon’s and Naver’s Celebrity Recognition APIs
Bojarajulu et al. Parametric and non-parametric analysis on MAOA-based intelligent IoT-BOTNET attack detection model
Westbrook et al. Adversarial attacks on machine learning in embedded and iot platforms
Khaled et al. Careful what you wish for: on the extraction of adversarially trained models
CN114021136A (en) Back door attack defense system for artificial intelligence model
Zheng et al. GONE: A generic O (1) NoisE layer for protecting privacy of deep neural networks
Ayaz et al. Improving Robustness Against Adversarial Attacks with Deeply Quantized Neural Networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20231012

Address after: 518000 909, Building 49, No. 3, Queshan Yunfeng Road, Taoyuan Community, Dalang Street, Longhua District, Shenzhen, Guangdong

Applicant after: Shenzhen Morning Intellectual Property Operations Co.,Ltd.

Address before: No. 12, 19th Floor, Zone 1, Taifeng Building, East Section of Dangui Street, Ziliujing District, Zigong City, Sichuan Province, 643000

Applicant before: Sichuan Hengying Information Technology Service Co.,Ltd.

Effective date of registration: 20231012

Address after: No. 12, 19th Floor, Zone 1, Taifeng Building, East Section of Dangui Street, Ziliujing District, Zigong City, Sichuan Province, 643000

Applicant after: Sichuan Hengying Information Technology Service Co.,Ltd.

Address before: No. 37, Daoyi South Avenue, Daoyi Economic Development Zone, Shenyang, Liaoning, 110136

Applicant before: SHENYANG AEROSPACE University

GR01 Patent grant
TA01 Transfer of patent application right

Effective date of registration: 20240325

Address after: 518000, Building A 610, Longguang Jiuzuan North Phase B, Hongshan Community, Minzhi Street, Longhua District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen Diti Information Technology Co.,Ltd.

Country or region after: China

Address before: 518000 909, Building 49, No. 3, Queshan Yunfeng Road, Taoyuan Community, Dalang Street, Longhua District, Shenzhen, Guangdong

Applicant before: Shenzhen Morning Intellectual Property Operations Co.,Ltd.

Country or region before: China