WO2024012681A1 - Security framework for a network - Google Patents

Security framework for a network

Info

Publication number: WO2024012681A1
Application number: PCT/EP2022/069730
Authority: WO — WIPO (PCT)
Prior art keywords: module, attack, attack detection, detection, behavior
Other languages: French (fr)
Inventor: Hichem SEDJELMACI
Original Assignee: Telefonaktiebolaget Lm Ericsson (Publ)
Application filed by Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/EP2022/069730
Publication of WO2024012681A1
Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00: Network architectures or network communication protocols for network security
    • H04L63/14: Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408: Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1425: Traffic logging, e.g. anomaly detection

Definitions

  • the present disclosure generally relates to methods for use in detecting an attack on a network, a computer program product, and an attack detection module, a security module and a central defense module for use in detecting an attack on a network.
  • Game theory for security has grown to be a diverse area of research.
  • Prior work has considered models ranging from two-player games to n-player games, with the players representing various combinations of defenders and attackers.
  • researchers have studied many solution concepts, such as the Stackelberg equilibrium, and both descriptive and normative interpretations of the outcomes have been considered.
  • Results range from characterization of best-responses and equilibria to computational results, behavioral user studies, simulations, and real-world deployments.
  • game theory for security remains a very active research area with many open problems, driven by the practical need to understand and improve security decision-making. Examples of game theory-based security systems are described in E.
  • a blockchain-based trust management method for agents in a multi-agent system is presented for achieving trust, cooperation and privacy of agents.
  • the detection frameworks disclosed in these documents can execute a set of detection strategies such as detection, prevention, and mitigation with a goal to accurately detect the unknown cyber threats defined as zero-day attacks.
  • Mutual monitoring and detection are executed between the detection systems with the goal to detect the malicious detection systems and hence allow only the trusted detection systems to participate in the detection and monitoring process.
  • the major weakness of these works is that the detection strategies executed by the detection frameworks are activated all the time (i.e., the rules-based detection or the machine learning-based detection does not switch to another detection strategy), which leads to an increase of the false positive rate, and of computation and communication overheads, specifically when the number of attackers increases.
  • the method comprises monitoring, by one or more attack detection modules, a behavior of one or more target devices in the network. If it is detected by a first one of the one or more attack detection modules that the monitored behavior of one of the one or more target devices satisfies a predefined condition, the first attack detection module provides to a central defense module data relating to the monitored behavior which satisfies the predefined condition.
  • the method further comprises updating, based on the monitored behavior which satisfies the predefined condition, a model implemented by one or more of the one or more attack detection modules to detect the attack on the network.
  • the model may hereby, in some examples, be a detection training model which may be artificial intelligence-based, as will be outlined further below.
  • the predefined condition may, in some examples, comprise or relate to the behavior of one of the one or more target devices which deviates from an average behavior of the other target devices by more than a predefined deviation threshold.
  • the abnormal behavior may, in some examples, relate to the number of packets sent and/or dropped by a target device which differs from an expected and/or average number of packets sent and/or dropped by other target devices.
  • the abnormal behavior, i.e. the behavior of the target device(s) which satisfies/satisfy the predefined condition, is then taken into account when generating and/or updating a model which is implemented for attack detection.
  • Attack features may hereby relate to the behavior of an attacker which fulfills the predefined condition.
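As a concrete illustration of the deviation check described above, the following sketch (not taken from the patent; the function name, data layout and threshold value are assumptions) flags a target whose packet count deviates from the average of the other monitored targets by more than a predefined threshold:

```python
# Hypothetical sketch of the predefined condition: a target is treated as
# abnormal when its packet count deviates from the average of the OTHER
# targets by more than a predefined deviation threshold.

def is_abnormal(counts, target, threshold):
    """counts: mapping of target id -> observed packet count."""
    others = [c for t, c in counts.items() if t != target]
    if not others:
        return False  # nothing to compare against
    avg = sum(others) / len(others)
    return abs(counts[target] - avg) > threshold
```

For example, with counts `{"a": 100, "b": 105, "c": 480}` and a threshold of 50, target `"c"` would be flagged as abnormal.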
  • the method comprises monitoring, by one or more security modules, a behavior of (one or more) target devices in the network. The one or more security modules then determine, based on the behavior, a normal behavior of the target device.
  • the security module(s) may hereby generate a model which can be implemented for detecting that a target device exhibits normal behavior, which may, in some examples, be a behavior which does not deviate from an expected and/or average behavior by more than a predefined threshold.
  • the one or more features may, in some examples, be defined as features of a normal behavior.
  • the relevant one or more features, i.e. attack's features and normal features, may be used in order to categorize a monitored target as a normal target or as an attacker.
  • the method comprises selecting, by a central defense module, from a plurality of attack detection modules one or more attack detection modules.
  • the central defense mod- ule then receives, from the selected one or more attack detection modules, data related to an attack on the network.
  • the central defense module generates, based on the data, an attack model for downloading by the selected one or more attack detection modules for attack detection.
  • the central defense module may hereby cooperate with the selected one or more attack detection modules in order to determine (in some examples over time) the relevant attacks' features related in particular to a new attack's behavior, which may in particular be suspected to occur.
  • selecting the one or more attack detection modules from the plurality of attack detection modules is based on a respective reliability of each of the plurality of attack detection modules.
  • the reliability is based on a behavior of a respective attack detection module.
  • the attack detection module may be considered as not being reliable, and may thus not be selected by the central defense module.
  • any two or more of the above-outlined methods may be combined.
  • the computer program product may hereby, in some examples, be stored on a computer-readable recording medium or encoded in a data signal.
  • an attack detection module for use in detecting an attack on a network.
  • the attack detection module is configured to monitor a behavior of one or more target devices in the network.
  • the attack detection module is further configured to detect that/if the monitored behavior of one of the one or more target devices satisfies a predefined condition.
  • the attack detection module is further configured to provide, to a central defense module, data relating to the monitored behavior which satisfies the predefined condition.
  • the attack detection module is configured to update, based on the monitored behavior which satisfies the predefined condition, a model implemented by the attack detection module to detect the attack on the network.
  • a security module for use in detecting an attack on a network.
  • the security module is configured to monitor a behavior of one or more target devices in the network.
  • the security module is further configured to determine, based on the behavior, a normal behavior of the target device(s).
  • a central defense module for use in detecting an attack on a network.
  • the central defense module is configured to select from a plurality of attack detection modules one or more attack detection modules.
  • the central defense module is further configured to receive, from the selected one or more attack detection modules, data related to an attack on the network. Based on the data, the central defense module is configured to generate an attack model for downloading by the selected one or more attack detection modules for attack detection.
  • attack detection module according to any one of the example implementations as outlined herein
  • security module according to any one of the example implementations as outlined herein
  • central defense module according to any one of the example implementations as outlined herein.
  • Fig. 1 shows a schematic illustration of a security framework according to some example implementations of the present disclosure
  • Fig. 2 shows a flow diagram of a method for cooperatively determining attack features according to some example implementations of the present disclosure
  • Fig. 3 shows a flow diagram for optimal activation of detection techniques according to some example implementations of the present disclosure
  • Fig. 4 shows a flow diagram of a method according to some example implementations of the present disclosure
  • Fig. 5 shows a flow diagram of a method according to some example implementations of the present disclosure
  • Fig. 6 shows a flow diagram of a method according to some example implementations of the present disclosure.
  • the security framework may be an AI security framework.
  • the security framework may be based on new defense modules, which are defined as a security module, an attack detection module, and a central defense module.
  • the security interactions between these modules are performed, in some examples, based on a security game model between the attack detection module and the security module in which the goal may be to force the attack detection module to provide over time correct and relevant attacks' features (attributes of attacks behaviors) and force the security module to provide over time correct and relevant features related to normal behaviors.
  • a new reliable reputation metric is implemented in some examples that evaluates the behavior of the attack detection module by monitoring a set of parameters X, Y, Z and D with a goal to activate optimally the detection technique, while considering the tradeoff between false positive and computation overhead.
  • X, Y and Z are respectively the attacks detection rates detected by the machine learning algorithm, rules-based detection and hybrid detection (rules-based and machine-learning based) and confirmed by the central defense module.
  • D is the false detection rate provided by the attack detection module and detected by the central defense module.
  • the (AI) security framework may be considered as a main component of the zero-trust architecture.
  • Figure 1 shows a schematic illustration of a network 100 comprising a system 101 in which a security framework according to some example implementations of the present disclosure is provided.
  • the security framework is an AI edge security framework (AESF).
  • the AESF comprises different defense modules, that is an attack detection module 104 and a security module 106 executed, in this example, at each edge server, and a central defense module 102 executed, in this example, at a cloud server.
  • the system 101 communicates, in this example, with an Internet of things network 108 implemented in an access network.
  • the edge network, cloud network and access network are merely examples, and the central defense module 102, the attack detection module 104 and the security module 106 can be implemented in networks different from the edge network, cloud network and access network.
  • the central defense module 102, the attack detection module 104 and the security module 106 can be implemented in the same network, e.g. the same edge network or the same cloud network.
  • the AESF can, in some examples, have many points of presence in a multiple edge server setup.
  • the attack detection module 104 cooperates with the central defense module 102 (which may be defined as cooperative attack features determination) with the goal of determining (in particular over time) the relevant attacks' features related to a new attack's behavior that is suspected to occur or will be suspected to occur in future iterations.
  • the suspected target is detected as malicious and this detection is shared with the central defense module 102.
  • the number of packets sent and/or dropped is just one example of a feature, and other features may additionally or alternatively be taken into account when determining if a behavior is normal or abnormal.
  • another example which may be taken into account when determining whether the behavior is normal or abnormal is the signal strength as an attack's feature which may be used to detect, for example, a jamming attack.
  • such examples may be used as an input for training a machine learning/AI model, and the output for training purposes is labeled as a target/device (which may comprise the attack detection module 104 and/or the security module 106 and/or another target device) exhibiting normal or abnormal behavior.
  • the security module 106 focus is to determine the (one or more) features related to normal behaviors of the monitored target(s), where the cooperation detection between the central defense module 102 and the security module 106 is performed to detect/identify over time the relevant feature(s) of normal behaviors.
  • the security agent/module categorizes the monitored target as a node that exhibits a normal behavior and shares this information with the central defense module 102.
  • such examples/data may be used as an input for training a machine learning/AI model, and the output for training purposes is labeled as a target/device (which may comprise the attack detection module 104 and/or the security module 106 and/or another target device) exhibiting normal or abnormal behavior.
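The labeling step described above can be sketched as follows; the feature choice (packets sent/dropped) follows the examples in the text, while the function and variable names are illustrative assumptions:

```python
# Hypothetical sketch: turning monitored observations into a labeled dataset
# for supervised training. Label 1 marks abnormal (attack-like) behavior,
# label 0 marks normal behavior.

def make_dataset(observations):
    """observations: iterable of (packets_sent, packets_dropped, is_abnormal)."""
    features = [[sent, dropped] for sent, dropped, _ in observations]
    labels = [1 if abnormal else 0 for _, _, abnormal in observations]
    return features, labels
```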
  • the attack detection module 104 monitors attack behaviors, and the attack detection module 104 and the central defense module 102 execute a cooperative attack's features determination process.
  • the attack detection module 104 is further configured to detect if the security module 106 is a malicious module.
  • the central defense module 102 and the security module 106 cooperatively execute a Federated Learning to determine the relevant features related to normal behaviors.
  • the security module 106, further to monitoring normal behaviors, detects if the attack detection module 104 is a malicious module.
  • An interaction game is provided between the defense modules, i.e. the attack detection module 104 and the security module 106.
  • IoT: Internet of Things
  • the attack detection module 104 is based on rules-based attacks detection, machine learning-based attacks detection, or a hybrid detection technique which relates to a combination of rules-based and machine learning-based detection.
  • the attack detection module executes a machine learning-based detection on a supervised neural network (with, for example, inputs and outputs in the network being provided based on, e.g., numbers of packets sent and/or dropped for learning the model).
  • if the reputation value is (very) low (e.g. below a predefined threshold) or (very) high (e.g. above a predefined threshold, which may or may not be the same threshold under which the reputation value is considered to be (very) low), the rules-based attacks detection or the hybrid attacks detection is activated, respectively (as explained below in relation to a reliable reputation).
  • the attack detection module 104 monitors the behavior(s) of the target device(s) (e.g., user equipment and/or Internet of Things devices) located within the neighborhood of the edge server (where the attack detection module is activated) by activating the supervised neural network algorithm.
  • if the neural network algorithm detects a new attack, a cooperative attack's features determination process between the attack detection module 104 and the central defense module 102 is launched. This process is summarized as follows and illustrated in the flow diagram of figure 2, which shows a method 200 for cooperatively determining attack features according to some example implementations of the present disclosure.
  • the central defense module selects, at step S202, a set of attack detection modules (activated, in this example, in edge servers) to participate in the relevant attack's features determination.
  • the selection of attack detection modules is based on their reliabilities (as explained below in relation to an interaction game between the defense modules, in particular in relation to equation 3), i.e., the attack detection module that is, based on a determination at step S204 of whether an equilibrium (e.g. a Nash equilibrium) for the security module which monitors the respective attack detection module is reached, detected/categorized at step S206 as a malicious security agent is not selected in the attack's features determination process.
  • only the attack detection modules that are executing the hybrid detection technique are selected at step S202 (see in particular below in relation to reliable reputation metric, formula 7).
  • the central defense module generates a global attack model based on the new attack's features related to the new attack detected by the attack detection module and the observations of the other selected attack detection modules.
  • the observations constitute the features related to the current attacks detected by the selected attack detection modules.
  • the global attack model may, in some examples, be either a training model used as an input by the machine learning algorithm, or a rules model (containing a rule related to each attack behavior) used by rules-based attack detection, or a hybrid model (containing both a rules-based model and a training-based model) used by the hybrid attacks detection.
  • the attack detection modules download the global attack model, whereby the attack detection module that detects the new attack updates its training model and the other selected attack detection modules update their hybrid model.
  • all of the attack detection modules that participate in the cooperative attack's features determination have the same detected attacks.
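The cooperative attack's features determination of steps S202 to S210 can be sketched as below. The class names, the representation of an attack model as a set of feature labels, and the reliability flag are illustrative assumptions, not the patent's implementation:

```python
# Hypothetical sketch of method 200: the central defense module selects
# reliable attack detection modules (S202), merges their observed attack
# features into a global attack model (S208), and distributes the model so
# that all selected modules share the same detected attacks (S210).

class AttackDetectionModule:
    def __init__(self, name, reliable, observed_features):
        self.name = name
        self.reliable = reliable              # outcome of the security game
        self.observed_features = set(observed_features)
        self.model = set()                    # local copy of the global model

class CentralDefenseModule:
    def select(self, modules):
        # S202: modules categorized as malicious are not selected
        return [m for m in modules if m.reliable]

    def generate_global_model(self, selected):
        # S208: merge the new attack's features with the observations
        # of the other selected modules
        model = set()
        for m in selected:
            model |= m.observed_features
        return model

    def distribute(self, selected, model):
        # S210: the selected modules download the global attack model
        for m in selected:
            m.model = set(model)
```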
  • the attack detection modules monitor the data traffic gathered by the security modules based on the global attack model (determined in step S208). The goal of this step is to detect the malicious behaviors that could be executed by the security modules.
  • if the monitored security module exhibits an attack (determined based on a determination, at step S214, of whether the attack detection module which monitors the respective security module has reached an equilibrium (e.g. Nash equilibrium)), steps S202 to S210 are re-executed with a goal to lead the central defense module to confirm or not the malicious behavior of the monitored security module and update the global attack model with new relevant attack's features.
  • at step S214 of determining whether the security module is malicious, it is also determined, in this example, whether another target device is malicious. If this is the case as provided in step S218, steps S202 to S210 are re-executed with a goal to lead the central defense module to confirm or not the malicious behavior of the monitored target device and update the global attack model with new relevant attack's features. If it is determined in step S214 that the target device and/or the security module, respectively, is not malicious, the target device and/or the security module is/are continued to be monitored at step S216.
  • the method 200 may be performed based on monitoring a plurality of attack detection modules. Additionally or alternatively, a plurality of security modules may be monitored. Additionally or alternatively, a plurality of target devices (separate from the attack detection module(s) and the security module(s)) may be monitored.
  • the security module is, in some examples, a module that is based on a Federated Learning (FL) algorithm that monitors the behaviors of its targets (located, in this example, within the edge area).
  • FL: Federated Learning
  • a plurality of security modules may act as FL agents and the central defense module is where models are aggregated and updated.
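The aggregation step at the central defense module can be sketched with a minimal Federated-Averaging-style update; representing a local model as a dict of named parameters is an assumption for illustration:

```python
# Hypothetical sketch: the central defense module averages the per-parameter
# weights of the local models trained by the security modules (FL agents)
# to obtain the global normal model.

def aggregate(local_models):
    """local_models: non-empty list of {parameter_name: weight} dicts."""
    n = len(local_models)
    return {name: sum(m[name] for m in local_models) / n
            for name in local_models[0]}
```

Each security module would then download the aggregated model and continue training on its local observations of normal behavior.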
  • the purpose of the security module is, in some examples, to determine only the relevant feature(s) related to normal behaviors of the monitored target(s).
  • the central defense module and the security module cooperate with each other to determine the global normal model (related to normal behaviors of the monitored targets).
  • the security module monitors the behaviors of one (or more) attack detection module(s) by analyzing the attack behavior detected by the attack detection module(s).
  • if the security module detects that the behavior of a suspected attack corresponds rather to a normal behavior, i.e., the attack detection module exhibits a malicious behavior (see equation 4 below regarding the detection of a malicious attack detection module), the security module informs the central defense module.
  • the security module shares with the central defense module the identity of the suspected attack detection module with the information of attack (detected by the security module as normal behavior) that the attack detection module suspects to occur.
  • the information of attack corresponds to the attack's feature(s).
  • the attack detection module and the security module rely respectively on a Supervised Neural Network and a Federated Machine Learning algorithm.
  • this is just an example of AI algorithms and, in some examples, both modules may use other AI algorithms such as, but not limited to, Support Vector Machine and Supervised Reinforcement Learning algorithm (executed by the attack detection module) and Generative Adversarial Network (executed by the security module).
  • an interaction game between the defense modules is provided.
  • the security interaction between the attack detection module and the security module is modeled, in some examples, as a Stackelberg game, where the security module and the attack detection module play respectively the roles of Leader and follower players.
  • One leader player and one follower are present, i.e. each security module interacts only with one attack detection module.
  • a new and robust security game model is provided that hardens the security process between the security module and the attack detection module. The goal of the security game is to detect accurately the malicious security module and the malicious attack detection module.
  • the payoff functions of the Leader player and follower player are defined in equations 1 and 2. It is hereby noted that in the present disclosure, the Stackelberg game is used. However, other non-cooperative games may be used to study interactions between the security module and the attack detection module, such as, but not limited to, the mean field game theory.
  • D_SM is the attacks suspected rate of the attack detection module determined by the security module
  • FP_SM and FN_SM are the false positive and false negative, respectively, generated by the security module against the attack detection module
  • D_AM is the attacks suspected rate of the security module determined by the attack detection module
  • FP_AM and FN_AM are the false positive and false negative, respectively, generated by the attack detection module against the security module.
  • the parameters D_SM, FP_SM and FN_SM are computed by the security module and updated by the central defense module.
  • the parameters D_AM, FP_AM and FN_AM are computed by the attack detection module and updated by the central defense module.
  • ⁇ 1, ⁇ 2, ⁇ 3, ⁇ 1, ⁇ 2 and ⁇ 3 ⁇ [0,1] are the weight parameters.
  • N is the number of interactions between the security and attack detection modules.
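Equations 1 and 2 themselves are not reproduced in this excerpt. Purely as a hedged illustration, the sketch below assumes a simple weighted form consistent with the parameters listed above: suspected-attack detections contribute positively, false positives and false negatives negatively, averaged over the N interactions. The functional form and weights are assumptions, not the patent's actual equations:

```python
# Hypothetical per-player payoff. For the leader (security module) the inputs
# would be the per-interaction D_SM, FP_SM, FN_SM values weighted by
# (alpha1, alpha2, alpha3); for the follower (attack detection module),
# D_AM, FP_AM, FN_AM weighted by (beta1, beta2, beta3).

def payoff(detections, false_pos, false_neg, w1, w2, w3):
    """Each argument list holds one value per interaction (N interactions)."""
    n = len(detections)
    return sum(w1 * d - w2 * fp - w3 * fn
               for d, fp, fn in zip(detections, false_pos, false_neg)) / n
```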
  • the proposed Stackelberg security game aims to reinforce the accuracy of new attacks detection by forcing the security module to provide relevant features of normal behaviors and forcing the attack detection module to provide relevant features of new attacks behaviors.
  • This security game leads to a reduction over time of the false positive and false negative rates generated by both security and attack detection modules, as shown in equations 3 and 4.
  • the attack detection module may run a machine learning-based detection, a rules-based attacks detection, or a hybrid attacks detection, which may depend on the reputation value of the attack detection module, which may, in some examples, be computed by the central defense module.
  • Running all the time a machine learning-based detection technique may not be efficient since this technique may require a high computation overhead to carry out the training and detection process.
  • the attackers may degrade the network performance. This degradation may consist of forcing legitimate targets to generate high computation and communication overheads. Therefore, when the reputation (of the attack detection module) is (very) low (e.g. below a predefined threshold), the central defense module requests the attack detection module to switch from machine learning-based detection to rules-based attacks detection.
  • a switching from a machine learning-based detection to a rules-based attacks detection may relate, more generally, to a switch from a robust (heavy) detection technique to a lightweight detection technique.
  • a robust (heavy) detection technique may hereby comprise, for example, one or more of deep learning-based attack detection, reinforcement learning-based attack detection, and generative adversarial network-based attack detection.
  • a lightweight detection technique may comprise, for example, the rules-based attack detection.
  • a hybrid detection technique (rules-based and machine learning-based)
  • the hybrid detection is activated only when the reputation value is high (that is above a predefined threshold).
  • the central defense module requests the attack detection module to switch from machine- learning based detection to a hybrid detection technique.
  • the new reputation metric is based on a cooperative game theory, where the attack detection module and the central defense module cooperate with each other to activate optimally the detection technique (machine-learning-based, or rules-based, or hybrid detection) of the attack detection module based on the monitored parameters X, Y, Z and D.
  • X, Y and Z are respectively the attacks detection rates detected by the machine learning algorithm, rules-based detection and hybrid detection (activated at the attack detection module) and confirmed by the central defense module.
  • D is the false detection rate provided by the attack detection module and detected by the central defense module.
  • the utility function of this cooperative security game is defined in equation 5.
  • the equilibrium (e.g. Nash equilibrium) of this cooperative security game is reached by maximizing the utility function by the cooperative players (i.e., attack detection module and central defense module) as defined in equation 6.
  • the new proposed reputation metric is defined as a utility function, F.
  • the activation of rules-based detection and hybrid detection is defined in equation 7, and illustrated in the flow diagram 300 for optimal activation of detection techniques (rules-based detection and hybrid detection) shown in figure 3.
  • the attack detection module monitors and computes the parameters X, Y, Z and D.
  • the computed parameters are then sent, at step S304, by the attack detection module to the central defense module.
  • the attack detection module and the central defense module compute, at step S306, the value of F referred to above in equation 5.
  • a first task (Task 1) as outlined above in equation 7 is executed, in which the central defense module requests the attack detection module to activate the rules-based attacks detection. If the condition 0 ≤ F ≪ F* is not fulfilled, it is determined at step S316 if the condition F_Max ≤ F ≤ F* is fulfilled, whereby F_Max is a (predefined) security threshold, which may be provided by the security expert working at the central defense module level and which may be updated over time to mitigate the false positive and false negative rates.
  • a second task (Task 2) is executed at step S318, in which the central defense module requests the attack detection module to activate the hybrid attacks detection. If the condition F_Max ≤ F ≤ F* is not fulfilled, the method returns to step S302 where the attack detection module monitors and computes the parameters X, Y, Z and D.
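The activation flow of figure 3 can be sketched as below. The form of the utility function F (equation 5) and the numeric thresholds are assumptions for illustration; only the switching structure (Task 1 when F is very low, Task 2 when F is between F_Max and F*, otherwise keep monitoring) follows the text:

```python
# Hypothetical sketch of the optimal activation of detection techniques.
F_STAR = 1.0   # assumed upper bound F*
F_MAX = 0.6    # assumed security threshold F_Max
LOW = 0.2      # assumed cut-off approximating "0 <= F << F*"

def utility(x, y, z, d, w=(0.25, 0.25, 0.25, 0.25)):
    # assumed form of equation 5: confirmed detection rates X, Y, Z reward
    # the cooperating players, the false detection rate D penalizes them
    return w[0] * x + w[1] * y + w[2] * z - w[3] * d

def select_task(f):
    if 0 <= f <= LOW:
        return "rules-based detection"   # Task 1
    if F_MAX <= f <= F_STAR:
        return "hybrid detection"        # Task 2
    return "keep monitoring"             # return to step S302
```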
  • FIG. 4 shows a flow diagram of a method 400 according to some example implementations of the present disclosure.
  • the method 400 is suitable for use in detecting an attack on a network.
  • the method 400 comprises, at step S402, monitoring, by one or more attack detection modules, a behavior of one or more target devices in the network.
  • if it is detected by a first one of the one or more attack detection modules that the monitored behavior of one of the one or more target devices satisfies a predefined condition, the first attack detection module provides to a central defense module data relating to the monitored behavior which satisfies the predefined condition.
  • the method 400 further comprises updating, at step S406, based on the monitored behavior which satisfies the predefined condition, a model implemented by one or more of the one or more attack detection modules to detect the attack on the network.
  • FIG. 5 shows a flow diagram of a method 500 according to some example implementations of the present disclosure.
  • the method 500 is suitable for use in detecting an attack on a network.
  • the method 500 comprises monitoring, at step S502, by one or more security modules, a behavior of target devices in the network.
  • the one or more security modules determine, based on the behavior, a normal behavior of the target device.
  • FIG. 6 shows a flow diagram of a method 600 according to some example implementations of the present disclosure.
  • the method 600 is suitable for use in detecting an attack on a network.
  • the method 600 comprises selecting, at step S602, by a central defense module, from a plurality of attack detection modules one or more attack detection modules.
  • the method 600 further comprises, at step S604, receiving, by the central defense module from the selected one or more attack detection modules, data related to an attack on the network.
  • the central defense module generates, based on the data, an attack model for downloading by the selected one or more attack detection modules for attack detection.
  • AESF AI Edge Security Framework
  • the security framework is a hierarchical (AI) security framework composed of two kinds of (AI) agents, named the security module and the attack detection module.
  • the security module focus is on the determination of one or more relevant features (i.e. attributes) related to a normal behavior of the monitored targets, while the attack detection module focus is on the determination of one or more attacks' features.
  • some example implementations outlined throughout the present disclosure provide for a robust security game (based on a non-cooperative game) executed by the AI modules, which mutually monitor their behaviors with the goal of accurately detecting a malicious security module and a malicious attack detection module, while considering the false positive and false negative issues.
  • a reliable reputation metric based on a cooperative game is provided to assess the reputation of the attack detection module with the goal of optimally activating the detection techniques, i.e. rules-based or hybrid detection (rules-based and machine learning-based).
  • the reputation metric aims to harden the security and detection at the attack detection module.
  • a reliable reputation metric is provided to monitor the behavior of the attack detection module with the goal of optimally activating the detection techniques (e.g., rules-based detection or hybrid detection) used by the attack detection module.
  • This optimal activation means ensuring a tradeoff between a high level of security (i.e., high detection rate and low false positive and false negative rates) and low computation and communication overheads (therefore in particular allowing for energy improvements).
  • the attack detection and security modules are forced to provide, respectively, the correct attack and normal patterns. This is achieved in particular due to the non-cooperative security game between the security module and the attack detection module.
  • a robust and reliable (AI) Security Framework for a heterogeneous network can be provided.

Abstract

Outlined herein is a method (400) for use in detecting an attack on a network. The method comprises monitoring (S402), by one or more attack detection modules, a behavior of one or more target devices in the network. If it is detected by a first one of the one or more attack detection modules that the monitored behavior of one of the one or more target devices satisfies a predefined condition, data relating to the monitored behavior which satisfies the predefined condition is provided by the first attack detection module to a central defense module. The method further comprises updating (S406), based on the monitored behavior which satisfies the predefined condition, a model implemented by one or more of the one or more attack detection modules to detect a said attack on the network.

Description

Telefonaktiebolaget LM Ericsson (publ)
Security framework for a network
Technical Field
The present disclosure generally relates to methods for use in detecting an attack on a network, a computer program product, and an attack detection module, a security module and a central defense module for use in detecting an attack on a network.
Background
Game theory for security has grown to be a diverse area of research. Prior work has considered models ranging from two-player games to n-player games, with the players representing various combinations of defenders and attackers. Researchers have studied many solution concepts, such as the Stackelberg equilibrium, and both descriptive and normative interpretations of the outcomes have been considered. Results range from characterization of best-responses and equilibria to computational results, behavioral user studies, simulations, and real-world deployments. Despite the abundance of prior work, game theory for security remains a very active research area with many open problems, driven by the practical need to understand and improve security decision-making. Examples of game theory-based security systems are described in E. Shieh et al., "PROTECT: An Application of Computational Game Theory for the Security of the Ports of the United States", Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, pages 2173-2179; and in M. Pirani et al., "A Game-Theoretic Framework for the Security-Aware Sensor Placement Problem in Networked Control Systems", IEEE Transactions on Automatic Control, 2021.
Attacks on machine learning/artificial intelligence (AI) models, i.e. techniques that attempt to fool models with deceptive data, are a growing threat in the AI and machine learning research community. The most common reason is to cause a malfunction in a machine learning model. An adversarial attack might entail presenting a model with inaccurate or misrepresentative data for its training, or introducing maliciously designed data to deceive an already trained model (see R. Shokri, "Membership Inference Attacks Against Machine Learning Models", 2017 IEEE Symposium on Security and Privacy (SP)).
Recently, a couple of distributed detection frameworks based on a distributed detection process were proposed. In I. Diana Jeba Jingle et al., "A collaborative defense protocol against collaborative attacks in wireless mesh networks", International Journal of Enterprise Network Management, 2021, Vol.12 No.3, pp.199-220, a collaborative defense protocol (CDP) is disclosed which uses a handshake-based verification process and a collaborative flood detection and reaction process to effectively carry out the defense. In R. Khalid, "A Secure Trust Method for Multi-Agent System in Smart Grids Using Blockchain", IEEE Access, vol. 9, pp. 59848-59859, 2021, a blockchain-based trust management method for agents in a multi-agent system is presented for achieving trust, cooperation and privacy of agents. The detection frameworks disclosed in these documents can execute a set of detection strategies such as detection, prevention, and mitigation with the goal to accurately detect unknown cyber threats defined as zero-day attacks. Mutual monitoring and detection are executed between the detection systems with the goal to detect malicious detection systems and hence allow only trusted detection systems to participate in the detection and monitoring process. However, the major weakness of these works is that the detection strategies executed by the detection frameworks are activated all the time (i.e., the rules-based detection or the machine learning-based detection does not switch to another detection strategy), which leads to an increase of the false positive rate and of the computation and communication overheads, specifically when the number of attackers increases.
In N. Wang et al., "Machine Learning-based Spoofing Attack Detection in MmWave 60GHz IEEE 802.11ad Networks", IEEE Infocom 2020, the authors proposed a cooperative attacks detection against spoofing attacks by using a machine learning algorithm. Physical layer features, such as signal to noise ratio and sector level sweep, which are used as inputs to detect the attacks, are exploited to detect the spoofing attack. The main weakness of this work is that the trust level of the security agent that runs the machine learning is not evaluated, and hence this security agent could be malicious and could provide false and malicious detection.
Summary
Accordingly, there is a need for an improved prevention/security framework.
There is provided a method for use in detecting an attack on a network. The method comprises monitoring, by one or more attack detection modules, a behavior of one or more target devices in the network. If it is detected by a first one of the one or more attack detection modules that the monitored behavior of one of the one or more target devices satisfies a predefined condition, the first attack detection module provides to a central defense module data relating to the monitored behavior which satisfies the predefined condition. The method further comprises updating, based on the monitored behavior which satisfies the predefined condition, a model implemented by one or more of the one or more attack detection modules to detect the attack on the network.
The model may hereby, in some examples, be a detection training model which may be artificial intelligence-based, as will be outlined further below.
The predefined condition may, in some examples, comprise or relate to the behavior of one of the one or more target devices which deviates from an average behavior of the other target devices by more than a predefined deviation threshold. For example, the abnormal behavior may, in some examples, relate to the number of packets sent and/or dropped by a target device which differs from an expected and/or average number of packets sent and/or dropped by other target devices. The abnormal behavior, i.e. the behavior of the target device(s) which satisfies/satisfy the predefined condition, is then taken into account when generating and/or updating a model which is implemented for attack detection. Attack features may hereby relate to the behavior of an attacker which fulfills the predefined condition.
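As an illustrative sketch of such a predefined condition (the function name and the choice of the arithmetic mean over peers are assumptions, not taken from the disclosure):

```python
def deviates_from_peers(value, peer_values, threshold):
    """True if a device's metric (e.g. number of packets sent and/or
    dropped) deviates from the peer-device average by more than the
    predefined deviation threshold."""
    avg = sum(peer_values) / len(peer_values)
    return abs(value - avg) > threshold
```

A device whose packet count satisfies this condition would be the one whose behavior data the first attack detection module forwards to the central defense module.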
There is provided a further method for use in detecting an attack on a network. The method comprises monitoring, by one or more security modules, a behavior of (one or more) target devices in the network. It is then determined, by the one or more security modules and based on the behavior, a normal behavior of the target device. The security module(s) may hereby generate a model which can be implemented for detecting that a target device exhibits normal behavior, which may, in some examples, be a behavior which does not deviate from an expected and/or average behavior by more than a predefined threshold. For the normal behavior, the one or more features may, in some examples, be defined as features of a normal behavior.
In the above methods, the relevant one or more features, i.e. attack's features and normal features, may be used in order to categorize a monitored target as a normal target or as an attacker.
There is provided a further method for use in detecting an attack on a network. The method comprises selecting, by a central defense module, from a plurality of attack detection modules one or more attack detection modules. The central defense module then receives, from the selected one or more attack detection modules, data related to an attack on the network. The central defense module generates, based on the data, an attack model for downloading by the selected one or more attack detection modules for attack detection. The central defense module may hereby cooperate with the selected one or more attack detection modules in order to determine (in some examples over time) the relevant attacks' features related in particular to a new attack's behavior, which may in particular be suspected to occur.
In some examples, selecting the one or more attack detection modules from the plurality of attack detection modules is based on a respective reliability of each of the plurality of attack detection modules. The reliability is based on a behavior of a respective attack detection module. In some examples, if the behavior of the attack detection module deviates from an average and/or expected behavior of attack detection modules by a predefined threshold, the attack detection module may be considered as not being reliable, and may thus not be selected by the central defense module.
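A minimal sketch of this reliability-based selection, assuming each module's behavior can be summarized as a single score (the representation and names are hypothetical):

```python
def select_reliable_modules(behaviors, expected, threshold):
    """Keep only the attack detection modules whose behavior score does
    not deviate from the expected behavior by more than the threshold.

    behaviors -- dict mapping a module identifier to its behavior score
    expected  -- average/expected behavior score of attack detection modules
    """
    return [module_id for module_id, score in behaviors.items()
            if abs(score - expected) <= threshold]
```

Modules filtered out here would not participate in the attack's features determination process described later (step S202 of method 200).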
As will be appreciated, any two or more of the above-outlined methods may be combined.
There is further provided a computer program product comprising program code portions that, when executed on at least one processor, configure the processor to perform the method of any one of the example implementations outlined herein. The computer program product may hereby, in some examples, be stored on a computer-readable recording medium or encoded in a data signal.
There is further provided an attack detection module for use in detecting an attack on a network. The attack detection module is configured to monitor a behavior of one or more target devices in the network. The attack detection module is further configured to detect that/if the monitored behavior of one of the one or more target devices satisfies a predefined condition. The attack detection module is further configured to provide, to a central defense module, data relating to the monitored behavior which satisfies the predefined condition. Furthermore, the attack detection module is configured to update, based on the monitored behavior which satisfies the predefined condition, a model implemented by the attack detection module to detect the attack on the network.
There is further provided a security module for use in detecting an attack on a network. The security module is configured to monitor a behavior of one or more target devices in the network. The security module is further configured to determine, based on the behavior, a normal behavior of the target device(s).
There is further provided a central defense module for use in detecting an attack on a network. The central defense module is configured to select from a plurality of attack detection modules one or more attack detection modules. The central defense module is further configured to receive, from the selected one or more attack detection modules, data related to an attack on the network. Based on the data, the central defense module is configured to generate an attack model for downloading by the selected one or more attack detection modules for attack detection.
There is further provided a system comprising any combination of two or more of the attack detection module according to any one of the example implementations as outlined herein, the security module according to any one of the example implementations as outlined herein, and the central defense module according to any one of the example implementations as outlined herein.
Brief Description of the Drawings
Further aspects, details and advantages of the present disclosure will become apparent from the detailed description of exemplary embodiments below and from the drawings, wherein:
Fig. 1 shows a schematic illustration of a security framework according to some example implementations of the present disclosure;
Fig. 2 shows a flow diagram of a method for cooperatively determining attack features according to some example implementations of the present disclosure;
Fig. 3 shows a flow diagram for optimal activation of detection techniques according to some example implementations of the present disclosure;
Fig. 4 shows a flow diagram of a method according to some example implementations of the present disclosure;
Fig. 5 shows a flow diagram of a method according to some example implementations of the present disclosure; and Fig. 6 shows a flow diagram of a method according to some example implementations of the present disclosure.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent to one of skill in the art that the present disclosure may be practiced in other embodiments that depart from these specific details.
Those skilled in the art will appreciate that the examples outlined herein may be implemented using individual hardware circuits, using software functioning in conjunction with a programmed microprocessor or general purpose computer, using one or more application specific integrated circuits (ASICs) and/or using one or more digital signal processors (DSP). It will also be appreciated that when the present disclosure is described in terms of a method, it may also be embodied in one or more processors and one or more memories coupled to the one or more processors, wherein the one or more memories store one or more computer programs that perform the steps, services and functions disclosed herein when executed by the one or more processors.
According to example implementations outlined throughout the present disclosure, accuracy for detection of attacks is improved, in particular against unknown attacks. False positives and/or false negatives when detecting attacks may be reduced in example implementations outlined herein when compared to methods according to the state of the art. Furthermore, computational overhead may also be reduced when implementing the examples outlined throughout the present disclosure.
In some examples, the security framework may be an AI security framework.
The security framework according to examples outlined herein may be based on new defense modules, which are defined as a security module, an attack detection module, and a central defense module. The security interactions between these modules are performed, in some examples, based on a security game model between the attack detection module and the security module in which the goal may be to force the attack detection module to provide over time correct and relevant attacks' features (attributes of attacks behaviors) and force the security module to provide over time correct and relevant features related to normal behaviors. A new reliable reputation metric is implemented in some examples that evaluates the behavior of the attack detection module by monitoring a set of parameters X, Y, Z and D with the goal to optimally activate the detection technique, while considering the tradeoff between false positives and computation overhead. Here, X, Y and Z are respectively the attacks detection rates detected by the machine learning algorithm, the rules-based detection and the hybrid detection (rules-based and machine learning-based) and confirmed by the central defense module. D is the false detection rate provided by the attack detection module and detected by the central defense module.
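The disclosure names the monitored parameters X, Y, Z and D but not an explicit formula, so the sketch below assumes a simple weighted combination for the reputation metric; the weights and the clamping to zero are illustrative assumptions:

```python
def reputation(x, y, z, d, weights=(0.25, 0.25, 0.25, 0.25)):
    """Illustrative reputation metric over the monitored parameters.

    x, y, z -- attacks detection rates of the machine learning-based,
               rules-based and hybrid techniques, as confirmed by the
               central defense module
    d       -- false detection rate detected by the central defense module
    The weighted-sum form is an assumption; only the parameters are
    named in the disclosure, not the exact formula.
    """
    wx, wy, wz, wd = weights
    # Confirmed detections raise the reputation; false detections lower it.
    return max(0.0, wx * x + wy * y + wz * z - wd * d)
```

The resulting value would then be compared against the thresholds F_Max and F* to select the rules-based or hybrid detection technique.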
In some examples, the (AI) security framework may be considered as a main component of the zero-trust architecture.
Figure 1 shows a schematic illustration of a network 100 comprising a system 101 in which a security framework according to some example implementations of the present disclosure is provided.
In this example, the security framework is an AI edge security framework (AESF). The AESF comprises different defense modules, that is an attack detection module 104 and a security module 106 executed, in this example, at each edge server, and a central defense module 102 executed, in this example, at a cloud server. The system 101 communicates, in this example, with an Internet of Things network 108 implemented in an access network. As will be appreciated by those with skill in the art, the edge network, cloud network and access network are merely examples, and the central defense module 102, the attack detection module 104 and the security module 106 can be implemented in networks different from the edge network, cloud network and access network. Additionally or alternatively, the central defense module 102, the attack detection module 104 and the security module 106 can be implemented in the same network, e.g. the same edge network or the same cloud network.
The AESF can, in some examples, have many points of presence in a multiple edge server setup.
The attack detection module 104 cooperates with the central defense module 102 (which may be defined as cooperative attack features determination) with the goal of determining (in particular over time) the relevant attacks' features related to a new attack's behavior that is suspected to occur or will be suspected to occur in future iterations. As an example, in case the number of packets sent and/or dropped (considered as relevant features) for a device is higher compared to its neighbor devices, the suspected target is detected as malicious and this detection is shared with the central defense module 102. It will be appreciated that the number of packets sent and/or dropped is just one example of a feature, and other features may additionally or alternatively be taken into account when determining if a behavior is normal or abnormal. For instance, another example which may be taken into account when determining whether the behavior is normal or abnormal is the signal strength as an attack's feature, which may be used to detect, for example, a jamming attack. In any one or more of the example implementations outlined throughout the present disclosure, such examples may be used as an input for training a machine learning/AI model, and the output for training purposes is labeled as a target/device (which may comprise the attack detection module 104 and/or the security module 106 and/or another target device) exhibiting normal or abnormal behavior.
The security module 106 focus is to determine the (one or more) features related to normal behaviors of the monitored target(s), where the cooperation detection between the central defense module 102 and the security module 106 is performed to detect/identify over time the relevant feature(s) of normal behaviors. As an example, if the number of packets sent and/or dropped (considered as relevant features) of the monitored target(s) is the same as compared to that of neighbor targets, the security agent/module categorizes the monitored target as a node that exhibits a normal behavior and shares this information with the central defense module 102. As outlined above, such examples/data may be used as an input for training a machine learning/AI model, and the output for training purposes is labeled as a target/device (which may comprise the attack detection module 104 and/or the security module 106 and/or another target device) exhibiting normal or abnormal behavior.
In view of the above, and as can be seen in figure 1, the attack detection module 104 monitors attack behaviors, and the attack detection module 104 and the central defense module 102 execute a cooperative attack's features determination process. The attack detection module 104 is further configured to detect if the security module 106 is a malicious module.
In some examples, the central defense module 102 and the security module 106 cooperatively execute a Federated Learning to determine the relevant features related to normal behaviors. The security module 106, further to monitoring normal behaviors, detects if the attack detection module 104 is a malicious module. An interaction game is provided between the defense modules, i.e. the attack detection module 104 and the security module 106.
Data traffic flows between the system 101 (that is the attack detection module 104 and the security module 106, respectively) and, in this example, an Internet of Things (IoT) network 108.
The attack detection module 104 is based on rules-based attacks detection, machine learning-based attacks detection, or a hybrid detection technique which relates to a combination of rules-based and machine learning-based detection. In some examples, by default, the attack detection module executes a machine learning-based detection on a supervised neural network (with, for example, inputs and outputs in the network being provided based on, e.g., numbers of packets sent and/or dropped for learning the model). However, in some examples, when the reputation value is (very) low (e.g. below a predefined threshold) and/or (very) high (e.g. above a predefined threshold, which may or may not be the same threshold under which the reputation value is considered to be (very) low), the rules-based attacks detection and hybrid attacks detection are activated, respectively (as explained below in relation to a reliable reputation).
The attack detection module 104 monitors the behavior(s) of the target device(s) (e.g., user equipment and/or Internet of Things devices) located within the neighborhood of the edge server (where the attack detection module is activated) by activating the supervised neural network algorithm. In case the neural network algorithm detects a new attack, a cooperative attack's features determination process between the attack detection module 104 and the central defense module 102 is launched. This process is summarized as follows and illustrated in the flow diagram of figure 2, which shows a method 200 for cooperatively determining attack features according to some example implementations of the present disclosure.
In this example, the central defense module selects, at step S202, a set of attack detection modules (activated, in this example, in edge servers) to participate in the relevant attack's features determination. The selection of attack detection modules is based on their reliabilities (as explained below in relation to an interaction game between the defense modules, in particular in relation to equation 3), i.e., the attack detection module that is, based on a determination at step S204 of whether an equilibrium (e.g. a Nash equilibrium) for the security module which monitors the respective attack detection module is reached, detected/categorized at step S206 as a malicious security agent is not selected in the attack's features determination process. In addition, in some examples, only the attack detection modules that are executing the hybrid detection technique are selected at step S202 (see in particular below in relation to the reliable reputation metric, formula 7).
At step S208, the central defense module generates a global attack model based on the new attack's features related to the new attack detected by the attack detection module and the observations of the other selected attack detection modules. Here, the observations constitute the features related to the current attacks detected by the selected attack detection modules. The global attack model may, in some examples, be either a training model used as an input by the machine learning algorithm, or a rules model (containing a rule related to each attack behavior) used by rules-based attack detection, or a hybrid model (containing both a rules-based model and a training-based model) used by the hybrid attacks detection.
At step S210, the attack detection modules download the global attack model, whereby the attack detection module that detects the new attack updates its training model and the other selected attack detection modules update their hybrid model. In this step, in some examples, all of the attack detection modules that participate in the cooperative attack's features determination have the same detected attacks.
At step S212, the attack detection modules monitor the data traffic gathered by the security modules based on the global attack model (determined in step S208). The goal of this step is to detect the malicious behaviors that could be executed by the security modules. In case the monitored security module exhibits an attack (determined based on a determination, at step S214, of whether the attack detection module which monitors the respective security module has reached an equilibrium (e.g. Nash equilibrium)), steps S202 to S210 are re-executed with a goal to lead the central defense module to confirm or not the malicious behavior of the monitored security module and update the global attack model with new relevant attack's features.
In step S214 of determining whether the security module is malicious, it is also determined, in this example, whether another target device is malicious. If this is the case as provided in step S218, steps S202 to S210 are re-executed with a goal to lead the central defense module to confirm or not the malicious behavior of the monitored target device and update the global attack model with new relevant attack's features. If it is determined in step S214 that the target device and/or the security module, respectively, is not malicious, the target device and/or the security module continue(s) to be monitored at step S216.
As will be appreciated, the method 200 may be performed based on monitoring a plurality of attack detection modules. Additionally or alternatively, a plurality of security modules may be monitored. Additionally or alternatively, a plurality of target devices (separate from the attack detection module(s) and the security module(s)) may be monitored.
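Steps S202 to S210 of method 200 can be sketched as follows; the data structures and the set-union aggregation are illustrative assumptions standing in for the actual global attack model generation:

```python
from dataclasses import dataclass, field

@dataclass
class DetectionModule:
    """Simplified stand-in for an attack detection module at an edge server."""
    name: str
    technique: str                      # "rules-based", "ml" or "hybrid"
    categorized_malicious: bool = False
    observed_features: list = field(default_factory=list)
    model: frozenset = frozenset()

    def update_model(self, global_model):
        # S210: the module downloads the global attack model
        self.model = global_model

def determine_attack_features(modules, new_attack_features):
    """Sketch of steps S202-S210 (names and representations are illustrative)."""
    # S202/S206: select only reliable modules that run the hybrid technique
    selected = [m for m in modules
                if not m.categorized_malicious and m.technique == "hybrid"]
    # S208: build the global attack model from the new attack's features
    # plus the observations of the selected modules
    observations = {f for m in selected for f in m.observed_features}
    global_model = frozenset(new_attack_features) | observations
    # S210: the selected modules download the global attack model
    for m in selected:
        m.update_model(global_model)
    return global_model
```

In the disclosure the global attack model may be a training model, a rules model or a hybrid model; the set of features here simply stands in for any of those.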
The security module is, in some examples, a module that is based on a Federated Learning (FL) algorithm that monitors the behaviors of its targets (located, in this example, within the edge area). A plurality of security modules may act as FL agents and the central defense module is where models are aggregated and updated.
The purpose of the security module is, in some examples, to determine only the relevant feature(s) related to normal behaviors of the monitored target(s). The central defense module and the security module cooperate between each other to determine the global normal model (related to normal behaviors of the monitored targets).
Unlike the current Federated Learning algorithm, where all the distributed devices work with centralized devices to generate the same training model, in some examples of the present disclosure, only the security module that detects the new relevant feature(s) and the central defense module work together to generate the shared training model defined as the global normal model. This learning mode (i.e., only the monitoring security module interacts with the central defense module) hardens the security process as the other security modules do not participate in the determination of the global normal model, which leads to a reduction of false negatives.
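A minimal sketch of this restricted aggregation, assuming models are represented as parameter vectors; the blend weight is an illustrative stand-in for the actual Federated Learning update rule:

```python
def update_global_normal_model(global_model, reporting_update, weight=0.5):
    """Blend the global normal model with the update from the single
    security module that detected the new relevant feature(s).

    Only that reporting module contributes; the other security modules
    are excluded from the update (which, per the disclosure, reduces
    false negatives). The weighted average is an assumed aggregation.
    """
    return [(1 - weight) * g + weight * u
            for g, u in zip(global_model, reporting_update)]
```

In a full Federated Learning setup the central defense module would apply this update and then make the resulting global normal model available for download.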
In this example, the security module monitors the behaviors of one (or more) attack detection module(s) by analyzing the attack behavior detected by the attack detection module(s). In case the security module detects that the behavior of a suspected attack corresponds rather to a normal behavior, i.e., the attack detection module exhibits a malicious behavior (see equation 4 below regarding the detection of a malicious attack detection module), the security module informs the central defense module. Here, the security module shares with the central defense module the identity of the suspected attack detection module with the information of attack (detected by the security module as normal behavior) that the attack detection module suspects to occur. The information of attack corresponds to the attack's feature(s).
It is noted that, in some examples, the attack detection module and the security module rely respectively on a Supervised Neural Network and a Federated Machine Learning algorithm. However, these are just examples of AI algorithms and, in some examples, both modules may use other AI algorithms such as, but not limited to, a Support Vector Machine and a Supervised Reinforcement Learning algorithm (executed by the attack detection module) and a Generative Adversarial Network (executed by the security module).
In some examples, an interaction game between the defense modules is provided. The security interaction between the attack detection module and the security module is modeled, in some examples, as a Stackelberg game, where the security module and the attack detection module play respectively the roles of Leader and Follower players. In some examples, only one leader player and one follower are present, i.e. each security module interacts only with one attack detection module. A new and robust security game model is provided that hardens the security process between the security module and the attack detection module. The goal of the security game is to accurately detect the malicious security module and the malicious attack detection module.
The payoff functions of the Leader player and the Follower player are defined in equations 1 and 2. It is noted that the present disclosure uses the Stackelberg game; however, other non-cooperative games may be used to study the interactions between the security module and the attack detection module, such as, but not limited to, mean field game theory.
[Equations 1 and 2 - payoff functions Usecurity module(DSM, FPSM, FNSM) of the Leader player and UAttack module(DAM, FPAM, FNAM) of the Follower player; the equation images are not reproduced in this text version.]
In equations 1 and 2, DSM is the attacks-suspected rate of the attack detection module determined by the security module, and FPSM and FNSM are, respectively, the false positive and false negative generated by the security module against the attack detection module. DAM is the attacks-suspected rate of the security module determined by the attack detection module, and FPAM and FNAM are, respectively, the false positive and false negative generated by the attack detection module against the security module.
It is noted that the parameters DSM, FPSM and FNSM are computed by the security module and updated by the central defense module. The parameters DAM, FPAM and FNAM are computed by the attack detection module and updated by the central defense module. α1, α2, α3, β1, β2 and β3 ∈ [0,1] are the weight parameters. N is the number of interactions between the security and attack detection modules.
In this example, the leader aims to maximize its payoff while minimizing the payoff of the follower player, and vice versa, as shown in equations 3 and 4. Therefore, when the equilibrium (e.g., the Nash equilibrium) for the security module is reached, i.e., Usecurity module = Y1, the attack detection module is categorized as a malicious module. Furthermore, when the equilibrium (e.g., the Nash equilibrium) for the attack detection module is reached, i.e., UAttack module = Y2, the security module is categorized as a malicious module.
Y1 = Argmax_DSM Argmax_FPSM,FNSM Usecurity module(DSM, FPSM, FNSM) (3)

Y2 = Argmax_DAM Argmax_FPAM,FNAM UAttack module(DAM, FPAM, FNAM) (4)
The proposed Stackelberg security game aims to reinforce the accuracy of new-attack detection by forcing the security module to provide relevant features of normal behaviors and forcing the attack detection module to provide relevant features of new attack behaviors. This security game leads to a reduction over time of the false positive and false negative rates generated by both the security and attack detection modules, as shown in equations 3 and 4.
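The leader/follower bookkeeping described above can be sketched as follows. The concrete payoff forms below are assumptions (equations 1 and 2 are not reproduced in this text version); the disclosure only states that each payoff depends on an attacks-suspected rate D, a false positive FP and a false negative FN, weighted by α/β parameters in [0, 1] over N interactions, and that a module is categorized as malicious once the opposing payoff reaches its equilibrium value (Y1 or Y2).

```python
# Illustrative sketch of the Stackelberg interaction between the security
# module (Leader) and the attack detection module (Follower). The linear
# payoff forms and default weights are assumptions, not the patent's
# (unreproduced) equations 1 and 2.

def leader_payoff(d_sm, fp_sm, fn_sm, a1=0.5, a2=0.25, a3=0.25, n=1):
    # Assumed form: reward suspected-attack detections, penalize false calls,
    # averaged over n interactions.
    return (a1 * d_sm - a2 * fp_sm - a3 * fn_sm) / n

def follower_payoff(d_am, fp_am, fn_am, b1=0.5, b2=0.25, b3=0.25, n=1):
    return (b1 * d_am - b2 * fp_am - b3 * fn_am) / n

def is_malicious(payoff, equilibrium_value, tol=1e-6):
    # Per equations 3 and 4: a module is categorized as malicious once the
    # opponent's payoff reaches its equilibrium value (Y1 or Y2).
    return abs(payoff - equilibrium_value) <= tol

u_sm = leader_payoff(d_sm=0.9, fp_sm=0.1, fn_sm=0.05)
print(is_malicious(u_sm, equilibrium_value=u_sm))  # True: equilibrium reached
```

In practice the equilibrium values Y1 and Y2 would come from solving the game (e.g., by best-response iteration), not from a single payoff evaluation as in this toy usage.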
According to some examples outlined herein, a reliable reputation metric is provided. As specified above, the attack detection module may run a machine learning-based detection, a rules-based attack detection, or a hybrid attack detection, which may depend on the reputation value of the attack detection module, which may, in some examples, be computed by the central defense module. Running a machine learning-based detection technique all the time may not be efficient, since this technique may require a high computation overhead to carry out the training and detection process. Specifically, in case the machine learning algorithm is infected by cyber-attacks, the attackers may degrade the network performance. This degradation may consist of forcing legitimate targets to generate high computation and communication overheads. Therefore, when the reputation (of the attack detection module) is (very) low (e.g., below a predefined threshold), the central defense module requests the attack detection module to switch from machine learning-based detection to rules-based attack detection.
Generally, throughout the present disclosure, switching from a machine learning-based detection to a rules-based attack detection may relate, more generally, to switching from a robust (heavy) detection technique to a lightweight detection technique. A robust (heavy) detection technique may hereby comprise, for example, one or more of deep learning-based attack detection, reinforcement learning-based attack detection, and generative adversarial network-based attack detection. A lightweight detection technique may comprise, for example, the rules-based attack detection. To harden the accuracy of attack detection, specifically against unknown attacks (such as zero-day threats), a hybrid detection technique (rules-based and machine learning-based) may be activated. The hybrid detection is activated only when the reputation value is high (that is, above a predefined threshold). Here, the central defense module requests the attack detection module to switch from machine learning-based detection to the hybrid detection technique.
The new reputation metric is based on cooperative game theory, where the attack detection module and the central defense module cooperate with each other to optimally activate the detection technique (machine learning-based, rules-based, or hybrid detection) of the attack detection module based on the monitored parameters X, Y, Z and D. X, Y and Z are, respectively, the attack detection rates detected by the machine learning algorithm, the rules-based detection and the hybrid detection (activated at the attack detection module) and confirmed by the central defense module. D is the false detection rate provided by the attack detection module and detected by the central defense module. The utility function of this cooperative security game is defined in equation 5.
[Equation 5 - utility function F(X, Y, Z, D) of the cooperative security game; the equation image is not reproduced in this text version.]
In equation 5, the variables X, Z and D are the main parameters used to assess the reputation value of the attack detection module. Therefore, the exponential function is used to represent X, Z and D. The value of F increases rapidly when the values of the parameters X and Z increase, while F decreases rapidly when the parameter D increases. γ1, γ2 ∈ [0,1] are the weight parameters.
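A minimal sketch of a utility with this shape is given below. Since equation 5 is only an image in the source, the exact functional form, the omission of Y, and the default weights are assumptions; the sketch only reproduces the stated properties (exponential terms, F growing rapidly with X and Z, shrinking rapidly with D, weights γ1 and γ2).

```python
import math

# Assumed form of the reputation utility F (equation 5 is not reproduced in
# the source text): exponentials in the confirmed detection rates X and Z
# minus an exponential penalty in the false detection rate D, weighted by
# gamma1, gamma2 in [0, 1].
def reputation_utility(x, z, d, gamma1=0.5, gamma2=0.5):
    return gamma1 * (math.exp(x) + math.exp(z)) - gamma2 * math.exp(d)

print(reputation_utility(x=0.8, z=0.6, d=0.1))
```

Any strictly increasing function of X and Z and strictly decreasing function of D would serve the same role in the cooperative game.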
The equilibrium (e.g., the Nash equilibrium) of this cooperative security game is reached when the cooperative players (i.e., the attack detection module and the central defense module) maximize the utility function, as defined in equation 6.
[Equation 6 - equilibrium condition: the cooperative players maximize the utility function F, yielding F*; the equation image is not reproduced in this text version.]
The new proposed reputation metric is defined as a utility function, F. The activation of rules-based detection and hybrid detection is defined in equation 7,
Task 1 (activate rules-based attack detection) if 0 < F ≪ F*; Task 2 (activate hybrid attack detection) if FMax < F ≤ F*. (7)
and illustrated in the flow diagram 300 for optimal activation of detection techniques (rules-based detection and hybrid detection) shown in figure 3.
In this example, at step S302, the attack detection module monitors and computes the parameters X, Y, Z and D. The computed parameters are then sent, at step S304, by the attack detection module to the central defense module. Based on the parameters, the attack detection module and the central defense module compute, at step S306, the value of F referred to above in equation 5.
If it is determined at step S308 that F = 0, the method returns to step S302, where the attack detection module monitors and computes the parameters X, Y, Z and D. If it is determined that F ≠ 0, the attack detection module and the central defense module collaboratively compute F* at step S310.
If it is determined at step S312 that 0 < F ≪ F*, a first task (Task 1) as outlined above in equation 7 is executed, in which the central defense module requests the attack detection module to activate the rules-based attacks detection. If the condition 0 < F ≪ F* is not fulfilled, it is determined at step S316 whether the condition FMax < F ≤ F* is fulfilled, whereby FMax is a (predefined) security threshold, which may be provided by the security expert working at the central defense module level and which may be updated over time to mitigate the false positive and false negative rates. If FMax < F ≤ F*, a second task (Task 2) is executed at step S318, in which the central defense module requests the attack detection module to activate the hybrid attacks detection. If the condition FMax < F ≤ F* is not fulfilled, the method returns to step S302, where the attack detection module monitors and computes the parameters X, Y, Z and D.
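The decision flow of figure 3 can be sketched as follows. The function name, the interpretation of "F ≪ F*" as a fixed fraction of F*, and the return labels are illustrative assumptions; the disclosure does not pin down how "much less than" is evaluated.

```python
# Sketch of the activation flow of figure 3 / equation 7. F comes from
# equation 5 and F* from its cooperative maximization; here both are
# passed in as plain numbers.

def choose_detection_task(f, f_star, f_max, much_less_factor=0.1):
    """Return which detection technique the central defense module requests.

    Task 1: rules-based detection when 0 < F << F* (here: F < factor * F*).
    Task 2: hybrid detection when FMax < F <= F*.
    Otherwise: keep monitoring (return to step S302).
    """
    if f == 0:
        return "monitor"            # S308 -> back to S302
    if 0 < f < much_less_factor * f_star:
        return "rules-based"        # Task 1, step S314
    if f_max < f <= f_star:
        return "hybrid"             # Task 2, step S318
    return "monitor"                # neither condition holds -> back to S302

print(choose_detection_task(f=0.05, f_star=1.0, f_max=0.5))  # rules-based
print(choose_detection_task(f=0.8, f_star=1.0, f_max=0.5))   # hybrid
```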
Figure 4 shows a flow diagram of a method 400 according to some example implementations of the present disclosure. The method 400 is suitable for use in detecting an attack on a network. The method 400 comprises, at step S402, monitoring, by one or more attack detection modules, a behavior of one or more target devices in the network. At step S404, if it is detected by a first one of the one or more attack detection modules that the monitored behavior of one of the one or more target devices satisfies a predefined condition, the first attack detection module provides to a central defense module data relating to the monitored behavior which satisfies the predefined condition. The method 400 further comprises updating, at step S406, based on the monitored behavior which satisfies the predefined condition, a model implemented by one or more of the one or more attack detection modules to detect the attack on the network.
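A minimal sketch of method 400's monitoring and reporting steps is given below, using the deviation-from-average condition of claim 4 as the predefined condition. The report payload format is an assumption; the model-update step S406 is left out, since the disclosure leaves the model open.

```python
# Sketch of steps S402/S404: monitor target behaviors and report those
# that deviate from the average of the other targets by more than a
# predefined threshold (the predefined condition of claim 4).
from statistics import mean

def monitor_targets(behaviors, deviation_threshold):
    """behaviors: dict target_id -> numeric behavior score (S402).

    Returns the reports the attack detection module would send to the
    central defense module (S404).
    """
    reports = []
    for target, value in behaviors.items():
        others = [v for t, v in behaviors.items() if t != target]
        if others and abs(value - mean(others)) > deviation_threshold:
            reports.append({"target": target, "behavior": value})
    return reports

print(monitor_targets({"a": 1.0, "b": 1.1, "c": 9.0}, deviation_threshold=5.0))
# -> [{'target': 'c', 'behavior': 9.0}]
```

Note that a single strong outlier also shifts the "average of the others" for every remaining target, so the threshold must be chosen with the expected spread in mind.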
Figure 5 shows a flow diagram of a method 500 according to some example implementations of the present disclosure. The method 500 is suitable for use in detecting an attack on a network. The method 500 comprises monitoring, at step S502, by one or more security modules, a behavior of target devices in the network. At step S504, the one or more security modules determine, based on the behavior, a normal behavior of the target device.
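A minimal sketch of method 500 is given below, assuming a simple mean/standard-deviation profile as the normal-behavior model; the disclosure leaves the concrete model open (e.g., a federated learning model built with the central defense module).

```python
# Sketch of steps S502/S504: the security module collects behavior samples
# per target and derives a per-target "normal behavior" profile. Profiling
# by mean and population standard deviation is an illustrative assumption.
from statistics import mean, pstdev

def learn_normal_behavior(samples_by_target):
    """samples_by_target: dict target_id -> list of observed behavior values."""
    return {
        target: {"mean": mean(samples), "std": pstdev(samples)}
        for target, samples in samples_by_target.items()
    }

profile = learn_normal_behavior({"dev1": [1.0, 1.2, 0.8]})
print(profile["dev1"])
```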
Figure 6 shows a flow diagram of a method 600 according to some example implementations of the present disclosure. The method 600 is suitable for use in detecting an attack on a network. The method 600 comprises selecting (S602), by a central defense module, from a plurality of attack detection modules one or more attack detection modules. The method 600 further comprises, at step S604, receiving, by the central defense module from the selected one or more attack detection modules, data related to an attack on the network. At step S606, the central defense module generates, based on the data, an attack model for downloading by the selected one or more attack detection modules for attack detection.
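A minimal sketch of method 600 is given below. The reputation-threshold selection rule and the element-wise averaging of reported attack feature vectors into an "attack model" are illustrative assumptions; the disclosure only requires selecting reliable modules (S602), receiving their attack data (S604) and generating a downloadable attack model from it (S606).

```python
# Sketch of method 600 at the central defense module.

def select_modules(modules, reputation_threshold):
    # S602: keep only attack detection modules deemed reliable
    # (assumed rule: reputation at or above a threshold).
    return [m for m in modules if m["reputation"] >= reputation_threshold]

def generate_attack_model(attack_reports):
    # S606: aggregate the reported attack feature vectors (assumed rule:
    # element-wise average) into a model the selected modules can download.
    n = len(attack_reports)
    dim = len(attack_reports[0])
    return [sum(r[i] for r in attack_reports) / n for i in range(dim)]

selected = select_modules(
    [{"id": 1, "reputation": 0.9}, {"id": 2, "reputation": 0.2}],
    reputation_threshold=0.5,
)
print([m["id"] for m in selected])                       # [1]
print(generate_attack_model([[1.0, 0.0], [0.0, 1.0]]))   # [0.5, 0.5]
```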
As can be gathered from the present disclosure, one aspect of the (AI Edge) Security Framework (AESF) is a hierarchical (AI) security framework composed of two kinds of (AI) agents, named the security module and the attack detection module. The security module focuses on the determination of one or more relevant features (i.e., attributes) related to a normal behavior of the monitored targets, while the attack detection module focuses on the determination of one or more attack features. The deployment of two specific (AI) modules (one monitoring the normal behaviors and one monitoring the attack behaviors) at each edge server, interacting with a centralized detection module (activated, for instance, at the cloud server), leads to identifying and distinguishing features of, e.g., new attacks (defined as zero-day attacks), while minimizing the false detections that could be generated by the (AI) modules.
Furthermore, some example implementations outlined throughout the present disclosure provide for a robust security game (based on a non-cooperative game) executed by the AI modules, which mutually monitor their behaviors with the goal of accurately detecting a malicious security module and a malicious attack detection module, while considering the false positive and false negative issues.
A reliable reputation metric based on a cooperative game is provided to assess the reputation of the attack detection module with the goal of optimally activating the detection techniques, rules-based or hybrid detection (rules-based and machine learning-based). The reputation metric aims to harden the security and detection at the attack detection module.
The interactions between the security module and the attack detection module to determine the features (i.e., attributes) related to normal and attack behaviors lead to accurately detecting attacks, specifically new kinds of attacks (that exhibit unknown misbehaviors), while the false positive and false negative rates generated by the security and attack detection modules are reduced over time.
A reliable reputation metric is provided to monitor the behavior of the attack detection module with the goal of optimally activating the detection techniques (e.g., rules-based detection or hybrid detection) used by the attack detection module. This optimal activation means ensuring a tradeoff between a high level of security (i.e., a high detection rate and low false positive and false negative rates) and low computation and communication overheads (therefore, in particular, allowing for energy improvements).
The attack detection and security modules are forced to provide, respectively, the correct attack and normal patterns. This is achieved in particular due to the non-cooperative security game between the security module and the attack detection module.
In view of the above, in some examples, a robust and reliable (AI) Security Framework for a heterogeneous network can be provided.
It will be appreciated that the present disclosure has been described with reference to exemplary embodiments that may be varied in many aspects. As such, the present invention is only limited by the claims that follow.

Claims

1. A method (400) for use in detecting an attack on a network, wherein the method comprises: monitoring (S402), by one or more attack detection modules, a behavior of one or more target devices in the network; if it is detected by a first one of the one or more attack detection modules that the monitored behavior of one of the one or more target devices satisfies a predefined condition, providing (S404), by the first attack detection module to a central defense module, data relating to the monitored behavior which satisfies the predefined condition; and updating (S406), based on the monitored behavior which satisfies the predefined condition, a model implemented by one or more of the one or more attack detection modules to detect a said attack on the network.
2. A method as claimed in claim 1, wherein the model implemented by the first attack detection module to detect a said attack on the network comprises a first detection training model.
3. A method as claimed in claim 1 or 2, wherein the model implemented by a second one of the attack detection modules to detect a said attack on the network comprises a hybrid detection model comprising a second detection training model and a rules-based detection model.
4. A method as claimed in any preceding claim, wherein the predefined condition comprises that the behavior of one of the one or more target devices deviates from an average behavior of the other target devices by more than a predefined deviation threshold.
5. A method as claimed in any preceding claim, further comprising monitoring, by a said attack detection module based on an attack model generated at least partially by the central defense module based on one or more features of the monitored behavior which satisfies the predefined condition, data traffic in the network gathered by a security module which is configured to determine one or more features relating to a normal behavior of the monitored one or more target devices.
6. A method as claimed in claim 5, wherein, if one of the attack detection modules detects that a security module behavior of the security module satisfies a predefined security module condition, providing, by said attack detection module to the central defense module, data relating to the security module behavior for updating the attack model.
7. A method as claimed in claim 6, wherein the predefined security module condition relates to one or more of:
- an attacks-suspected rate of the security module,
- a false positive generated by the one of the attack detection modules against the security module, and
- a false negative generated by the one of the attack detection modules against the security module.
8. A method as claimed in claim 6 or 7, wherein the predefined security module condition is dependent on a number of interactions between the security module and any one or more of the one or more attack detection modules.
9. A method as claimed in claim 7, or as claimed in claim 8 when dependent on claim 7, wherein the security module is considered to be a malicious security module if a function, UAttack module, which is dependent on the attacks-suspected rate, the false positive and the false negative, has reached an equilibrium, in particular a Nash equilibrium.
10. A method as claimed in claim 9, wherein the equilibrium is reached when UAttack module = Y2, wherein Y2 = Argmax_DAM Argmax_FPAM,FNAM UAttack module(DAM, FPAM, FNAM), wherein DAM is the attacks-suspected rate of the security module, FPAM is the false positive, and FNAM is the false negative.
11. A method as claimed in any preceding claim, wherein a said attack detection module is requested to switch from a machine learning-based detection to a rules-based attacks detection if a reputation value of the attack detection module is below a first reputation value threshold, and/or wherein the attack detection module is requested to switch from the machine learning-based detection to a hybrid detection which comprises a said machine learning-based detection and a rules-based detection if the reputation value of the attack detection module is above a second reputation value threshold, wherein the reputation value is based on (i) attacks detection rates detected by the machine learning-based detection, by the rules-based detection and by the hybrid detection, and (ii) a false detection rate provided by the respective attack detection module and detected by the central defense module.
12. A method as claimed in claim 11, wherein the reputation value increases with increasing attacks detection rates and decreases with an increasing false detection rate.
13. A method (500) for use in detecting an attack on a network, wherein the method comprises: monitoring (S502), by one or more security modules, a behavior of target devices in the network; and determining (S504), by the one or more security modules and based on the behavior, a normal behavior of a said target device.
14. A method as claimed in claim 13, wherein determining the normal behavior comprises determining one or more features related to the normal behavior.
15. A method as claimed in claim 14, wherein a single one of the one or more security modules cooperates with a central defense module to determine, from a said feature, a model related to the normal behavior.
16. A method as claimed in any one of claims 13 to 15, further comprising: monitoring, by a said security module, a behavior of an attack detection module configured to detect an attack on the network, and if the monitored behavior of the attack detection module satisfies a predefined attack detection module condition, providing, by the security module to the central defense module, data relating to the monitored behavior of the attack detection module.
17. A method as claimed in claim 16, wherein the predefined attack detection module condition relates to one or more of an attacks-suspected rate of the attack detection module, a false positive generated by the security module against the attack detection module, and a false negative generated by the security module against the attack detection module.
18. A method as claimed in claim 16 or 17, wherein the predefined attack detec- tion module condition is dependent on a number of interactions between the attack detection module and any one or more of the one or more security modules.
19. A method as claimed in claim 17, or as claimed in claim 18 when dependent on claim 17, wherein the attack detection module is considered to be a malicious attack detection module if a function, Usecurity module, which is dependent on the attacks-suspected rate, the false positive and the false negative, has reached an equilibrium, in particular a Nash equilibrium.
20. A method as claimed in claim 19, wherein the equilibrium is reached when Usecurity module = Y1, wherein Y1 = Argmax_DSM Argmax_FPSM,FNSM Usecurity module(DSM, FPSM, FNSM), wherein DSM is the attacks-suspected rate of the attack detection module, FPSM is the false positive, and FNSM is the false negative.
21. A method (600) for use in detecting an attack on a network, wherein the method comprises: selecting (S602), by a central defense module, from a plurality of attack detection modules one or more attack detection modules; receiving (S604), by the central defense module from the selected one or more attack detection modules, data related to an attack on the network; and generating (S606), by the central defense module and based on the data, an attack model for downloading by the selected one or more attack detection modules for attack detection.
22. A method as claimed in claim 21, wherein the selecting of the one or more attack detection modules from the plurality of attack detection modules is based on a respective reliability of each of the plurality of attack detection modules, and wherein the reliability is based on a behavior of a respective attack detection module.
23. A method as claimed in claim 22, wherein a first one of the attack detection modules is not selected by the central defense module if the behavior of the first attack detection module satisfies a predefined attack detection module condition.
24. A method as claimed in claim 23, wherein the predefined attack detection module condition comprises an attack detected by the attack detection module relating to a normal behavior of data traffic in the network.
25. A method as claimed in any one of claims 22 to 24, wherein the behavior of a respective attack detection module is monitored by a security module.
26. A method as claimed in any one of claims 21 to 25, wherein the selecting comprises selecting only those one or more attack detection modules which execute, when monitoring a behavior of one or more target devices in the network, a hybrid detection model comprising a detection training model and a rules-based detection model.
27. A method as claimed in any one of claims 21 to 26, further comprising generating, by the central defense module and a single security module, a normal model related to one or more features identifying a normal behavior of a target device in the network.
28. A method comprising any combination of two or more of:
(i) the method (400) for use in detecting an attack on a network as claimed in any one of claims 1 to 12;
(ii) the method (500) for use in detecting an attack on a network as claimed in any one of claims 13 to 20; and
(iii) the method (600) for use in detecting an attack on a network as claimed in any one of claims 21 to 27.
29. A computer program product comprising program code portions that, when executed on at least one processor, configure the processor to perform the method of any one of the preceding claims.
30. The computer program product of claim 29, stored on a computer-readable recording medium or encoded in a data signal.
31. An attack detection module (104) for use in detecting an attack on a network (100), wherein the attack detection module (104) is configured to: monitor a behavior of one or more target devices in the network (100); detect that the monitored behavior of one of the one or more target devices satisfies a predefined condition; provide, to a central defense module (102), data relating to the monitored behavior which satisfies the predefined condition; and update, based on the monitored behavior which satisfies the predefined condition, a model implemented by the attack detection module (104) to detect a said attack on the network (100).
32. An attack detection module (104) as claimed in claim 31, wherein the attack detection module (104) is configured to perform the method of any one of claims 1 to 12.
33. A security module (106) for use in detecting an attack on a network (100), wherein the security module (106) is configured to: monitor a behavior of target devices in the network (100); and determine, based on the behavior, a normal behavior of a said target device.
34. A security module (106) as claimed in claim 33, wherein the security module (106) is configured to perform the method of any one of claims 13 to 20.
35. A central defense module (102) for use in detecting an attack on a network (100), wherein the central defense module (102) is configured to: select from a plurality of attack detection modules one or more attack detec- tion modules; receive, from the selected one or more attack detection modules, data related to an attack on the network (100); and generate, based on the data, an attack model for downloading by the selected one or more attack detection modules for attack detection.
36. A central defense module (102) as claimed in claim 35, wherein the central defense module (102) is configured to perform the method of any one of claims 21 to 27.
37. A system (101) comprising any combination of two or more of:
(i) the attack detection module (104) of claim 31 or 32;
(ii) the security module (106) of claim 33 or 34; and
(iii) the central defense module (102) of claim 35 or 36.
PCT/EP2022/069730 2022-07-14 2022-07-14 Security framework for a network WO2024012681A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2022/069730 WO2024012681A1 (en) 2022-07-14 2022-07-14 Security framework for a network


Publications (1)

Publication Number Publication Date
WO2024012681A1 true WO2024012681A1 (en) 2024-01-18

Family

ID=82846232


Country Status (1)

Country Link
WO (1) WO2024012681A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160261621A1 (en) * 2015-03-02 2016-09-08 Verizon Patent And Licensing Inc. Network threat detection and management system based on user behavior information
US20210112083A1 (en) * 2019-10-10 2021-04-15 Honeywell International Inc. Hybrid intrusion detection model for cyber-attacks in avionics internet gateways using edge analytics
US20210273953A1 (en) * 2018-02-20 2021-09-02 Darktrace Holdings Limited ENDPOINT AGENT CLIENT SENSORS (cSENSORS) AND ASSOCIATED INFRASTRUCTURES FOR EXTENDING NETWORK VISIBILITY IN AN ARTIFICIAL INTELLIGENCE (AI) THREAT DEFENSE ENVIRONMENT


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
E. SHIEH ET AL.: "PROTECT: An Application of Computational Game Theory for the Security of the Ports of the United States", PROCEEDINGS OF THE TWENTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, pages 2173 - 2179
I. DIANA JEBA JINGLE ET AL.: "A collaborative defense protocol against collaborative attacks in wireless mesh networks", INTERNATIONAL JOURNAL OF ENTERPRISE NETWORK MANAGEMENT, vol. 12, no. 3, 2021, pages 199 - 220
M. PIRANI: "A Game-Theoretic Framework for the Security-Aware Sensor Placement Problem in Networked Control Systems", IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2021
N. WANG: "Machine Learning-based Spoofing Attack Detection in MmWave 60GHz IEEE 802.11ad Networks", IEEE INFOCOM, 2020
R. KHALID: "A Secure Trust Method for Multi-Agent System in Smart Grids Using Blockchain", IEEE ACCESS, vol. 9, 2021, pages 59848 - 59859, XP011851545, DOI: 10.1109/ACCESS.2021.3071431
R. SHOKRI: "Membership Inference Attacks Against Machine Learning Models", 2017 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP)
SPARSH SHARMA ET AL: "A survey on Intrusion Detection Systems and Honeypot based proactive security mechanisms in VANETs and VANET Cloud", VEHICULAR COMMUNICATIONS, vol. 12, 1 April 2018 (2018-04-01), NL, pages 138 - 164, XP055536087, ISSN: 2214-2096, DOI: 10.1016/j.vehcom.2018.04.005 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22751683

Country of ref document: EP

Kind code of ref document: A1