EP3065376B1 - Cross-layer correlation in a secure cognitive network - Google Patents

Cross-layer correlation in a secure cognitive network

Info

Publication number
EP3065376B1
Authority
EP
European Patent Office
Prior art keywords
network
node
level
event
distributed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP16000447.9A
Other languages
German (de)
English (en)
Other versions
EP3065376A1 (fr)
Inventor
Jerome Sonnenberg
Marco Carvalho
Richard Ford
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harris Corp
Original Assignee
Harris Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harris Corp filed Critical Harris Corp
Publication of EP3065376A1
Application granted
Publication of EP3065376B1
Legal status: Active (current)
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1416 Event detection, e.g. attack signature detection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/02 Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
    • H04L63/0209 Architectural arrangements, e.g. perimeter networks or demilitarized zones
    • H04L63/0218 Distributed architectures, e.g. distributed firewalls
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/12 Discovery or management of network topologies
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/22 Alternate routing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L61/00 Network arrangements, protocols or services for addressing or naming
    • H04L61/50 Address allocation
    • H04L61/5053 Lease time; Renewal aspects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/10 Network architectures or network communication protocols for network security for controlling access to devices or network resources
    • H04L63/101 Access control lists [ACL]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1441 Countermeasures against malicious traffic
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1441 Countermeasures against malicious traffic
    • H04L63/1458 Denial of Service
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00 Details relating to CAD techniques
    • G06F2111/06 Multi-objective optimisation, e.g. Pareto optimisation using simulated annealing [SA], ant colony algorithms or genetic algorithms [GA]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/30 Definitions, standards or architectural aspects of layered protocol stacks
    • H04L69/32 Architecture of open systems interconnection [OSI] 7-layer type protocol stacks, e.g. the interfaces between the data link level and the physical level
    • H04L69/321 Interlayer communication protocols or service data unit [SDU] definitions; Interfaces between layers

Definitions

  • inventive arrangements relate to secure cognitive networks. More particularly, the inventive arrangements concern implementing systems and methods for providing a secure distributed infrastructure in a cognitive network that leverages coordination across disparate abstraction levels.
  • Tactical communication systems are very important for mission critical operations both for military and civilian applications.
  • the introduction of software defined radios and the increasing complexity, dynamism, and criticality of tactical communication systems have demanded the development of new and more effective approaches to reliable and timely network management, monitoring, and optimization.
  • Cognitive network management is an approach to distributed network management in which adaptive algorithms are used to abstract network and environmental indicators to define preferred configurations for specific operational contexts.
  • the cognitive aspect of CNM refers to the ability of the system to learn and evolve, incorporating prior events into its own reasoning to improve its performance from experience.
  • Security is an important issue in CNM because the requirements of CNM methods and systems often provide openings for attack vectors that can permit the system to be exploited.
  • Adaptive cross-layer cross-node optimization allows for conventional cross-layer optimization coupled with the ability to adaptively optimize cross-layer interactions across node boundaries.
  • Adaptive cross-layer cross-node optimization includes adaptively and dynamically shifting functions/layers among nodes in a network, so that a global network objective is achieved.
  • adaptive cross-layer cross-node optimization includes adaptively and dynamically distributing functions/layers across a network, according to changes and/or events in the network.
  • adaptive cross-layer cross-node optimization includes dynamically defining or changing individual node functions within a network, so that a global network functionality may emerge.
  • nodes comprising the network will often utilize an Open System Interconnection ("OSI") protocol stack for communications.
  • the OSI stack includes a plurality of protocol stack layers for performing respective communication functions.
  • the protocol stack layers include the following seven layers: (1) a physical layer; (2) a data link layer; (3) a network layer; (4) a transport layer; (5) a session layer; (6) a presentation layer; and (7) an application layer.
  • a security environment associated with such a computer network can react to effects noticed at the higher layers of the communications protocol stack. For example, if a communications node begins to misroute packets, or drop routes/change routes to favor a previously seldom used routing node, this might be a concern to the security software that detects these anomalies.
  • the security software can recognize that a communication anomaly is seriously wasting transmission capacity, and can flag the effect as a distributed denial of service attack.
  • the result is that one or more nodes have been compromised, the damage has been done, and isolation of the offending node(s) takes a relatively long time.
  • the invention concerns implementing systems and methods for defending a communication network from adversarial attack using a distributed infrastructure that leverages coordination across disparate abstraction levels.
  • a stored event list is used to detect at least one node event.
  • the node event is one which occurs at a machine code level and is known to have the potential to interfere directly with the internal operation of the node computing device.
  • the at least one node event is one which is exclusive of an event within a network communication domain.
  • the node event is one which is outside the domain of the network communication stack, hardware elements that are exclusively associated with the network communication stack, and a plurality of machine code elements that handle events exclusively pertaining to the communication stack.
  • an optimal network-level defensive action is automatically selectively determined.
  • the network level defensive action will involve multiple network nodes comprising the communication network.
  • the defensive action is based on or determined by the at least one detected node event and upon a set of known communication requirements established for the network.
  • the method can further involve automatically selectively implementing a node-level defensive action which affects only the node where the at least one node event has been detected if the at least one node event does not require a network-level defensive action to ensure continued satisfaction of the known communication requirements.
  • a dynamic model is advantageously maintained at the node computing devices, which model is representative of a pattern of network operation for the communication network.
  • the method can further involve using the dynamic model to compare actual network-level events to a range of expected network-level events. Accordingly, a node-level defensive action which is performed in response to the at least one node event can be selectively modified when the actual network-level events do not correspond to a range of expected network-level events. For example, a range of expected network-level events can be reduced in response to the node event which has been detected, such that the network is made more sensitive to unexpected variations in network performance when the at least one node event is detected.
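A minimal sketch of this decision flow is given below; the event names, threshold values, and class structure are illustrative assumptions, not the patented implementation. It shows a detected node-level event being checked against a stored event list, triggering either a purely local defensive action or a network-level one, while the node's dynamic model tightens its range of expected network-level behavior.

```python
# Illustrative sketch (hypothetical names/values) of the node-level decision flow:
# a detected machine-level event triggers either a local defensive action or a
# network-level action, and tightens the expected range of network-level behavior.

STORED_EVENT_LIST = {"exec_from_data_memory", "bad_return_address", "invalid_opcode"}

class DynamicModel:
    """Per-node model of the expected pattern of network operation."""
    def __init__(self, expected_loss_rate=(0.0, 0.05)):
        self.expected_loss_rate = expected_loss_rate   # (low, high) range

    def tighten(self, factor=0.5):
        # Make the node more sensitive to unexpected variations after a node event.
        low, high = self.expected_loss_rate
        self.expected_loss_rate = (low, low + (high - low) * factor)

    def is_anomalous(self, observed_loss_rate):
        low, high = self.expected_loss_rate
        return not (low <= observed_loss_rate <= high)

def handle_node_event(event, model, comm_requirements_met):
    if event not in STORED_EVENT_LIST:
        return "ignore"
    model.tighten()                      # reduce the range of expected network-level events
    if comm_requirements_met:
        return "node_level_defense"      # e.g., isolate the offending process locally
    return "network_level_defense"       # coordinate a response across multiple nodes

model = DynamicModel()
action = handle_node_event("bad_return_address", model, comm_requirements_met=True)
print(action, model.is_anomalous(observed_loss_rate=0.04))
```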
  • inventive arrangements disclosed herein generally relate to implementing systems and methods for providing a secure distributed infrastructure for a cognitive network.
  • cross-layer correlation is provided as between platform-specific events and protocol-related effects to provide a robust, secure infrastructure.
  • the approach described herein utilizes notice of certain events to trigger and tailor more precise responses across OSI layers and across nodes so that network performance is only affected to the extent necessary.
  • the inventive arrangements include a distributed, cross-layer coordination algorithm that utilizes multiple layer protocol knowledge to coordinate attack mitigation techniques. These attack mitigation techniques are responsive to attacks ranging from lower layer code injection to upper layer protocol exploits.
  • the coordination algorithm determines if an existing encoded mitigation technique will effectively thwart the attack or isolates the attacked node if it cannot.
  • a secure core is used to host the cognitive network management functions so as to provide nearly un-hackable node-by-node defense.
  • secure core informs upper layer defenses of attacks, and the upper layer algorithms inform the secure core of patterns of operation that indicate an attack. Individual nodes are essentially un-hackable, and patterns of attack found in upper layer traffic can be used to invoke existing defense mitigation techniques. Accordingly, the inventive arrangements described herein provide a novel method of automatically identifying attack vectors and determining which mitigation techniques afford mitigation to the new attack.
  • an important feature of the inventive arrangements is the advantageous cross-layer correlation between platform-specific events and protocol-related effects to provide a robust, secure infrastructure.
  • the invention goes beyond such cross-layer correlation to provide a higher degree of security.
  • the invention combines cross-layer attack correlation with hardware-level, instruction by instruction granularity for purposes of detecting platform specific events indicative of an attack at a local node.
  • the integration of instruction-level traps with the cross-layer algorithms described herein provides an exceptionally secure method for cognitive network management as described herein.
  • the higher level and lower level defense capabilities can be used in ways that allow the cognitive network to advantageously adapt, or even anticipate suspicious or malicious events. For example, security events detected by traps in the secure host can be reported to the higher-level coordination algorithms.
  • the node-level defense provided by the secure framework is local to the host, and very fast in response time. Conversely, the higher-level SCNM defenses cover a much wider area, and respond in a much slower time scale, in comparison with the host-based defense.
  • a host computing device 101 is connected to a network node 102 which facilitates physical layer network communications.
  • the network nodes 102 include software-defined radios which facilitate a wireless network 104.
  • the network nodes 102 execute the distributed monitoring and network management coordination tasks described herein. While illustrated as different components, the host computing devices 101 and the network nodes 102 can be implemented as separate devices or as part of the same device.
  • the software applications executing on the host computing device 101 can perform any function which may require network communications support.
  • the software applications running on the host computing device can provide support for voice, video and/or data communications.
  • Other exemplary software applications executing on the host can include software for monitoring troop movements, fire control software applications and so on.
  • cognitive networks designed for use in non-tactical environments may involve use of different software applications and the software applications listed herein are merely provided as examples.
  • a CNM infrastructure can be represented as a set of abstract CNM nodes 106 that communicate with one another to share information and coordinate actions.
  • This abstract view of the infrastructure also supports the case where the CNM is implemented as part of the network node itself.
  • the abstract CNM nodes can represent functionality which is implemented in the host computing device 101, the network node 102, or both.
  • the CNM is implemented as part of the network node 102.
  • the CNM infrastructure 108 could be attacked at the level of the coordination algorithms as implemented by CNM nodes 106.
  • This class of attacks may include the disruption and modification of the messaging protocols, or the injection of bad/false information into the management framework (such as invalid resource information, location, and so on).
  • data integrity attacks could be designed to target the coordination algorithm.
  • Protocol attacks could be crafted to listen to communications and anticipate changes in resource allocation.
  • attacks can also be directed at lower levels in the network. For example, such attacks can be directed against individual nodes (e.g. involve code injection attacks).
  • the flexibility of an adaptive network at the communications layer is actually a liability for overall security in the event of a low-level compromise. In essence, by allowing the system to adapt, the challenge of modeling and monitoring the system as a whole is increased.
  • CNM cognitive network management
  • an SCNM infrastructure comprises (1) a set of distributed communication and coordination algorithms to be used by the CNM infrastructure 108 for improved resistance to attacks, (2) secure hardware-based computational platforms which are generally resistant to software attack, (3) instruction-level traps integrated into those hardware-based computational platforms to detect attacks which are directed against the computing platform, and (4) coordinated interaction between higher-level defense mechanisms (at the protocol and coordination levels) and low-level (hardware-level) defense capabilities.
  • elements (1), (2) and (3) of the SCNM infrastructure facilitate element (4) by providing secure conditions in which the higher level and lower level defense mechanisms can be implemented.
  • the SCNM is advantageously implemented in a distributed manner as illustrated in FIG. 2 by the distributed cognitive network management infrastructure 108.
  • Referring now to FIG. 3, there is shown a more detailed drawing of a network node 102 that is useful for understanding a secure computational platform according to the inventive arrangements.
  • the network node 102 can include more or less components than those shown in FIG. 3 .
  • the architecture shown in FIG. 3 is sufficient for understanding operations of secure computational platform as described herein.
  • the network node 102 is comprised of a secure core 300 which includes a processor 306, a main memory 302, a wired network interface device 310, wireless network interface hardware 308, and a data communication bus 312 for communications among the various secure core components.
  • the main memory 302 is comprised of a computer-readable storage medium on which is stored one or more sets of instructions 304 (e.g., firmware) configured to implement one or more of the methodologies, procedures, or functions described herein.
  • the instructions 304 can also reside, completely or at least partially, within the processor 306 during execution thereof by the computer system.
  • the main memory 302 can also have stored therein hardware resource data, hardware environment data, policy data, and instructions.
  • the hardware resource data includes, but is not limited to, data specifying at least one capability of the network node 102.
  • the hardware environment data includes, but is not limited to, data characterizing a network node environment.
  • the policy data includes, but is not limited to, data specifying current regulatory policies, project policies, and/or mission policies.
  • All of the various entities comprising the secure core 300 are advantageously implemented in the form of hardware elements which are resistant to software-based attacks.
  • the computer hardware implementation is advantageously comprised of at least one non-real-time-alterable circuit logic device that is capable of being created with, or loaded with, logical sequences of operation.
  • An example of such a device that is created with sequential operation logic is an ASIC.
  • An example of such a device that is off-line loaded (non-real time alterable) with sequential operation logic is a FPGA.
  • the various entities comprising the secure core 300 can advantageously be implemented in a field programmable gate array (FPGA) or as an application specific integrated circuit (ASIC).
  • FPGA field programmable gate array
  • ASIC application specific integrated circuit
  • the wireless network interface hardware 308 comprises physical layer communication components for facilitating wireless communications with other nodes of the communication network.
  • the wireless network interface hardware 308 is designed to be adaptive so as to facilitate implementing a cognitive radio network in which communication protocols can and do change over time as needed in response to changes in a communication environment.
  • the wireless network interface hardware can include radio frequency (RF) hardware components to facilitate implementation of a software defined radio (SDR).
  • RF radio frequency
  • the hardware components can also include an analog-to-digital (A/D) converter, digital-to-analog (D/A) converter, and other signal processing components.
  • the network interface device 310 comprises physical layer communication components for facilitating wired physical layer data communication.
  • the network interface device 310 can facilitate wired communication between the network node 102 and other nodes of a cognitive computer network.
  • the network interface device can also facilitate wired communications with a local host computing device 101 and/or certain user display and control elements as hereinafter described in relation to FIG. 4 .
  • the functions and operations of secure core 300 are discussed in further detail below.
  • local control over the network node 102 can be facilitated by entities associated with a user display and control system 400.
  • entities can include a display unit 402 such as a video display (e.g., a liquid crystal display or LCD), a user input device 404 (e.g. a keyboard), and a cursor control device 406 (e.g., a mouse or trackpad) for making selections from displayed elements of a graphical user interface (GUI).
  • the user display and control system 400 also includes a network interface device 410 to facilitate wired local network communications with the secure core of network node 102.
  • a system data bus 408 can be provided to facilitate communications among the various entities comprising the user display and control system 400.
  • elements of the user display and control system 400 can be provided as part of a host computing device 101; however, the invention is not limited in this regard and these components can be independent of the host computing device 101.
  • a local network connection 412 can be provided to facilitate data communications between the secure core 300 and the user display and control system 400.
  • the secure core 300 hosts a communication interface manager (CIM) 504 (which is integrated with the operations of an OSI protocol stack 506), a node level event detection and monitoring module (NLEDM) 512, and certain distributed components 514 of a distributed processor.
  • CIM communication interface manager
  • NLEDM node level event detection and monitoring module
  • the CIM 504 facilitates implementation of the high-level defensive algorithms which are specified by the distributed components. As such the CIM can facilitate a network response to attack conditions noted by the distributed SCNM infrastructure. These high level defenses will be discussed in more detail below, as the discussion progresses.
  • the NLEDM 512 is comprised of event trapping algorithms which detect low-level attacks directed locally at the node itself.
  • the distributed components of a distributed processor 514 coordinate the operation of node 102 with other nodes 102 comprising the communication network. Such coordination can include evaluation of network threats and selection of high-level defensive algorithms for responding to such threats. Each of these components will now be discussed in greater detail.
  • the operation of the CIM 504 is integrated with an OSI protocol stack 506.
  • the OSI protocol stack facilitates network communications for one or more software applications 502 residing on host computing device 101. These communications can involve voice, video or data communications with applications hosted on other network nodes 102 in the communication network 104.
  • CIM 504 processes or manages data communications at a level between the data link layer 508 and the physical layer 510 of the OSI stack.
  • the physical layer 510 in FIG. 5 can include physical layer devices which facilitate network communications. An example of such a physical layer device would include wireless network interface hardware 308 and/or wired network interface device 310.
  • the CIM essentially abstracts all the communication interfaces for applications in each host computing device 101 to manage and control the exchange of information with the communication network 104. As such the CIM provides a set of communication abstracts for link management and resource control, as well as low level interfaces for frame control and topology management.
  • FIG. 6 is a more detailed drawing of the CIM which is useful for understanding the invention.
  • the various protocol layers of the OSI stack shown in FIG. 5 are omitted to facilitate an understanding of the operation of the CIM.
  • only a single application 502 is shown in communication with the CIM although it should be understood that the CIM can support communications for several applications 502 as shown in FIG. 5 .
  • the CIM communicates data traffic to and from one or more applications 502.
  • the CIM can be configured to pass certain types of data traffic (e.g., unmanaged traffic 610) directly to a particular logical address 612, 614, 616 associated with a physical layer device 616.
  • unmanaged traffic 610 can include data which constitutes communications between an application 502 and a user display and control system 400 as shown in FIG. 4 .
  • Other types of data e.g., managed traffic 608 are managed in the CIM to facilitate the defensive high level algorithms described herein.
  • the information base 618 contains correlated event data which comprises observed patterns of behavior of the network of nodes from the local node's viewpoint.
  • the route manager 620 determines which of several communications devices or modules to utilize. In this example, modules 622-1 and 622-2 are shown as being implemented in the node. However, more or fewer communication modules are possible.
  • the system routing table 624 contains the set of reachable distant nodes for use by the route manager.
  • the packet queue 626 contains packets of managed traffic placed for transmission on a wired or wireless medium.
  • the resource monitoring and control function facilitates regulating the packets of traffic flowing to the CIM from the application 502.
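The traffic-handling split described above can be pictured with the following sketch; the class name, packet fields, and table contents are assumptions made only for illustration. Unmanaged traffic is passed straight through to a logical address, while managed traffic is queued for transmission according to the system routing table.

```python
from collections import deque

class CommunicationInterfaceManagerSketch:
    """Illustrative CIM-like dispatcher: unmanaged traffic is passed straight to a
    logical address, managed traffic is queued and routed via a routing table."""
    def __init__(self, routing_table):
        self.routing_table = routing_table      # destination node -> physical-device address
        self.packet_queue = deque()             # managed packets awaiting transmission

    def handle(self, packet):
        if packet.get("unmanaged"):
            return ("direct", packet["logical_address"])   # e.g., local display/control traffic
        next_hop = self.routing_table.get(packet["dest"])
        if next_hop is None:
            return ("drop", None)               # destination currently unreachable
        self.packet_queue.append((next_hop, packet))
        return ("queued", next_hop)

cim = CommunicationInterfaceManagerSketch({"node-7": "radio-0"})
print(cim.handle({"dest": "node-7", "payload": b"..."}))
print(cim.handle({"unmanaged": True, "logical_address": "display-1"}))
```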
  • events detected by the NLEDM 512 are reported to distributed components of the distributed processor 514. Such reporting facilitates determinations by the distributed SCNM infrastructure regarding when certain high-level defense algorithm should be implemented in response to low-level threats detected by the NLEDM. This information about low-level node attacks can also be used locally at node 102 to determine a local defensive action that is appropriate for responding to an attack occurring at such network node 102. A determination concerning such a localized response can be made by the distributed components 514 and/or by other processing elements (not shown) hosted by the secure core.
  • the NLEDM 512 is comprised of machine or instruction level event trapping algorithms that detect low-level attacks intended to directly disrupt the operations of the network node 102.
  • the event trapping algorithms described herein will advantageously be designed to detect low-level attacks (including code injection attacks) which interrupt or interfere with the machine code or machine language instructions which execute on processor 306.
  • the types of events which are detected can include without limitation attempts to execute instructions stored in memory space which has been designated only for data, attempts to return from a subroutine on an incorrect or unauthorized memory address, and use of invalid opcodes.
  • Other types of machine or instruction level events indicative of an attack can also be detected.
  • the secure core can be implemented using a modified Harvard Architecture, in which memory is tagged as either code or data. Such an implementation prevents mixing of the two types of information.
  • Dual stacks can be used to separate control flow and data. One stack can only be accessed by RET and CALL instructions, and cannot be modified directly. The second stack supports the instructions one would normally expect for a stack, including PUSH and POP. Instruction set randomization can be employed to randomize the binary representation of the machine's native instruction set each time the machine is initialized. Such actions ensure that the opcodes necessary for the machine to execute data will not be known to an attacker. When there is an event which attempts to violate one of the foregoing security protocols, such event will be detected by the algorithms associated with NLEDM 512. Other events can also be detected and the invention is not intended to be limited to the specific types of attack events described herein.
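The following sketch mimics, at a very high level, two of the trap conditions described above (execution from data-tagged memory and a return address that disagrees with a call-only shadow stack). It is a software illustration with assumed names, not the hardware-based secure-core implementation.

```python
# High-level simulation (assumed names) of two of the trap conditions described:
# executing from data-tagged memory, and returning to an address that does not
# match the protected call stack.

class SecureCoreMonitorSketch:
    def __init__(self):
        self.memory_tags = {}        # address -> "code" or "data"
        self.shadow_call_stack = []  # written only by CALL, read only by RET
        self.events = []

    def fetch(self, address):
        if self.memory_tags.get(address) != "code":
            self.events.append(("exec_from_data_memory", address))
            return False
        return True

    def call(self, return_address):
        self.shadow_call_stack.append(return_address)

    def ret(self, return_address):
        expected = self.shadow_call_stack.pop() if self.shadow_call_stack else None
        if return_address != expected:
            self.events.append(("bad_return_address", return_address))
            return False
        return True

mon = SecureCoreMonitorSketch()
mon.memory_tags[0x1000] = "code"
mon.memory_tags[0x2000] = "data"
mon.fetch(0x2000)                  # trapped: attempt to execute data
mon.call(0x1004); mon.ret(0xDEAD)  # trapped: corrupted return address
print(mon.events)
```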
  • Events detected by NLEDM 512 are reported to the distributed components of the distributed processor 514.
  • the distributed components of the distributed processor 514 may determine that the current operating conditions of the network, or the node in which the event has been detected, do not warrant any response to the detected event or events.
  • the distributed components 514 at a node 102 can respond to the occurrence of one event or a combination of events by performing certain defensive actions exclusively at a local level, and without notifying other nodes of the network.
  • the occurrence of a detected event or combination of events is evaluated by the distributed components of the distributed processor to determine possible high-level network defensive responses to the low-level attack.
  • the occurrence of the detected events can optionally be communicated to other network nodes 102.
  • the distributed processor in that case will determine if and when a high-level defensive response is needed by the network. The analysis of such events by the distributed processor is discussed below in further detail in relation to FIGs. 7-11 .
  • a security level applied at each of the plurality of network nodes can be adjusted to selectively control a sensitivity at each network node to subsequent low-level host attacks. Such adjustments can vary the kinds of events that are reported by the NLEDM, the conditions under which network nodes perform localized defensive actions in response to reported events, and/or conditions under which detected events are reported to other nodes in the network.
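One way to picture such an adjustable security level is a simple policy table, as in the sketch below; the level names and the event-to-action mapping are purely illustrative assumptions.

```python
# Illustrative (assumed) mapping from a node's security level to which detected
# events are acted on locally and which are reported to other nodes.

SECURITY_POLICY = {
    # level: (events acted on locally, events reported network-wide)
    "low":    ({"invalid_opcode"},                       set()),
    "medium": ({"invalid_opcode", "bad_return_address"}, {"bad_return_address"}),
    "high":   ({"invalid_opcode", "bad_return_address", "exec_from_data_memory"},
               {"bad_return_address", "exec_from_data_memory"}),
}

def dispositions(event, level):
    local, network = SECURITY_POLICY[level]
    return {"local_defense": event in local, "report_to_network": event in network}

print(dispositions("exec_from_data_memory", "medium"))
print(dispositions("exec_from_data_memory", "high"))
```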
  • the distributed components of a distributed processor 514 facilitate the coordination of network actions as between the various nodes comprising the communication network.
  • In order to understand the function and operation of the distributed components 514, it is useful to first discuss certain features of a distributed SCNM infrastructure according to the inventive arrangements.
  • the present invention provides a cognitive network with distributed intelligence, i.e., the intelligence is implemented by a plurality of network nodes, rather than by a single network node.
  • a cognitive network 104 in accordance with the inventive arrangements will advantageously comprise the following features:
  • Particle Swarm Optimization ("PSO") is generally a multi-objective optimization ("MOO") Artificial Intelligence ("AI") based technique for finding a solution to a problem.
  • MOPSO Multi-Objective PSO
  • An MOPSO technique generally involves: obtaining a population of candidate solutions ("particles"); and moving each particle in a search space with a velocity according to its own previous best solution and its group's best solution. A particle's position may be updated in accordance with the following mathematical equations (1) and (2):
    Δx_id = Δx_id + c_1 · rand_1 · (p_id − x_id) + c_2 · rand_2 · (p_gd − x_id)     (1)
    x_id = x_id + Δx_id     (2)
  • x_id represents a position of a particle.
  • Δx_id represents a position change of the particle.
  • c_1 and c_2 are positive constants.
  • rand_1 and rand_2 are random numbers between 0 and 1.
  • p_id represents a previous best solution for the particle.
  • p_gd represents the previous best solution for the group.
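Read literally, equations (1) and (2) translate into the per-dimension update sketched below; the constants, bounds, and example values are placeholders rather than values taken from the patent.

```python
import random

def pso_step(x, dx, p_best, g_best, c1=2.0, c2=2.0):
    """One particle-swarm update per dimension d, following equations (1) and (2)."""
    new_dx, new_x = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        v = dx[d] + c1 * r1 * (p_best[d] - x[d]) + c2 * r2 * (g_best[d] - x[d])  # eq. (1)
        new_dx.append(v)
        new_x.append(x[d] + v)                                                   # eq. (2)
    return new_x, new_dx

x, dx = [0.0, 0.0], [0.0, 0.0]
print(pso_step(x, dx, p_best=[1.0, 2.0], g_best=[0.5, 1.5]))
```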
  • SI is generally the collective behavior of decentralized, self-organized systems made up of a population of simple simulation agents interacting locally with one another and with their environment. The simulation agents follow very simple rules. Although there is no centralized control structure dictating how individual simulation agents should behave, local, simple and to a certain degree random interactions between such simulation agents lead to the emergence of "intelligent" global behavior. Natural examples of SI include, but are not limited to, ant colonies, honey bee colonies, honey bee swarms, brains, fish schools, and locust swarms.
  • SI algorithms include, but are not limited to, an Artificial Ant Colony Algorithm ("AACA"), an Artificial Bee Colony Algorithm ("ABCA"), an Artificial Honey Bee Swarm ("AHBS"), an Artificial Brain Algorithm ("ABA"), an Artificial Fish Swarm Algorithm ("AFSA"), and an Artificial Locust Swarm Algorithm ("ALSA").
  • AACAs, ABCAs, AHBSs, ABAs, AFSAs and ALSAs are well known in the art, and therefore will not be described in detail herein.
  • MOO algorithms are employed in addition to PSO algorithms and/or biologically inspired PSO algorithms for providing the cognitive capabilities of the cognitive network.
  • the other types of MOO algorithms include, but are not limited to: a Normal Boundary Intersection ("NBI") algorithm; a modified NBI algorithm; a Normal Constraint (“NC”) algorithm; a successive Pareto optimization algorithm; a Pareto Surface Generation (“PGEN”) algorithm for convex multi-objective instances; an Indirect Optimization on the basis of Self-Organization (“IOSO”) algorithm; an S-Metric Selection Evolutionary Multi-Objective Algorithm (“SMS-EMOA”); a Reactive Search Optimization (“RSO”) algorithm; and/or a Benson's algorithm for linear vector optimization problems.
  • NBI Normal Boundary Intersection
  • NC Normal Constraint
  • PGEN Pareto Surface Generation
  • IOSO Indirect Optimization on the basis of Self-Organization
  • SMS-EMOA S-Metric Selection Evolutionary Multi-Objective Algorithm
  • RSO Reactive Search Optimization
  • every MOO algorithm (including PSOs, MOPSOs and biologically inspired PSOs) yields an N-dimensional Pareto Front of non-inferior solutions, where N is the number of objectives.
  • the non-inferior solutions are solutions where any deviation along any objective axis results in that solution being dominated by a better solution.
  • An example of a Pareto Front 700 for two objective functions F1 and F2 is shown in FIG. 7 .
  • another algorithm can be used to select a best overall solution based on some a priori selected criteria.
  • Let G_i(x) be a constraint or bound. The general MOO problem can then be stated as minimizing the performance vector F(x) subject to G_i(x) = 0 for i = 1, ..., k_e, G_i(x) ≤ 0 for i = k_e + 1, ..., k, and x ≤ u.
  • k_e is the number of equality constraints.
  • k − k_e is the number of inequality constraints.
  • u is the upper bound of x.
  • the performance vector F(x) maps parameter space into fitness function space.
  • An exemplary two-dimensional representation of a set of non-inferior solutions is provided in FIG. 8.
  • the set of non-inferior solutions lies on the curve between point C and point D .
  • Points A and B represent specific non-inferior points.
  • Points A and B are clearly non-inferior solution points because an improvement in one objective, F_1, requires a degradation in the other objective, F_2 (i.e., F_1B < F_1A while F_2B > F_2A). Since any inferior point in the feasible region represents a point at which improvement can be attained in all of the objectives, it is clear that such a point is of no value.
  • MOO is therefore concerned with the generation and selection of non-inferior solution points.
  • Non-inferior solutions are also called Pareto Optima.
  • a general goal in MOO is constructing the Pareto Optima.
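A small sketch of how a set of non-inferior (Pareto-optimal) points can be extracted from candidate solutions is shown below, assuming all objectives are to be minimized; the candidate values are invented for illustration.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in one
    (minimization of all objectives is assumed here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-inferior (non-dominated) solutions."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

# Each tuple is (F1, F2) for one candidate network configuration.
candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(pareto_front(candidates))   # (3.0, 4.0) is dominated by (2.0, 3.0)
```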
  • Exemplary systems of the present invention will now be described in relation to FIGS. 9-11.
  • the following discussion describes an approach implemented by a communication network 104 to select an optimal defensive algorithm for responding to network communication threats.
  • various PSO algorithms are used as the basis of command and control communication.
  • the PSO algorithms can be thought of as not only supplying some of the required machine intelligence, but also acting in an information compression role for inter-node messages.
  • a cognitive network 104 can be multiple-parameter optimized so that its overall project or mission metrics are met, and not just one parameter that is either specific to a protocol stack layer or shared by only two protocol stack layers.
  • PSO is employed by the cognitive network 104 for achieving the multiple-parameter optimization.
  • Such multiple-parameter optimization can include actions involved with selecting an optimal defensive algorithm in response to dynamic network conditions.
  • different PSO models can be used, each with properties aligned with the characteristics of a particular protocol stack layer, to form the basis of a distributed cross-layer cognitive engine.
  • a distributed biologically inspired PSO technique employing an AHBS is used for optimizing operations of a physical layer of an OSI protocol stack because of its messaging characteristics.
  • a distributed biologically inspired PSO technique employing an AACA is used for optimizing operations of a data link layer of the OSI protocol stack because of its pheromone inspired finite fading memory and reinforcement property.
  • the present invention is not limited to the particularities of this example.
  • Other examples can be provided in which distributed biologically inspired and/or non-biologically inspired PSOs are used in protocol stack layers to minimize non-payload inter-node communication and which match the requirements thereof.
  • the PSO models and distributed intelligence algorithm parameters employed by cognitive network 104 can be dynamically adjusted during operations thereof. This dynamic adjustment can be made in accordance with changes in network requirements and network conditions.
  • the PSO models and distributed intelligence algorithm parameters may be dynamically changed based on changes in latency requirements, bandwidth requirements, and/or other communication requirements.
  • the PSO models and distributed intelligence algorithm can also be dynamically changed in response to changes in attacks directed against the network.
  • Biologically inspired PSOs generally display many properties that are consistent with the cognitive requirements of networks that are required to coordinate themselves via RF communication to meet changing project, mission, radio environment, and policy conditions.
  • the "particles" in biologically inspired PSOs are computation agents which communicate locally via simple messaging which collectively form an intelligent entity ("the swarm").
  • the computation agents comprise processing devices 306 contained in the secure core 300 of each network node 102.
  • the processing devices 306 form a distributed processor which is instantiated in each of the network nodes.
  • the distributed processor includes hardware (i.e., electronic circuits) and/or firmware configured to perform the basic concepts described below in relation to FIG. 9 and methods described below in relation to FIG. 10 .
  • a function of the distributed processor described herein is to select optimal defensive algorithms that keep the network operation near optimal with a minimum of overhead under ever-changing requirements and conditions.
  • the computational loading can be dynamically partitioned across all active network nodes 102 based on the number of network nodes, node information density, and system level computational requirements. This is beneficial when the computational capability of the cognitive network 104 grows asymptotically and when the computational capacity of the cognitive network 104 exceeds the asymptotic limit of the computational requirement.
  • the computational load of each network node 102 can be scaled back as more nodes join the network 104, thus reducing the power draw of each network node 102 to extend the life of the power source and likely decrease its heat and electromagnetic radiation signature.
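The proportional load partitioning suggested above might look like the following sketch; the weighting by node information density and the example numbers are assumptions.

```python
def partition_load(total_load, nodes):
    """Split a system-level computational requirement across active nodes in
    proportion to (assumed) per-node weights of information density."""
    total_weight = sum(nodes.values())
    return {name: total_load * w / total_weight for name, w in nodes.items()}

# Per-node load drops as more nodes join, reducing each node's power draw.
print(partition_load(100.0, {"n1": 1.0, "n2": 1.0}))
print(partition_load(100.0, {"n1": 1.0, "n2": 1.0, "n3": 2.0}))
```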
  • cognitive network 104 generally employs a distributed intelligence algorithm for optimizing its overall performance.
  • the cognitive network can employ a distributed SCNM intelligence in the form of cognitive engines respectively provided in each node as part of the distributed components 514.
  • the distributed SCNM intelligence determines optimal network configurations including defensive algorithms which are to be employed in response to dynamic network conditions, such as attacks directed against the network.
  • the distributed intelligence is implemented in the form of the distributed processors (e.g. processor 306) which are instantiated in the secure cores 300 of network nodes 102 forming the cognitive network 104. Accordingly, the actions of functional blocks 904-940 of conceptual diagram 900 are achieved by performing corresponding operations at the distributed processor defined by the network nodes.
  • new or updated project or mission requirements 902 are received by the distributed processors as implemented in the secure cores 300 of nodes 102 in the cognitive network 104.
  • the project or mission requirements 902 may be in a standard ontology.
  • the standard ontology represents project or mission requirements as a set of concepts within a domain, and the relationships among these concepts.
  • the ontology includes a plurality of terms and an index.
  • the index defines a plurality of relationships between the terms and project/mission requirements 902.
  • a project or mission requirement is identified based on at least one term and the index.
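A miniature illustration of such a term index is given below; the terms and requirement identifiers are hypothetical and only show how a requirement could be identified from terms and the index.

```python
# Hypothetical miniature of the ontology lookup: an index relating terms to
# requirement identifiers, used to resolve a stated requirement.

ONTOLOGY_INDEX = {
    "latency":   "REQ-voice-latency-150ms",
    "bandwidth": "REQ-video-bandwidth-2Mbps",
    "coverage":  "REQ-area-coverage-10km",
}

def identify_requirement(statement):
    """Identify project/mission requirements from terms appearing in a statement."""
    return [req for term, req in ONTOLOGY_INDEX.items() if term in statement.lower()]

print(identify_requirement("Maintain voice latency and coverage during the mission"))
```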
  • operations for optimization algorithm initialization are performed in functional block 904.
  • Such operations include using at least one AI algorithm and/or at least one Table Look Up (“TLU") method to compute initialization parameters 950 for a plurality of distributed optimization algorithms which collectively are to be used to optimize performance of the cognitive network 104.
  • TLU Table Look Up
  • the AI algorithm may determine that optimized performance requires implementation of a particular defensive algorithm.
  • the AI algorithm includes, but is not limited to, a symbolic AI algorithm, a sub-symbolic AI algorithm, or a statistical AI algorithm.
  • Each of the listed types of AI algorithms is well known in the art, and therefore will not be described herein.
  • the type of AI algorithm(s) and/or initialization parameter(s) can be selected in accordance with a particular "use case".
  • use case refers to a methodology used in system analysis to identify, clarify, and organize system requirements.
  • a "use case” is made up of a set of possible sequences of interactions between system components (e.g., network nodes) and users in a particular environment and related to a particular goal.
  • a "use case” can have the following characteristics: organizes functional requirements; models the goals of system/user interactions; records paths from trigger events to goals; describes one main flow of events and/or exceptional flow of events; and/or is multilevel such that another "use case” can use the functionalities thereof.
  • the functions of block 904 are achieved using feedback layer constraints 938 derived from successful project or mission executions.
  • the feedback layer constraints 938 may specify instances where a particular defensive algorithm has been effective for responding to a particular type of network attack.
  • Block 904 uses the successful project or mission feedback layer constraints to "learn” and to later use said successful project or mission feedback layer constraints to generate initialization parameters in future similar use cases.
  • the "learning" mechanisms for the aforementioned algorithms are well known in the art, and therefore will not be described in detail herein. These inputs are then used to determine a previously seen similar set of circumstances and the corresponding end results. The end results are then used for initialization.
  • the feedback layer constraints 938 include information concerning the status and constraints that apply to protocol stack layer resources of at least one network node.
  • the functions of block 904 are also achieved using network-related information concerning the resources that are available on each network node 102.
  • the network-related information includes, but is not limited to, a free computational capacity of each network node, a reserve power of each network node, and/or a spectral environment of each network node.
  • the network-related information may be updated on pre-defined periodic bases.
  • the operations of functional block 904 are performed in a distributed fashion in which all network nodes assist in computing the initialization parameters 950.
  • the initialization parameters 950 are computed by a single network node, and then distributed to the remaining network nodes.
  • the initialization parameters 950 are computed using only a select few of the network nodes, and then distributed to the remaining network nodes.
  • geographically close network nodes are grouped so as to define a sub-cognitive network. One of the network nodes of the sub-cognitive network is selected to compute the initialization parameters for itself and the other network nodes of the sub-cognitive network. Such a sub-cognitive network configuration is power and security efficient.
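A possible grouping of geographically close nodes into sub-cognitive networks, with one node per group elected to compute initialization parameters, is sketched below; the greedy distance-based grouping and the coordinates are assumptions for illustration.

```python
import math

def group_sub_networks(node_positions, radius):
    """Greedy grouping of geographically close nodes into sub-cognitive networks;
    the first node of each group is elected to compute initialization parameters."""
    groups, assigned = [], set()
    for name, pos in node_positions.items():
        if name in assigned:
            continue
        group = [n for n, p in node_positions.items()
                 if n not in assigned and math.dist(pos, p) <= radius]
        assigned.update(group)
        groups.append({"leader": group[0], "members": group})
    return groups

positions = {"n1": (0, 0), "n2": (1, 1), "n3": (10, 10), "n4": (11, 10)}
print(group_sub_networks(positions, radius=3.0))
```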
  • the initialization parameters 950 are distributed to functional blocks 906-918, respectively.
  • the initialization parameters 950 and/or the network-related information are used for determining possible outcomes that are Pareto efficient when different values for protocol stack layer parameters are employed.
  • a Pareto Front for at least one distributed MOO algorithm can be determined in each functional block 906-918. Pareto Fronts are well known in the art, and briefly described above.
  • a Pareto Front for at least one distributed MOO algorithm is determined in each functional block 906-910 which may result in protocol optimization of a physical layer, a data link layer, or a network layer.
  • protocol optimization can involve selection and implementation of one or more specific defensive algorithms employed by the network to thwart attacks directed against the network.
  • the distributed MOO algorithms employed in functional blocks 906-910 can include distributed biologically inspired PSO algorithms. The present invention is not limited in this regard. Functional blocks 906-910 can additionally or alternatively employ other types of MOO algorithms.
  • a Pareto Front for at least one distributed MOO algorithm is determined in each functional block 912-918 which may result in protocol optimization of a transport layer, a session layer, a presentation layer, or an application layer.
  • the MOO algorithms employed in functional blocks 912-918 include MOO algorithms other than PSO algorithms. The present invention is not limited in this regard.
  • Functional blocks 912-918 can additionally or alternatively employ PSO algorithms, and more particularly distributed biologically inspired PSO algorithms.
  • the number and types of MOO algorithms employed for each protocol stack layer can be selected in accordance with a particular "use case".
  • the same or different type of distributed MOO algorithm can be used for optimizing protocols of each of the protocol stack layers.
  • For example, a first distributed biologically inspired PSO (e.g., a distributed AHBS) can be used for optimizing protocols of a physical layer of the OSI protocol stack, and a second, different distributed biologically inspired PSO (e.g., a distributed AACA) can be used for optimizing protocols of a data link layer and/or a network layer of the OSI protocol stack.
  • A first distributed MOO (e.g., a distributed SMS-EMOA algorithm) or a third distributed PSO can be used for optimizing protocols of a transport layer of the OSI protocol stack.
  • A second, different distributed MOO (e.g., a distributed successive Pareto optimization) or a fourth distributed PSO can be used for optimizing protocols of a session layer, presentation layer, and/or application layer of the OSI protocol stack.
  • the third and fourth distributed PSOs can be the same as or different than the first distributed biologically inspired PSO or second distributed biologically inspired PSO.
  • the present invention is not limited in this regard.
  • the distributed MOO algorithm(s) used in each functional block 906-918 may be unique thereto and/or customized to the requirements of a respective protocol stack layer.
  • the distributed MOO algorithm(s) for each protocol stack layer is (are) part of a larger distributed intelligence algorithm implemented by the plurality of network nodes 102.
  • inter-node communications may or may not be required for facilitating functions of blocks 906-918. If inter-node communications are required for facilitating functions of a block 906-918, then the inter-node communications may or may not be part of the larger distributed intelligence algorithm.
  • At least one distributed PSO is employed in a functional block 906-918 as the distributed MOO algorithm when the inter-node communications therefore comprise part of the larger distributed intelligence algorithm.
  • the term "best overall network solution”, as used herein, refers to an optimal solution for overall protocol stack configuration given at least the current network architecture, current network environment (including the status of any attacks directed upon the network), low-level events detected at individual nodes by node level event detection/monitoring modules, current network conditions, current project or mission requirements, and current project/mission objectives.
  • the best overall network solution can advantageously include a selection of at least one high-level defensive algorithm. The defensive algorithm will be applied at each node for responding to an attack directed upon the network and/or a particular node or nodes.
  • the functions of functional block 920 may be implemented in a distributed fashion in which a plurality of network nodes perform some of the "additional computations" or a centralized fashion in which a single network node performs all of the "additional computations".
  • the additional computations involve: applying another set of algorithms to the entire solution spaces including the Pareto Fronts; developing the best overall network solutions based on the solutions for the algorithms; and ranking the best overall network solutions according to a set of criteria appropriate to a specific application space and conditions in which the cognitive network is operating.
  • the set of algorithms used in functional block 920 can include, but are not limited to, Case-Based Reasoning ("CBR") algorithms, expert system algorithms, and neural network algorithms. Such algorithms are well known in the art, and therefore will not be described in detail herein. Still, it should be understood that inputs to functional block 920 may include, but are not limited to, project-related inputs, mission-related inputs, network topology inputs, and/or RF environment inputs. These inputs are then used to determine a previously seen similar set of circumstances and the corresponding end result. The end results are then used for initialization of configuration optimization. If a CBR algorithm or a neural network algorithm is used in functional block 920, then the end results may be fed back for use in a next iteration of said algorithm. In contrast, if expert system algorithms are employed in functional block 920, then the end results may not be fed back.
  • CBR Case-Based Reasoning
  • the ranked "best overall network solutions" are then analyzed in functional block 922 to: identify which solutions are compliant with project/mission policies; and identify a top ranked solution from the identified solutions.
  • the top ranked solution can include a particular defensive algorithm which is to be used for responding to an attack.
  • a defensive algorithm may be specified in the case where one or more conditions indicate that the network or an individual node is experiencing an attack.
  • a policy engine 940 attempts to "suggest” possible approaches that would bring the cognitive network system 104 into compliance.
  • the "suggested” possible approaches are then supplied to functional block 922 first.
  • a second iteration of the functions of block 922 are performed for use thereby to generate policy compliant solutions.
  • functional block 922 cannot generate a compliant solution the "suggested" possible approaches are then supplied to functional block 904 for use thereby.
  • a second iteration of the functions of blocks 904-922 are performed to generate policy compliant solutions.
  • a "favored solution” is selected in functional block 922. If an attack or event has been reported by one or more nodes, the favored solution can optionally specify, among other criteria, a defensive algorithm to be used for responding to such attack. Similarly, if the occurrence of certain high-level conditions are detected which correspond to attacks directed to the SCNM, a defensive algorithm can be specified as part of the network solution chosen in block 922.
  • configuration parameters 970 are computed for the protocols of the protocol stack layers that enable an implementation of the "favored solution" within the cognitive network 104. Subsequently, the network resources of the protocol stack layers are configured in accordance with the respective configuration parameters 970, as shown by functional blocks 924-936. These actions can be performed by the CIM 504 executing in each node.
  • the network resources remain in their current configuration until the project or mission changes, the network topology changes and/or the network's operating environment changes. Accordingly, events detected at the node level which are indicative of low-level attacks, and unexpected changes in the behavior of the network can trigger the selection of a new network solution as shown in FIG. 9 .
  • a network node 102 includes at least one processing device 306, which together with similar processing devices 306 in other nodes 102 will comprise a part of a distributed processor.
  • the distributed components 514 in each network node 102 are hosted on processing devices 306 respectively in each network node.
  • the distributed processor employs a distributed intelligence algorithm for facilitating the optimization of the overall performance of the cognitive network 104.
  • the distributed processor includes hardware (e.g., electronic circuits) and/or firmware configured to perform the operations described above in relation to FIG. 9 and the method described below in relation to FIG. 11 .
  • the processing device 306 of each network node 102 will also host an Environment Observation Component ("EOC") 1060.
  • the EOC can sense frequency channels that are available for use by the cognitive network.
  • initialization parameters 950 for the distributed intelligence algorithm are computed during operation of the cognitive network 104.
  • cognitive engine 1076 of network node 102 includes an optional Initialization Parameter Generator ("IPG") 1052.
  • IPG 1052 is configured to use project or mission requirements 902, feedback layer constraints 938 and/or network-related information for computing the initialization parameters for the MOO algorithms 1084 employed by itself and/or other network nodes 102.
  • the initialization parameters can be computed using at least one AI algorithm 1082 and/or TLU method.
  • the type of AI algorithm 1082 or initialization parameters can be selected in accordance with a particular "use case", as described above.
  • network node 102 communicates the initialization parameters to those other network nodes, respectively.
  • the initialization parameters can be communicated via command and control communication.
  • the network node 102 uses the respective initialization parameters and/or network-related information to facilitate the optimization of overall network performance, including implementation of any defensive strategies.
  • the initialization parameters are computed using CBR and/or fuzzy algebra.
  • CBR and fuzzy algebra are well known in the art, and therefore will not be described in detail herein.
  • a brief discussion of the operations performed by the network node 102 for computing the initialization parameters is provided below to assist a reader in understanding CBR scenarios.
  • the IPG 1052 includes a CBR component 1080 that is generally configured to receive case-related information from EOC 1060 and process the same.
  • the EOC 1060 performs operations to generate a Full Characterization of the Network Node Environment ("FCNNE").
  • FCNNE is generated by combining hardware resource data stored at a node 102 with Radio Environment Map ("REM") data 1066.
  • the REM data 1066 characterizes a static local network node environment (e.g., hidden nodes, terrain, etc.) and distant network node environments.
  • the REM data 1066 is updatable via command and control communication.
  • FCNNE is then communicated from the EOC 1060 to the scenario synthesizer 1070.
  • FCNNE is combined with the current project or mission requirements 1072 so as to synthesize a set of objectives, limits, and boundary conditions for the cognitive engine 1076.
  • the objectives may be stored in a memory (e.g. main memory 302) in a particular format (e.g., a table format). Thereafter, the objectives are combined with the radio hardware environment data to generate combined objective/environment data.
  • the combined objective/environment data is used by the scenario synthesizer 1070 to generate at least one case identifier.
  • the case identifier(s) is(are) then communicated to the CBR component 1078 of the cognitive engine 1076.
  • the CBR component 1078 uses the case identifier(s) to: select the number of MOO algorithms that should be employed for each protocol stack layer; select the type of MOO algorithm(s) to be employed for each protocol stack layer; and/or determine the initialization parameters for the MOO algorithms 1084.
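The mapping from a synthesized case identifier to a per-layer choice of MOO algorithms and initialization parameters can be pictured with a minimal Python sketch. The case names, layer keys, algorithm labels, and parameter values below are hypothetical placeholders, not values taken from the patent description.

```python
# Minimal sketch of a case-based lookup mapping a case identifier to the
# number and type of MOO algorithms per protocol stack layer, plus their
# initialization parameters. All identifiers and values are illustrative.

CASE_LIBRARY = {
    "urban_mesh_jamming": {
        "physical":  {"algorithm": "bio_pso", "count": 1,
                      "init": {"particles": 40, "inertia": 0.7}},
        "data_link": {"algorithm": "bio_pso", "count": 1,
                      "init": {"particles": 30, "inertia": 0.6}},
        "network":   {"algorithm": "nsga2",   "count": 2,
                      "init": {"population": 60, "generations": 100}},
    },
    "rural_point_to_point": {
        "physical":  {"algorithm": "nsga2", "count": 1,
                      "init": {"population": 40, "generations": 80}},
        "network":   {"algorithm": "nsga2", "count": 1,
                      "init": {"population": 40, "generations": 80}},
    },
}

def select_moo_configuration(case_id: str) -> dict:
    """Return the per-layer MOO selection for a synthesized case identifier."""
    try:
        return CASE_LIBRARY[case_id]
    except KeyError:
        # Fall back to a conservative default when no stored case matches.
        return {"network": {"algorithm": "nsga2", "count": 1,
                            "init": {"population": 40, "generations": 80}}}

if __name__ == "__main__":
    config = select_moo_configuration("urban_mesh_jamming")
    for layer, choice in config.items():
        print(layer, choice["algorithm"], choice["init"])
```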
  • a Pareto Front for each selected MOO algorithm 1084 is determined.
  • the MOO algorithms 1084 comprise at least one MOO algorithm for each protocol stack layer that is unique thereto and/or customized to the requirements thereof.
  • the same or different MOO algorithm can be used for two or more of the protocol stack layers.
  • a PSO algorithm (more particularly, a biologically inspired PSO algorithm) is employed for at least one of the protocol stack layers (e.g., a physical layer, a data link layer, and/or a network layer).
  • Each of the MOO algorithms (including PSOs and biologically inspired PSOs) yields an N-dimensional Pareto Front of non-inferior solutions, as described above.
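For readers unfamiliar with the term, a "Pareto Front of non-inferior solutions" is the subset of candidate solutions that no other candidate dominates in every objective. The following minimal sketch extracts such a front, assuming two minimized objectives (latency and energy) chosen purely for illustration.

```python
# Minimal sketch of extracting a Pareto front of non-inferior (non-dominated)
# solutions from a set of candidates, assuming all objectives are minimized.
# The example objectives (latency, energy) are illustrative assumptions.

def dominates(a, b):
    """True if candidate a is at least as good as b in every objective
    and strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the subset of candidates that no other candidate dominates."""
    front = []
    for c in candidates:
        if not any(dominates(other, c) for other in candidates if other is not c):
            front.append(c)
    return front

if __name__ == "__main__":
    # Each tuple is (latency_ms, energy_mJ) for one candidate configuration.
    candidates = [(10, 50), (12, 40), (9, 70), (15, 35), (11, 45), (20, 80)]
    print(pareto_front(candidates))
    # (20, 80) is dominated by (10, 50); the remaining points form the
    # non-inferior set.
```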
  • the MOO algorithms are part of a larger distributed intelligence algorithm implemented by network nodes 102 of the cognitive network 104.
  • inter-node communications may be required for computing the Pareto Fronts.
  • network node 102 communicates with other network nodes 102 using command and control communications for purposes of deriving a solution to one or more MOO algorithms 1084.
  • After the cognitive engine 1076 generates a Pareto Front, it communicates the Pareto Front to the policy engine 1090.
  • the policy engine 1090 forms part of a distributed policy engine. The functions of such a distributed policy engine are described above in relation to functional block 940 of FIG. 9 . At least some of the functions described above in relation to functional block 940 of FIG. 9 are performed by policy engine 1090.
  • additional operations are performed by the policy engine 1090 to facilitate the development of the best overall network solutions.
  • the additional operations involve: applying additional algorithms at least to the Pareto Fronts generated by cognitive engine 1076; assisting in the development of the best overall network solutions based on the solutions to the additional algorithms; and assisting in the ranking of the best overall network solutions according to a set of criteria appropriate to a specific application space and conditions in which the cognitive network 104 is operating.
  • the additional algorithms can include, but are not limited to, CBR algorithms, expert system algorithms, and/or neural network algorithms.
  • the policy engine 1090 assists in the analysis of the ranked best overall network solutions to: identify which solutions are compliant with current regulatory policies and/or project/mission policies; and identify a top ranked solution from the identified solutions.
  • the top ranked solution can include one or more high-level defensive actions or algorithms to be performed by the network 104 for thwarting a particular attack.
  • the solution can also specify responses to individual nodes to detected events indicative of low-level attacks at the node level or high level node conditions that are at variance with expectations.
  • Policy compliance can be determined using the boundary conditions generated by scenario synthesizer 1070. If no ranked best overall solutions are policy compliant, then the policy engine 1090 assists in a determination of possible approaches that would bring the cognitive network 104 into compliance. The possible approaches are fed back to the MOOA component 1084 or the CBR component 1080 to give direction regarding how the solution can be brought into compliance. There is no fixed process for how the MOOA component 1084 or the CBR component 1080 uses the fed back information.
  • the Link Configuration Optimization ("LCO") engine 1094 uses a radio resource cost function to down select to a single configuration solution.
  • the solution is evaluated to assess quality. Ultimately, a solution is selected on this basis, and the solution can include one or more of the high-level defensive algorithms described herein for thwarting attacks directed against the network.
  • the solutions can also include defensive actions to be implemented at individual nodes and specified on a node-by-node basis.
  • the process 1100 begins with step 1102 and continues with step 1104.
  • in step 1104, project or mission requirements (e.g., project or mission requirements 902 of FIG. 9 ) are received.
  • the project or mission requirements are then used in step 1106 to generate initialization parameters for a plurality of MOO algorithms.
  • the initialization parameters can be generated using at least one AI algorithm and/or TLU method.
  • the AI algorithm can include, but is not limited to, a CBR algorithm and/or a fuzzy algebra algorithm.
  • the type of algorithm used in step 1106 may be selected in accordance with a use case.
  • the use case can be made up of a set of possible sequences of interactions between network components and users in a particular environment.
  • the initialization parameters may be generated using: information specifying a status and constraints that apply to protocol stack layer resources of at least one network node; and/or information concerning resources that are available on each network node of the cognitive network.
  • step 1106 is performed in a distributed fashion in which all network nodes of the cognitive network assist in generating the initialization parameters. In other scenarios, step 1106 is performed in a centralized fashion in which a single network node generates the initialization parameters. In yet other scenarios, step 1106 is performed in a semi-distributed fashion in which only a select few of the network nodes assist in the generation of the initialization parameters.
  • Pareto Fronts are determined by solving the distributed MOO algorithms.
  • At least one of the distributed MOO algorithms comprises a biologically inspired PSO algorithm.
  • the biologically inspired PSO algorithm is used for a physical layer, a data link layer, and/or a network layer of a protocol stack.
  • a different type of distributed MOO algorithm may be employed for at least two of the protocol stack layers.
  • the type of distributed MOO algorithm to be employed for at least one protocol stack layer can be selected based on an amount of non-payload inter-node communication and requirements of the protocol stack layer.
  • the number of distributed MOO algorithms to be employed for at least one protocol stack layer is selected based on an amount of non-payload inter-node communication and requirements of the protocol stack layer.
  • the Pareto Fronts are analyzed in aggregate to develop a plurality of best overall network solutions.
  • One or more of these solutions can include implementation of a network defensive algorithm for thwarting an attack upon the network.
  • the best overall network solutions can be developed using a case-based reasoning algorithm, an expert system algorithm or a neural network algorithm.
  • the best overall network solutions are then ranked according to pre-defined criteria, as shown by step 1112.
  • a top ranked solution is identified in step 1114 from among the best overall network solutions that comply with current regulatory policies and/or project/mission policies.
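Steps 1112 and 1114 can be pictured with the hedged sketch below: candidate network solutions are scored against weighted criteria, ranked, and the highest-ranked solution that passes a policy check is selected. The criteria names, weights, and the transmit-power compliance rule are illustrative assumptions, not values from the patent.

```python
# Minimal sketch of ranking best overall network solutions (step 1112) and
# selecting the top ranked solution that is policy compliant (step 1114).
# Criteria names, weights, and the compliance test are illustrative only.

WEIGHTS = {"throughput": 0.4, "resilience": 0.4, "energy_cost": -0.2}

def score(solution: dict) -> float:
    """Weighted score; higher is better, energy cost is penalized."""
    return sum(WEIGHTS[k] * solution[k] for k in WEIGHTS)

def is_policy_compliant(solution: dict, max_tx_power_dbm: float = 30.0) -> bool:
    """Example compliance check against a regulatory transmit-power limit."""
    return solution["tx_power_dbm"] <= max_tx_power_dbm

def top_ranked_compliant(solutions):
    ranked = sorted(solutions, key=score, reverse=True)   # step 1112
    for solution in ranked:                               # step 1114
        if is_policy_compliant(solution):
            return solution
    return None  # triggers the feedback path for non-compliant solutions

if __name__ == "__main__":
    candidates = [
        {"name": "A", "throughput": 0.95, "resilience": 0.9,
         "energy_cost": 0.5, "tx_power_dbm": 33.0},
        {"name": "B", "throughput": 0.7, "resilience": 0.8,
         "energy_cost": 0.4, "tx_power_dbm": 27.0},
    ]
    # Prints "B": A ranks higher but exceeds the assumed power limit.
    print(top_ranked_compliant(candidates)["name"])
```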
  • step 1116 is performed where configuration parameters are computed for protocols of the protocol stack layers that enable implementation of the top ranked solution within the cognitive network.
  • where the top ranked solution includes a defensive algorithm, the configuration parameters will specify actions to be performed for implementing such algorithm.
  • the top ranked solution is implemented in the cognitive network by configuring the network resources (e.g., hardware and software resources of the various network nodes 102 of FIG. 2 ) thereof in accordance with the configuration parameters, as shown by step 1118.
  • This step also includes implementing the selected high-level defensive algorithm at one or more of the nodes, and implementing any low-level defensive actions that are determined to be necessary with respect to individual nodes.
  • the low-level defensive strategies implemented at individual nodes can be performed on a node-by-node basis.
  • a high-level defense algorithm as described herein is one which will substantially eliminate, or mitigate, attacks directed upon a CNM infrastructure by an adversary.
  • High-level defensive algorithms as referenced herein can include any algorithm, action or protocol implemented at the network level (as opposed to algorithms performed with respect to an individual node) that functions to defeat attacks upon the cognitive network. Accordingly, many different types of defensive algorithms are possible for use with the inventive arrangements described herein. Still, it has been determined that there are three basic types of high-level defensive algorithm that are sufficient to mitigate most potential high-level attacks on a cognitive network. These high-level algorithms include (1) multi-layer address hopping, (2) network and link interface migration, and (3) dynamic topology management. Algorithms of this type are known in the art and therefore will not be described in detail. However, a brief discussion of each defensive algorithm is provided to facilitate understanding of the invention.
  • Multi-layer hopping involves frequent pseudo-random changes to node IP and/or MAC addresses. Multi-layer hopping is expected to mitigate the effects of targeted packet dropping, packet injection and modification, as it makes it difficult for an attacker to identify specific sessions or flows in the network. For distributed tactical networks, and in particular, for tactical networks operating in a mesh mode, pseudorandom changes in IP address alone are not necessarily sufficient to enable effective hopping. Accordingly, simultaneous hopping is advantageously enabled by the CIM at both the data link and network layers (i.e. layers 2 and 3 of the OSI stack). This approach reduces the chances of an attacker compromising the hopping strategy by simply tracking layer 2 frame addresses.
  • IP hopping schemes are known in the art. Accordingly, the IP and MAC address hopping methods will not be described here in detail. However, it should be understood that any suitable technique for implementing and coordinating an IP hopping scheme can be used, provided that it is resistant to exploitation.
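One way to picture coordinated layer 2/layer 3 hopping is to derive both the MAC and IP address for each hop epoch from a shared secret, so that cooperating nodes stay synchronized while an observer sees pseudo-random addresses. The sketch below takes this approach; the shared key, epoch length, and address ranges are assumptions, and a fielded scheme would need the exploitation-resistant coordination noted above.

```python
# Minimal sketch of coordinated multi-layer (MAC + IP) address hopping.
# Both addresses for a hop epoch are derived from a shared key and the epoch
# index, so cooperating nodes compute the same schedule independently.
# Key, epoch length, and address ranges are illustrative assumptions.

import hashlib
import hmac
import time

SHARED_KEY = b"example-shared-key"   # assumed pre-provisioned secret
EPOCH_SECONDS = 30                   # assumed hop interval

def _digest(node_id: str, epoch: int) -> bytes:
    msg = f"{node_id}:{epoch}".encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()

def hop_addresses(node_id: str, epoch: int):
    """Return the (MAC, IPv4) pair this node uses during the given epoch."""
    d = _digest(node_id, epoch)
    # Locally administered, unicast MAC: set bit 1 and clear bit 0 of byte 0.
    mac_bytes = bytes([(d[0] & 0xFE) | 0x02]) + d[1:6]
    mac = ":".join(f"{b:02x}" for b in mac_bytes)
    ip = f"10.20.{d[6]}.{max(1, d[7])}"   # avoid the .0 host address
    return mac, ip

if __name__ == "__main__":
    epoch = int(time.time()) // EPOCH_SECONDS
    for e in (epoch, epoch + 1):
        print(e, hop_addresses("node-102a", e))
```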
  • higher-layer protocols in a communications stack are conventionally designed to detect and compensate for temporary failures in the communications link.
  • Numerous windowing techniques and application-level acknowledgements are often used, especially in tactical network environments, to account for the unreliability of the link. While reasonably effective against transient link failures and instabilities, these higher-level techniques can be completely ineffective against malicious disruptions.
  • conventional error correction and transmission techniques can be used against the infrastructure itself, amplifying or enabling the effects of the attack.
  • a multi-link coordination algorithm is provided that allows nodes to contextually utilize redundant links to mitigate this class of attacks.
  • FIG. 12 shows two host computing devices 1201 which are communicating in a network environment.
  • Each host is executing one or more software applications 1202 which communicate with applications in other host computing devices.
  • the applications executing on the hosts in this example can communicate data by means of two links, eth0, eth1.
  • the links are assumed to be non-interfering, but are not necessarily of the same capacity, or even operating in the same network.
  • the applications on each host computing device will exchange information based on the interface to which they are bound, the availability of the links, and/or the order in which the operating system on the sender's side has registered each interface.
  • Each host computing device 101 executes at least one application 502 as described above.
  • the host computing device communicates with secure core 300 residing in network node 102 and further includes a communication interface manager (CIM) 504 as described above.
  • the CIM advantageously abstracts multiple physical layer links 1304, 1306 into a single logical link 1302 that is managed by the CIM. Consequently, regardless of the application bindings on each host application 502, the CIM will choose the best link for data transmission, and will also take advantage of both links 1304, 1306 to effectively split the transmission of frames between host computing devices as needed.
  • links can be managed by the CIMs 504 to support capacity maximization, reliability, or adaptive data transmission.
  • Multi-link management techniques are known in the art and therefore will not be described here in detail.
  • the link management capability described herein will advantageously include managing the links 1304, 1306 to provide a fully redundant transmission mode using a plurality of available links under conditions where adaptive link disruption network attacks are suspected, or anticipated.
  • each frame received from an application 502 is duplicated and transmitted simultaneously on all links 1304, 1306.
  • the redundant transmission across non-interfering links 1304, 1306 makes it harder for a cognitive jammer to selectively disrupt the communication flow and thereby cause instabilities in the system.
  • the CIM will advantageously use the available links 1304, 1306 to maximize capacity when possible, and to maximize reliability when necessary. More particularly, the CIMs 504 will dynamically respond to each communication context by using the multiple redundant links to maximize capacity or reliability, as communication conditions change. In this regard it should be noted that there are no requirements with regard to the capacity or similarity between links 1304, 1306 that are managed.
  • the multi-link algorithm used by the CIM for this purpose will advantageously support load balancing across different links, and the full synchronization of links, to maximize capacity or reliability.
  • the CIM 504 in each network node can advantageously transmit frames on each link with a timing offset to reduce the effectiveness of adaptive wide-band jammers.
  • the same frame may be sent on different links at slightly different times (rather than being synchronized) so that an adaptive wide-band jammer will be less likely to cause interference to both frames.
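The fully redundant transmission mode with per-link timing offsets might be sketched as follows. The link names, offset values, and the send stub are placeholders rather than the CIM's actual interface.

```python
# Minimal sketch of the redundant multi-link transmission mode: each frame
# from an application is duplicated on every managed link, with a small
# per-link timing offset so an adaptive wide-band jammer is less likely to
# hit both copies. Link names, offsets, and the send stub are assumptions.

import threading
import time

LINKS = {"eth0": 0.000, "eth1": 0.005}   # assumed per-link offsets (seconds)

def send_on_link(link: str, frame: bytes) -> None:
    # Placeholder for the real driver/socket transmission on this interface.
    print(f"{time.monotonic():.3f}s  {link}: sent {len(frame)} bytes")

def transmit_redundant(frame: bytes) -> None:
    """Duplicate the frame on all links, staggering the copies in time."""
    timers = []
    for link, offset in LINKS.items():
        t = threading.Timer(offset, send_on_link, args=(link, frame))
        t.start()
        timers.append(t)
    for t in timers:
        t.join()

if __name__ == "__main__":
    transmit_redundant(b"\x01\x02\x03\x04")
```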
  • A third level of defense that is implemented by the CIM 504 is based on topology control.
  • the purpose of topology control in this context is to allow the SCNM infrastructure to respond to localized threats with a broad change in the physical topology of the network. From a network topology perspective there are at least two levels of control that can be used to support or augment the distributed coordination algorithms for network management.
  • the physical network topology establishes the specific links between nodes.
  • the physical topology of the network is a function of many variables, including the connectivity settings, geographical distribution of nodes, mobility, and environmental effects. In cognitive network management, it is often the case that some of the physical properties of transmitters and receivers (such as transmission power, waveforms, directionality, etc.) are also manipulated to control the physical topology.
  • a second level of topology control happens at the network layer, when different nodes choose to favor communications through specific links by defining higher-level routes between those neighbor nodes that will be used by applications.
  • the network layer, in this case, is used to identify the broader paths for data flows, given local link conditions. It is important to note that topology management at OSI layer three (the network layer) does not affect the way that transmissions occur at the local level. That is, regardless of how routes are defined, the local transmissions will always happen in accordance with the characteristics of the local RF topology.
  • a reactive topology control algorithm implemented by the SCNM uses the CIM to construct at least three types of topologies based on system requirements and operational context.
  • the three or more target topologies include: a) a mesh topology, b) a tree topology, and c) a point-to-point (p2p) topology.
  • in FIGs. 14A-14C, each of the proposed topologies is illustrated.
  • the mesh topology, the tree topology, and the point-to-point topologies can each be constructed by SCNM when necessary. More particularly, when a threat is perceived, the SCNM can use the CIM in each node to dynamically disassemble and reassemble any one of the topologies currently in use, in favor of a different topology. In all cases, SCNM builds the different topologies at the physical level.
  • the topology management implementation can use the CIM to control a combination of transmitter power and transmission frequency to build the different topologies at the physical level.
  • the invention is not limited in this regard and any suitable method can be used to assemble the different topologies as described herein. From an adversary's perspective, a change in topology can disrupt localized attacks launched, for example, by a compromised node in the network. Furthermore, it can advantageously disrupt localized jamming attacks, and coordinated eavesdropping that monitors a particular command and control structure operating in the topology.
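A reactive topology controller of the kind described above could be sketched as a simple mapping from a perceived threat level to one of the three target topologies, together with the physical-layer settings used to build it. The threat labels, power levels, and channel numbers are illustrative assumptions only.

```python
# Minimal sketch of reactive topology control: a perceived threat level maps
# to one of the three target topologies, and physical-layer settings
# (transmit power, frequency) are adjusted to build it. All numeric values
# and threat labels are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PhyConfig:
    topology: str
    tx_power_dbm: float
    channel_mhz: int

def select_topology(threat_level: str) -> PhyConfig:
    if threat_level == "none":
        # Full mesh: maximize connectivity and traffic capacity.
        return PhyConfig("mesh", tx_power_dbm=27.0, channel_mhz=2412)
    if threat_level == "localized":
        # Tree rooted away from the threatened region: limit exposure.
        return PhyConfig("tree", tx_power_dbm=20.0, channel_mhz=2437)
    # Severe or coordinated threat: restrict to point-to-point critical links.
    return PhyConfig("p2p", tx_power_dbm=14.0, channel_mhz=5180)

if __name__ == "__main__":
    for level in ("none", "localized", "severe"):
        print(level, select_topology(level))
```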
  • inventive arrangements described herein utilize a unique cross-layer correlation between platform-specific events and protocol-related effects to provide a robust, secure infrastructure.
  • a conventional security environment reacts to effects noticed at the higher layers of the communications protocol stack. For example, if a communications node begins to misroute packets, or drop routes/change routes to favor a previously seldom used routing node, this might be a concern to the security software that detects these anomalies. The security software recognizes that these actions represent a waste of transmission capacity, and may flag the effect as a distributed denial of service attack. However, the result is that one or more nodes have been compromised, the damage has been done, and isolation of the offending node(s) takes a relatively long time. More sophisticated security analysis tools might then later determine that the event that caused these effects was a code injection attack on a vulnerable node.
  • in FIG. 15, there is shown an example of how a SCNM as described herein creates a novel capability to thwart attacks.
  • node 102c is a mission-critical node.
  • Node 102a needs to communicate with node 102c for mission success, as does node 102e.
  • Nodes 102b and 102d are along the critical path.
  • Nodes 102f, 102g, 102h and 102j are all in the mesh communications network, but are not currently along a critical path.
  • the secure core 300 in a node 102a or 102e traps a code injection attempt and notifies the distributed SCNM infrastructure.
  • the SCNM infrastructure has knowledge that node 102c is a critical node and that links from its managed source nodes 102b and 102e are critical. Based on a network optimization analysis performed in accordance with FIGs. 9-11 , the SCNM determines that nodes 102b, 102c and 102d should increase their sensitivity to perceived network threats. In other words, under normal conditions when there is no perceived threat, the routing operation might continue with a higher threshold for accepting the occurrence of dropped or rerouted packets. But the event or events at a lower layer at a distant node (e.g., node 102a or 102e) can trigger a response across the network of distributed entities to be on high alert and tighten up the tolerance for any effects potentially due to malicious attacks.
  • Nodes 102f and 102j, being adjacent to nodes where physical layer platform traps have occurred, are directed by the SCNM to a medium level of effect tolerance.
  • Nodes 102g and 102h are unaffected by the alerts. They are currently not part of a potential threat path and do not need to throttle thresholds at this time. From the foregoing it will be understood that the inventive arrangements allow for a modulated response to attacks, maintaining the most connectivity and traffic capacity possible among nodes unaffected by the current perceived threat situation.
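The modulated response in this FIG. 15 scenario can be sketched as assigning each node a tolerance level according to whether it lies on the critical path, is adjacent to a node that reported a trap, or is unaffected. The node groupings mirror the example above, but the numeric tolerance values are assumptions.

```python
# Minimal sketch of the modulated response in the FIG. 15 scenario: nodes on
# the critical path tighten their tolerance for suspicious routing effects,
# adjacent nodes move to a medium tolerance, and the rest stay at the default.
# The numeric threshold values are illustrative assumptions.

CRITICAL_PATH = {"102b", "102c", "102d"}
ADJACENT_TO_TRAP = {"102f", "102j"}
ALL_NODES = CRITICAL_PATH | ADJACENT_TO_TRAP | {"102a", "102e", "102g", "102h"}

# Tolerance expressed as the fraction of dropped/rerouted packets accepted
# before the node reports a suspected attack (lower = more sensitive).
TOLERANCE = {"high_alert": 0.01, "medium": 0.05, "normal": 0.15}

def assign_tolerances(trap_reported: bool):
    levels = {}
    for node in sorted(ALL_NODES):
        if trap_reported and node in CRITICAL_PATH:
            levels[node] = TOLERANCE["high_alert"]
        elif trap_reported and node in ADJACENT_TO_TRAP:
            levels[node] = TOLERANCE["medium"]
        else:
            levels[node] = TOLERANCE["normal"]
    return levels

if __name__ == "__main__":
    # A code-injection trap at node 102a or 102e has been reported.
    for node, tol in assign_tolerances(trap_reported=True).items():
        print(node, tol)
```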
  • the invention also utilizes the reverse direction as an additional security mechanism.
  • a deployed tactical or emergency services network operates in a manner that can be seen as a pattern of operation. Mobile nodes go in and out of canopy, may be blocked by buildings and may lose line-of-sight peer connectivity due to terrain. All of these physical activities create a pattern of packet retransmissions and routing state updates. A model of these effects is maintained at the network layer. During operations, even application layer changes (such as when video is prioritized over voice due to current operations) affect the network traffic model the node creates, and these effects are maintained in memory as "correct".
  • if observed network effects deviate from this stored model, the node may have an indication of an attack vector not trapped by the secure core 300. If the nodes are in motion but the model does not follow the learned pattern, this also might indicate an unknown type of attack.
  • the upper layer distributed components of the SCNM can alert the secure core 300 in a node of a possible attack. In this case, the system might conclude that a new low-level exploit has compromised one or more nodes. While this is not part of the existing event trap list used in the NLEDM 512 of the secure core, the nodes can be isolated and the exploit noted for future core updates.
  • the inventive arrangements involve use of a dual process machine learning (cognition) approach to take advantage of the cross-layer activity.
  • First it models the system operation to fit the observations of events and effects. So when certain attack types cause effects across the range of the communications protocol stack, the model correlates these.
  • Second, the model anticipates the next state of the system. In critical path traffic scenarios, a certain level of retransmission is statistically expected. Should this level change, the next state of the system will not be what is expected.
  • the model allows the conditions noted at higher layers of the protocol stack to be noted by the distributed components of a distributed processor, and to be fed back down to the secure core in each node. The individual secure processes at each node 102 are thereby given notice of potential problems due to upper layer pattern recognition.
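The anticipatory half of the dual-process model can be pictured as a running statistical baseline of an observable such as the retransmission rate, with observations far outside the expected range flagged for feedback toward the secure core. The window size and deviation threshold in the sketch below are illustrative assumptions.

```python
# Minimal sketch of the anticipatory part of the dual-process model: maintain
# a running baseline of the retransmission rate and flag observations that
# fall outside the statistically expected range, so upper-layer anomalies can
# be fed back to the secure core. Window size and threshold are assumptions.

from collections import deque
from statistics import mean, pstdev

class RetransmissionModel:
    def __init__(self, window: int = 50, sigma_threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.sigma_threshold = sigma_threshold

    def observe(self, retransmission_rate: float) -> bool:
        """Record an observation; return True if it deviates from the model."""
        anomalous = False
        if len(self.history) >= 10:       # need a minimal baseline first
            mu = mean(self.history)
            sigma = pstdev(self.history) or 1e-6
            anomalous = abs(retransmission_rate - mu) > self.sigma_threshold * sigma
        self.history.append(retransmission_rate)
        return anomalous

if __name__ == "__main__":
    model = RetransmissionModel()
    normal_traffic = [0.02, 0.03, 0.025, 0.02, 0.03] * 4   # learned pattern
    for rate in normal_traffic:
        model.observe(rate)
    print(model.observe(0.03))   # False: within the expected range
    print(model.observe(0.30))   # True: possible malicious disruption
```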
  • in FIG. 16, there is shown a flowchart which is useful for understanding certain actions performed in secure core 300.
  • the process begins at 1602 and continues to 1603 where the node performs actions to detect network-level effects of attacks noticed at higher layers of the communication stack. These actions can be performed at node 102 by processor 306.
  • Such detected effects can include instances where a communications node begins to misroute packets, drop routes/change routes to favor a previously seldom used routing node. Effects such as these can indicate the presence of an ongoing attack upon the network, e.g., a jamming attack.
  • the secure core can recognize that such actions are seriously wasting transmission capacity, and flag the effect as a distributed denial of service attack.
  • at 1604, the secure core performs actions to detect machine or instruction level events indicating the possible occurrence of a node-level attack intended to disrupt internal operations or functions of a node. These actions can be performed using NLEDM 512. Although steps 1603 and 1604 are shown as being performed serially, it should be understood that the detection of machine or instruction level events can be performed concurrently with monitoring of the network communications to detect network-level effects of attacks.
  • at step 1612, a determination is made as to whether the event or events require a local defensive action at the node. If so (1612: Yes), the secure core at node 102 will at 1614 implement certain defensive actions at the node. The decision to implement these local defensive actions at the node may or may not require the involvement of the distributed SCNM infrastructure.
  • nodes within the network may be updated to have different node configuration settings (including different defensive settings). These different configurations can be selected and/or optimized for each node 102 by the distributed SCNM infrastructure in accordance with the mission requirements of the network 104.
  • the updated node configuration settings can cause the node to implement certain high-level defensive actions.
  • These defensive actions can include implementation of any defensive high-level network algorithms such as described herein.
  • These high-level defensive actions can involve the entire communication network or a plurality of nodes comprising only a portion of the network. For example, selected nodes in the network can be reconfigured to implement (1) multi-layer address hopping, (2) network and link interface migration, and/or (3) dynamic topology management algorithms to defend against high-level network attacks. Other high-level defensive algorithms are possible without limitation.
  • the updated node configuration can also cause the node to implement low-level (node-level) defensive actions.
  • low-level defensive actions can involve (1) varying the number and/or types of machine level events which are detected, (2) modifying a threat evaluation threshold level applied at 1608 when determining whether a particular event is a threat, (3) modifying the number and/or types of local defensive actions which are implemented at 1612 in response to events that are deemed threats, and/or (4) varying an evaluation process applied at 1618 for purposes of determining when events/effects should be reported to the distributed SCNM infrastructure.
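A hedged sketch of what such an updated node configuration might look like as a data structure is given below; the field names and values are assumptions chosen to mirror items (1) through (4) above, not the patent's actual data format.

```python
# Minimal sketch of a node configuration update carrying the low-level
# defensive settings enumerated above: the monitored event types, the threat
# evaluation threshold (step 1608), the local defensive actions available
# (steps 1612/1614), and the reporting policy (step 1618). Field names and
# values are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class NodeDefenseConfig:
    monitored_events: List[str] = field(default_factory=lambda: [
        "anomalous_function_call", "invalid_memory_access", "data_execution",
    ])
    threat_threshold: float = 0.5          # lower value = more sensitive
    local_actions: List[str] = field(default_factory=lambda: [
        "quarantine_process", "reset_interface",
    ])
    report_all_events: bool = False        # step 1618 evaluation policy

def heighten_defenses(cfg: NodeDefenseConfig) -> NodeDefenseConfig:
    """Example update pushed by the distributed infrastructure on alert."""
    cfg.monitored_events.append("stack_pivot_attempt")
    cfg.threat_threshold = 0.2
    cfg.local_actions.append("isolate_node")
    cfg.report_all_events = True
    return cfg

if __name__ == "__main__":
    print(heighten_defenses(NodeDefenseConfig()))
```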
  • the node-level events which are detected and acted upon by the network as described herein are advantageously selected to include instruction-set level events exclusive of events which are associated with the network communication domain.
  • the adaptive network described herein is responsive to node-level events, including events occurring at the instruction level, which are outside the normal domain of network communications. These events are advantageously used as indicators and/or triggers by the SCNM distributed infrastructure, even though they do not correspond to functions or processes normally associated with network communications.
  • the network communication domain generally includes all aspects of the communication between two systems; that is, the network communication domain includes not only the actual network communication stack and associated network hardware, but also the machine code elements that execute in response to handling events within the communication domain.
  • Node-level attacks directed to functions and processes which are associated with the network communication domain will naturally be used to inform the SCNM distributed infrastructure for purposes of triggering and shaping adaptive responses described herein.
  • the inventive arrangements herein go further insofar as events which trigger and shape adaptive network responses at the macro or network level can include events which have no relation to the network communication domain.
  • Instruction level events outside the realm or domain of network communications can provide early and effective indications of network related threats. Examples of instruction level events could include, but are not limited to, function calls to anomalous locations, access to invalid memory, or attempts to execute data. While these events can (and do) occur within the communication domain, we can also observe these events in machine-level instructions that are not directly related to the communication domain and do not involve processing of network data. For example, when packets arrive at a network interface there is machine code associated with handling these packets and processing their content. In this invention, events as described above can occur outside of this machine code associated with the network communication domain, and yet can still drive changes in the network domain.
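The trap-and-report behavior for instruction-level events that occur outside the communication domain can be sketched as follows; the event names, the domain tag, and the reporting stub are illustrative assumptions.

```python
# Minimal sketch of trapping instruction-level events and reporting them to
# the distributed infrastructure even when they occur outside the network
# communication domain. Event names, the domain tag, and the report stub are
# illustrative assumptions.

TRAPPED_EVENTS = {
    "call_to_anomalous_location",
    "invalid_memory_access",
    "attempt_to_execute_data",
}

def report_to_scnm(event: str, process: str, domain: str) -> None:
    # Placeholder for command-and-control messaging to the distributed SCNM.
    print(f"SCNM alert: {event} in {process} (domain={domain})")

def handle_instruction_event(event: str, process: str, in_comm_domain: bool):
    """Trap a machine-level event and report it regardless of its domain."""
    if event not in TRAPPED_EVENTS:
        return
    domain = "communication" if in_comm_domain else "non-communication"
    report_to_scnm(event, process, domain)

if __name__ == "__main__":
    # An event in code unrelated to packet handling still drives a response.
    handle_instruction_event("attempt_to_execute_data",
                             process="sensor_logger", in_comm_domain=False)
```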

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Technology Law (AREA)

Claims (9)

  1. A method for defending a communication network from a hostile attack using a distributed infrastructure which leverages coordination across disparate levels of abstraction, comprising:
    at each of a plurality of computing devices (101) of nodes (102) comprising a communication network, using a stored listing of events to detect at least one low-level node event comprising attempted execution of injected code at a machine code or hardware instruction level directed to the node computing device (101), which is known to have the potential to directly interfere with the operation of the node (102); and
    in response to detecting the at least one node event at one of the plurality of network nodes (102), automatically and selectively determining an optimal network-level defensive action involving a plurality of network nodes comprising the communication network, the network-level defensive action based on the at least one node event detected and on a set of known communication requirements established for said network.
  2. The method according to claim 1, further comprising automatically and selectively implementing a network-level defensive action which affects only the node where the at least one node event was detected if the at least one node event does not require a network-level defensive action to ensure continued satisfaction of the known communication requirements.
  3. The method according to claim 1, further comprising:
    maintaining on the plurality of node computing devices (101) a dynamic model which is representative of a pattern of network operation for said communication network; and
    using said dynamic model to compare current network-level events to a range of expected network-level events; and
    selectively modifying a network-level defensive action which is executed in response to said at least one node event when said current network-level events do not correspond to the range of expected network-level events.
  4. The method according to claim 3, further comprising selectively narrowing said range of expected network-level events in response to the node event which has been detected, whereby the network is made more sensitive to unexpected variations in network performance when said at least one node event is detected.
  5. The method according to claim 1, wherein processing activities on the node computing device (101) are performed exclusively using a computing hardware implementation which is resistant to a code injection attack.
  6. The method according to claim 1, further comprising, in response to detecting the at least one node event at the node computing device, automatically and selectively modifying a predetermined defensive response of one or more of the node computing devices (101) to subsequently detected node events.
  7. A communication network which defends itself from a hostile attack using a distributed infrastructure which leverages coordination across disparate levels of abstraction, comprising:
    a plurality of computing devices (101) comprising a communication network,
    each using a stored listing of events to detect at least one low-level node event comprising attempted execution of injected code at a machine code or hardware instruction level directed to the node computing device (101), which is known to have the potential to directly interfere with the operation of the node (102); and
    at least one processing device which is responsive to detection of the at least one node event at a first one of the plurality of network nodes (102), and which automatically and selectively determines an optimal network-level defensive action involving a plurality of network nodes comprising the communication network, the network-level defensive action based on the at least one node event detected and on a set of known communication requirements established for said network.
  8. The communication network according to claim 7, wherein the at least one processing device automatically and selectively implements a network-level defensive action which affects only the node where the at least one node event was detected if the at least one node event does not require a network-level defensive action to ensure continued satisfaction of the known communication requirements.
  9. The communication network according to claim 7, wherein the at least one processing device maintains, on the plurality of node computing devices (101), a dynamic model which is representative of a pattern of network operation for said communication network;
    uses said dynamic model to compare current network-level events to a range of expected network-level events; and
    selectively modifies a network-level defensive action which is executed in response to said at least one node event when said current network-level events do not correspond to the range of expected network-level events.
EP16000447.9A 2015-03-02 2016-02-24 Corrélation de couches croisées dans un réseau cognitif sécurisé Active EP3065376B1 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/635,064 US9729562B2 (en) 2015-03-02 2015-03-02 Cross-layer correlation in secure cognitive network

Publications (2)

Publication Number Publication Date
EP3065376A1 EP3065376A1 (fr) 2016-09-07
EP3065376B1 true EP3065376B1 (fr) 2017-09-27

Family

ID=55521325

Family Applications (1)

Application Number Title Priority Date Filing Date
EP16000447.9A Active EP3065376B1 (fr) 2015-03-02 2016-02-24 Corrélation de couches croisées dans un réseau cognitif sécurisé

Country Status (6)

Country Link
US (1) US9729562B2 (fr)
EP (1) EP3065376B1 (fr)
KR (1) KR101852965B1 (fr)
CN (1) CN105939331B (fr)
CA (1) CA2921517C (fr)
TW (1) TWI631843B (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220033110A1 (en) * 2020-07-29 2022-02-03 The Boeing Company Mitigating damage to multi-layer networks

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9729562B2 (en) 2015-03-02 2017-08-08 Harris Corporation Cross-layer correlation in secure cognitive network
CN109565737B (zh) * 2016-08-10 2023-03-07 瑞典爱立信有限公司 无线网状网络中的分组转发
CN108024352A (zh) * 2016-11-03 2018-05-11 索尼公司 用于资源管理装置、数据库和对象的电子设备和方法
US10558542B1 (en) 2017-03-31 2020-02-11 Juniper Networks, Inc. Intelligent device role discovery
GB201706132D0 (en) * 2017-04-18 2017-05-31 Nchain Holdings Ltd Computer-implemented system and method
US10491481B2 (en) * 2017-04-28 2019-11-26 Dell Products L.P. Messaging queue topology configuration system
KR102324361B1 (ko) 2017-05-29 2021-11-11 한국전자통신연구원 집단 지능 기반 악의적 기기 탐지 장치 및 방법
US10990682B2 (en) * 2017-12-18 2021-04-27 Nuvoton Technology Corporation System and method for coping with fault injection attacks
CN112219381B (zh) 2018-06-01 2023-09-05 诺基亚技术有限公司 用于基于数据分析的消息过滤的方法和装置
WO2020015831A1 (fr) * 2018-07-19 2020-01-23 Nokia Technologies Oy Modélisation et abstraction d'environnement d'états de réseau pour des fonctions cognitives
US10826756B2 (en) 2018-08-06 2020-11-03 Microsoft Technology Licensing, Llc Automatic generation of threat remediation steps by crowd sourcing security solutions
US10911479B2 (en) * 2018-08-06 2021-02-02 Microsoft Technology Licensing, Llc Real-time mitigations for unfamiliar threat scenarios
CN112956146A (zh) * 2018-09-27 2021-06-11 英特尔公司 协作无线电网络中的特征检测
KR20200041771A (ko) * 2018-10-12 2020-04-22 삼성전자주식회사 전력 특성을 고려한 메모리 시스템의 설계 방법, 상기 메모리 시스템의 제조 방법, 및 상기 메모리 시스템을 설계하기 위한 컴퓨팅 시스템
CN109194692A (zh) * 2018-10-30 2019-01-11 扬州凤凰网络安全设备制造有限责任公司 防止网络被攻击的方法
US11681831B2 (en) 2019-04-10 2023-06-20 International Business Machines Corporation Threat detection using hardware physical properties and operating system metrics with AI data mining
EP4052147A4 (fr) 2019-10-30 2023-11-08 Dull IP Pty Ltd Procédé de communication de données
CN110866287B (zh) * 2019-10-31 2021-12-17 大连理工大学 一种基于权重谱生成对抗样本的点攻击方法
CN112346422B (zh) * 2020-11-12 2023-10-20 内蒙古民族大学 双蚁群智能对抗竞争实现机组作业调度方法
CN112702274B (zh) * 2020-12-24 2022-08-19 重庆邮电大学 战术瞄准网络技术中基于路由稳定性的跨层拥塞控制方法
CN112766865B (zh) * 2021-03-02 2023-09-22 河南科技学院 一种考虑实时订单的互联网电商仓储动态调度方法
CN113411235B (zh) * 2021-06-21 2023-11-07 大连大学 一种基于pso的未知协议数据帧特征提取方法
CN114928494A (zh) * 2022-05-24 2022-08-19 中国人民解放军国防科技大学 一种基于业务容量的网络攻击降效方法

Family Cites Families (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7069589B2 (en) * 2000-07-14 2006-06-27 Computer Associates Think, Inc.. Detection of a class of viral code
WO2002027426A2 (fr) 2000-09-01 2002-04-04 Op40, Inc. Systeme, procede, utilisations, produits, produits-programmes et procedes commerciaux pour services internet repartis et services reseau repartis
US7295956B1 (en) 2002-10-23 2007-11-13 Sun Microsystems, Inc Method and apparatus for using interval techniques to solve a multi-objective optimization problem
US7742902B1 (en) 2003-10-22 2010-06-22 Oracle America, Inc. Using interval techniques of direct comparison and differential formulation to solve a multi-objective optimization problem
US8024036B2 (en) 2007-03-19 2011-09-20 The Invention Science Fund I, Llc Lumen-traveling biological interface device and method of use
GB2439490B (en) 2005-03-08 2008-12-17 Radio Usa Inc E Systems and methods for modifying power usage
US20070192267A1 (en) 2006-02-10 2007-08-16 Numenta, Inc. Architecture of a hierarchical temporal memory based system
US7975036B2 (en) * 2006-05-11 2011-07-05 The Mitre Corporation Adaptive cross-layer cross-node optimization
US7664622B2 (en) 2006-07-05 2010-02-16 Sun Microsystems, Inc. Using interval techniques to solve a parametric multi-objective optimization problem
US8015127B2 (en) 2006-09-12 2011-09-06 New York University System, method, and computer-accessible medium for providing a multi-objective evolutionary optimization of agent-based models
US7899849B2 (en) * 2008-05-28 2011-03-01 Zscaler, Inc. Distributed security provisioning
US8660499B2 (en) 2008-08-25 2014-02-25 Ntt Docomo, Inc. Delivery system, delivery apparatus, terminal apparatus and method
US8699430B2 (en) 2008-10-10 2014-04-15 The Trustees Of The Stevens Institute Of Technology Method and apparatus for dynamic spectrum access
US8660530B2 (en) 2009-05-01 2014-02-25 Apple Inc. Remotely receiving and communicating commands to a mobile device for execution by the mobile device
US8666367B2 (en) 2009-05-01 2014-03-04 Apple Inc. Remotely locating and commanding a mobile device
US8665724B2 (en) 2009-06-12 2014-03-04 Cygnus Broadband, Inc. Systems and methods for prioritizing and scheduling packets in a communication network
JP5522434B2 (ja) 2009-09-01 2014-06-18 アイシン精機株式会社 運転支援装置
GB2474748B (en) 2009-10-01 2011-10-12 Amira Pharmaceuticals Inc Polycyclic compounds as lysophosphatidic acid receptor antagonists
US8666403B2 (en) 2009-10-23 2014-03-04 Nokia Solutions And Networks Oy Systems, methods, and apparatuses for facilitating device-to-device connection establishment
US8112521B2 (en) * 2010-02-25 2012-02-07 General Electric Company Method and system for security maintenance in a network
US8665842B2 (en) 2010-05-13 2014-03-04 Blackberry Limited Methods and apparatus to discover network capabilities for connecting to an access network
WO2012154664A2 (fr) * 2011-05-06 2012-11-15 University Of North Carolina At Chapel Hill Procédés, systèmes et supports lisibles par ordinateur permettant de détecter un code machine injecté
US8332424B2 (en) 2011-05-13 2012-12-11 Google Inc. Method and apparatus for enabling virtual tags
US8665345B2 (en) 2011-05-18 2014-03-04 Intellectual Ventures Fund 83 Llc Video summary including a feature of interest
US9122993B2 (en) 2013-01-30 2015-09-01 Harris Corporation Parallel multi-layer cognitive network optimization
US9147164B2 (en) 2013-01-30 2015-09-29 Harris Corporation Sequential multi-layer cognitive network optimization
CN103973702A (zh) * 2014-05-23 2014-08-06 浪潮电子信息产业股份有限公司 基于改进的粒子群算法的信息安全防御规则智能部署方法
US9729562B2 (en) 2015-03-02 2017-08-08 Harris Corporation Cross-layer correlation in secure cognitive network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220033110A1 (en) * 2020-07-29 2022-02-03 The Boeing Company Mitigating damage to multi-layer networks
US11891195B2 (en) * 2020-07-29 2024-02-06 The Boeing Company Mitigating damage to multi-layer networks

Also Published As

Publication number Publication date
KR20160106505A (ko) 2016-09-12
US20160261615A1 (en) 2016-09-08
CA2921517A1 (fr) 2016-09-02
TW201707425A (zh) 2017-02-16
CA2921517C (fr) 2018-01-16
CN105939331A (zh) 2016-09-14
TWI631843B (zh) 2018-08-01
CN105939331B (zh) 2018-07-03
KR101852965B1 (ko) 2018-04-27
US9729562B2 (en) 2017-08-08
EP3065376A1 (fr) 2016-09-07

Similar Documents

Publication Publication Date Title
EP3065376B1 (fr) Corrélation de couches croisées dans un réseau cognitif sécurisé
Siriwardhana et al. AI and 6G security: Opportunities and challenges
Rahman et al. Smartblock-sdn: An optimized blockchain-sdn framework for resource management in iot
Veeraiah et al. An approach for optimal-secure multi-path routing and intrusion detection in MANET
US10862918B2 (en) Multi-dimensional heuristic search as part of an integrated decision engine for evolving defenses
US20180309794A1 (en) User interface supporting an integrated decision engine for evolving defenses
CN113748660A (zh) 用于处理指示在经由网络传输的流量中检测到异常的警报消息的方法和装置
Wang et al. When machine learning meets spectrum sharing security: Methodologies and challenges
Tang et al. Cognitive radio networks for tactical wireless communications
Lalropuia et al. A Bayesian game model and network availability model for small cells under denial of service (DoS) attack in 5G wireless communication network
Zhou et al. Jamsa: A utility optimal contextual online learning framework for anti-jamming wireless scheduling under reactive jamming attack
Halabi et al. Towards adaptive cybersecurity for green IoT
Li et al. Secure edge computing in IoT via online learning
Olowononi et al. Deep learning for cyber deception in wireless networks
Ashraf et al. Toward autonomic internet of things: Recent advances, evaluation criteria, and future research directions
Talpur et al. Adversarial attacks against deep reinforcement learning framework in Internet of Vehicles
Sandhu et al. Enhancing dependability of wireless sensor network under flooding attack: a machine learning perspective
Kumar et al. Software defined networks (SDNs) for environmental surveillance: A Survey
Castañares et al. Slice aware framework for intelligent and reconfigurable battlefield networks
Górski et al. A method of trust management in wireless sensor networks
Rughinis et al. TinyAFD: Attack and fault detection framework for wireless sensor networks
Adebayo et al. Cyber deception for wireless network virtualization using stackelberg game theory
Kabdjou et al. Improving quality of service and HTTPS DDoS detection in MEC environment with a cyber deception-based architecture
Pathak Adaptive quality of service and trust based lightweight secure routing algorithm for dense wireless sensor networks
Oliveira et al. Developing Attack Defense Ideas for Ad Hoc Wireless Networks

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20160224

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

RIC1 Information provided on ipc code assigned before grant

Ipc: H04L 12/24 20060101ALI20170411BHEP

Ipc: H04L 29/12 20060101ALI20170411BHEP

Ipc: H04L 29/06 20060101ALI20170411BHEP

Ipc: H04L 12/707 20130101ALI20170411BHEP

Ipc: H04L 29/08 20060101AFI20170411BHEP

INTG Intention to grant announced

Effective date: 20170426

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 932981

Country of ref document: AT

Kind code of ref document: T

Effective date: 20171015

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602016000434

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171227

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20170927

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 932981

Country of ref document: AT

Kind code of ref document: T

Effective date: 20170927

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171227

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171228

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20180127

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602016000434

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

26N No opposition filed

Effective date: 20180628

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20180228

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180224

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20181031

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180224

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180228

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180228

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190228

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190228

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180224

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170927

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20160224

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170927

REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 602016000434

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: H04L0029080000

Ipc: H04L0065000000

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230525

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20240228

Year of fee payment: 9

Ref country code: GB

Payment date: 20240227

Year of fee payment: 9