CN111224966A - Optimal defense strategy selection method based on evolutionary network game

Optimal defense strategy selection method based on evolutionary network game

Info

Publication number
CN111224966A
CN111224966A
Authority
CN
China
Prior art keywords
strategy
defense
attack
node
evolution
Prior art date
Legal status
Granted
Application number
CN201911401396.XA
Other languages
Chinese (zh)
Other versions
CN111224966B (en)
Inventor
刘小虎
张玉臣
张恒巍
刘璟
李来强
于志超
吕文雷
Current Assignee
Information Engineering University of PLA Strategic Support Force
Original Assignee
Information Engineering University of PLA Strategic Support Force
Priority date
Filing date
Publication date
Application filed by Information Engineering University of PLA Strategic Support Force
Priority to CN201911401396.XA
Publication of CN111224966A
Application granted
Publication of CN111224966B
Expired - Fee Related

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 - Network architectures or network communication protocols for network security
    • H04L63/20 - Network architectures or network communication protocols for network security for managing network security; network security policies in general
    • H04L63/205 - Network architectures or network communication protocols for network security for managing network security; network security policies in general involving negotiation or determination of the one or more network security mechanisms to be used, e.g. by negotiation between the client and the server or between peers or by selection according to the capabilities of the entities involved
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 - Machine learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 - Computing arrangements using knowledge-based models
    • G06N5/04 - Inference or reasoning models
    • G06N5/042 - Backward inferencing

Abstract

The invention belongs to the field of network security, and particularly relates to an optimal defense strategy selection method based on an evolutionary network game.

Description

Optimal defense strategy selection method based on evolutionary network game
Technical Field
The invention belongs to the field of network security, and particularly relates to an optimal defense strategy selection method based on an evolutionary network game.
Background
Currently, network attack and defense exhibit increasingly intense confrontation, increasingly complex confrontation scenarios, and increasingly diverse technical means. In particular, as network attacks become more automated and intelligent, their continuity and dynamism steadily increase. Static defense strategies based on matching specific rules against attack signatures can no longer cope effectively with frequent and varied network attacks. Network security is dynamic rather than static: the defense strategy should evolve as the attack-defense confrontation progresses, and should maximize the defender's own payoff under constraints of resources, capability and preference as personnel, time and situation change. Network security is also relative rather than absolute: selecting defense strategies of different levels for different attack scenarios, so that the expected security loss is minimized, has become a key factor affecting the effectiveness of defense measures.
In the traditional network defense decision process, strategies are mostly compared and analyzed from the defender's perspective and an optimal defense strategy is obtained by overall optimization; because few factors relating to the attacker's strategy are considered, the attack-defense confrontation relationship is insufficiently captured. In fact, the essence of network security is confrontation: attack and defense strategies constrain and influence each other, and the defense strategy selection problem must be studied from the standpoint of attack-defense opposition. Game theory is a theoretical tool for studying interdependence and competition among decision-making agents. It closely matches real network attack and defense in essential characteristics such as opposed objectives, strategy dependence and non-cooperative relationships, and some scholars have applied it to the analysis of attack-defense behavior and to strategy selection. Research on network attack-defense modeling and analysis based on classical game theory can be divided into four types according to the game's information set and the timing of moves: static games with complete information, dynamic games with complete information, static games with incomplete information, and dynamic games with incomplete information. Classical game theory generally assumes that the players are completely rational, with unlimited information-processing and computing capability, and that they neither make mistakes in the decision process nor are influenced by others. In reality this assumption is difficult to satisfy: the rationality of both the attacker and the defender is bounded rather than complete, which weakens the theoretical value and guiding role of classical game models. Therefore, an effective game model and analysis method must be constructed for the bounded rationality of the attacker and defender in practice.
Disclosure of Invention
The invention aims to provide an optimal defense strategy selection method based on an evolutionary network game. The learning mechanism is improved based on the network topology: a learning object set is established according to the learning range of the in-office players, a Fermi function is adopted to calculate the probability of a strategy transferring to a learning object, and random noise is used to describe the degree of irrational influence in the learning process. On this basis, a network attack-defense evolutionary network game model is constructed, the attack-defense strategy evolution process is analyzed, and the evolutionary network game equilibrium is solved; the method better conforms to network attack-defense reality and has better practical guidance value.
In order to solve the technical problems, the invention adopts the following technical scheme:
the invention provides an optimal defense strategy selection method based on an evolutionary network game, which comprises the following steps:
establishing a learning object set according to the learning range of people in the bureau, calculating the transition probability of a strategy to a learning object by adopting a Fermi function, and describing the degree of irrational influence in the learning process by utilizing random noise;
based on the method, a network attack and defense evolution network game model is constructed, and an attack and defense strategy evolution process is analyzed;
and solving the evolutionary network game balance, and selecting a defense strategy by a defense party.
Further, the learning object nodes jointly form a learning object set of people in the bureau; the learning object node is a node which can interact with a local person and carry out strategy transfer according to a specific probability within the learning ability range of the local person.
Further, a person in a bureau only interacts with the learning object node, and after comparing profits, the strategy of the person is transferred to the dominant strategy with a certain probability, and then the learning process is expressed by a Fermi function:
W(s_x←s_y) = 1/(1 + exp((U_x - U_y)/λ))
wherein U_x and U_y respectively represent the incomes of node x and node y, and λ is the random noise coefficient with λ > 0; in the network attack-defense evolutionary network game, the random noise coefficient λ represents the degree of irrational influence in the learning process, and the larger λ is, the stronger the irrationality of the in-office players; when λ → +∞, W(s_x←s_y) → 0.5, indicating that the node x strategy is transferred to the node y strategy completely at random; when λ → 0, the evolutionary network game degenerates into a completely rational game and no learning mechanism exists.
Further, a network attack-defense evolutionary network game model is constructed and represented by a quintuple: ENGM = (N, S, P, λ, U), where N denotes the in-office player space, S denotes the strategy space, P denotes the belief space, λ denotes the random noise coefficient, and U denotes the income space.
Further, the process of solving the evolutionary network game equilibrium is as follows:
calculating the income U_A of the attacker in-office player node;
calculating the income U_D of the defender in-office player node;
Calculating expected income of the human nodes in the attacking and defending party bureau according to the attacking party policy density and the defending party policy density;
calculating the group trend node income of the attacking and defending party;
adopting a Fermi function to calculate the strategy transfer probability of the man-in-office node;
solving an attack and defense group strategy evolution equation to obtain a balance point;
refining the balance points and screening an evolution stability strategy.
Further, the income of the attacker in-office player node is U_A = DL - AC; the income of the defender in-office player node is U_D = DL - DC; where DL represents the defense loss, AC represents the attack cost, and DC represents the defense cost.
Further, suppose that the strategies selectable by the attacker in-office player node are an enhanced attack strategy and a common attack strategy, denoted S_A = (S_A1, S_A2), and the strategies selectable by the defender in-office player node are an enhanced defense strategy and a common defense strategy, denoted S_D = (S_D1, S_D2);
the density of defender in-office player nodes selecting strategy S_D1 is set as p,
p = n / N
where n is the number of in-office player nodes selecting strategy S_D1 and N is the total number of in-office player nodes in the defender group; the density of strategy S_D2 is then 1 - p;
similarly, the density of attacker in-office player nodes selecting strategy S_A1 is set as q,
q = m / M
where m is the number of in-office player nodes selecting strategy S_A1 and M is the total number of in-office player nodes in the attacker group; the density of strategy S_A2 is then 1 - q.
Further, for the defender in-office player node, the expected benefit of selecting strategy S_D1 is U_d = qU_D1 + (1 - q)U_D2; for the attacker in-office player node, the expected benefit of selecting strategy S_A1 is U_a = pU_A1 + (1 - p)U_A3.
Further, the trend node represents the general trend and direction of the evolution of the attack and defense group;
the attacker group trend node income is
Ū_A = q[pU_A1 + (1 - p)U_A3] + (1 - q)[pU_A2 + (1 - p)U_A4];
the defender group trend node income is
Ū_D = p[qU_D1 + (1 - q)U_D2] + (1 - p)[qU_D3 + (1 - q)U_D4].
Furthermore, the learning behavior of the in-office player nodes causes the group strategy densities to change dynamically over time, and the rate of change of the group strategy density characterizes the group evolution state; differentiating the S_D1 strategy density p and the S_A1 strategy density q with respect to time t gives dp/dt and dq/dt, which are defined as the attack-defense group strategy evolution equations and combined into a system of equations (the explicit system is shown as an image in the original publication).
When the rate of change of the strategy densities in the attack and defense groups is 0, the game process reaches an evolutionarily stable state, and at that moment the system satisfies dp/dt = 0 and dq/dt = 0;
through calculation, the equation set has five groups of solutions, wherein an evolution stable equilibrium point exists.
Compared with the prior art, the invention has the following advantages:
the evolutionary game is based on the assumption that people in the game have limited rationality, the interactive behaviors of people in the game are described through a learning mechanism, and the evolutionary game has theoretical advantages when the network security problem in the real society is modeled and researched. The current network security evolution game model generally adopts a replication dynamic learning mechanism, and assumes that the interaction among people in all the offices in a group conforms to the characteristics of uniform mixing distribution, however, in the actual network attack and defense scene, the learning ability of people in the local is limited, and the people in other local in the group can be interacted only in a limited range, therefore, the invention improves the learning mechanism based on the network topology structure, establishing a learning object set according to the learning range of people in the bureau, breaking through the assumption of uniform mixing and interaction of people in all bureaus in a group, conforming to the objective reality of limited learning capability of people in a network defense and attack bureau, adopting a Fermi function to calculate the transition probability of a strategy to a learning object, describing the degree of irrational influence in the learning process by using random noise, and reflecting the process that the strategy of people in the bureau is transferred to an advantage strategy with higher probability and is gradually optimized in the defense and attack process; on the basis, a network attack and defense evolution network game model is constructed, an attack and defense strategy evolution process is analyzed, an evolutionary network game equilibrium is solved, the evolutionary network game model can more accurately describe and depict the dynamic evolution, diffusion and stable trend process of an attack and defense strategy in a group confrontation scene, a modeling analysis result is more close to the essential law of network attack and defense, and network attack and defense behaviors can be more realistically explained and predicted to guide defense practice.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flowchart of an optimal defense strategy selection method based on an evolutionary network game in an embodiment of the present invention;
FIG. 2 is a diagram of the evolutionary dynamics process;
FIG. 3 is an evolved network gaming tree;
FIG. 4 is a tree of attack and defense games;
FIG. 5 is a graph of the evolution of the probability densities with t for solution F1;
FIG. 6 is a graph of the evolution of the probability densities with t for solution F2;
FIG. 7 is a graph of the evolution of the probability densities with t for solution F3;
FIG. 8 is a graph of the evolution of the probability densities with t for solution F4;
FIG. 9 is a graph of policy transition probability W variation;
FIG. 10 is a three-dimensional graph of the variation of dp/dt with p and q.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer and more complete, the technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention, and based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative efforts belong to the scope of the present invention.
Example one
As shown in fig. 1, the optimal defense strategy selection method based on the evolutionary network game in the embodiment of the present invention includes the following steps:
step S101, establishing a learning object set according to the learning range of people in the bureau, calculating the transition probability of a strategy to a learning object by adopting a Fermi function, and describing the degree of irrational influence in the learning process by utilizing random noise;
step S102, constructing a network attack and defense evolution network game model, and analyzing an attack and defense strategy evolution process;
and S103, solving the evolutionary network game balance, and selecting a defense strategy by a defense party.
Firstly, carrying out analysis on an evolutionary network game process and designing an evolutionary learning mechanism:
a) evolutionary network gaming process analysis
Network security is a dynamic process in which the attacker and defender interact and influence each other, and its state is determined by the strategies of both sides. As members of society, attackers and defenders usually do not exist in isolation; they are connected through social relationships and form an attacker group and a defender group with a certain network topology. In the continuous dynamic game between the attacker group and the defender group, driven by the learning mechanism and influenced by differences in game income, the in-office players continuously learn from other players in the group, so that the probability of selecting a low-income strategy becomes lower and lower, the probability of selecting a high-income strategy becomes higher and higher, and the dominant strategy gradually diffuses through the group and tends to stabilize.
In network attack and defense practice, owing to limitations of resources, ability and preference, the interaction range and learning ability of an in-office player are usually limited, and the player can only learn from surrounding players. Therefore, the learning mechanism is improved based on the network topology: the in-office individuals are regarded as nodes in a social relationship network, and the connections among them are regarded as network links; on this basis, the evolutionary network game is adopted to model the network attack-defense scenario, a defense strategy selection algorithm is designed, and the dynamic evolution, diffusion and stabilization of attack and defense strategies in a group confrontation scenario are analyzed.
A learning object node is defined as a node within the learning capability range of the in-office player; the player can interact with it and transfer strategy with a specific probability. The learning object nodes together form the player's learning object set. Since the average distance between people in a social relationship network is 6, the learning range of an in-office player is defined as 3, and the learning object set consists of the nodes whose network distance from the player's node is less than or equal to 3.
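As an illustration of how such a learning object set can be obtained from a group topology, the following Python sketch collects the nodes whose network distance from a given in-office player node is at most 3 (the patent does not prescribe an implementation and its own experiments use Matlab; the random graph and node labels here are hypothetical):

    import networkx as nx

    def learning_object_set(graph, node, learning_range=3):
        # Nodes within `learning_range` hops of `node` (the node itself excluded):
        # these are the nodes the in-office player can interact with and learn from.
        distances = nx.single_source_shortest_path_length(graph, node, cutoff=learning_range)
        return {n for n, d in distances.items() if 0 < d <= learning_range}

    # Hypothetical example: a small random social-relationship network.
    g = nx.erdos_renyi_graph(n=50, p=0.08, seed=1)
    print(learning_object_set(g, node=0))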
The learning behavior of the in-office player nodes causes strategies to be adjusted and optimized, which in turn drives the evolution of the state of the network attack-defense system. The evolution of the system state changes the attack and defense incomes, which again influences the learning behavior of the player nodes. In general, because the players are not completely rational, the system cannot reach an evolutionarily stable equilibrium through a single learning step; it keeps learning and gradually evolves over time until the group strategy converges to an evolutionarily stable strategy. The evolutionary dynamics are shown in FIG. 2.
b) Evolutionary learning mechanism design
Learning is the intrinsic motivation for population evolution. In existing research on network attack-defense evolutionary games, the replicator dynamic learning mechanism is the most widely applied, and its core is the replicator dynamic equation, which calculates the dynamic rate of change over time of the probability x_i(t) that a particular strategy i in the population is selected:
dx_i(t)/dt = x_i(t) [u_i(t) - ū(t)]
wherein x_i(t) represents the probability that the group selects strategy i at time t; u_i(t) represents the income obtained by a player selecting strategy i at time t; and ū(t) represents the average income of the players in the group over the different strategies.
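For comparison, the replicator dynamic equation above can be illustrated with a minimal numerical sketch (the strategy payoffs are illustrative values, not taken from the patent):

    def replicator_step(x, u, dt=0.01):
        # One Euler step of dx_i/dt = x_i * (u_i - u_bar), with u_bar the average payoff.
        u_bar = sum(xi * ui for xi, ui in zip(x, u))
        x_new = [xi + dt * xi * (ui - u_bar) for xi, ui in zip(x, u)]
        total = sum(x_new)                      # renormalise against numerical drift
        return [xi / total for xi in x_new]

    x = [0.5, 0.5]                              # initial strategy probabilities
    u = [3.0, 1.0]                              # illustrative payoffs of the two strategies
    for _ in range(500):
        x = replicator_step(x, u)
    print(x)                                    # the higher-payoff strategy approaches density 1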
The replicator dynamic learning mechanism assumes that the probability of interactive learning does not differ between individuals, and it is particularly suited to population evolution with uniformly mixed interaction characteristics. However, the learning ability and resources of the in-office players are limited: whether in the attacker group or the defender group, a player cannot interact indiscriminately with all other players. Meanwhile, players' decisions have a certain randomness and irrationality. Therefore, using the replicator dynamic learning mechanism to characterize the evolution process of the attack and defense groups has certain limitations.
From the analysis of the evolutionary network game process in a), if it is assumed that an in-office player interacts only with its learning object nodes and, after comparing incomes, transfers its strategy to the dominant strategy with a certain probability, then the learning process matches the "pairwise comparison" idea of the Fermi function. The replicator dynamic learning mechanism can therefore be improved on the basis of the learning object set. Assuming that in-office player node y is a learning object node of in-office player node x, the probability W(s_x←s_y) that node x's strategy is transferred to node y's strategy is given by the Fermi function:
W(s_x←s_y) = 1/(1 + exp((U_x - U_y)/λ))
wherein U_x and U_y respectively represent the incomes of node x and node y, and λ is the random noise coefficient with λ > 0.
In the network attack-defense evolutionary network game, the random noise coefficient λ represents the degree of irrational influence in the learning process; the larger λ is, the stronger the players' irrationality. When λ → +∞, W(s_x←s_y) → 0.5, indicating that node x's strategy is transferred to node y's strategy completely at random; when λ → 0, the evolutionary network game degenerates into a completely rational game and no learning mechanism exists.
When U_x = U_y, W(s_x←s_y) = 0.5, indicating that node x adopts node y's strategy with probability 0.5. If U_x > U_y, then W(s_x←s_y) < 0.5, and as |U_x - U_y| increases, W(s_x←s_y) gradually approaches 0, indicating that node x is less and less likely to adopt node y's strategy. If U_x < U_y, then W(s_x←s_y) > 0.5, and as |U_x - U_y| increases, W(s_x←s_y) gradually approaches 1, indicating that node x is more and more likely to adopt node y's strategy.
The transition probability W(s_x←s_y) of an in-office player node's strategy toward a learning object node is closely related to income and is affected by the degree of irrationality; it is consistent with the process in attack-defense confrontation whereby low-income strategies gradually evolve toward high-income strategies, and it can characterize the evolution mechanism of group strategies.
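A minimal Python sketch of this pairwise-comparison rule, assuming the Fermi form reconstructed above (the income and noise values are illustrative):

    import math

    def fermi_transfer_probability(u_x, u_y, lam):
        # W(s_x <- s_y): probability that node x adopts the strategy of its
        # learning object y, given incomes u_x, u_y and noise coefficient lam > 0.
        return 1.0 / (1.0 + math.exp((u_x - u_y) / lam))

    print(fermi_transfer_probability(5.0, 5.0, 1.0))     # equal incomes  -> 0.5
    print(fermi_transfer_probability(2.0, 8.0, 1.0))     # y much better  -> close to 1
    print(fermi_transfer_probability(8.0, 2.0, 1.0))     # x much better  -> close to 0
    print(fermi_transfer_probability(2.0, 8.0, 1000.0))  # very large lam -> close to 0.5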
The step S102 of constructing a network attack and defense evolution network game model specifically includes:
the long-term continuous learning behavior of the local human nodes can cause the change of the topology of the swarm network, but the change speed of the topology of the swarm network is far slower than the strategy transfer speed of the local human nodes. Therefore, the group network topology can be regarded as static, and the influence of a human learning mechanism in an office on the evolution of the attack and defense group strategy is intensively studied.
The evolutionary network game model (ENGM) is represented by a quintuple: ENGM = (N, S, P, λ, U).
① N = (N_A, N_D) is the in-office player space. N_A and N_D respectively represent the attacker group and the defender group of the evolutionary network game. Both the attack and defense groups contain a plurality of in-office player nodes.
② S = (S_A, S_D) is the strategy space. S_A and S_D respectively represent the attacker strategies and the defender strategies. The players in both the attack and defense groups have a plurality of selectable strategies: (S_A1, S_A2, …, S_Ai, …, S_Am) is the set of m attack strategies; (S_D1, S_D2, …, S_Dk, …, S_Dn) is the set of n defense strategies, with m, n ∈ N and m, n ≥ 2.
③ P = (P_A, P_D) is the belief space. P_A represents the attacker beliefs, describing the probability that an attacker in-office player selects each strategy; P_D represents the defender beliefs, describing the probability that a defender in-office player selects each strategy. The belief space corresponds to the strategy space: the set of attacker beliefs is (P_A1, P_A2, …, P_Ai, …, P_Am) with P_A1 + P_A2 + … + P_Am = 1, and the set of defender beliefs is (P_D1, P_D2, …, P_Dk, …, P_Dn) with P_D1 + P_D2 + … + P_Dn = 1.
④ λ is the random noise coefficient, representing the degree of irrationality of the in-office players and corresponding to λ in the Fermi function; the higher the players' irrationality, the more random their strategy selection in the evolutionary game process.
⑤ U = (U_A, U_D) is the income space. U_A and U_D respectively represent the attacker income and the defender income. In the evolutionary network game model, income is affected by the strategies of both sides and appears in the form of income combinations; e.g. U = (U_Ai, U_Dk) indicates the attacker income U_Ai and the defender income U_Dk when the attacker adopts strategy S_Ai and the defender adopts strategy S_Dk.
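One possible way of holding the five elements of the quintuple in code is sketched below; the field names, structure and income values are illustrative assumptions rather than anything prescribed by the patent:

    from dataclasses import dataclass
    from typing import Dict, List, Tuple

    @dataclass
    class ENGM:
        # Evolutionary network game model ENGM = (N, S, P, lam, U).
        players: Dict[str, List[str]]            # N: attacker / defender node identifiers
        strategies: Dict[str, List[str]]         # S: attacker / defender strategy sets
        beliefs: Dict[str, List[float]]          # P: selection probabilities, each list sums to 1
        noise: float                             # lam: random noise coefficient
        payoffs: Dict[Tuple[str, str], Tuple[float, float]]  # U: (attacker, defender) income per strategy pair

    model = ENGM(
        players={"attackers": ["a1", "a2", "a3"], "defenders": ["d1", "d2", "d3"]},
        strategies={"attack": ["SA1", "SA2"], "defense": ["SD1", "SD2"]},
        beliefs={"attack": [0.5, 0.5], "defense": [0.5, 0.5]},
        noise=1.0,
        payoffs={("SA1", "SD1"): (22.0, 21.0), ("SA2", "SD1"): (19.0, 18.0),
                 ("SA1", "SD2"): (21.0, 20.0), ("SA2", "SD2"): (23.0, 20.5)},
    )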
From the above definitions, an evolved network gaming tree is obtained as shown in fig. 3.
Step S103, solving evolutionary network game balance, wherein the defense party selects a defense strategy in the following process:
before the evolutionary network game equilibrium solution, the optional strategy of the person node in the office needs to be given first. In order to facilitate theoretical derivation and analysis, the number of attack and defense strategies is simplified without affecting generality. The optional strategies of the human nodes in the attack bureau are an enhanced attack strategy and a common attack strategy which are marked as SA=(SA1,SA2) (ii) a The optional strategies of the human nodes in the defense bureau are an enhanced defense strategy and a common defense strategy which are marked as SD=(SD1,SD2)。
Step S1031, calculating the attacker in-office player node income U_A and the defender in-office player node income U_D.
Quantifying the income of the attack and defense in-office player nodes is the basis for quantitative calculation and game analysis. When implementing network attack and defense actions according to a confrontation strategy, the in-office player nodes consume resources such as manpower, material resources and computation, but also obtain a corresponding security return; income thus has an economic character. For the defender, defense strategy selection must find a balance between cost and return in order to achieve overall optimization. The relevant symbols and descriptions are defined in Table 1.
TABLE 1 Symbols and descriptions
Symbol | Meaning | Measurement method
AC | Attack Cost | Resources the attacker must consume to implement an attack strategy
DC | Defense Cost | Resources the defender consumes to implement a defense strategy
DL | Defense Loss | Loss suffered by the defender from the attacker's attack
The objectives of the attacker and the defender are opposed, so the defense loss DL is taken as the attack return; the loss that the defender avoids by adopting the defense strategy is taken as the defense return and is expressed by the positive value of the defense loss DL. Considering both cost and return, the incomes of the attacker and defender in-office player nodes can be expressed respectively as:
U_A = DL - AC
U_D = DL - DC
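A small worked illustration of these two income formulas (the loss and cost figures are hypothetical):

    DL, AC, DC = 1000.0, 300.0, 450.0   # hypothetical defense loss, attack cost, defense cost
    U_A = DL - AC                       # attacker income: return (defender's loss) minus attack cost
    U_D = DL - DC                       # defender income as defined above: avoided loss minus defense cost
    print(U_A, U_D)                     # 700.0 550.0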
step S1032, calculating expected income of the human nodes in the attacking and defending party bureau according to the attacking party policy density and the defending party policy density;
and defining strategy density, wherein the strategy density is the ratio of the number of the local personnel nodes for selecting a specific strategy in the group to the total number of the local personnel nodes in the group. From the individual perspective, the person node in the office selects the strategy according to the beliefs; from the group, a large number of person nodes in the bureau are represented as strategy density according to the belief selection strategy. Thus, the evolution state of the population is related to the variation of the strategy density over time.
By definition, the density of defender in-office player nodes selecting strategy S_D1 is p,
p = n / N
where n is the number of in-office player nodes selecting strategy S_D1 and N is the total number of in-office player nodes in the defender group. According to mean-field approximation theory, any defender in-office player node selects its strategy according to the strategy density: its belief of selecting S_D1 is p, and its belief of selecting S_D2 is 1 - p.
Similarly, the density of attacker in-office player nodes selecting strategy S_A1 is q,
q = m / M
where m is the number of in-office player nodes selecting strategy S_A1 and M is the total number of in-office player nodes in the attacker group. According to mean-field approximation theory, any attacker in-office player node selects its strategy according to the strategy density: its belief of selecting S_A1 is q, and its belief of selecting S_A2 is 1 - q.
And the income of the person nodes in the bureau is jointly determined by the attack and defense strategy combination. Under the condition that the person node in the attacking party office and the person node in the defending party office respectively have two types of selectable strategies, the attacking and defending game tree is shown in figure 4.
For the defender in-office player node, the expected benefit of selecting strategy S_D1 is U_d = qU_D1 + (1 - q)U_D2; for the attacker in-office player node, the expected benefit of selecting strategy S_A1 is U_a = pU_A1 + (1 - p)U_A3.
Step S1033, calculating the gains of the attack and defense party group trend nodes;
and defining a trend node, wherein the trend node is a virtual node and represents the general trend and direction of the evolution of the attack and defense group. The income of the trend node is expected income under different strategy density combinations of the attack and defense group and dynamically changes along with the evolution of the strategy of the group.
The attacker group trend node income:
Ū_A = q[pU_A1 + (1 - p)U_A3] + (1 - q)[pU_A2 + (1 - p)U_A4]
The defender group trend node income:
Ū_D = p[qU_D1 + (1 - q)U_D2] + (1 - p)[qU_D3 + (1 - q)U_D4]
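Reading the trend-node income as the density-weighted average of the in-office players' expected incomes, which is the reading used to reconstruct the formulas above, the expected incomes and trend-node incomes can be computed as in the following sketch; the income values and the mapping of payoff indices to strategy combinations are illustrative assumptions:

    def expected_and_trend_incomes(p, q, UA, UD):
        # Index convention assumed here: 1 <-> (SA1, SD1), 2 <-> (SA2, SD1),
        # 3 <-> (SA1, SD2), 4 <-> (SA2, SD2), consistent with U_d and U_a above.
        Ud1 = q * UD[1] + (1 - q) * UD[2]     # defender expected income for SD1
        Ud2 = q * UD[3] + (1 - q) * UD[4]     # defender expected income for SD2
        Ua1 = p * UA[1] + (1 - p) * UA[3]     # attacker expected income for SA1
        Ua2 = p * UA[2] + (1 - p) * UA[4]     # attacker expected income for SA2
        Ud_trend = p * Ud1 + (1 - p) * Ud2    # defender group trend-node income
        Ua_trend = q * Ua1 + (1 - q) * Ua2    # attacker group trend-node income
        return Ud1, Ud2, Ua1, Ua2, Ud_trend, Ua_trend

    UA = {1: 22.0, 2: 19.0, 3: 21.0, 4: 23.0}   # illustrative attacker incomes
    UD = {1: 21.0, 2: 18.0, 3: 20.0, 4: 20.5}   # illustrative defender incomes
    print(expected_and_trend_incomes(p=0.4, q=0.6, UA=UA, UD=UD))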
step S1034, calculating the strategy transition probability of the human nodes in the bureau by adopting a Fermi function, and solving an attack and defense group strategy evolution equation to obtain a balance point;
the learning behavior of the person nodes in the bureau leads the group strategy density to dynamically change along with time, and the dynamic change rate of the group strategy density can represent the group evolution state. Respectively mixing SD1Policy densities p and SA1The strategy density q is derived from the time t and is defined as an attack and defense group strategy evolution equation, and an equation set formed by simultaneous is shown as a formula 1:
Figure BDA0002347570290000123
to facilitate the solution of the system of equations, an auxiliary function tanhz is introduced,
Figure BDA0002347570290000124
order to
Figure BDA0002347570290000131
Equation 2 can be converted to
Figure BDA0002347570290000132
Combining equation 1 and equation 3, one can obtain
Figure BDA0002347570290000133
When the change rate of the strategy density in the attack and defense group is 0, the game process reaches an evolution stable state.
At this time, the equation set satisfies the following condition:
Figure BDA0002347570290000134
through calculation, the equation set of formula 5 has five groups of solutions, and five evolution stable equilibrium points may exist correspondingly.
And step S1035, refining the balance points and screening an evolution stability strategy.
(1) F1: (p, q) = (0, 0), indicating that the in-office player nodes of the defender group all select the pure strategy S_D2 and the S_D1 strategy does not exist, and the in-office player nodes of the attacker group all select the pure strategy S_A2 and the S_A1 strategy does not exist. The evolution of p and q over time for this solution is shown in FIG. 5.
(2) F2: (p, q) = (0, 1), indicating that the in-office player nodes of the defender group all adopt the pure strategy S_D2 and the S_D1 strategy does not exist, while the in-office player nodes of the attacker group all adopt the pure strategy S_A1 and the S_A2 strategy does not exist. The evolution of p and q over time for this solution is shown in FIG. 6.
(3) F3: (p, q) = (1, 0), indicating that the in-office player nodes of the defender group all select the pure strategy S_D1 and the S_D2 strategy does not exist, while the in-office player nodes of the attacker group all adopt the pure strategy S_A2 and the S_A1 strategy does not exist. The evolution of p and q over time for this solution is shown in FIG. 7.
(4) F4: (p, q) = (1, 1), indicating that the in-office player nodes of the defender group all select the pure strategy S_D1 and the S_D2 strategy does not exist, while the in-office player nodes of the attacker group all adopt the pure strategy S_A1 and the S_A2 strategy does not exist. The evolution of p and q over time for this solution is shown in FIG. 8.
(5) F5: the strategy transition probabilities from the in-office player nodes to the trend nodes in the attack and defense groups are equal to the strategy transition probabilities from the trend nodes to the in-office player nodes, and the groups evolve into a dynamic equilibrium state. Solving F5 using Matlab 2018 yields the corresponding strategy densities, denoted p* and q* respectively.
According to evolutionary game theory, F1, F2, F3 and F4 are saddle points and F5 is a center point, and an evolutionarily stable strategy exists in the evolutionary network game model. Combined with different initial states of the attack and defense groups, this can be used to predict the attack strategies an attacker may implement and to guide the defender in selecting defense strategies.
The kernel of the defense strategy algorithm is the Fermi function: the in-office player nodes use it to calculate the strategy transition probability toward their learning object nodes, which describes the practical process in which players, constrained by limited learning capability, continually adjust and optimize their strategies by interacting with and learning from the learning object nodes, and which explains the reason and mechanism by which the dominant strategy gradually diffuses through the group. Equation 1 calculates the rate of change of the attack and defense group strategy densities over time and characterizes the group evolution state. Equations 3, 4 and 5 together give the process of solving for the evolutionarily stable strategy.
The invention is compared and analyzed with the existing literature from 5 aspects of the local population learning range, the learning mechanism, the random interference, the evolution stability strategy solving process detail degree, the application scene and the like, and the result is shown in table 2.
Table 2: Comparative analysis of related literature (the table is presented as an image in the original publication)
[1] Huang, J., Zhang, H., Wang, J., et al. Defense strategies selection based on attack-defense evolutionary game model[J]. Journal on Communications, 2017(1): 168-176. DOI: 10.11959/j.issn.1000-436x.2017019
[2] Abass, A. A., Xiao, L., Mandayam, N. B., & Gajic, Z. (2017). Evolutionary Game Theoretic Analysis of Advanced Persistent Threats Against Cloud Storage. IEEE Access, 5, 8482-8491. DOI: 10.1109/ACCESS.2017.2691326
[3] Boudko, S., & Abie, H. (2018). An evolutionary game for integrity attacks and defences for advanced metering infrastructure. ECSA. DOI: 10.1145/3241403.3241463
[4] Yang, Y., Che, B., Zeng, Y., Cheng, Y., & Li, C. (2019). MAIAD: A Multistage Asymmetric Information Attack and Defense Model Based on Evolutionary Game Theory. Symmetry, 11, 215. DOI: 10.3390/sym11020215
[5] Huang, J., Zhang, H. Improving replicator dynamic evolutionary game model for selecting optimal defense strategies[J]. Journal on Communications, 2018(1): 1-13. DOI: 10.11959/j.issn.1000-436x.2018010
[6] Hu, H., Liu, Y., Zhang, H., & Pan, R. (2018). Optimal Network Defense Strategy Selection Based on Incomplete Information Evolutionary Game. IEEE Access, 6, 29806-29821. DOI: 10.1109/access.2018.2841885
The learning range mainly examines whether the interaction objects in a model are all in-office players or a learning object set; the learning mechanism reflects which mechanism the model adopts to describe the players' learning behavior; random interference mainly examines whether the model considers the players' degree of irrationality; the ESS solving process mainly examines how detailed the derivation of the evolutionarily stable equilibrium is, since a more detailed solving process gives greater practical guidance; and the application scenarios are distinguished according to the objects to which the model is applied. Most of the literature does not consider the group network topology, assumes that players can interact with all players in the group in a uniformly mixed manner, adopts the replicator dynamic learning mechanism to describe players' learning and imitation behavior, does not consider the limited learning capability of the players in network attack and defense, cannot describe the influence of random interference in the players' selection process, and solves the ESS relatively simply, so its practical guidance value for defense strategy selection is limited. The invention improves the learning mechanism based on the network topology, establishes a learning object set according to the players' learning range, adopts the Fermi function to calculate the probability of a strategy transferring to a learning object, and uses random noise to describe the degree of irrational influence in the learning process. On this basis, a network attack-defense evolutionary network game model is constructed, the attack-defense strategy evolution process is analyzed, and the evolutionary equilibrium is solved. The comparison shows that the network security game model based on the evolutionary network game better conforms to network attack-defense practice and has better practical guidance value.
In order to verify the effectiveness of the model and the method, simulation experiments of strategy transition probability of the local population and strategy evolution process of the defense population are respectively carried out. The local man strategy transfer probability simulation experiment is used for verifying whether a learning mechanism in the model accords with the network attack and defense reality or not and analyzing the relation between the local man strategy transfer probability W and the random noise coefficient lambda; the defense party group strategy evolution process simulation experiment is used for researching the relation between the time-varying condition of the defense party group strategy and the strategy density of the attack and defense group, and analyzing and providing the regular understanding that the defense group strategy dynamically evolves, spreads and tends to be stable in confrontation under different initial states.
A. Simulation experiment of strategy transfer probability of local population
In the improved learning mechanism of the evolutionary network game model based on the network topology, the strategy transition probability from an in-office player node to a learning object node is calculated by the Fermi function, whose parameters are the incomes of the player nodes and the random noise coefficient. In the simulation experiment, the value interval of U_x and U_y is therefore set to (0, 10), and the random noise coefficient λ is set to 0.1, 0.5, 1 and 5 respectively. Matlab 2018a software is used to plot, for these 4 different values of λ, the strategy transition probability W from node x to learning object node y as it varies with U_x and U_y, as shown in FIG. 9.
FIG. 9 contains 4 subgraphs a, b, c and d, which respectively show how the strategy transition probability W varies with U_x and U_y under the 4 conditions λ = 0.1, 0.5, 1 and 5.
Analyzing the common trend of the 4 subgraphs yields a conclusion about the relation between W and the relative incomes of the players: the lower U_x is relative to U_y, the larger W is; the higher U_x is relative to U_y, the smaller W is. The simulation results accord with the evolution rule in real network attack-defense confrontation whereby players gradually shift from low-income strategies to high-income strategies.
Comparing the different trends of the 4 subgraphs yields a conclusion about the relation between W and the random noise coefficient λ: the larger λ is, the closer W is to 0.5 and the smaller its variation, and the more random the strategy transfer of the in-office player node becomes. The simulation result is consistent with the irrational, random selection observed among players in real network attack-defense confrontation.
The strategy transition probability simulation experiment of the in-office player nodes shows that the learning mechanism accords with actual network attack and defense and its evolution rules, verifying the correctness and effectiveness of the learning mechanism in the evolutionary network game model.
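The surface of FIG. 9 can be reproduced qualitatively with a short script along the following lines (a Python sketch; the original experiment used Matlab 2018a, and the Fermi form is the one reconstructed above):

    import numpy as np

    def fermi_w(u_x, u_y, lam):
        return 1.0 / (1.0 + np.exp((u_x - u_y) / lam))

    u = np.linspace(0.0, 10.0, 41)             # income interval (0, 10) as in the experiment
    ux, uy = np.meshgrid(u, u)
    for lam in (0.1, 0.5, 1.0, 5.0):           # the four noise coefficients considered
        w = fermi_w(ux, uy, lam)
        print(f"lambda={lam}: min W = {w.min():.3f}, max W = {w.max():.3f}, "
              f"W at U_x = U_y: {fermi_w(5.0, 5.0, lam):.3f}")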
B. Defensive party group strategy evolution process simulation experiment
Owing to the adversarial character of network attack and defense, the strategy evolution process of the defender group is influenced by both attack and defense strategies. To analyze the evolution of the defender group strategy, a simulation experiment of the defender group strategy evolution process is carried out by studying the relationship between the rate of change of the defender strategy density over time, dp/dt, the defense strategy density p, and the attack strategy density q.
The offensive and defensive benefits involved in the offensive and defensive game tree of fig. 4 are assigned according to expert experience and historical data and with reference to the above document [4] [5], as shown in table 3.
TABLE 3 Attack and defense income assignment
Serial number | Attacker income | Defender income
1 | U_A1 = 2213 | U_D1 = 2140
2 | U_A2 = 1970 | U_D2 = 1880
3 | U_A3 = 2250 | U_D3 = 2020
4 | U_A4 = 2150 | U_D4 = 2060
According to the attack and defense income values and the evolutionary network game equilibrium solution F5, p* = 0.7 and q* = 0.6 can be calculated. In the simulation experiment, the value intervals of p and q are set to (0, 1) with a step of 0.025, so that 1600 different combinations of p and q can be simulated. Matlab 2018 software is used to simulate how the rate of change of the defender group strategy density over time, dp/dt, varies with p and q, as shown in FIG. 10.
FIG. 10 is a three-dimensional plot of dp/dt against p and q; for convenience of observation and analysis, (1) and (2) in the figure show two different sides of the three-dimensional surface.
The defender group strategy density p and the attacker group strategy density q jointly form the initial state (p, q) of the game model. According to the relation between (p, q) and (p*, q*), 4 evolution processes of the defender group strategy can be distinguished:
case ①: p<p*And q is<q*. At this time
Figure BDA0002347570290000183
It is shown that in this case, in the continuous adversarial evolution, the defender population middle office selects strategy SD1The probability is higher and higher, and strategy S is selectedD2The probability is lower and lower, and gradually converges to p ═ p*. At the same time, as p is gradually increased,
Figure BDA0002347570290000184
the value gradually increases, indicating that in this case, p increases at an increasingly faster rate.
Case ②: p < p* and q > q*. At this time dp/dt > 0, which shows that in this case, during continuous adversarial evolution, the in-office players of the defender group select strategy S_D1 with increasing probability and strategy S_D2 with decreasing probability, gradually converging to p = p*. Meanwhile, as p gradually increases, the value of dp/dt gradually decreases, indicating that in this case p increases at a progressively slower rate.
Case ③: p > p* and q < q*. At this time dp/dt < 0, which shows that in this case, during continuous adversarial evolution, the in-office players of the defender group select strategy S_D1 with decreasing probability and strategy S_D2 with increasing probability, gradually converging to p = p*. Meanwhile, as p gradually decreases, the magnitude of dp/dt gradually decreases, indicating that in this case p decreases at a progressively slower rate.
Case ④: p > p* and q > q*. At this time dp/dt < 0, which shows that in this case, during continuous adversarial evolution, the in-office players of the defender group select strategy S_D1 with decreasing probability and strategy S_D2 with increasing probability, gradually converging to p = p*. Meanwhile, as p gradually decreases, the magnitude of dp/dt gradually increases, indicating that in this case p decreases at an increasingly faster rate.
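In the spirit of FIG. 10, dp/dt can be evaluated over a (p, q) grid with the same 0.025 step. The sketch below again relies on the assumed tanh-form dynamic and the illustrative incomes used earlier, so it illustrates the procedure rather than reproducing the patent's exact surface:

    import numpy as np

    def dp_dt(p, q, UD, lam=1.0):
        # Assumed defender dynamic: dp/dt = p(1-p) tanh((U_d(SD1) - U_D_trend)/(2 lam)).
        Ud1 = q * UD[1] + (1 - q) * UD[2]
        Ud2 = q * UD[3] + (1 - q) * UD[4]
        Ud_bar = p * Ud1 + (1 - p) * Ud2
        return p * (1 - p) * np.tanh((Ud1 - Ud_bar) / (2 * lam))

    UD = {1: 21.0, 2: 18.0, 3: 20.0, 4: 20.5}          # illustrative defender incomes
    grid = np.arange(0.025, 1.0, 0.025)                # the 0.025 step used in the experiment
    P, Q = np.meshgrid(grid, grid)
    rate = dp_dt(P, Q, UD)
    print("grid points with dp/dt > 0:", int((rate > 0).sum()))
    print("grid points with dp/dt < 0:", int((rate < 0).sum()))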
From the simulation experiments and result analysis, a regular understanding of the defender group strategy evolution process can be obtained: the evolution process is closely related to the game incomes and is jointly influenced by the initial state of the attack and defense groups' strategy probability densities. Therefore, the defense strategy selection problem must be studied from the perspective of attack-defense confrontation, and the optimal defense strategy determined through evolutionary network game analysis.
Both the network attacker and the defender are social actors with bounded rationality, so studying the network attack-defense problem with game theory requires breaking through the complete-rationality assumption of traditional game theory. Evolutionary game theory relaxes the complete rationality of the players and regards game equilibrium as the result of the players' learning and evolution being gradually optimized. In view of the limited learning capability of the players in network attack and defense, the learning mechanism is improved based on the network topology, a learning object set is established according to the players' learning range, the Fermi function is adopted to calculate the probability of a strategy transferring to the learning object, and random noise is used to describe the degree of irrational influence in the learning process. On this basis, a network attack-defense evolutionary network game model is constructed, the attack-defense strategy evolution process is analyzed, and the evolutionary equilibrium is solved. The effectiveness of the learning mechanism is verified by the player strategy transition probability simulation experiment, and the evolution rules of the defender group strategy under different attack and defense strategy densities are obtained through the simulation experiment on the defender group strategy evolution process. The model can more accurately describe the dynamic evolution, diffusion and stabilization of attack and defense strategies in a group confrontation scenario, the modeling and analysis results are closer to the essential laws of network attack and defense, and network attack-defense behavior can be explained and predicted more realistically to guide defense practice.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it is to be noted that: the above description is only a preferred embodiment of the present invention, and is only used to illustrate the technical solutions of the present invention, and not to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. An optimal defense strategy selection method based on an evolutionary network game is characterized by comprising the following steps:
establishing a learning object set according to the learning range of people in the bureau, calculating the transition probability of a strategy to a learning object by adopting a Fermi function, and describing the degree of irrational influence in the learning process by utilizing random noise;
based on the method, a network attack and defense evolution network game model is constructed, and an attack and defense strategy evolution process is analyzed;
and solving the evolutionary network game balance, and selecting a defense strategy by a defense party.
2. The optimal defense strategy selection method based on the evolutionary network game is characterized in that learning object nodes jointly form a learning object set of people in a bureau; the learning object node is a node which can interact with a local person and carry out strategy transfer according to a specific probability within the learning ability range of the local person.
3. The optimal defense strategy selection method based on the evolutionary network game as claimed in claim 2, wherein a person in a local exchanges only with the learning object node, and transfers the strategy to the dominant strategy with a certain probability after comparing the income, and then the learning process is expressed by a fermi function:
W(s_x←s_y) = 1/(1 + exp((U_x - U_y)/λ))
wherein U_x and U_y respectively represent the incomes of node x and node y, and λ is the random noise coefficient with λ > 0; in the network attack-defense evolutionary network game, the random noise coefficient λ represents the degree of irrational influence in the learning process, and the larger λ is, the stronger the irrationality of the in-office players; when λ → +∞, W(s_x←s_y) → 0.5, indicating that the node x strategy is transferred to the node y strategy completely at random; when λ → 0, the evolutionary network game degenerates into a completely rational game and no learning mechanism exists.
4. The optimal defense strategy selection method based on the evolutionary network game of claim 1, characterized in that a network attack-defense evolutionary network game model is constructed and represented by a quintuple: ENGM = (N, S, P, λ, U), where N denotes the in-office player space, S denotes the strategy space, P denotes the belief space, λ denotes the random noise coefficient, and U denotes the income space.
5. The optimal defense strategy selection method based on the evolutionary network game as claimed in claim 3, wherein the process of solving the evolutionary network game balance is as follows:
calculating the income U_A of the attacker in-office player node;
calculating the income U_D of the defender in-office player node;
Calculating expected income of the human nodes in the attacking and defending party bureau according to the attacking party policy density and the defending party policy density;
calculating the group trend node income of the attacking and defending party;
adopting a Fermi function to calculate the strategy transfer probability of the man-in-office node;
solving an attack and defense group strategy evolution equation to obtain a balance point;
refining the balance points and screening an evolution stability strategy.
6. The optimal defense strategy selection method based on the evolutionary network game as claimed in claim 5, wherein the income of an attacker player node is U_A = DL − AC, and the income of a defender player node is U_D = DL − DC, where DL denotes the defense loss, AC denotes the attack cost, and DC denotes the defense cost.
7. The optimal defense strategy selection method based on the evolutionary network game as claimed in claim 6, characterized in that the strategies selectable by an attacker player node are assumed to be an enhanced attack strategy and a common attack strategy, denoted S_A = (S_A1, S_A2), and the strategies selectable by a defender player node are an enhanced defense strategy and a common defense strategy, denoted S_D = (S_D1, S_D2);
letting the density of defender player nodes selecting strategy S_D1 be p, with
p = n / N,
where n is the number of defender player nodes selecting strategy S_D1 and N is the total number of player nodes in the defender group, so that the density of strategy S_D2 is 1 − p;
similarly, letting the density of attacker player nodes selecting strategy S_A1 be q, with
q = m / M,
where m is the number of attacker player nodes selecting strategy S_A1 and M is the total number of player nodes in the attacker group, so that the density of strategy S_A2 is 1 − q.
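A minimal sketch of the density calculation in claim 7, assuming each group is represented as a list of the strategies its player nodes currently use; the function name strategy_density and the sample groups are illustrative only.

def strategy_density(group_strategies, strategy):
    """Fraction of player nodes in a group currently using `strategy`, e.g. p = n / N."""
    return group_strategies.count(strategy) / len(group_strategies)

# Illustrative groups: N = 5 defender player nodes, M = 4 attacker player nodes.
defender_nodes = ["SD1", "SD2", "SD1", "SD1", "SD2"]
attacker_nodes = ["SA1", "SA1", "SA2", "SA2"]

p = strategy_density(defender_nodes, "SD1")   # n / N = 3 / 5
q = strategy_density(attacker_nodes, "SA1")   # m / M = 2 / 4
print(p, 1 - p, q, 1 - q)                     # 0.6 0.4 0.5 0.5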
8. The optimal defense strategy selection method based on the evolutionary network game as claimed in claim 7, wherein for a defender player node the expected income of selecting strategy S_D1 is U_d = qU_D1 + (1 − q)U_D2, and for an attacker player node the expected income of selecting strategy S_A1 is U_a = pU_A1 + (1 − p)U_A3.
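A sketch of the expected-income calculation in claim 8, under the assumption that U_D1, U_D2, U_A1 and U_A3 are payoff entries of the two sides; their numeric values below are invented for the example, and the indexing follows the claim as written.

def expected_income(opponent_density, income_vs_first, income_vs_second):
    """Expected income of a pure strategy against an opposing group mixing two strategies."""
    return opponent_density * income_vs_first + (1 - opponent_density) * income_vs_second

# Illustrative payoff entries and current densities (not taken from the patent).
U_D1, U_D2 = 4.0, 1.0     # defender incomes entering the SD1 expectation
U_A1, U_A3 = 5.0, 2.0     # attacker incomes entering the SA1 expectation
p, q = 0.6, 0.5           # current densities of SD1 and SA1

U_d = expected_income(q, U_D1, U_D2)   # q*U_D1 + (1 - q)*U_D2 = 2.5
U_a = expected_income(p, U_A1, U_A3)   # p*U_A1 + (1 - p)*U_A3 = 3.8
print(U_d, U_a)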
9. The optimal defense strategy selection method based on the evolutionary network game, characterized in that a trend node represents the general trend and direction of the evolution of an attack or defense group;
the attacker group trend node and its income are defined by the corresponding formulas (equation images FDA0002347570280000031 and FDA0002347570280000032 in the original filing), and the defender group trend node and its income are defined by the corresponding formulas (equation images FDA0002347570280000033 and FDA0002347570280000034 in the original filing).
10. The optimal defense strategy selection method based on the evolutionary network game, characterized in that the learning behavior of the player nodes causes the group strategy densities to change dynamically over time, and the rate of change of the group strategy densities represents the group evolution state; differentiating the density p of strategy S_D1 and the density q of strategy S_A1 with respect to time t gives the attack-defense group strategy evolution equations, which are combined into the following equation system:
(the attack-defense group strategy evolution equation system is given as equation image FDA0002347570280000035 in the original filing)
when the rate of change of the strategy densities in the attack and defense groups is 0, the game process reaches an evolutionarily stable state, at which point the equation system satisfies:
dp/dt = 0,  dq/dt = 0
It can be calculated that the equation system has five groups of solutions, among which an evolutionarily stable equilibrium point exists.
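Because the evolution equation system itself is given only as equation images here, the sketch below is a generic numerical illustration of the stationarity condition in claim 10: it scans the unit square for points where both density change rates vanish. The helper stationary_points and the replicator-style sample dynamics dp_dt and dq_dt are assumptions chosen only to make the example runnable; they are not the patent's own equations.

from typing import Callable, List, Tuple

def stationary_points(dp_dt: Callable[[float, float], float],
                      dq_dt: Callable[[float, float], float],
                      steps: int = 100, tol: float = 1e-9) -> List[Tuple[float, float]]:
    """Grid-scan [0,1]x[0,1] for (p, q) where both strategy-density change rates are ~0."""
    points = []
    for i in range(steps + 1):
        for j in range(steps + 1):
            p, q = i / steps, j / steps
            if abs(dp_dt(p, q)) < tol and abs(dq_dt(p, q)) < tol:
                points.append((p, q))
    return points

# Sample dynamics (illustrative only): SD1 is favoured when q > 0.5, SA1 when p < 0.5.
def dp_dt(p, q):
    return p * (1 - p) * (2 * q - 1)

def dq_dt(p, q):
    return q * (1 - q) * (1 - 2 * p)

print(stationary_points(dp_dt, dq_dt))
# [(0.0, 0.0), (0.0, 1.0), (0.5, 0.5), (1.0, 0.0), (1.0, 1.0)] -- five candidate equilibria,
# whose stability would then be screened to find the evolutionarily stable strategy.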
CN201911401396.XA 2019-12-31 2019-12-31 Optimal defense strategy selection method based on evolutionary network game Expired - Fee Related CN111224966B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911401396.XA CN111224966B (en) 2019-12-31 2019-12-31 Optimal defense strategy selection method based on evolutionary network game

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911401396.XA CN111224966B (en) 2019-12-31 2019-12-31 Optimal defense strategy selection method based on evolutionary network game

Publications (2)

Publication Number Publication Date
CN111224966A true CN111224966A (en) 2020-06-02
CN111224966B CN111224966B (en) 2021-11-02

Family

ID=70830947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911401396.XA Expired - Fee Related CN111224966B (en) 2019-12-31 2019-12-31 Optimal defense strategy selection method based on evolutionary network game

Country Status (1)

Country Link
CN (1) CN111224966B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2271047A1 (en) * 2009-06-22 2011-01-05 Deutsche Telekom AG Game theoretic recommendation system and method for security alert dissemination
CN105682174A (en) * 2016-01-15 2016-06-15 哈尔滨工业大学深圳研究生院 Opportunity network evolution algorithm and device for promoting node cooperation
CN106936855A (en) * 2017-05-12 2017-07-07 中国人民解放军信息工程大学 Network security defence decision-making based on attacking and defending differential game determines method and its device
CN106953879A (en) * 2017-05-12 2017-07-14 中国人民解放军信息工程大学 The cyber-defence strategy choosing method of best response dynamics Evolutionary Game Model
CN107135224A (en) * 2017-05-12 2017-09-05 中国人民解放军信息工程大学 Cyber-defence strategy choosing method and its device based on Markov evolutionary Games
CN107483486A (en) * 2017-09-14 2017-12-15 中国人民解放军信息工程大学 Cyber-defence strategy choosing method based on random evolution betting model
CN107566387A (en) * 2017-09-14 2018-01-09 中国人民解放军信息工程大学 Cyber-defence action decision method based on attacking and defending evolutionary Game Analysis
EP3528462A1 (en) * 2018-02-20 2019-08-21 Darktrace Limited A method for sharing cybersecurity threat analysis and defensive measures amongst a community
CN108833401A (en) * 2018-06-11 2018-11-16 中国人民解放军战略支援部队信息工程大学 Network active defensive strategy choosing method and device based on Bayes's evolutionary Game
CN110460572A (en) * 2019-07-06 2019-11-15 中国人民解放军战略支援部队信息工程大学 Mobile target defence policies choosing method and equipment based on Markov signaling games

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIAOHU LIU, HENGWEI ZHANG, YUCHEN ZHANG, AND LULU SHAO: "Attack Defense Differential Game Model for Network Defense Strategy Selection", IEEE Access *
LI TAO: "Network Defense Strategy Selection Method Based on Dynamic Game Model", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112182485A (en) * 2020-09-22 2021-01-05 华中师范大学 Online knowledge sharing dynamic rewarding method based on evolutionary game
CN112182485B (en) * 2020-09-22 2023-08-18 华中师范大学 Online knowledge sharing dynamic rewarding method based on evolution game
CN113132398A (en) * 2021-04-23 2021-07-16 中国石油大学(华东) Array honeypot system defense strategy prediction method based on Q learning
CN113132398B (en) * 2021-04-23 2022-05-31 中国石油大学(华东) Array honeypot system defense strategy prediction method based on Q learning
CN113225326A (en) * 2021-04-28 2021-08-06 浙江大学 Network attack strategy generator, terminal and storage medium based on specific consumption
CN113225326B (en) * 2021-04-28 2022-05-27 浙江大学 Network attack strategy generator, terminal and storage medium based on specific consumption
CN113315763A (en) * 2021-05-21 2021-08-27 中国人民解放军空军工程大学 Network security defense method based on heterogeneous group evolution game
CN113515675A (en) * 2021-07-26 2021-10-19 中国人民解放军国防科技大学 Method, device and equipment for analyzing and visualizing conflict game based on graph model
CN113515675B (en) * 2021-07-26 2023-06-06 中国人民解放军国防科技大学 Conflict game analysis visualization method, device and equipment based on graph model
CN115017464A (en) * 2022-06-10 2022-09-06 中国南方电网有限责任公司 Risk assessment method and device for power grid suffering from external attack and storage medium
CN115017464B (en) * 2022-06-10 2024-05-03 中国南方电网有限责任公司 Risk assessment method, device and storage medium for power grid suffering from external attack

Also Published As

Publication number Publication date
CN111224966B (en) 2021-11-02

Similar Documents

Publication Publication Date Title
CN111224966B (en) Optimal defense strategy selection method based on evolutionary network game
Bingul Adaptive genetic algorithms applied to dynamic multiobjective problems
CN110099045B (en) Network security threat early warning method and device based on qualitative differential gaming and evolutionary gaming
Raikov Holistic discourse in the network cognitive modeling
CN110417733B (en) Attack prediction method, device and system based on QBD attack and defense random evolution game model
Evangeline et al. Wind farm incorporated optimal power flow solutions through multi-objective horse herd optimization with a novel constraint handling technique
CN114519190B (en) Multi-target network security dynamic evaluation method based on Bayesian network attack graph
Liu et al. Optimal network defense strategy selection method based on evolutionary network game
Kumar et al. Information diffusion model for spread of misinformation in online social networks
Xiao et al. Modeling and simulation of opinion natural reversal dynamics with opinion leader based on HK bounded confidence model
Żychowski et al. Addressing expensive multi-objective games with postponed preference articulation via memetic co-evolution
CN110766125A (en) Multi-target weapon-target allocation method based on artificial fish swarm algorithm
Yilmaz et al. Misinformation propagation in online social networks: game theoretic and reinforcement learning approaches
CN115037553A (en) Information security monitoring model construction method and device, information security monitoring model application method and device, and storage medium
Wu et al. A game-based approach for designing a collaborative evolution mechanism for unmanned swarms on community networks
Tian et al. A method based on cloud model and FCM clustering for risky large group decision making
CN105469644B (en) Solving Flight Conflicts method and apparatus
Kirimtat et al. Evolutionary algorithms for designing self-sufficient floating neighborhoods
Gu et al. Social network public opinion evolution model based on node intimacy
Law et al. Placement matters in making good decisions sooner: the influence of topology in reaching public utility thresholds
Nogales et al. Replicator based on imitation for finite and arbitrary networked communities
Nardin et al. Scale and topology effects on agent-based simulation: A trust-based coalition formation case study
Chen et al. A game theoretic approach for modeling privacy settings of an online social network
Suzuki The unique value of gaming simulation as a research method for sustainability-related issues
CN114239833B (en) Military countermeasure rule confidence coefficient calculation method and device based on probability soft logic

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20211102