CN112926739A - Network countermeasure effectiveness evaluation method based on neural network model - Google Patents

Network countermeasure effectiveness evaluation method based on neural network model

Info

Publication number
CN112926739A
CN112926739A (application CN202110265382.0A)
Authority
CN
China
Prior art keywords
layer
neural network
network
optimal
index
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110265382.0A
Other languages
Chinese (zh)
Other versions
CN112926739B (en)
Inventor
张茜
温泉
王晓菲
姜国庆
李宁
杨华
王亚洲
王崇维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Computer Technology and Applications
Original Assignee
Beijing Institute of Computer Technology and Applications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Computer Technology and Applications filed Critical Beijing Institute of Computer Technology and Applications
Priority to CN202110265382.0A priority Critical patent/CN112926739B/en
Publication of CN112926739A publication Critical patent/CN112926739A/en
Application granted granted Critical
Publication of CN112926739B publication Critical patent/CN112926739B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to a network countermeasure effectiveness evaluation method based on a neural network model, and belongs to the technical field of network security. The method constructs a two-level neural network countermeasure effectiveness evaluation model, which avoids having to untangle the complex relationships within the index system, has strong self-learning, self-organizing and adaptive capabilities, and can be trained continuously and dynamically with new training samples. Through the accumulation of historical samples, the countermeasure effectiveness evaluation model becomes increasingly accurate. In the learning of the neural network, the cuckoo search algorithm, an artificial intelligence algorithm, is adopted to search for the optimal weights; it has strong global search capability, few parameters to select, and a very fast convergence rate, so the countermeasure effectiveness evaluation model can be constructed efficiently.

Description

Network countermeasure effectiveness evaluation method based on neural network model
Technical Field
The invention relates to the technical field of network security, and in particular to a network countermeasure effectiveness evaluation method based on a neural network model.
Background
With the continuous evolution and transformation of information-based warfare, network countermeasure has become a new type of combat force that plays an increasingly important role on the modern battlefield. Network countermeasure takes the available computer network environment as its arena, takes the elements of information systems as the main objects of operations, and uses advanced information technology as the basic means to paralyze and destroy the enemy's information systems while protecting one's own. Evaluating cyberspace combat effectiveness comprehensively, reasonably and effectively helps to identify and improve one's own weak links and to raise the overall combat capability of one's own side in cyberspace.
At present, the main methods for evaluating cyberspace countermeasure effectiveness include social network analysis, complex network theory, the analytic hierarchy process (AHP) and artificial neural network methods. A cyberspace countermeasure effectiveness evaluation index system built with social network analysis can rank the importance of the indices, but cannot compute the weight of each index's influence on the evaluation target. An index system framework built with complex network theory can analyze the aggregation relationships between basic indices and capability effects, but cannot produce a quantitative final evaluation result. When AHP is used to evaluate cyberspace countermeasure effectiveness, the amount of statistical data becomes large and the weights become difficult to compute once there are too many indices. When an evaluation model is built with an artificial neural network, the weight coefficients are usually corrected by gradient descent, which converges slowly during learning and easily falls into local optima during training.
Disclosure of Invention
(I) Technical problem to be solved
The technical problem to be solved by the invention is how to design a method for constructing a cyberspace countermeasure effectiveness evaluation model so that the model can be constructed more quickly and accurately.
(II) Technical scheme
In order to solve the above technical problem, the present invention provides a network countermeasure effectiveness evaluation method based on a neural network model, comprising the following steps:
step 1, performing a multi-level decomposition of the comprehensive network countermeasure effectiveness index and constructing a two-level neural network countermeasure effectiveness evaluation model;
step 2, training the two-level neural network countermeasure effectiveness evaluation model based on the cuckoo search algorithm.
Preferably, step 1 specifically comprises:
(1) constructing a network countermeasure effectiveness evaluation index system
The network countermeasure effectiveness is layered into a comprehensive effectiveness layer, a capability element layer and an index element layer, forming the network countermeasure effectiveness evaluation index system framework. The top layer is the comprehensive effectiveness layer, i.e. the comprehensive effectiveness of the network countermeasure; the middle layer is the capability element layer, i.e. the decomposition of the comprehensive effectiveness into its main capabilities; the bottom layer is the index element layer, i.e. the individual indices on which each capability of the comprehensive network countermeasure effectiveness depends;
(2) establishing the two-level neural network effectiveness evaluation model framework
The network countermeasure effectiveness evaluation index system framework is converted into a two-level neural network countermeasure effectiveness evaluation model, in which each level of the neural network comprises an input layer, a hidden layer and an output layer; each neuron has input connections and output connections, and each connection carries a weight; the input layer of the first-level neural network corresponds to the index element layer and its output layer corresponds to the capability element layer; the input layer of the second-level neural network corresponds to the capability element layer and its output layer corresponds to the comprehensive effectiveness layer.
Preferably, the step of constructing the two-level neural network effectiveness evaluation model framework specifically includes:
(21) constructing the first-level neural network countermeasure effectiveness evaluation model framework
The input layer of the first-level neural network corresponds to the index element layer of the effectiveness evaluation. The input layer vector is defined as $x_1, x_2, \ldots, x_M$ and the hidden layer vector as $h_1, h_2, \ldots, h_L$, with
$$h_j = \sum_{i=1}^{M} a_{ij} x_i, \quad j = 1, \ldots, L,$$
where $a_{ij}$ is the weight coefficient connecting input-layer neuron $x_i$ and hidden-layer neuron $h_j$, $i = 1, \ldots, M$, $j = 1, \ldots, L$, and $M$, $L$ are positive integers. An output function $f$ is applied to each hidden-layer value $h_j$, $j = 1, \ldots, L$, and the output layer vector is then
$$y_k = \sum_{j=1}^{L} b_{jk} f(h_j), \quad k = 1, \ldots, N,$$
where $i = 1, \ldots, M$, $j = 1, \ldots, L$, $k = 1, \ldots, N$, $N$ is a positive integer, and $b_{jk}$ are the weights from the hidden layer to the output layer in the first-level neural network model;
(22) constructing the second-level neural network countermeasure effectiveness evaluation model framework
The input layer of the second-level neural network corresponds to the capability element layer of the effectiveness evaluation. The input layer vector is defined as $y_1, y_2, \ldots, y_N$ and the hidden layer vector as $g_1, g_2, \ldots, g_P$, with
$$g_r = \sum_{k=1}^{N} c_{kr} y_k, \quad r = 1, \ldots, P,$$
where $c_{kr}$ is the weight coefficient connecting input-layer neuron $y_k$ and hidden-layer neuron $g_r$, $k = 1, \ldots, N$, $r = 1, \ldots, P$, and $P$ is a positive integer. An output function $f$ is applied to each hidden-layer value $g_r$, $r = 1, \ldots, P$, and the output layer value is then
$$E = \sum_{r=1}^{P} d_r f(g_r),$$
where $k = 1, \ldots, N$, $r = 1, \ldots, P$, and $d_r$ are the weights from the hidden layer to the output layer in the second-level neural network model. The composition of the two levels is summarized below.
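Under the reconstruction above, with a generic output function $f$ applied at each hidden layer (an assumption; the original figures give its exact form), the two levels compose so that the comprehensive effectiveness is obtained directly from the index elements:
$$E = \sum_{r=1}^{P} d_r\, f\!\left(\sum_{k=1}^{N} c_{kr}\, y_k\right), \qquad y_k = \sum_{j=1}^{L} b_{jk}\, f\!\left(\sum_{i=1}^{M} a_{ij}\, x_i\right), \quad k = 1, \ldots, N.$$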
Preferably, step 2 specifically comprises:
(1) preprocessing the original samples
The original samples can be used as training samples only after preprocessing; the original samples are normalized using a linear transformation method;
(2) cuckoo search optimization training
The objective function, the nest positions and the minimum error are initialized; the preprocessed training samples are fed into the neural network countermeasure effectiveness evaluation model, the optimal nest position is searched for with the cuckoo search algorithm, and new weights are generated iteratively through the Lévy flights of the cuckoo search algorithm; when the absolute error between the actual output value and the expected value is smaller than the set minimum error, training ends and the current optimal weights are kept, yielding the optimal countermeasure effectiveness evaluation model, the optimal position being the optimal weights of the countermeasure effectiveness evaluation model.
Preferably, normalizing the original samples with the linear transformation method specifically comprises the following. Let an original sample of the index element layer be $x'$. When a larger index value means better countermeasure effectiveness, the normalized training sample is
$$x = \frac{x' - x'_{\min}}{x'_{\max} - x'_{\min}},$$
where $x'_{\min}$ and $x'_{\max}$ are the minimum and maximum values of $x'$; when a larger index value means worse countermeasure effectiveness, the normalized training sample is
$$x = \frac{x'_{\max} - x'}{x'_{\max} - x'_{\min}}.$$
Let an original sample of the capability element layer be $y'$. When a larger index value means better countermeasure effectiveness, the normalized training sample is
$$y = \frac{y' - y'_{\min}}{y'_{\max} - y'_{\min}},$$
where $y'_{\min}$ and $y'_{\max}$ are the minimum and maximum values of $y'$; when a larger index value means worse countermeasure effectiveness, the normalized training sample is
$$y = \frac{y'_{\max} - y'}{y'_{\max} - y'_{\min}}.$$
Let an original sample of the comprehensive effectiveness layer be $E'$, with normalized training sample
$$E = \frac{E' - E'_{\min}}{E'_{\max} - E'_{\min}},$$
where $E'_{\min}$ and $E'_{\max}$ are the minimum and maximum values of $E'$.
Preferably, the specific steps of the cuckoo search optimization training are as follows:
1) initializing the objective function, which measures the error between the actual outputs of the model and their expected values, where $E$ is the actual output of the second-level neural network, $E_d$ is the expected value of the comprehensive effectiveness layer, $y_k$ is the actual output of the first-level neural network, and $y_k^d$ is the expected value of the capability element layer; initializing the abandonment probability $P$, $P \in [0,1]$;
initializing the positions of the $n$ nests:
$$\omega_s^{(0)} = [a_{11}^{(0)}, a_{12}^{(0)}, \ldots, a_{ML}^{(0)}, b_{11}^{(0)}, b_{12}^{(0)}, \ldots, b_{LN}^{(0)}, \ldots, c_{11}^{(0)}, c_{12}^{(0)}, \ldots, c_{NP}^{(0)}, d_{1}^{(0)}, d_{2}^{(0)}, \ldots, d_{P}^{(0)}]^{T}, \quad s = 1, \ldots, n;$$
2) calculating the objective function value at each nest position and selecting the nest that is currently optimal for the objective function;
3) keeping the position of the nest that was optimal in the previous generation and updating the other nest positions with the Lévy flight formula;
the cuckoo nest position update formula is $\omega_s^{(t+1)} = \omega_s^{(t)} + \alpha \cdot L(\beta)$, where $\omega_s^{(t)}$ is the position of the $s$-th nest at the $t$-th iteration, $\alpha$ is the step size, $L(\beta)$ obeys a Lévy distribution with $0 < \beta \leq 2$ and is generated from two normally distributed random variables $u$ and $v$, and $\omega_{i'}^{(t)}$, $\omega_{j'}^{(t)}$ are the positions of any two nests at the $t$-th iteration;
4) comparing the objective function value at the current position with that at the optimal nest position of the previous generation; if the current one is better, taking it as the current optimum, otherwise keeping the previous-generation optimum;
5) after the position update, generating a random number $r \in [0,1]$; if $r > P$, updating $\omega_s^{(t+1)}$ again, comparing the objective function values of the updated nest positions, and determining the global optimal position;
6) judging whether the maximum number of iterations or the minimum error requirement is met; if so, outputting the global optimal position, i.e. the connection weights of the countermeasure effectiveness evaluation model; if not, returning to step 2) and continuing the iteration;
7) substituting the optimal weights into the neural network model to obtain the optimal two-level neural network countermeasure effectiveness evaluation model.
Preferably, when the network countermeasure effectiveness evaluation index system is constructed, the network countermeasure effectiveness is decomposed, on the basis of the combat requirements of common network information equipment in the field of cyberspace attack and defense, into capability elements such as network reconnaissance, network attack, network defense and command decision, and each capability element is then refined into a plurality of index elements.
Preferably, the comprehensive effectiveness E ranges from 0 to 1.
Preferably, α is taken to be 1.
The invention also provides an application of the method in the technical field of network security.
(III) Advantageous effects
The method constructs a two-level neural network countermeasure effectiveness evaluation model, which avoids having to untangle the complex relationships within the index system, has strong self-learning, self-organizing and adaptive capabilities, and can be trained continuously and dynamically with new training samples. Through the accumulation of historical samples, the countermeasure effectiveness evaluation model becomes increasingly accurate.
In the learning of the neural network, the cuckoo search algorithm, an artificial intelligence algorithm, is adopted to search for the optimal weights; it has strong global search capability, few parameters to select, and a very fast convergence rate, so the countermeasure effectiveness evaluation model can be constructed efficiently.
Drawings
FIG. 1 is a topological structure diagram of the two-level neural network countermeasure effectiveness evaluation model of the present invention;
FIG. 2 is a general flow chart of the construction of the two-level neural network evaluation model in the present invention.
Detailed Description
In order to make the objects, contents, and advantages of the present invention clearer, the following detailed description of the embodiments of the present invention will be made in conjunction with the accompanying drawings and examples.
The invention provides a network countermeasure effectiveness evaluation method based on a neural network model. The method constructs an effectiveness evaluation model from two levels of neural networks, each of which comprises a three-layer structure: an input layer, a hidden layer and an output layer. The second-level neural network takes the effectiveness values output by the first-level neural network as its input, and its final output value is the comprehensive effectiveness of the network countermeasure. The training samples are learned continuously with the cuckoo search algorithm, so the weights of the neural networks are continuously optimized and the final evaluation model becomes more accurate. The method is used to evaluate the comprehensive effectiveness of the red and blue sides commonly found in network attack-defense confrontations.
The technical scheme for solving the technical problem comprises the following two steps. First, the two-level neural network countermeasure effectiveness evaluation model is constructed: the comprehensive network countermeasure effectiveness index is decomposed over multiple levels into a comprehensive effectiveness layer, a capability element layer and an index element layer, where the index element layer corresponds to the input values of the first-level neural network; the capability element layer corresponds to the output values of the first-level neural network and also to the input values of the second-level neural network; and the comprehensive effectiveness layer corresponds to the output value of the second-level neural network. The topological structure of the two-level neural network effectiveness evaluation model is shown in FIG. 1. Second, the two-level neural network countermeasure effectiveness evaluation model is trained: the training samples are learned with the cuckoo search algorithm, and the weights are optimized iteratively until the error between the actual output of the neural network and the target value is smaller than the expected error.
FIG. 2 is a general flow chart of the construction of the two-level neural network evaluation model; the method specifically comprises the following steps:
Step 1, constructing the two-level neural network countermeasure effectiveness evaluation model
(1) Constructing the network countermeasure effectiveness evaluation index system framework
The network countermeasure effectiveness is organized by layering it into a comprehensive effectiveness layer, a capability element layer and an index element layer, forming the network countermeasure effectiveness evaluation index system framework. The top layer is the comprehensive effectiveness layer, i.e. the comprehensive effectiveness of the network countermeasure; the middle layer is the capability element layer, i.e. the decomposition of the comprehensive effectiveness into its main capabilities; the bottom layer is the index element layer, i.e. the individual indices on which each capability of the comprehensive network countermeasure effectiveness depends.
In this embodiment, based on a survey of the combat requirements of common network information equipment in the field of cyberspace attack and defense, the network countermeasure effectiveness is decomposed into capability elements such as network reconnaissance, network attack, network defense and command decision, and each capability element is then refined into a plurality of index elements, as illustrated by the sketch below. No classification is required, only layering, which avoids the complexity and unclear classification between the capability element layer and the index elements.
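As an illustration only, such a hierarchy can be written down as a nested structure. The four capability element names below come from the text above; the individual index elements are hypothetical placeholders chosen for the sketch, not taken from the patent.

```python
# Hypothetical index hierarchy: comprehensive effectiveness -> capability elements -> index elements.
# Only the four capability element names come from the description; the leaf indices are invented examples.
INDEX_SYSTEM = {
    "network reconnaissance": ["target discovery rate", "reconnaissance timeliness"],
    "network attack": ["attack success rate", "degree of damage"],
    "network defense": ["intrusion detection rate", "recovery time"],
    "command decision": ["decision timeliness", "plan accuracy"],
}

M = sum(len(indices) for indices in INDEX_SYSTEM.values())  # number of index elements x_1..x_M (first-level inputs)
N = len(INDEX_SYSTEM)                                        # number of capability elements y_1..y_N
```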
(2) Establishing the two-level neural network effectiveness evaluation model framework
The network countermeasure effectiveness evaluation index system framework is converted into a two-level neural network countermeasure effectiveness evaluation model. Each level of the neural network comprises three layers: an input layer, a hidden layer and an output layer. Each neuron has input connections and output connections, and each connection carries a weight. The input layer of the first-level neural network corresponds to the index element layer and its output layer corresponds to the capability element layer; the input layer of the second-level neural network corresponds to the capability element layer and its output layer corresponds to the comprehensive effectiveness layer.
The construction specifically comprises the following steps:
(21) Constructing the first-level neural network countermeasure effectiveness evaluation model framework.
The input layer of the first-level neural network corresponds to the index element layer of the effectiveness evaluation. The input layer vector is defined as $x_1, x_2, \ldots, x_M$ and the hidden layer vector as $h_1, h_2, \ldots, h_L$, with
$$h_j = \sum_{i=1}^{M} a_{ij} x_i, \quad j = 1, \ldots, L,$$
where $a_{ij}$ is the weight coefficient connecting input-layer neuron $x_i$ and hidden-layer neuron $h_j$, $i = 1, \ldots, M$, $j = 1, \ldots, L$, and $M$, $L$ are positive integers. An output function $f$ is applied to each hidden-layer value $h_j$, $j = 1, \ldots, L$, and the output layer vector is then
$$y_k = \sum_{j=1}^{L} b_{jk} f(h_j), \quad k = 1, \ldots, N,$$
where $i = 1, \ldots, M$, $j = 1, \ldots, L$, $k = 1, \ldots, N$, $N$ is a positive integer, and $b_{jk}$ is the weight from the hidden layer to the output layer in the first-level neural network model: the value output by each neuron is multiplied by the corresponding weight and the products are summed into the next neuron.
(22) Constructing the second-level neural network countermeasure effectiveness evaluation model framework.
The input layer of the second-level neural network corresponds to the capability element layer of the effectiveness evaluation. The input layer vector is defined as $y_1, y_2, \ldots, y_N$ and the hidden layer vector as $g_1, g_2, \ldots, g_P$, with
$$g_r = \sum_{k=1}^{N} c_{kr} y_k, \quad r = 1, \ldots, P,$$
where $c_{kr}$ is the weight coefficient connecting input-layer neuron $y_k$ and hidden-layer neuron $g_r$, $k = 1, \ldots, N$, $r = 1, \ldots, P$, and $P$ is a positive integer. An output function $f$ is applied to each hidden-layer value $g_r$, $r = 1, \ldots, P$, and the output layer value is then
$$E = \sum_{r=1}^{P} d_r f(g_r),$$
where $k = 1, \ldots, N$, $r = 1, \ldots, P$, and $d_r$ is the weight from the hidden layer to the output layer in the second-level neural network model. A sketch of the resulting two-level forward pass is given below.
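The following is a minimal sketch of this two-level forward pass. The sigmoid used for the output function f and the layer sizes M, L, N, P are assumptions made for illustration; the patent's figures define the actual form of f.

```python
import numpy as np

def sigmoid(z):
    # Assumed output function f applied to the hidden-layer values; the patent's figures define the actual form.
    return 1.0 / (1.0 + np.exp(-z))

def first_level(x, a, b):
    # x: (M,) index-element inputs; a: (M, L) weights a_ij; b: (L, N) weights b_jk.
    h = x @ a                      # h_j = sum_i a_ij * x_i
    return sigmoid(h) @ b          # y_k = sum_j b_jk * f(h_j)

def second_level(y, c, d):
    # y: (N,) capability-element values; c: (N, P) weights c_kr; d: (P,) weights d_r.
    g = y @ c                      # g_r = sum_k c_kr * y_k
    return float(sigmoid(g) @ d)   # E = sum_r d_r * f(g_r)

def evaluate(x, a, b, c, d):
    # Full two-level evaluation: index elements -> capability elements -> comprehensive effectiveness E.
    y = first_level(x, a, b)
    return second_level(y, c, d), y

# Illustrative usage with hypothetical dimensions: M=6 index elements, L=5, N=4 capabilities, P=3.
rng = np.random.default_rng(0)
M, L, N, P = 6, 5, 4, 3
a, b, c, d = rng.random((M, L)), rng.random((L, N)), rng.random((N, P)), rng.random(P)
E, y = evaluate(rng.random(M), a, b, c, d)
print("capability outputs y:", y, "\ncomprehensive effectiveness E:", E)
```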
Step 2, training the two-level neural network countermeasure effectiveness evaluation model
(1) Preprocessing the original samples
The original samples need to be preprocessed before they can be used as training samples. Because the parameters of the network countermeasure effectiveness evaluation index system are measured in different units, the original samples are normalized with a linear transformation method.
Normalizing the original samples with the linear transformation method specifically comprises the following. Let an original sample of the index element layer be $x'$. When a larger index value means better countermeasure effectiveness, the normalized training sample is
$$x = \frac{x' - x'_{\min}}{x'_{\max} - x'_{\min}},$$
where $x'_{\min}$ and $x'_{\max}$ are the minimum and maximum values of $x'$; when a larger index value means worse countermeasure effectiveness, the normalized training sample is
$$x = \frac{x'_{\max} - x'}{x'_{\max} - x'_{\min}}.$$
Let an original sample of the capability element layer be $y'$. When a larger index value means better countermeasure effectiveness, the normalized training sample is
$$y = \frac{y' - y'_{\min}}{y'_{\max} - y'_{\min}},$$
where $y'_{\min}$ and $y'_{\max}$ are the minimum and maximum values of $y'$; when a larger index value means worse countermeasure effectiveness, the normalized training sample is
$$y = \frac{y'_{\max} - y'}{y'_{\max} - y'_{\min}}.$$
Let an original sample of the comprehensive effectiveness layer be $E'$, with normalized training sample
$$E = \frac{E' - E'_{\min}}{E'_{\max} - E'_{\min}},$$
where $E'_{\min}$ and $E'_{\max}$ are the minimum and maximum values of $E'$. The comprehensive effectiveness $E$ thus ranges from 0 to 1, and the closer $E$ is to 1, the better the comprehensive effectiveness.
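A minimal sketch of this min-max normalization follows; the `larger_is_better` flag used to distinguish benefit-type from cost-type indices is a naming choice made here for illustration, not the patent's.

```python
import numpy as np

def normalize(samples, larger_is_better=True):
    """Linear (min-max) normalization of one raw indicator column, e.g. x', y' or E'.

    larger_is_better=True : a larger raw value means better countermeasure effectiveness.
    larger_is_better=False: a larger raw value means worse countermeasure effectiveness.
    """
    s = np.asarray(samples, dtype=float)
    s_min, s_max = s.min(), s.max()
    if larger_is_better:
        return (s - s_min) / (s_max - s_min)    # x = (x' - x'_min) / (x'_max - x'_min)
    return (s_max - s) / (s_max - s_min)        # x = (x'_max - x') / (x'_max - x'_min)

# Illustrative usage: a benefit-type index (e.g. detection rate) and a cost-type index (e.g. response delay).
print(normalize([0.2, 0.5, 0.9]))                              # approximately [0.0, 0.4286, 1.0]
print(normalize([120.0, 80.0, 30.0], larger_is_better=False))  # approximately [0.0, 0.4444, 1.0]
```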
(2) Cuckoo search optimization training
The objective function, the nest positions and the minimum error are initialized. The preprocessed training samples are fed into the neural network countermeasure effectiveness evaluation model, and the cuckoo search algorithm is used to look for the optimal nest position: its Lévy flights, a random-walk pattern that alternates short-distance exploration with occasional long-distance jumps, are used to generate new weights iteratively. When the absolute error between the actual output value and the expected value is smaller than the set minimum error, training ends and the current optimal weights are kept, which yields the optimal countermeasure effectiveness evaluation model; the optimal position is the set of optimal weights of the countermeasure effectiveness evaluation model.
The specific steps of the cuckoo search optimization training are as follows:
1) Initialize the objective function, which measures the error between the actual outputs of the model and their expected values, where $E$ is the actual output of the second-level neural network, $E_d$ is the expected value of the comprehensive effectiveness layer, $y_k$ is the actual output of the first-level neural network, and $y_k^d$ is the expected value of the capability element layer. Initialize the abandonment probability $P$ (the probability that the host bird discovers the foreign egg and rejects it), $P \in [0,1]$.
Initialize the positions of the $n$ nests:
$$\omega_s^{(0)} = [a_{11}^{(0)}, a_{12}^{(0)}, \ldots, a_{ML}^{(0)}, b_{11}^{(0)}, b_{12}^{(0)}, \ldots, b_{LN}^{(0)}, \ldots, c_{11}^{(0)}, c_{12}^{(0)}, \ldots, c_{NP}^{(0)}, d_{1}^{(0)}, d_{2}^{(0)}, \ldots, d_{P}^{(0)}]^{T}, \quad s = 1, \ldots, n.$$
2) Calculate the objective function value at each nest position (this value changes as the weights in the objective function change) and select the nest that is currently optimal for the objective function.
3) Keep the position of the nest that was optimal in the previous generation and update the other nest positions with the Lévy flight formula.
The cuckoo nest position update formula is $\omega_s^{(t+1)} = \omega_s^{(t)} + \alpha \cdot L(\beta)$, where $\omega_s^{(t)}$ is the position of the $s$-th nest at the $t$-th iteration; $\alpha$ is the step size, usually taken as $\alpha = 1$; $L(\beta)$ obeys a Lévy distribution with $0 < \beta \leq 2$ and is generated from two normally distributed random variables $u$ and $v$; and $\omega_{i'}^{(t)}$, $\omega_{j'}^{(t)}$ are the positions of any two nests at the $t$-th iteration.
4) Compare the objective function value at the current position with that at the optimal nest position of the previous generation; if the current one is better, take it as the current optimum, otherwise keep the previous-generation optimum.
5) After the position update, generate a random number $r \in [0,1]$; if $r > P$, update $\omega_s^{(t+1)}$ again, compare the objective function values of the updated nest positions, and determine the global optimal position.
6) Judge whether the maximum number of iterations or the minimum error requirement is met; if so, output the global optimal position, i.e. the connection weights of the countermeasure effectiveness evaluation model; if not, return to step 2) and continue the iteration.
7) Substitute the optimal weights into the neural network model to obtain the optimal two-level neural network countermeasure effectiveness evaluation model. A consolidated sketch of this optimization loop follows.
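The following is a minimal, self-contained sketch of this cuckoo-search weight optimization. It is not the patent's implementation: the sigmoid output function, the squared-error objective, Mantegna's method for generating the Lévy step, the random-walk form of the step-5 update, and all dimensions and hyperparameters (n, P, α, β) are assumptions made for illustration.

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(1)

M, L, N, P = 6, 5, 4, 3                     # hypothetical layer sizes
DIM = M * L + L * N + N * P + P             # length of a nest position (flattened weight vector omega)

def unpack(w):
    # Split a flat weight vector omega into the matrices a (M x L), b (L x N), c (N x P) and vector d (P).
    a, b, c, d = np.split(w, [M * L, M * L + L * N, M * L + L * N + N * P])
    return a.reshape(M, L), b.reshape(L, N), c.reshape(N, P), d

def sigmoid(z):
    # Assumed output function f; the patent's figures define the actual form.
    return 1.0 / (1.0 + np.exp(-z))

def forward(w, x):
    # Two-level forward pass: index elements x -> capability elements y -> comprehensive effectiveness E.
    a, b, c, d = unpack(w)
    y = sigmoid(x @ a) @ b
    return float(sigmoid(y @ c) @ d), y

def objective(w, X, Yd, Ed):
    # Assumed squared-error objective: errors of E against E_d and of y_k against y_k^d over all samples.
    total = 0.0
    for x, yd, ed in zip(X, Yd, Ed):
        e, y = forward(w, x)
        total += (e - ed) ** 2 + np.sum((y - yd) ** 2)
    return total

def levy_step(beta=1.5, size=DIM):
    # Levy-distributed step built from two normal variables u, v (Mantegna's method, an assumption here).
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_train(X, Yd, Ed, n=15, P_abandon=0.25, alpha=1.0, max_iter=200, min_err=1e-3):
    nests = rng.uniform(-1.0, 1.0, (n, DIM))                    # step 1: initialize n nest positions
    fit = np.array([objective(w, X, Yd, Ed) for w in nests])    # step 2: objective value of each nest
    for _ in range(max_iter):
        for s in range(n):                                      # step 3: Levy-flight position update
            cand = nests[s] + alpha * levy_step()
            f_cand = objective(cand, X, Yd, Ed)
            if f_cand < fit[s]:                                 # step 4: keep the better position
                nests[s], fit[s] = cand, f_cand
        for s in range(n):                                      # step 5: if r > P, update the nest again
            if rng.random() > P_abandon:                        #         via a random walk between two nests
                i, j = rng.integers(n), rng.integers(n)
                cand = nests[s] + rng.random() * (nests[i] - nests[j])
                f_cand = objective(cand, X, Yd, Ed)
                if f_cand < fit[s]:
                    nests[s], fit[s] = cand, f_cand
        if fit.min() < min_err:                                 # step 6: stop on the error threshold
            break
    return nests[fit.argmin()]                                  # step 7: optimal weight vector omega

# Illustrative usage on fabricated, already-normalized samples (8 samples, M inputs each).
X = rng.random((8, M)); Yd = rng.random((8, N)); Ed = rng.random(8)
w_opt = cuckoo_train(X, Yd, Ed, max_iter=50)
print("final objective value:", objective(w_opt, X, Yd, Ed))
```

Because steps 4 and 5 in this sketch only accept a candidate position when it improves the objective, the best nest of each generation is automatically retained, which plays the role of the elitism described in step 3).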
The above description is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make several modifications and variations without departing from the technical principles of the present invention, and such modifications and variations should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A network countermeasure effectiveness evaluation method based on a neural network model, characterized by comprising the following steps:
step 1, performing a multi-level decomposition of the comprehensive network countermeasure effectiveness index and constructing a two-level neural network countermeasure effectiveness evaluation model;
step 2, training the two-level neural network countermeasure effectiveness evaluation model based on the cuckoo search algorithm.
2. The method according to claim 1, wherein step 1 specifically comprises:
(1) constructing a network countermeasure effectiveness evaluation index system
layering the network countermeasure effectiveness into a comprehensive effectiveness layer, a capability element layer and an index element layer to form a network countermeasure effectiveness evaluation index system framework, wherein the top layer is the comprehensive effectiveness layer, i.e. the comprehensive effectiveness of the network countermeasure; the middle layer is the capability element layer, i.e. the decomposition of the comprehensive effectiveness into its main capabilities; and the bottom layer is the index element layer, i.e. the individual indices on which each capability of the comprehensive network countermeasure effectiveness depends;
(2) establishing a two-level neural network effectiveness evaluation model framework
converting the network countermeasure effectiveness evaluation index system framework into a two-level neural network countermeasure effectiveness evaluation model, wherein each level of the neural network comprises an input layer, a hidden layer and an output layer; each neuron has input connections and output connections, each connection carrying a weight; the input layer of the first-level neural network corresponds to the index element layer and its output layer corresponds to the capability element layer; and the input layer of the second-level neural network corresponds to the capability element layer and its output layer corresponds to the comprehensive effectiveness layer.
3. The method of claim 2, wherein the step of constructing the two-level neural network effectiveness evaluation model framework specifically comprises:
(21) constructing the first-level neural network countermeasure effectiveness evaluation model framework
the input layer of the first-level neural network corresponds to the index element layer of the effectiveness evaluation; the input layer vector is defined as $x_1, x_2, \ldots, x_M$ and the hidden layer vector as $h_1, h_2, \ldots, h_L$, with
$$h_j = \sum_{i=1}^{M} a_{ij} x_i, \quad j = 1, \ldots, L,$$
wherein $a_{ij}$ is the weight coefficient connecting input-layer neuron $x_i$ and hidden-layer neuron $h_j$, $i = 1, \ldots, M$, $j = 1, \ldots, L$, $M$ and $L$ being positive integers; an output function $f$ is applied to each hidden-layer value $h_j$, $j = 1, \ldots, L$; the output layer vector is then
$$y_k = \sum_{j=1}^{L} b_{jk} f(h_j), \quad k = 1, \ldots, N,$$
wherein $i = 1, \ldots, M$, $j = 1, \ldots, L$, $k = 1, \ldots, N$, $N$ is a positive integer, and $b_{jk}$ are the weights from the hidden layer to the output layer in the first-level neural network model;
(22) constructing the second-level neural network countermeasure effectiveness evaluation model framework
the input layer of the second-level neural network corresponds to the capability element layer of the effectiveness evaluation; the input layer vector is defined as $y_1, y_2, \ldots, y_N$ and the hidden layer vector as $g_1, g_2, \ldots, g_P$, with
$$g_r = \sum_{k=1}^{N} c_{kr} y_k, \quad r = 1, \ldots, P,$$
wherein $c_{kr}$ is the weight coefficient connecting input-layer neuron $y_k$ and hidden-layer neuron $g_r$, $k = 1, \ldots, N$, $r = 1, \ldots, P$, $P$ being a positive integer; an output function $f$ is applied to each hidden-layer value $g_r$, $r = 1, \ldots, P$; the output layer value is then
$$E = \sum_{r=1}^{P} d_r f(g_r),$$
wherein $k = 1, \ldots, N$, $r = 1, \ldots, P$, and $d_r$ are the weights from the hidden layer to the output layer in the second-level neural network model.
4. The method according to claim 3, wherein step 2 specifically comprises:
(1) preprocessing the original samples
the original samples can be used as training samples only after preprocessing; the original samples are normalized using a linear transformation method;
(2) cuckoo search optimization training
initializing the objective function, the nest positions and the minimum error; feeding the preprocessed training samples into the neural network countermeasure effectiveness evaluation model, searching for the optimal nest position with the cuckoo search algorithm, and generating new weights iteratively through the Lévy flights of the cuckoo search algorithm; when the absolute error between the actual output value and the expected value is smaller than the set minimum error, ending the training and keeping the current optimal weights to obtain the optimal countermeasure effectiveness evaluation model, the optimal position being the optimal weights of the countermeasure effectiveness evaluation model.
5. The method of claim 4, wherein normalizing the original samples with the linear transformation method specifically comprises: letting an original sample of the index element layer be $x'$; when a larger index value means better countermeasure effectiveness, the normalized training sample is
$$x = \frac{x' - x'_{\min}}{x'_{\max} - x'_{\min}},$$
wherein $x'_{\min}$ and $x'_{\max}$ are the minimum and maximum values of $x'$; when a larger index value means worse countermeasure effectiveness, the normalized training sample is
$$x = \frac{x'_{\max} - x'}{x'_{\max} - x'_{\min}};$$
letting an original sample of the capability element layer be $y'$; when a larger index value means better countermeasure effectiveness, the normalized training sample is
$$y = \frac{y' - y'_{\min}}{y'_{\max} - y'_{\min}},$$
wherein $y'_{\min}$ and $y'_{\max}$ are the minimum and maximum values of $y'$; when a larger index value means worse countermeasure effectiveness, the normalized training sample is
$$y = \frac{y'_{\max} - y'}{y'_{\max} - y'_{\min}};$$
letting an original sample of the comprehensive effectiveness layer be $E'$, the normalized training sample being
$$E = \frac{E' - E'_{\min}}{E'_{\max} - E'_{\min}},$$
wherein $E'_{\min}$ and $E'_{\max}$ are the minimum and maximum values of $E'$.
6. The method of claim 5, wherein the specific steps of the cuckoo search optimization training are:
1) initializing the objective function, which measures the error between the actual outputs of the model and their expected values, where $E$ is the actual output of the second-level neural network, $E_d$ is the expected value of the comprehensive effectiveness layer, $y_k$ is the actual output of the first-level neural network, and $y_k^d$ is the expected value of the capability element layer; initializing the abandonment probability $P$, $P \in [0,1]$;
initializing the positions of the $n$ nests:
$$\omega_s^{(0)} = [a_{11}^{(0)}, a_{12}^{(0)}, \ldots, a_{ML}^{(0)}, b_{11}^{(0)}, b_{12}^{(0)}, \ldots, b_{LN}^{(0)}, \ldots, c_{11}^{(0)}, c_{12}^{(0)}, \ldots, c_{NP}^{(0)}, d_{1}^{(0)}, d_{2}^{(0)}, \ldots, d_{P}^{(0)}]^{T}, \quad s = 1, \ldots, n;$$
2) computing the objective function value at each nest position and selecting the nest that is currently optimal for the objective function;
3) keeping the position of the nest that was optimal in the previous generation and updating the other nest positions with the Lévy flight formula;
the cuckoo nest position update formula being $\omega_s^{(t+1)} = \omega_s^{(t)} + \alpha \cdot L(\beta)$, where $\omega_s^{(t)}$ is the position of the $s$-th nest at the $t$-th iteration, $\alpha$ is the step size, $L(\beta)$ obeys a Lévy distribution with $0 < \beta \leq 2$ and is generated from two normally distributed random variables $u$ and $v$, and $\omega_{i'}^{(t)}$, $\omega_{j'}^{(t)}$ are the positions of any two nests at the $t$-th iteration;
4) comparing the objective function value at the current position with that at the optimal nest position of the previous generation; if the current one is better, taking it as the current optimum, and otherwise keeping the previous-generation optimum;
5) after the position update, generating a random number $r \in [0,1]$; if $r > P$, updating $\omega_s^{(t+1)}$ again, comparing the objective function values of the updated nest positions, and determining the global optimal position;
6) judging whether the maximum number of iterations or the minimum error requirement is met; if so, outputting the global optimal position, i.e. the connection weights of the countermeasure effectiveness evaluation model; if not, returning to step 2) and continuing the iteration;
7) substituting the optimal weights into the neural network model to obtain the optimal two-level neural network countermeasure effectiveness evaluation model.
7. The method of claim 2, wherein, when the network countermeasure effectiveness evaluation index system is constructed, the network countermeasure effectiveness is decomposed, on the basis of the combat requirements of common network information equipment in the field of cyberspace attack and defense, into the capability elements of network reconnaissance, network attack, network defense and command decision, and each capability element is refined into a plurality of index elements.
8. The method of claim 5, wherein the comprehensive effectiveness E ranges from 0 to 1.
9. The method of claim 6, wherein α is 1.
10. Use of the method according to any one of claims 1 to 9 in the field of network security technology.
CN202110265382.0A 2021-03-11 2021-03-11 Network countermeasure effectiveness evaluation method based on neural network model Active CN112926739B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110265382.0A CN112926739B (en) 2021-03-11 2021-03-11 Network countermeasure effectiveness evaluation method based on neural network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110265382.0A CN112926739B (en) 2021-03-11 2021-03-11 Network countermeasure effectiveness evaluation method based on neural network model

Publications (2)

Publication Number Publication Date
CN112926739A true CN112926739A (en) 2021-06-08
CN112926739B CN112926739B (en) 2024-03-19

Family

ID=76172648

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110265382.0A Active CN112926739B (en) 2021-03-11 2021-03-11 Network countermeasure effectiveness evaluation method based on neural network model

Country Status (1)

Country Link
CN (1) CN112926739B (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107153869A (en) * 2017-03-29 2017-09-12 南昌大学 A kind of Diagnosis Method of Transformer Faults based on cuckoo chess game optimization neutral net
CN107222333A (en) * 2017-05-11 2017-09-29 中国民航大学 A kind of network node safety situation evaluation method based on BP neural network
WO2019002603A1 (en) * 2017-06-30 2019-01-03 Royal Holloway And Bedford New College Method of monitoring the performance of a machine learning algorithm
CN107919983A (en) * 2017-11-01 2018-04-17 中国科学院软件研究所 A kind of space information network Effectiveness Evaluation System and method based on data mining
CN108337223A (en) * 2017-11-30 2018-07-27 中国电子科技集团公司电子科学研究院 A kind of appraisal procedure of network attack
CN109241591A (en) * 2018-08-21 2019-01-18 哈尔滨工业大学 Anti-ship Missile Operational Effectiveness assessment and aid decision-making method
CN109547431A (en) * 2018-11-19 2019-03-29 国网河南省电力公司信息通信公司 A kind of network security situation evaluating method based on CS and improved BP
CN110930054A (en) * 2019-12-03 2020-03-27 北京理工大学 Data-driven battle system key parameter rapid optimization method
CN111163487A (en) * 2019-12-31 2020-05-15 上海微波技术研究所(中国电子科技集团公司第五十研究所) Method and system for evaluating comprehensive transmission performance of communication waveform

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
ZHANG GUANGHUI et al.: "Evaluation of NCM Effectiveness Base on SOM-BP Cloud Neural Networks", Applied Mechanics and Materials, vol. 241, no. 03, pages 1779-1784 *
周兴旺 et al.: "Effectiveness Evaluation of Land-Air Joint Operations Based on BN-and-BP Neural Network Fusion", Fire Control & Command Control, vol. 43, no. 04, pages 3-8 *
徐志明 et al.: "Research on the Index System for Space Information Network Countermeasure Effectiveness Evaluation", Metrology & Measurement Technology, no. 02, pages 11-13 *
李雄伟 et al.: "Research on the Index System of Network Countermeasure Effectiveness Evaluation", Radio Engineering, no. 03, pages 14-16 *
王劲松 et al.: "Research on Command Effectiveness Evaluation of Cyberspace Information Defense Operations", Modern Defence Technology, vol. 46, no. 05, pages 143-151 *
谢丽霞 et al.: "Network Security Situation Assessment Method Based on Cuckoo Search Optimized BP Neural Network", Journal of Computer Applications, vol. 37, no. 07, pages 1926-1930 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113420293A (en) * 2021-06-22 2021-09-21 北京计算机技术及应用研究所 Android malicious application detection method and system based on deep learning
CN117235477A (en) * 2023-11-14 2023-12-15 中国电子科技集团公司第十五研究所 User group evaluation method and system based on deep neural network
CN117235477B (en) * 2023-11-14 2024-02-23 中国电子科技集团公司第十五研究所 User group evaluation method and system based on deep neural network

Also Published As

Publication number Publication date
CN112926739B (en) 2024-03-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant