CN116319025B - Zero-trust network trust evaluation method based on machine learning - Google Patents

Zero-trust network trust evaluation method based on machine learning

Info

Publication number
CN116319025B
Authority
CN
China
Prior art keywords
trust
neural network
model
particle
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310294329.2A
Other languages
Chinese (zh)
Other versions
CN116319025A (en)
Inventor
肖鹏
胡健
张振红
王海林
李寒箬
谢林江
杭菲璐
张逸彬
耿贞伟
赵晓平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Information Center of Yunnan Power Grid Co Ltd
Original Assignee
Information Center of Yunnan Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Information Center of Yunnan Power Grid Co Ltd filed Critical Information Center of Yunnan Power Grid Co Ltd
Priority to CN202310294329.2A priority Critical patent/CN116319025B/en
Publication of CN116319025A publication Critical patent/CN116319025A/en
Application granted granted Critical
Publication of CN116319025B publication Critical patent/CN116319025B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00: Network architectures or network communication protocols for network security
    • H04L63/10: Network architectures or network communication protocols for network security for controlling access to devices or network resources
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • G06N20/20: Ensemble learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/004: Artificial life, i.e. computing arrangements simulating life
    • G06N3/006: Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a machine learning-based zero-trust network trust evaluation method comprising the following stages: a first stage, comprising data preprocessing and the structural design of a selective neural network integration model, wherein the integration weight of the neural network integration model is a randomly defined vector; a second stage, optimizing the integration weights through the search of a particle swarm optimization algorithm using the neural network integration model designed in the first stage; and a third stage, constructing an optimized selective neural network integration model from the integration weights optimized in the second stage, and predicting the trust score of the access subject with the optimized selective integration model. The method is suited to trust evaluation under a zero-trust network architecture: it adopts selective ensemble learning with a back propagation neural network as the base classifier, uses a particle swarm optimization algorithm to obtain the optimal aggregation weight vector, and thereby predicts the trust score of the access subject with high robustness, solving the zero-knowledge and cold-start problems with better accuracy.

Description

Zero-trust network trust evaluation method based on machine learning
Technical Field
The invention relates to a machine learning-based zero trust network trust evaluation method, and belongs to the technical field of network security.
Background
With the rapid development of new-generation information technology, security risks and attack events under this new technological situation continue to emerge, and the drawbacks of the original security frameworks, such as the boundary security model, have become more prominent. For example, once an attacker obtains access rights to a host inside the intranet, and the existing defense strategy does not strictly control intranet privileges, the attacker can move laterally within the intranet through a series of operations and eventually control the whole network. In this case the network cannot defend against the attack even with a perfect boundary security model; the root cause is the security system's implicit trust in intranet users. Zero trust is a new network security architecture that addresses this: under a zero-trust system, intranet users, computers and applications are untrusted by default, and all accesses must be authenticated and authorized, i.e., no device or user is authorized to enter the network by default.
In the zero-trust network architecture, the trust evaluation engine is the core component that produces a numerical evaluation of the risk of network requests and activities; the access control engine then makes an authorization decision based on this risk evaluation to determine whether to allow the access request. How to evaluate the trust of network requests and activities reasonably is therefore the first problem to be solved before zero-trust technology can be deployed in practice.
Most of the existing trust evaluation literature adopts traditional methods based on direct or indirect interaction experience between the access subject and the access object. However, these methods perform poorly when no interaction experience exists between the subject and the object. Moreover, the data used for trust evaluation is often incomplete, and other valuable data is ignored in the evaluation process, which greatly affects the accuracy of trust evaluation. In addition, traditional trust evaluation methods determine trust by aggregating trust factors through weighted combination, but the weights are difficult to determine, so the accuracy of the evaluation is hard to guarantee.
Disclosure of Invention
Traditional trust evaluation algorithms calculate trust values from direct historical interaction information and indirect recommendation information, but when the access subject is new to the system, this information does not exist and the traditional approach fails. To solve the cold-start and zero-knowledge problems of the traditional methods, the invention provides a machine learning-based zero-trust network trust evaluation method that predicts the trust score of an access subject from its trust features.
The method can be used in a zero-trust network to continuously and dynamically evaluate the trust of the access subject; the resulting trust score is used for subsequent access control.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
a machine learning-based zero-trust network trust evaluation method comprises the following steps:
the first stage comprises data preprocessing and the structural design of a selective neural network integration model, wherein the integration weight of the integration model is a randomly defined vector;
a second stage of optimizing the integration weight according to a search of a particle swarm optimization algorithm (PSO) by using the neural network integration model designed in the first stage;
and thirdly, constructing an optimized selective neural network integration model by using the integration weight optimized in the second stage, and predicting the credibility of the access subject by using the optimized selective integration model.
In the first stage, according to the requirements of the trust score prediction problem, the network structure and input/output format of the back propagation neural network are designed, and the request records are preprocessed with a data normalization technique, computed as:

q'_ij = (q_ij - q_min) / (q_max - q_min)

wherein q_max and q_min are respectively the maximum and minimum values of the j-th trust feature, 1 ≤ j ≤ m, q_ij is the j-th trust feature value of the i-th request, and q'_ij is the j-th trust feature value of the i-th access after normalization;
the request record of the i-th access subject is denoted:

exam_i = ((q_i1, q_i2, ..., q_im), y_i)

where m is the number of trust features, y_i is the trust score of the i-th request, and q_ij, 1 ≤ j ≤ m, is the j-th trust feature value of the i-th request;
D = {exam_1, exam_2, ..., exam_n} is the request record set, which is divided into three subsets: the training set, which is further divided into d training subsets by random sampling and is used to train each back propagation neural network; the verification set, which is used to guide the particle-swarm-based selective neural network integration model in searching for the optimal aggregation weight vector; and the test set, which is used to evaluate the performance of the trust score prediction model;
each normalized sample (q'_i1, q'_i2, ..., q'_im) is used as the input of a basic back propagation neural network (BPNN), whose output is the trust score of the corresponding access subject; the d normalized training subsets are used to train d back propagation neural networks respectively, particles in the particle swarm optimization algorithm are used to represent the integration weight vector, and the prediction results of the d back propagation neural networks are aggregated.
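The preprocessing and subset-sampling steps above can be sketched in Python; this is a minimal illustration, not code from the patent, and the function names (`normalize`, `bootstrap_subsets`) and the toy data are assumptions:

```python
import numpy as np

def normalize(Q):
    """Min-max normalize each trust-feature column, following
    q'_ij = (q_ij - q_min) / (q_max - q_min)."""
    q_min, q_max = Q.min(axis=0), Q.max(axis=0)
    span = np.where(q_max > q_min, q_max - q_min, 1.0)  # guard constant features
    return (Q - q_min) / span

def bootstrap_subsets(X, y, d, seed=0):
    """Draw d training subsets by random sampling with replacement,
    one per base back propagation neural network."""
    rng = np.random.default_rng(seed)
    idx = [rng.integers(0, len(X), size=len(X)) for _ in range(d)]
    return [(X[i], y[i]) for i in idx]

# toy request records: 5 requests x 3 trust features
Q = np.array([[2., 10., 0.], [4., 20., 1.], [6., 30., 0.],
              [8., 40., 1.], [10., 50., 0.]])
Xn = normalize(Q)
subsets = bootstrap_subsets(Xn, np.array([0.2, 0.4, 0.5, 0.7, 0.9]), d=4)
```

Each element of `subsets` would train one base BPNN in the next step.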
The integration of the prediction results of the d back propagation neural networks in the first stage is used as the input of the second stage, the purpose of the second stage is to optimize the integration weights of the d back propagation neural networks by using a particle swarm optimization algorithm, and the second stage comprises the following steps:
step 201, mapping the weight of the current selective neural network integrated model to the position vectors of the particles in the particle swarm optimization algorithm, and randomly initializing the position vectors of s particles;
step 202, sampling d training subsets from a complete training set by using a bootstrap strategy, and for each training subset, training an artificial neural network by using back propagation, wherein the iteration number t of particles is initialized to 0;
step 203, decoding the position vector of each particle p_k (k = 1, 2, ..., s) into the weights of the selective neural network ensemble learning model, and generating the selective neural network ensemble learning model by integrating the d basic back propagation neural networks trained on their respective training subsets;
a particle p_k = (p_k1, p_k2, ..., p_kd) in the particle swarm optimization algorithm represents the weights of a selective set of back propagation neural networks (BPNNs), where d is the number of back propagation neural networks, s is the population size, and 1 ≤ k ≤ s;
the prediction error on the verification set is calculated as the fitness of each particle, and the ensemble learning model is optimized using the verification set:

fitness(p_k) = (1/n) Σ_{i=1}^{n} (r̂_i - r_i)^2, with r̂_i = Σ_{j=1}^{d} p_kj ŷ_ij

wherein fitness(p_k) is the fitness function, which measures the prediction error on the verification set, n is the number of samples in the verification set, (p_k1, p_k2, ..., p_kd) is the position vector of the particle, r̂_i is the trust score predicted by the selective neural network ensemble, r_i is the true trust score, and ŷ_ij is the trust score predicted by the j-th back propagation neural network learner;
step 204, using the fitness of each particle to update the personal best pbest_k and the global best gbest: if the fitness is better than that of the personal best particle, the position vector is designated the personal best; if the fitness is better than that of the global best particle, the current position vector is designated the global best;
step 205, having evaluated the fitness values of all particles, the velocity vector and position vector of each particle in the population are updated according to the following equations:
using v_k and p_k to denote the velocity and position of the k-th particle respectively, the particle velocity is adjusted dynamically during optimization according to the particle's own personal best record and the global best:

v_k(t+1) = λ v_k(t) + c_1 r_1k (pbest_k - p_k(t)) + c_2 r_2k (gbest - p_k(t))
p_k(t+1) = p_k(t) + v_k(t+1)
wherein λ is the inertia weight, which controls the influence of the previous generation's velocity on the current generation's velocity and whose value depends on the iteration number; the parameters c_1 and c_2 are learning factors, reflecting the influence of the personal best pbest_k and the global best gbest on the particle velocity; the parameters r_1k and r_2k are random numbers in the range [0, 1]; t is the current iteration number;
the inertia weight decreases linearly over the iterations:

λ = λ_max - (λ_max - λ_min) · t / t_max

wherein t_max is the maximum number of iterations, t is the current iteration number, and λ_max and λ_min are the maximum and minimum weights, set to 0.95 and 0.25 respectively;
in step 206, the iteration number is increased by 1, and when the termination condition is satisfied, the whole optimization process is ended.
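Steps 201 to 206 can be sketched as a compact PSO loop. This is a non-authoritative illustration: the function name `pso_optimize`, the toy fitness function, and the hyperparameter defaults (population size, learning factors, velocity limit) are assumptions, while the inertia-weight schedule and update equations follow the description above:

```python
import numpy as np

def pso_optimize(fitness, d, s=20, t_max=50, c1=2.0, c2=2.0,
                 lam_max=0.95, lam_min=0.25, v_max=0.5, seed=0):
    """Search a d-dimensional integration-weight vector minimizing `fitness`
    (the validation-set prediction error), following steps 201-206."""
    rng = np.random.default_rng(seed)
    p = rng.random((s, d))                      # step 201: random positions
    v = np.zeros((s, d))
    pbest = p.copy()
    pbest_fit = np.array([fitness(w) for w in p])
    gbest = pbest[pbest_fit.argmin()].copy()
    for t in range(t_max):
        lam = lam_max - (lam_max - lam_min) * t / t_max  # inertia weight
        r1, r2 = rng.random((s, d)), rng.random((s, d))
        v = lam * v + c1 * r1 * (pbest - p) + c2 * r2 * (gbest - p)
        v = np.clip(v, -v_max, v_max)           # limit speed to [-V_max, V_max]
        p = p + v
        fit = np.array([fitness(w) for w in p])
        better = fit < pbest_fit                # step 204: update pbest/gbest
        pbest[better], pbest_fit[better] = p[better], fit[better]
        gbest = pbest[pbest_fit.argmin()].copy()
    return gbest

# toy fitness whose optimum is the weight vector (0.6, 0.3, 0.1)
target = np.array([0.6, 0.3, 0.1])
w = pso_optimize(lambda x: float(np.sum((x - target) ** 2)), d=3)
```

In the patent's setting the fitness would be the verification-set prediction error of the ensemble built from the candidate weights.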
In the second stage, the detailed process of training the neural network using back propagation at step 202 is as follows:
step a, model determination: according to the input and output vectors, determine the number of input-layer units n_1, the number of hidden layers m, the number of hidden-layer units n_2 and the number of output-layer units n_3; taking the input layer as layer 0, the hidden layers as layers 1 to m and the output layer as layer m+1, the number of input-layer units is set to 13, the number of hidden layers to 2, the number of hidden-layer units to 14 and the number of output-layer units to 1;
the output vector of the n-th layer is denoted a^[n], and the i-th unit of the n-th layer is computed as:

z_i^[n] = w_i^[n] · a^[n-1] + b_i^[n], a_i^[n] = g(z_i^[n])

where g(z) is the activation function, w_i^[n] is the feature parameter vector of the i-th unit of the n-th layer, b_i^[n] is the bias value of the i-th unit of the n-th layer, and a^[n-1] is the output vector of layer n-1; the output of the output layer is the predicted trust score a^[m+1];
step b, model compilation: set the loss function; the loss function of the j-th sample is:

L(ŷ_j, y_j) = (ŷ_j - y_j)^2

and the cost function is:

J(w, b) = (1/sum) Σ_{j=1}^{sum} L(ŷ_j, y_j)

where sum is the number of samples in the training subset, ŷ_j is the predicted trust score of the j-th sample, y_j is the true trust score of the j-th sample, w is the feature parameter vector, and b is the bias value vector;
step c, model training: the weights are updated by back propagation so as to minimize the cost function value (J_min), repeating

{ w := w - α ∂J(w, b)/∂w; b := b - α ∂J(w, b)/∂b }

until convergence, where α is the learning rate.
A final selective neural network ensemble learning model is established using the selective integration weights optimized in the second stage; in the third stage, each request record in the test set is used as the input of the model, and the output of the model is the predicted trust score.
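The third-stage aggregation can be sketched as follows, assuming a normalized weighted sum of the base-learner outputs (the patent does not spell out the aggregation formula, so `ensemble_predict` and the toy numbers are illustrative):

```python
import numpy as np

def ensemble_predict(weights, base_predictions):
    """Aggregate the d base-learner outputs with the optimized integration
    weights, normalized to sum to 1 (the weighted-sum form is an assumption)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.asarray(base_predictions).T @ w   # one score per test request

# d = 3 base learners, 2 test requests
preds = [[0.8, 0.2],   # predictions of learner 1
         [0.6, 0.4],   # predictions of learner 2
         [0.7, 0.3]]   # predictions of learner 3
scores = ensemble_predict([0.5, 0.25, 0.25], preds)
```

Each entry of `scores` would be the predicted trust score for one request record in the test set.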
The method is suitable for the zero trust network architecture.
The above adopts selective ensemble learning, in which a back propagation neural network (BPNN) is used as the base classifier. d base learners are selectively integrated to predict trust scores for access request activities. Since different base learners have different learning capabilities, they are combined selectively by assigning them different weights so as to maximize the learning capability of the ensemble. These weights are obtained by a PSO search under the guidance of the verification set, i.e., the optimal aggregation weight vector is obtained with a particle swarm optimization algorithm (PSO).
Techniques not mentioned in the present invention follow the prior art.
The invention relates to a machine learning-based trust evaluation method for a zero-trust network, suited to the trust evaluation of a zero-trust network architecture. The method adopts selective ensemble learning, in which a back propagation neural network (BPNN) is used as the base classifier and particle swarm optimization (PSO) is adopted to obtain the optimal aggregation weight vector, thereby realizing the prediction of the trust score of the access subject with high robustness. Relevant attributes of the user and the device are selected as trust features; a fuzzy linear regression method can then establish, from the training data set, a fuzzy linear regression equation expressing the functional relation between the trust features and the trust score, with numerical data as input and fuzzy data as model output. The method solves the zero-knowledge and cold-start problems and achieves better accuracy.
Drawings
FIG. 1 is a zero trust authorization system in accordance with the present invention;
FIG. 2 is a schematic diagram of an optimization algorithm for learning an integrated neural network based on a particle swarm optimization algorithm according to the present invention;
FIG. 3 is a training process of the base learner of the present invention;
Detailed Description
For a better understanding of the present invention, the following embodiment is provided by way of illustration; the invention is not limited to this embodiment.
Fig. 1 is a zero trust authorization system in accordance with the present invention.
The access subject in zero trust is not a user or a device alone, but a user-device pair. When performing trust evaluation on the access subject, related information about the user and the device is first obtained from the trusted environment sensing system, processed, and stored in the data storage system; the trust engine then uses the data in the data storage system to calculate a trust score. Based on the trust score and the user role, the policy engine decides whether this access request can be allowed.
The trust characteristics required for the access subject trust evaluation in the present invention are as follows.
User: user identification, user location, user authentication, user enhanced authentication, number of user authentication failures, user activity, user request frequency, user override access.
Device: device identification, device type, antivirus protection component, high-risk vulnerability count, and operating system version.
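The features above (eight user features plus five device features) form a 13-dimensional input vector, matching the 13 input-layer units chosen later for the back propagation neural network. A hypothetical encoding of one user-device request; every field name and numeric coding here is an assumption for illustration:

```python
# Hypothetical encoding of one user-device access request into a
# trust-feature vector; all field names and numeric codings below are
# illustrative assumptions, not specified numerically by the patent.
request = {
    # user features (8)
    "user_id": 1042, "user_location": 2, "user_auth": 1,
    "user_enhanced_auth": 0, "auth_failures": 1, "user_activity": 0.7,
    "request_frequency": 12, "override_access": 0,
    # device features (5)
    "device_id": 77, "device_type": 1, "av_protection": 1,
    "high_risk_vulns": 0, "os_version": 3,
}
features = list(request.values())
assert len(features) == 13  # matches the 13 input-layer units set in step a
```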
In the first stage, according to the requirements of trust score prediction, the network structure and input/output format of the back propagation neural network are designed, and the request records are preprocessed with a data normalization technique, computed as:

q'_ij = (q_ij - q_min) / (q_max - q_min)

wherein q_max and q_min are respectively the maximum and minimum values of the j-th trust feature, 1 ≤ j ≤ m, q_ij is the j-th trust feature value of the i-th request, and q'_ij is the j-th trust feature value of the i-th access after normalization;
the request record of the i-th access subject is denoted:

exam_i = ((q_i1, q_i2, ..., q_im), y_i)

where m is the number of trust features, y_i is the trust score of the i-th request, and q_ij, 1 ≤ j ≤ m, is the j-th trust feature value of the i-th request;
D = {exam_1, exam_2, ..., exam_n} is the request record set, which is divided into three subsets: the training set, which is further divided into d training subsets by random sampling and is used to train each back propagation neural network; the verification set, which is used to guide the particle-swarm-based selective neural network integration model in searching for the optimal aggregation weight vector; and the test set, which is used to evaluate the performance of the trust score prediction model;
each normalized sample (q'_i1, q'_i2, ..., q'_im) is used as the input of a basic back propagation neural network (BPNN), whose output is the trust score of the corresponding access subject; the d normalized training subsets are used to train d back propagation neural networks respectively, particles in the particle swarm optimization algorithm are used to represent the integration weight vector, and the prediction results of the d back propagation neural networks are aggregated.
The integration of the prediction results of the d back propagation neural networks in the first stage is used as the input of the second stage, and the second stage optimizes the integration weights of the d back propagation neural networks by using a particle swarm optimization algorithm, as shown in fig. 2, and the second stage comprises the following steps:
step 201, mapping the weight of the current selective neural network integrated model to the position vectors of the particles in the particle swarm optimization algorithm, and randomly initializing the position vectors of s particles;
step 202, sampling d training subsets from a complete training set using a bootstrap strategy, for each training subset, training an artificial neural network using back propagation. Initializing the iteration times t of the particles to 0;
step 203, decoding the position vector of each particle p_k (k = 1, 2, ..., s) into the weights of the selective neural network ensemble learning model, and generating the selective neural network ensemble learning model by integrating the d basic back propagation neural networks trained on their respective training subsets;
a particle p_k = (p_k1, p_k2, ..., p_kd), 1 ≤ k ≤ s, in the particle swarm optimization algorithm represents the weights of the selective set of BPNNs, where d is the number of BPNNs and s is the population size. The prediction error on the verification set is calculated as the fitness of each particle, and the selective neural network integration model is optimized using the verification set:

fitness(p_k) = (1/n) Σ_{i=1}^{n} (r̂_i - r_i)^2, with r̂_i = Σ_{j=1}^{d} p_kj ŷ_ij

wherein fitness(p_k) is the fitness function, which measures the prediction error on the verification set, n is the number of samples in the verification set, (p_k1, p_k2, ..., p_kd) is the position vector of the particle, r̂_i is the trust score predicted by the selective neural network integration model, r_i is the true trust score, and ŷ_ij is the trust score predicted by the j-th back propagation neural network learner;
step 204, updating the personal best pbest_k and the global best gbest using the fitness of each particle: if the fitness is better than that of the personal best particle, the position vector is designated the personal best; if the fitness is better than that of the global best particle, the current position vector is designated the global best;
step 205, having evaluated the fitness values of all particles, the velocity vector and position vector of each particle in the population are updated according to the following equations:
using v_k and p_k to denote the velocity and position of the k-th particle respectively, the optimization process dynamically adjusts the velocity of each particle according to its own personal best record and the global best. In solving practical problems, if the particle velocity is too high the optimal position is easily missed, so the velocity is limited to the range [-V_max, V_max]: if v_k < -V_max, set v_k = -V_max; if v_k > V_max, set v_k = V_max, where v_k is the velocity of the k-th (1 ≤ k ≤ s) particle.
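The velocity-limiting rule above amounts to a component-wise clamp; a trivial sketch (the helper name `clamp_velocity` is illustrative):

```python
def clamp_velocity(v_k, v_max):
    """Velocity limiting as described: if v_k < -V_max set v_k = -V_max;
    if v_k > V_max set v_k = V_max (scalar, component-wise case)."""
    return max(-v_max, min(v_max, v_k))
```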
v_k(t+1) = λ v_k(t) + c_1 r_1k (pbest_k - p_k(t)) + c_2 r_2k (gbest - p_k(t))
p_k(t+1) = p_k(t) + v_k(t+1)
wherein λ is the inertia weight, which controls the influence of the previous generation's velocity on the current generation's velocity and whose value depends on the iteration number; the parameters c_1 and c_2 are learning factors, reflecting the influence of the personal best pbest_k and the global best gbest on the particle velocity; the parameters r_1k and r_2k are random numbers in the range [0, 1]; t is the current iteration number.
The inertia weight decreases linearly over the iterations:

λ = λ_max - (λ_max - λ_min) · t / t_max

wherein t_max is the maximum number of iterations, t is the current iteration number, and λ_max and λ_min are the maximum and minimum weights, set to 0.95 and 0.25 respectively.
In step 206, the iteration number is increased by 1, and when the termination condition is satisfied, the whole optimization process is ended.
As shown in fig. 3, in step 202, the steps of training the artificial neural network using back propagation are:
step a, model determination: according to the input and output vectors, determine the number of input-layer units n_1, the number of hidden layers m, the number of hidden-layer units n_2 and the number of output-layer units n_3; taking the input layer as layer 0, the hidden layers as layers 1 to m and the output layer as layer m+1, the number of input-layer units is set to 13, the number of hidden layers to 2, the number of hidden-layer units to 14 and the number of output-layer units to 1;
the output vector of the n-th layer is denoted a^[n], and the i-th unit of the n-th layer is computed as:

z_i^[n] = w_i^[n] · a^[n-1] + b_i^[n], a_i^[n] = g(z_i^[n])

where g(z) is the activation function, w_i^[n] is the feature parameter vector of the i-th unit of the n-th layer, b_i^[n] is the bias value of the i-th unit of the n-th layer, and a^[n-1] is the output vector of layer n-1; the output of the output layer is the predicted trust score a^[m+1];
step b, model compilation: set the loss function; the loss function of the j-th sample is:

L(ŷ_j, y_j) = (ŷ_j - y_j)^2

and the cost function is:

J(w, b) = (1/sum) Σ_{j=1}^{sum} L(ŷ_j, y_j)

where sum is the number of samples in the training subset, ŷ_j is the predicted trust score of the j-th sample, y_j is the true trust score of the j-th sample, w is the feature parameter vector, and b is the bias value vector;
step c, model training: the weights are updated by back propagation so as to minimize the cost function value (J_min), repeating

{ w := w - α ∂J(w, b)/∂w; b := b - α ∂J(w, b)/∂b }

until convergence, where α is the learning rate.
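Steps a to c can be sketched as a minimal back propagation trainer. This is an illustration under stated assumptions (one hidden layer instead of the two 14-unit layers specified above, sigmoid activation, a fixed epoch count in place of a convergence test), not the patent's implementation:

```python
import numpy as np

def train_bpnn(X, y, hidden=14, lr=0.1, epochs=3000, seed=0):
    """Minimal one-hidden-layer BPNN: forward pass a = g(w·a_prev + b) with
    sigmoid g, squared-error cost, repeated gradient-descent weight updates."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W1 = rng.normal(0, 0.5, (m, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    g = lambda z: 1.0 / (1.0 + np.exp(-z))      # sigmoid activation
    for _ in range(epochs):
        a1 = g(X @ W1 + b1)                     # hidden-layer output
        yhat = (a1 @ W2 + b2).ravel()           # linear output: trust score
        err = yhat - y                          # gradient of squared error
        dW2 = a1.T @ err[:, None] / n
        db2 = err.mean(keepdims=True)
        da1 = err[:, None] @ W2.T * a1 * (1 - a1)   # back propagation step
        dW1 = X.T @ da1 / n; db1 = da1.mean(axis=0)
        W2 -= lr * dW2; b2 -= lr * db2          # w := w - alpha * dJ/dw
        W1 -= lr * dW1; b1 -= lr * db1
    return lambda Xq: (g(Xq @ W1 + b1) @ W2 + b2).ravel()

# fit a toy mapping from 2 trust features to a trust score
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0.1, 0.5, 0.5, 0.9])
predict = train_bpnn(X, y)
```

In the patent's pipeline, d such networks (each trained on one bootstrap subset) would supply the base predictions to be aggregated in the third stage.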
In the third stage, a final selective neural network integration model is established using the selective integration weights optimized in the second stage; each request record in the test set is used as the input of the model, and the model output is the predicted trust score.
The machine learning-based zero-trust network trust evaluation method is suited to the trust evaluation of a zero-trust network architecture. The method adopts selective ensemble learning, in which a back propagation neural network (BPNN) is used as the base classifier and particle swarm optimization (PSO) is adopted to obtain the optimal aggregation weight vector, thereby predicting the trust score of the access subject with high robustness. Relevant attributes of the user and the device are selected as trust features; a fuzzy linear regression method can then establish, from the training data set, a fuzzy linear regression equation expressing the functional relation between the trust features and the trust score, with numerical data as input and fuzzy data as model output. The method solves the zero-knowledge and cold-start problems and achieves better accuracy.
The invention provides a machine-learning-based trust evaluation approach suitable for zero trust. There are many methods and ways to realize the technical scheme; the above is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the invention, and such improvements and modifications should also be regarded as within the protection scope of the invention. Components not explicitly described in this embodiment can be implemented using the prior art.

Claims (3)

1. A machine learning-based zero-trust network trust evaluation method, characterized by comprising the following steps:
the first stage comprises data preprocessing and structural design of a selective neural network integration model, wherein the integration weight of the neural network integration model is a randomly defined vector;
a second stage, optimizing the integration weight according to the search of the particle swarm optimization algorithm by using the neural network integration model designed in the first stage;
thirdly, constructing an optimized selective neural network integration model by using the integration weight optimized in the second stage, and predicting the trust score of the access subject by using the optimized selective neural network integration model;
in the first stage, according to the requirements of trust score prediction, the network structure and input/output format of the back propagation neural network are designed, and the request records are preprocessed with a data normalization technique, computed as:

q'_ij = (q_ij - q_min) / (q_max - q_min)

wherein q_max and q_min are respectively the maximum and minimum values of the j-th trust feature, 1 ≤ j ≤ m, q_ij is the j-th trust feature value of the i-th request, and q'_ij is the j-th trust feature value of the i-th access after normalization;
the request record of the i-th access subject is recorded as:

exam_i = ((q_i1, q_i2, …, q_im), y_i)

where m is the number of trust features, y_i is the trust score of the i-th request, q_im is the m-th trust feature value of the i-th request, and 1 ≤ j ≤ m;
D = {exam_1, exam_2, …, exam_n} is the request record set, which is divided into three subsets: the training set, which is further divided into d training subsets through random sampling and used for training each back propagation neural network; the verification set, which guides the selective neural network integration model based on the particle swarm optimization algorithm to search for the optimal integration weight vector; and the test set, which is used to evaluate the performance of the trust score prediction model;
each normalized sample (q′_i1, q′_i2, …, q′_im) serves as the input of a back propagation neural network, and the output is the trust score of the corresponding access subject; for the d normalized training subsets, d back propagation neural networks are trained respectively, particles in the particle swarm optimization algorithm are used to represent the integration weight vector, and the prediction results of the d back propagation neural networks are integrated;
the integrated prediction results of the d back propagation neural networks in the first stage serve as the input of the second stage; the second stage optimizes the integration weights of the d back propagation neural networks by a particle swarm optimization algorithm and comprises the following steps:
step 201, mapping the weights of the current selective neural network integration model to the position vectors of particles in the particle swarm optimization algorithm, and randomly initializing the position vectors of s particles;
step 202, sampling d training subsets from the complete training set using a bootstrap strategy, and, for each training subset, training an artificial neural network by back propagation, wherein the particle iteration count t is initialized to 0;
step 203, decoding the position vector of each particle p_k (k = 1, 2, …, s) into the weights of a selective neural network integration model, and generating the selective neural network integration model by integrating the d basic back propagation neural networks respectively trained on the training subsets;
a particle p_k = (p_k1, p_k2, …, p_kd) in the particle swarm optimization algorithm represents the weights of the selective back propagation neural network ensemble, wherein d is the number of back propagation neural networks, s is the population size, and 1 ≤ k ≤ s;
calculating the prediction error on the verification set as the fitness of each particle, the verification set being used to optimize the selective neural network integration model:

fitness(p_k) = (1/n) · Σ_{i=1..n} (r̂_i − r_i)², with r̂_i = Σ_{j=1..d} p_kj · r̂_ij

wherein fitness(p_k) is the fitness function, which measures the prediction error on the verification set, n is the number of samples in the verification set, (p_k1, p_k2, …, p_kd) is the position vector of the particle, r̂_i is the trust score predicted by the selective neural network integration model, r_i is the true trust score, and r̂_ij is the trust score predicted by the j-th back propagation neural network;
step 204, updating the personal best pbest_k and the global best gbest using the fitness of each particle: if a particle's fitness is better than that of its personal best, its position vector is designated as the personal best; if the fitness is better than that of the global best particle, the current position vector is designated as the global best;
step 205, having evaluated the fitness of all particles, updating the velocity vector and the position vector of each particle in the population according to the following equations, where v_k and p_k respectively denote the velocity and the position of the k-th particle; the optimization process dynamically adjusts the particle velocity based on the particle's own personal best record and the global best:
v_k(t+1) = λ·v_k(t) + c_1·r_1k·(pbest_k − p_k(t)) + c_2·r_2k·(gbest − p_k(t))

p_k(t+1) = p_k(t) + v_k(t+1)
wherein λ is the inertia weight, which controls the influence of the previous generation's velocity on the current generation's velocity, and whose value is related to the iteration count; the parameters c_1 and c_2 are learning factors, reflecting the influence of the personal best pbest_k and the global best gbest on the particle velocity; the parameters r_1k and r_2k are random numbers in the range [0, 1]; t is the current iteration count;
the inertia weight decreases linearly with the iterations:

λ = λ_max − (λ_max − λ_min) · t / t_max

wherein t_max is the maximum iteration count, t is the current iteration count, and λ_max and λ_min are the maximum and minimum weights, set to 0.95 and 0.25 respectively;
in step 206, the iteration number is increased by 1, and when the termination condition is satisfied, the whole optimization process is ended.
2. The assessment method according to claim 1, wherein: in step 202, the steps of training the artificial neural network by back propagation are:
step a, determining a model, namely determining the number n of input layer units according to the input and output vectors 1 Number of hidden layers m, number of hidden layer units n 2 And number of output layer units n 3 Taking an input layer as a 0 th layer, taking hidden layers as 1 to m layers, taking an output layer as m+1 layers, setting the number of input layer units as 13, setting the number of hidden layers as 2, setting the number of hidden layers as 14, and setting the number of output layer units as 1;
the output vector of the n-th layer is expressed as α^[n], and the i-th unit of the n-th layer is calculated as:

z_i^[n] = w_i^[n] · α^[n−1] + b_i^[n],  α_i^[n] = g(z_i^[n])

where g(z) is the activation function, w_i^[n] is the characteristic parameter vector of the i-th unit of the n-th layer, b_i^[n] is the bias value of the i-th unit of the n-th layer, and α^[n−1] is the output vector of layer n−1; the output of the output layer is the predicted trust score α^[m+1];
Step b, model compilation: setting the loss function, wherein the loss function of the j-th sample is:

L_j = (ŷ_j − y_j)²

and the cost function is:

J(w, b) = (1/sum) · Σ_{j=1..sum} L_j

where sum is the number of samples in the training subset, ŷ_j is the predicted trust score of the j-th sample, y_j is the true trust score of the j-th sample, w is the characteristic parameter vector, and b is the bias value vector;
step c, model training, updating the weights through back propagation, minimizing the cost function value through the following method,
repetition {
Until convergence.
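The 13-14-14-1 back propagation network of steps a to c can be sketched as follows. The sigmoid activation, the learning rate eta, and per-sample (stochastic) gradient updates are assumptions not fixed by the claim; only the layer sizes and the squared-error loss follow the text.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class BPNetwork:
    """Back propagation network with the layout of claim 2:
    13 input units, 2 hidden layers of 14 units, 1 output unit."""

    def __init__(self, sizes=(13, 14, 14, 1), seed=0):
        rng = np.random.default_rng(seed)
        # W[n] maps the layer-n activations to the layer-(n+1) units
        self.W = [rng.normal(0.0, 0.5, (m, n))
                  for n, m in zip(sizes[:-1], sizes[1:])]
        self.b = [np.zeros(m) for m in sizes[1:]]

    def forward(self, x):
        # alpha[n] = g(W[n-1] @ alpha[n-1] + b[n-1]) for each layer
        a = [x]
        for W, b in zip(self.W, self.b):
            a.append(sigmoid(W @ a[-1] + b))
        return a

    def train(self, X, y, eta=0.5, epochs=500):
        # Repeat { w := w - eta * dJ/dw ; b := b - eta * dJ/db }
        for _ in range(epochs):
            for x, t in zip(X, y):
                a = self.forward(x)
                # squared-loss gradient at the output, propagated back
                delta = (a[-1] - t) * a[-1] * (1.0 - a[-1])
                for n in range(len(self.W) - 1, -1, -1):
                    gW = np.outer(delta, a[n])
                    gb = delta
                    if n > 0:  # delta for the previous layer, old weights
                        delta = (self.W[n].T @ delta) * a[n] * (1.0 - a[n])
                    self.W[n] -= eta * gW
                    self.b[n] -= eta * gb

    def predict(self, x):
        return self.forward(x)[-1][0]
```

Training on a small set of normalized request records should reduce the mean squared prediction error over the epochs.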
3. The assessment method according to claim 2, wherein: in the third stage, a final selective neural network integration model is established with the integration weights optimized in the second stage, each request record in the test set is taken as the input of the model, and the obtained model output is the predicted trust score.
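The third-stage prediction reduces to combining the d base-network outputs with the optimized integration weights; a minimal sketch (normalizing the weights to sum to 1 is an assumption, not stated in the claim):

```python
import numpy as np

def ensemble_predict(net_scores, weights):
    """Weighted combination of the trust scores predicted by the d
    base back propagation networks for a single request record."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # assumed: integration weights normalized to sum to 1
    return float(np.dot(np.asarray(net_scores, dtype=float), w))
```

For example, ensemble_predict([0.8, 0.6, 0.7], [0.5, 0.25, 0.25]) returns 0.725.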
CN202310294329.2A 2023-03-22 2023-03-22 Zero-trust network trust evaluation method based on machine learning Active CN116319025B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310294329.2A CN116319025B (en) 2023-03-22 2023-03-22 Zero-trust network trust evaluation method based on machine learning

Publications (2)

Publication Number Publication Date
CN116319025A CN116319025A (en) 2023-06-23
CN116319025B true CN116319025B (en) 2024-01-26

Family

ID=86825495

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117254981B (en) * 2023-11-17 2024-02-02 长扬科技(北京)股份有限公司 Industrial control network security situation prediction method and device

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103294601A (en) * 2013-07-03 2013-09-11 中国石油大学(华东) Software reliability forecasting method based on selective dynamic weight neural network integration
CN112016669A (en) * 2019-05-31 2020-12-01 辉达公司 Training neural networks using selective weight updates
CN114465807A (en) * 2022-02-24 2022-05-10 重庆邮电大学 Zero-trust API gateway dynamic trust evaluation and access control method and system based on machine learning
CN115131131A (en) * 2022-07-06 2022-09-30 浙江财经大学 Credit risk assessment method for unbalanced data set multi-stage integration model

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US20220345484A1 (en) * 2021-04-21 2022-10-27 ANDRO Computation Solutions, LLC Zero trust architecture for networks employing machine learning engines
US20230044102A1 (en) * 2021-08-02 2023-02-09 Noblis, Inc. Ensemble machine learning models incorporating a model trust factor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant