CN101141248A - Neural network weight synchronization based lightweight key negotiation method - Google Patents


Info

Publication number
CN101141248A
Authority
CN
China
Prior art keywords
weight
isnn
neural network
synchronization
hash
Prior art date
Legal status
Granted
Application number
CNA2007101562203A
Other languages
Chinese (zh)
Other versions
CN100566241C (en)
Inventor
陈铁明 (Chen Tieming)
黄鸿岛 (Huang Hongdao)
蔡家楣 (Cai Jiamei)
江颉 (Jiang Jie)
陈波 (Chen Bo)
王小号 (Wang Xiaohao)
张旭东 (Zhang Xudong)
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CNB2007101562203A
Publication of CN101141248A
Application granted granted Critical
Publication of CN100566241C
Legal status: Expired - Fee Related

Landscapes

  • Computer And Data Communications (AREA)

Abstract

A lightweight key agreement method based on neural network weight synchronization. Two neural networks with identical inputs learn from each other interactively: each continuously updates its weight vector according to whether the outputs agree, until the weight vectors of the two networks synchronize. Building on a conventional random number generator (LFSR) and the SHA-1 hash algorithm, the perceptron neural network is discretized and extended to a multilayer model while the weight-synchronization property is preserved, so the synchronized weights can be mapped to a session key, enabling both key agreement and key update. The invention thus provides a lightweight key agreement method, based on neural network weight synchronization, that has a low computation load and low demand on computing resources and is suited to embedded environments.

Description

Lightweight key negotiation method based on neural network weight synchronization
Technical Field
The invention relates to key agreement methods, and in particular to a lightweight key agreement method.
Background
Communication encryption is a core technology for guaranteeing network security, and session key agreement between the two communicating parties is a key technology for realizing communication encryption. At present, key agreement methods between two communicating parties fall into two main categories. In the first, one party generates the session key unilaterally and distributes it securely to the other party, completing the key agreement. In the second, both parties compute the key jointly: each independently derives the same information as the negotiated key, and no third party can compute the final key.
Currently the second approach dominates in industry and is implemented with conventional cryptography, i.e. the Diffie-Hellman (DH) public-key algorithm. DH is a basic cryptographic algorithm based on the discrete logarithm problem; the SSL standard security protocol uses DH to negotiate session keys. With the development of elliptic-curve public-key technology, the DH method has been extended to the DH problem on elliptic curves, the DH problem of group key negotiation, and so on. While simple to implement and highly secure, the method incurs a large computation cost, and it is especially unsuitable for embedded environments with limited computing resources when keys must be negotiated or updated frequently. With the rapid growth of embedded network applications, key agreement protocols based on conventional cryptography therefore cannot meet the application requirements, and the search for novel, secure, and efficient lightweight key agreement methods has become a hot spot of current research.
Disclosure of Invention
To overcome the defects of existing key agreement methods, namely high computation cost, high demand on computing resources, and unsuitability for embedded environments, the invention provides a lightweight key agreement method based on neural network weight synchronization that has a low computation load, low resource requirements, and is suitable for embedded environments.
The technical scheme adopted by the invention for solving the technical problems is as follows:
a lightweight key negotiation method based on neural network weight synchronization comprises the following steps:
(1) The network parameter values of the input-synchronized neural network (ISNN) are fixed: an input vector X, the spatial dimension N of the weight vector W, the number of perceptrons K, and a positive integer L, where each weight vector element w_ij takes integer values in the interval [-L, +L]. The probability distribution of the number of learning steps required for weight synchronization is determined by simulation, and the number of steps needed to synchronize with probability P = 95% is recorded as S_P;
(2) ISNN networks A and B initialize the parameters N, K, and L and generate identical input vectors;
(3) ISNN networks A and B randomly generate weight vectors and execute S_P steps of interactive learning; the updated weight vectors are W_A(S_P) and W_B(S_P);
(4) Networks A and B exchange the hash values of their weight vectors, Hash(W_A(S_P)) and Hash(W_B(S_P));
(5) If Hash(W_A(S_P)) = Hash(W_B(S_P)), go to (7);
(6) If Hash(W_A(S_P)) ≠ Hash(W_B(S_P)), go to (3);
(7) Weight synchronization between ISNN A and ISNN B is confirmed, and each side performs the final weight update W_A/B(S_P+1) = W_A/B(S_P) + X_A/B(S_P+1), where W_A/B(S_P+1) is the confirmed synchronization weight;
(8) The synchronized weight vector is uniformly mapped to serve as the session key negotiated by the two parties.
As a preferred solution, the network parameter values further include a threshold T, and the activation function of the input-synchronized neural network (ISNN) is as follows:
σ_i = Sign( Σ_{j=1..N} w_ij · x_ij − T ),  i = 1, …, K
the technical conception of the invention is as follows: two perceptron neural networks with completely same input (note that the activation function, the updating formula and the like of the perceptron are well-known technologies) can learn each other through output, namely, continuously updating respective weight vectors through whether the output is equal, and finally, the weight synchronization of the two neural networks can be realized. Discretizing the perceptron neural network and expanding the discretized perceptron neural network to a multilayer model (specific description will be given below), keeping the attribute of weight synchronization unchanged, and mapping the synchronized weight to a session key, namely applying to key agreement and updating.
The basic principle of the neural network weight synchronization model is as follows. The core of the model is a neural network structure in which the output bits of several perceptrons are combined by multiplying their signs; mutual learning requires the inputs of the two neural networks to be synchronized, so the network is called an Input-Synchronized Neural Network (ISNN).
Referring to FIG. 1, N denotes the spatial dimension of the input vector X and the weight vector W, K denotes the number of perceptrons, and Sign denotes a simple activation function (taking the value +1 or -1). Each input vector element x_ij and each single-perceptron output σ_i takes the value +1 or -1; each weight vector element w_ij takes integer values in the interval [-L, +L], where L is a positive integer; and τ is the final output value of the ISNN (also +1 or -1). Note: Σ denotes summation and Π denotes product.
The activation function is as follows:
σ_i = Sign( Σ_{j=1..N} w_ij · x_ij ),  τ = Π_{i=1..K} σ_i
where Sign(h) = +1 for h ≥ 0 and −1 for h < 0.
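For illustration, a minimal Python sketch of this forward computation follows (the function names are ours, and the Sign(0) = +1 convention is an assumption; the patent fixes only the ±1 output values):

```python
# Minimal sketch of the ISNN forward pass. Assumption: Sign(0) = +1;
# the patent only states that outputs take the values +1 or -1.

def perceptron_output(w, x):
    """sigma_i = Sign(sum_j w_ij * x_ij) for a single perceptron."""
    h = sum(wj * xj for wj, xj in zip(w, x))
    return 1 if h >= 0 else -1

def isnn_output(W, X):
    """tau = product of the K perceptron outputs sigma_1 ... sigma_K."""
    tau = 1
    for w, x in zip(W, X):  # W and X each hold K rows of N integers
        tau *= perceptron_output(w, x)
    return tau
```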
Below we describe the process by which two ISNN neural network models learn from each other based on their outputs. When the outputs of the two models A and B are equal (τ_A = τ_B), each model P (P = A or B) selects every perceptron i (i = 1, 2, …, K) whose output equals the common output, i.e. σ_Pi = τ_A, and updates its weights as follows:
W_Pi(t+1) = W_Pi(t) − X_Pi · σ_P
where each weight vector element is kept within the interval [−L, L], i.e.:
w_ij = L if w_ij > L;  w_ij = −L if w_ij < −L;  otherwise w_ij stays as computed.
Perceptrons with σ_Pi ≠ τ_A do not update their weights; their weights remain unchanged, and the next step of interactive learning proceeds. Before each learning step begins, the two neural networks update their input vectors simultaneously, always keeping them identical (note: this is the defining structural feature of the input-synchronized neural network, ISNN).
Theoretical analysis and experiments show that two ISNNs achieve weight synchronization after a finite number of interactive learning steps, even though their initial weights differ; the basic flow is shown in FIG. 2. Two input-synchronized neural networks with identical structure are given identical input vectors; each continuously updates its weight vector under the update rule according to the output values of the two parties, and after finitely many interactive learning steps the weight vectors become fully synchronized, i.e., W_A = W_B. The synchronized weight vector is then uniformly mapped to serve as the session key negotiated by the two parties.
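The model and update rule above can be sketched in Python as follows (a non-normative illustration: the class name, the random initialization details, and the Sign(0) = +1 convention are our assumptions):

```python
import random

class ISNN:
    """Input-synchronized neural network: K perceptrons with N inputs each
    and integer weights in [-L, L], as described above (sketch)."""

    def __init__(self, K, N, L, rng=None):
        self.K, self.N, self.L = K, N, L
        r = rng or random
        self.W = [[r.randint(-L, L) for _ in range(N)] for _ in range(K)]

    def outputs(self, X):
        """Per-perceptron outputs sigma_1..sigma_K and the final output tau."""
        sigma = [1 if sum(w * x for w, x in zip(wi, xi)) >= 0 else -1
                 for wi, xi in zip(self.W, X)]
        tau = 1
        for s in sigma:
            tau *= s
        return sigma, tau

    def update(self, X, sigma, tau):
        """W_Pi(t+1) = W_Pi(t) - X_Pi * sigma_P, applied only to perceptrons
        whose output equals tau; each element is clipped back into [-L, L]."""
        for i in range(self.K):
            if sigma[i] == tau:
                for j in range(self.N):
                    w = self.W[i][j] - X[i][j] * sigma[i]
                    self.W[i][j] = max(-self.L, min(self.L, w))

def learning_step(A, B, X):
    """One round of interactive learning on a shared input X: both sides
    update only when their final outputs agree (tau_A == tau_B)."""
    sA, tA = A.outputs(X)
    sB, tB = B.outputs(X)
    if tA == tB:
        A.update(X, sA, tA)
        B.update(X, sB, tB)
    return tA == tB
```

Fed the same input stream on both sides, repeated calls to learning_step drive W_A and W_B toward full agreement, matching the flow of FIG. 2.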
The invention has the following beneficial effects:
1. The method involves no big-number arithmetic; the program performs only simple additions and XOR operations, so it executes quickly. Adding the threshold T to the activation function improves the speed markedly further.
2. A software implementation of the method makes low demands on the hardware, making it suitable for a wide range of embedded devices.
3. Weight synchronization is judged by exchanging hash values, so a third party cannot obtain the weights. Simulation shows that even a third-party neural network with exactly the same structural parameters, able to intercept every output exchanged by the negotiating parties, needs far more learning steps to reach the same weights by performing the same learning updates than the negotiating parties themselves; the key negotiation is therefore secure.
4. Classical key agreement methods such as DH do not support mutual identity authentication and are vulnerable to man-in-the-middle attacks, among other problems. The present method is based on weight synchronization: if the inputs of the neural networks differ, the weights cannot synchronize. During key negotiation, the pre-shared secret is mapped to the identical input sequence, and no third party lacking the shared secret can achieve weight synchronization through mutual learning, so the method has a built-in identity authentication function.
5. In the prior art, a key update usually costs as much as a fresh key negotiation, i.e., each update runs an algorithm such as DH once more. Here, once the two input-synchronized neural networks have synchronized their weights, their outputs and weights remain fully consistent, so each network can update its weights from its own outputs without any interaction, and the updated weights can be mapped to a new negotiated key, achieving fast offline key update (see the sketch after this list).
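Benefit 5 can be illustrated with a short sketch that reuses the hypothetical ISNN class above; the SHA-1 step anticipates the key mapping described in the embodiment below and is our assumption here:

```python
import hashlib

def offline_rekey(net, X):
    """Offline key update after synchronization: each party applies the same
    update locally, with no interaction. Because weights and inputs are
    identical on both sides, outputs agree by construction and the weights
    stay synchronized; the new weights map to a fresh session key."""
    sigma, tau = net.outputs(X)        # own outputs only; no exchange needed
    net.update(X, sigma, tau)
    flat = ",".join(str(w) for row in net.W for w in row)
    return hashlib.sha1(flat.encode()).digest()  # 160-bit key
```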
Drawings
Fig. 1 is a basic configuration diagram of an ISNN network.
Fig. 2 is a flowchart of weight synchronization of the ISNN interactive learning model.
Fig. 3 is a diagram showing the relationship between the learning step number and the frequency.
Fig. 4 is a flowchart of key agreement based on neural network weight synchronization.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to FIGS. 1 to 4, a lightweight key agreement method based on neural network weight synchronization includes the following steps:
(1) The network parameter values of the input-synchronized neural network (ISNN) are fixed: an input vector X, the spatial dimension N of the weight vector W, the number of perceptrons K, and a positive integer L, where each weight vector element w_ij takes integer values in the interval [-L, +L]. The probability distribution of the number of learning steps required for weight synchronization is determined by simulation, and the number of steps needed to synchronize with probability P = 95% is recorded as S_P;
(2) ISNN networks A and B initialize the parameters N, K, and L and generate identical input vectors;
(3) ISNN networks A and B randomly generate weight vectors and execute S_P steps of interactive learning; the updated weight vectors are W_A(S_P) and W_B(S_P);
(4) Networks A and B exchange the hash values of their weight vectors, Hash(W_A(S_P)) and Hash(W_B(S_P));
(5) If Hash(W_A(S_P)) = Hash(W_B(S_P)), go to (7);
(6) If Hash(W_A(S_P)) ≠ Hash(W_B(S_P)), go to (3);
(7) Weight synchronization between ISNN A and ISNN B is confirmed, and each side performs the final weight update W_A/B(S_P+1) = W_A/B(S_P) + X_A/B(S_P+1), where W_A/B(S_P+1) is the confirmed synchronization weight;
(8) The synchronized weight vector is uniformly mapped to serve as the session key negotiated by the two parties.
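Assembled end to end, steps (2) through (8) take roughly the following shape (a sketch building on the hypothetical ISNN class above; both parties are simulated in one process for illustration, and the weight encoding fed to the hash and the clipping of the final update are our assumptions):

```python
import hashlib

def hash_weights(W):
    """Hash(W): SHA-1 over a canonical text encoding of the weight vector."""
    flat = ",".join(str(w) for row in W for w in row)
    return hashlib.sha1(flat.encode()).digest()

def negotiate(K, N, L, S_P, next_input):
    """next_input() must yield the same X on both sides (e.g. from an LFSR
    seeded with the pre-shared secret). Returns the synchronized weights."""
    while True:
        A, B = ISNN(K, N, L), ISNN(K, N, L)   # step (3): fresh random weights
        for _ in range(S_P):                  # S_P interactive learning steps
            learning_step(A, B, next_input())
        # steps (4)-(6): exchange hashes; retry from step (3) if they differ
        if hash_weights(A.W) == hash_weights(B.W):
            break
    # step (7): final update W(S_P+1) = W(S_P) + X(S_P+1) on one more shared
    # input (clipping to [-L, L] is kept here as an assumption)
    X = next_input()
    for net in (A, B):
        for i in range(K):
            for j in range(N):
                net.W[i][j] = max(-L, min(L, net.W[i][j] + X[i][j]))
    return A.W
```

The same hash_weights function can then serve as the step-(8) mapping from the returned weights to a 160-bit session key.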
In this embodiment, for given values of the neural network parameters N, K, and L, the probability distribution of the number of learning steps required for weight synchronization is determined by simulation, and the number of steps needed to synchronize with probability P = 95% is taken as S_P. The weight determination process is as follows:
(1) From the ISNN network parameter values, determine by simulation the number of learning steps S_P required to achieve weight synchronization with 95% probability. (Note: for the commonly used combinations of the parameters N, K, and L (and the threshold parameter T mentioned below), the S_P values under different parameter combinations can be recorded in advance by simulation. The number of learning steps giving roughly 95% probability can be computed, for example, with a bin-frequency histogram: as shown in FIG. 3, for the parameter combination N = 100, L = 3, K = 3, weight synchronization is achieved with 95% probability after about 400 learning steps. S_P is therefore only a probabilistic value; whether synchronization has occurred cannot be judged reliably from S_P alone, and the following steps determine weight synchronization exactly.)
(2) ISNN networks A and B initialize the parameters N, K, and L and generate identical input vectors;
(3) A and B randomly generate weight vectors and execute S_P steps of interactive learning; the updated weight vectors are W_A(S_P) and W_B(S_P);
(4) A and B exchange the hash values of their weight vectors, Hash(W_A(S_P)) and Hash(W_B(S_P));
(5) If Hash(W_A(S_P)) = Hash(W_B(S_P)), go to (7);
(6) If Hash(W_A(S_P)) ≠ Hash(W_B(S_P)), go to (3);
(7) Synchronization of the A and B weights is confirmed, and each side performs the final weight update W_A/B(S_P+1) = W_A/B(S_P) + X_A/B(S_P+1), where W_A/B(S_P+1) is the finally confirmed synchronization weight;
(8) The synchronized weight vector is uniformly mapped to serve as the session key negotiated by the two parties.
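The simulation of step (1) could be realized along these lines (a sketch reusing the hypothetical ISNN class above; a sorted-quantile estimate stands in for the bin-frequency histogram of FIG. 3):

```python
import random

def estimate_S_P(K, N, L, trials=200, p=0.95, max_steps=10000):
    """Run many independent A/B pairs, record the number of interactive
    learning steps each needs to synchronize fully, and return the
    p-quantile of the recorded step counts as S_P.
    (Sketch: assumes all trials synchronize within max_steps.)"""
    steps_needed = []
    for _ in range(trials):
        A, B = ISNN(K, N, L), ISNN(K, N, L)
        for step in range(1, max_steps + 1):
            X = [[random.choice((-1, 1)) for _ in range(N)]
                 for _ in range(K)]
            learning_step(A, B, X)
            if A.W == B.W:
                steps_needed.append(step)
                break
    steps_needed.sort()
    return steps_needed[int(p * len(steps_needed)) - 1]
```

For N = 100, K = 3, L = 3 this should land near the roughly 400 steps reported above.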
Based on ISNN weight synchronization and the above determination method, combined with a conventional random number generator, the LFSR (a well-known technique in cryptography), and the hash algorithm SHA-1 (likewise well known in cryptography), we obtain the concrete key agreement framework shown in FIG. 4. It is assumed here that the two ISNN entities already possess pre-shared secret information.
Following the illustrated process, an LFSR random number generator, seeded with the secret information pre-shared by the two parties, generates the same binary random sequence on both sides; each 0 is converted to -1 to obtain input vectors usable by the ISNN neural network. In addition, the SHA-1 hash algorithm (the current industry-standard hash algorithm, which produces a fixed 160-bit hash value from any plaintext) maps the synchronization weight finally confirmed in step (7) of the weight determination process to a 160-bit binary string, which serves as the finally negotiated session key.
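A sketch of these two stages follows (the 16-bit Fibonacci LFSR with taps 16, 14, 13, 11 and the weight encoding are illustrative assumptions; the patent fixes only an LFSR seeded with the pre-shared secret and an SHA-1 mapping to 160 bits):

```python
import hashlib

def lfsr_bits(seed, nbits, taps=(16, 14, 13, 11)):
    """Fibonacci LFSR over a 16-bit state (seed must be nonzero). Both
    parties seed it with the pre-shared secret, so they produce the same
    bit stream."""
    state = seed & 0xFFFF
    for _ in range(nbits):
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & 0xFFFF
        yield state & 1

def shared_input(seed, K, N):
    """Map the binary stream to an ISNN input vector: 0 -> -1, 1 -> +1."""
    vals = [2 * b - 1 for b in lfsr_bits(seed, K * N)]
    return [vals[i * N:(i + 1) * N] for i in range(K)]

def session_key(W):
    """Map the confirmed synchronization weights to a 160-bit session key."""
    flat = ",".join(str(w) for row in W for w in row)
    return hashlib.sha1(flat.encode()).digest()
```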
Regarding the efficiency of interactive learning, simulation shows that adding a threshold T to the activation function greatly increases the speed of weight synchronization, i.e.:
σ_i = Sign( Σ_{j=1..N} w_ij · x_ij − T ),  i = 1, …, K
according to simulation experiments, when N =100, K =3 and L =3, the weight synchronization takes about 400 steps; when T =20 was increased, the number of experimental steps in the same case was reduced to about 120 steps.
Note: different values of T can be chosen for different parameters N, K, and L. In practical applications, one can take N < 200, K = 3, L = 3, and T = 20.
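A sketch of the thresholded activation (since the original formula survives only as an image, reading the threshold T as shifting the comparison point of Sign from 0 to T is our assumption):

```python
def perceptron_output_T(w, x, T=20):
    """Thresholded activation: sigma = Sign(sum_j w_j * x_j - T).
    Assumption: T shifts the Sign comparison point; the patent's
    formula is rendered only as an image."""
    h = sum(wj * xj for wj, xj in zip(w, x))
    return 1 if h >= T else -1
```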

Claims (2)

1. A lightweight key negotiation method based on neural network weight synchronization, characterized in that the key agreement method comprises the following steps:
(1) The network parameter values of the input-synchronized neural network (ISNN) are fixed: an input vector X, the spatial dimension N of the weight vector W, the number of perceptrons K, and a positive integer L, where each weight vector element w_ij takes integer values in the interval [-L, +L]. The probability distribution of the number of learning steps required for weight synchronization is determined by simulation, and the number of steps needed to synchronize with probability P = 95% is recorded as S_P;
(2) ISNN networks A and B initialize the parameters N, K, and L and generate identical input vectors;
(3) ISNN networks A and B randomly generate weight vectors and execute S_P steps of interactive learning; the updated weight vectors are W_A(S_P) and W_B(S_P);
(4) Networks A and B exchange the hash values of their weight vectors, Hash(W_A(S_P)) and Hash(W_B(S_P));
(5) If Hash(W_A(S_P)) = Hash(W_B(S_P)), go to (7);
(6) If Hash(W_A(S_P)) ≠ Hash(W_B(S_P)), go to (3);
(7) Weight synchronization between ISNN A and ISNN B is confirmed, and each side performs the final weight update W_A/B(S_P+1) = W_A/B(S_P) + X_A/B(S_P+1), where W_A/B(S_P+1) is the confirmed synchronization weight;
(8) The synchronized weight vector is uniformly mapped to serve as the session key negotiated by the two parties.
2. The lightweight key agreement method based on neural network weight synchronization according to claim 1, characterized in that the network parameter values further include a threshold T, and the activation function of the input-synchronized neural network (ISNN) is as follows:
σ_i = Sign( Σ_{j=1..N} w_ij · x_ij − T ),  i = 1, …, K
CNB2007101562203A 2007-09-30 2007-09-30 Lightweight key negotiation method based on neural network weight synchronization Expired - Fee Related CN100566241C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2007101562203A CN100566241C (en) 2007-09-30 2007-09-30 Lightweight key negotiation method based on neural network weight synchronization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2007101562203A CN100566241C (en) 2007-09-30 2007-09-30 Lightweight key negotiation method based on neural network weight synchronization

Publications (2)

Publication Number Publication Date
CN101141248A (en) 2008-03-12
CN100566241C CN100566241C (en) 2009-12-02

Family

ID=39193017

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2007101562203A Expired - Fee Related CN100566241C (en) Lightweight key negotiation method based on neural network weight synchronization

Country Status (1)

Country Link
CN (1) CN100566241C (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101459516B (en) * 2009-02-20 2010-12-08 浙江工业大学 Dynamic password safe login method
CN105760932A (en) * 2016-02-17 2016-07-13 北京物思创想科技有限公司 Data exchange method, data exchange device and calculating device
CN112543097A (en) * 2020-09-23 2021-03-23 西南大学 Neural network key negotiation method based on error prediction
CN112751671A (en) * 2020-12-30 2021-05-04 华南农业大学 Novel key exchange method based on tree parity machine

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090240949A9 (en) * 2004-04-23 2009-09-24 Kitchens Fred L Identity authentication based on keystroke latencies using a genetic adaptive neural network
US7620819B2 (en) * 2004-10-04 2009-11-17 The Penn State Research Foundation System and method for classifying regions of keystroke density with a neural network
CN1881874A (en) * 2006-04-26 2006-12-20 集美大学 Public key cipher encrypting and decrypting method based on nerval network chaotic attractor

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101459516B (en) * 2009-02-20 2010-12-08 浙江工业大学 Dynamic password safe login method
CN105760932A (en) * 2016-02-17 2016-07-13 北京物思创想科技有限公司 Data exchange method, data exchange device and calculating device
CN112543097A (en) * 2020-09-23 2021-03-23 西南大学 Neural network key negotiation method based on error prediction
CN112751671A (en) * 2020-12-30 2021-05-04 华南农业大学 Novel key exchange method based on tree parity machine

Also Published As

Publication number Publication date
CN100566241C (en) 2009-12-02

Similar Documents

Publication Publication Date Title
Xiong et al. Toward lightweight, privacy-preserving cooperative object classification for connected autonomous vehicles
Liu et al. An encryption scheme based on synchronization of two-layered complex dynamical networks
CN108111295B (en) Homomorphic encryption method based on analog-to-analog operation
US11599832B2 (en) Systems, circuits and computer program products providing a framework for secured collaborative training using hyper-dimensional vector based data encoding/decoding and related methods
Agrawal et al. Function projective synchronization between four dimensional chaotic systems with uncertain parameters using modified adaptive control method
CN104270247A (en) Efficient generic Hash function authentication scheme suitable for quantum cryptography system
WO2015103932A1 (en) Hypersphere-based multivariable public key signature/verification system and method
CN109617671B (en) Encryption and decryption methods, encryption and decryption devices, expansion methods, encryption and decryption systems and terminal
CN102263636A (en) Stream cipher key control method for fusing neural network with chaotic mappings
Kalaria et al. A Secure Mutual authentication approach to fog computing environment
Iqbal et al. A provable and secure key exchange protocol based on the elliptical curve diffe–hellman for wsn
Mall et al. A lightweight secure communication protocol for IoT devices using physically unclonable function
US20220166614A1 (en) System and method to optimize generation of coprime numbers in cryptographic applications
Li et al. A new image encryption algorithm based on optimized Lorenz chaotic system
JP7312293B2 (en) Digital signature method, signature information verification method, related device and electronic device
CN101141248A (en) Neural network weight synchronization based lightweight key negotiation method
CN113141247A (en) Homomorphic encryption method, device and system and readable storage medium
US11101981B2 (en) Generating a pseudorandom number based on a portion of shares used in a cryptographic operation
Ma et al. Attribute-based blind signature scheme based on elliptic curve cryptography
Luo et al. RUAP: Random rearrangement block matrix-based ultra-lightweight RFID authentication protocol for end-edge-cloud collaborative environment
CN105049206A (en) Method employing SM2 elliptical curve algorithm to achieve encryption in OpenSSL
CN107947944B (en) Incremental signature method based on lattice
CN113761570B (en) Data interaction method for privacy intersection
Shukla et al. Secure communication and image encryption scheme based on synchronisation of fractional order chaotic systems using backstepping
Alain et al. A secure communication scheme using generalized modified projective synchronization of coupled Colpitts oscillators

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20091202

Termination date: 20130930