AU2021104458A4 - a high-consistency physical key generation method based on neural network - Google Patents


Info

Publication number
AU2021104458A4
Authority
AU
Australia
Prior art keywords
neural network
source node
node
destination node
physical key
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2021104458A
Inventor
Tao Dong
Sicheng Wang
Zhuoxian ZHANG
Current Assignee
Southwest University
Original Assignee
Southwest University
Priority date
Filing date
Publication date
Application filed by Southwest University filed Critical Southwest University
Priority to AU2021104458A priority Critical patent/AU2021104458A4/en
Application granted granted Critical
Publication of AU2021104458A4 publication Critical patent/AU2021104458A4/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W12/00Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/04Key management, e.g. using generic bootstrapping architecture [GBA]
    • H04W12/041Key generation or derivation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/10Protecting distributed programs or content, e.g. vending or licensing of copyrighted material ; Digital rights management [DRM]
    • G06F21/107License processing; Key processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/08Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L9/0861Generation of secret information including derivation or calculation of cryptographic keys or passwords


Abstract

The invention discloses a high-consistency physical key generation method based on a neural network. The generation method comprises the following steps. Step 10): In a wireless physical key generation model, there is a source node S, a destination node D and an eavesdropping node E, and all nodes work in TDD mode. At time t, the channel coefficient between the source node S and the destination node D is denoted as hSD(t), the channel coefficient between the source node S and the eavesdropping node E is denoted as hSE(t), and the channel coefficient between the destination node D and the eavesdropping node E is denoted as hDE(t). Step 20): Obtain training samples. Step 30): Establish the neural network model. Step 40): Train parameters. Step 50): Generate the key. Step 60): Perform a consistency check. The method uses the time correlation between the detection results of the source node and the destination node, and obtains the parameters of a neural network capable of effective prediction by training on the samples. The trained neural network is then used to generate the physical key and thereby improve its consistency.

Description

1. Technical Field
The present invention relates to the field of communications, in particular to a high-consistency
physical key generation method based on neural network.
2. Background
Seeking a new security technology to make up for the shortcomings of the existing wireless
encryption mechanism and achieving higher security guarantee is a subject of great research value.
Physical layer security opens up a new way to solve this problem. Its principle is based on information
theory, and it uses the physical characteristics of wireless channel to solve the communication security
problem instead of increasing the computational complexity. Specifically, according to the different
ways of using wireless channel characteristics, physical layer security solutions can be divided into the
following two categories. The first category uses the channel difference to construct a wire-tap channel and directly carry out the secure transmission of confidential information. Based on the wire-tap eavesdropping channel model proposed by Wyner, as long as the main channel (the channel between the source node and the destination node) is better than the eavesdropping channel (the channel between the source node and the eavesdropping node), the source node and the destination node can achieve absolute security in the sense of Shannon information theory (also known as "unconditional security"). This kind of scheme usually requires the source node to have the channel state information of both the main channel and the eavesdropping channel at the same time, so that it can carry out the Wyner security coding required for secure transmission. Therefore, its application is limited in practical scenarios. The second category is the physical key generation scheme based on the random dynamic characteristics of the wireless channel.
From the point of view of physical characteristics, the wireless fading channel presents random
dynamic and unique characteristics. The uniqueness of the wireless channel ensures that there must be a
difference between the main channel and the eavesdropping channel, and its random dynamic
characteristics allow the difference to be updated in real time. This makes it possible to make use of the physical characteristics of the wireless channel to generate a dynamic key in real time. Especially in TDD (Time Division Duplexing) systems, the channel from source node to destination node and the channel from destination node to source node have the characteristic of short-term reciprocity. This means that the legitimate communication parties can share channel information (such as the amplitude and phase information of the channel impulse response) which is not known to the eavesdropping node. In the process of communication, the legitimate communication parties estimate the main channel independently, so that the characteristic information of the main channel can be extracted, and a consistent key which cannot be obtained by the eavesdropping node can be generated. This key is shared by the source node and the destination node and changes dynamically with the main channel, which can realize the secure communication of "one time, one secret". Compared with the traditional key system, the physical keys generated based on the physical characteristics of the wireless channel have the following advantages. 1)
Online generation and distribution. The two parties of legitimate communication independently extract
the physical key from the channel, without the need of additional key distribution center or user
authentication center, which avoids the security problems in the traditional wireless key distribution
process. 2) Real-time dynamic update. The random dynamic time-varying characteristics of wireless channels ensure the real-time dynamic update of physical keys. This is conducive to realizing "one time, one secret", and it greatly improves wireless security. For example, in the process of cracking KASUMI, it is necessary to send millions of pieces of plaintext encrypted by the operator's network, and then intercept the ciphertext and plaintext for comparative analysis. If the physical key follows "one time, one secret", this comparative analysis becomes invalid.
Roughly speaking, the generation process of the physical key can be divided into three stages: 1) channel detection by both communication parties; 2) extraction and quantization of channel features; 3) key consistency verification. One of the most basic characteristics of the wireless channel is the channel impulse response, whose random variation of amplitude and phase provides a source for generating keys online. However, since both the source node and the destination node work in TDD mode, wireless transmission and reception cannot be carried out at the same time, and there is a delay between their channel detections. The delay results in errors between the detection results of the destination node and the detection results of the source node.
3. Summary of the Invention
Technical problem: The technical problem to be solved by the present invention is to propose a high-consistency physical key generation method based on a neural network, which uses the time correlation between the detection results of the source node and the destination node. The parameters of a neural network capable of effective prediction are obtained by training on the samples, and the trained neural network is then used to generate the physical key and improve its consistency.
Technical scheme: In order to solve the above-mentioned technical problems, the technical scheme
adopted in the invention is as follows:
A high-consistency physical key generation method based on neural network, the generation method
includes the following steps:
Step 10): In a wireless physical key generation model, there is a source node S, a destination node D and an eavesdropping node E, and all nodes work in TDD mode. Suppose that at time t, the channel coefficient between the source node S and the destination node D is recorded as hSD(t), the channel coefficient between the source node S and the eavesdropping node E is recorded as hSE(t), and the channel coefficient between the destination node D and the eavesdropping node E is recorded as hDE(t).
Step 20): Obtain training samples.
Step 30): Establish the neural network model.
Step 40): Train parameters.
Step 50): Generate the key.
Step 60): Perform a consistency check.
As a preferred example, the step 20) includes: limited by the TDD mode, the source node S and the destination node D cannot send and receive signals at the same time. Therefore, the two nodes alternately carry out channel detection. Suppose that from t1 to tn, the amplitude values obtained by the source node S through channel detection are aS = [a(t1), a(t2), ..., a(tn)]. Each channel detection of the destination node D is delayed by Δ relative to the source node S, and the obtained amplitude values are aD = [a(t1+Δ), a(t2+Δ), ..., a(tn+Δ)]. Then, the destination node D sends aD to the source node S, and the source node S trains the parameters of the neural network. Before training, the source node S normalizes (aS, aD) to obtain (āS, āD), where āS = [ā(t1), ā(t2), ..., ā(tn)] and āD = [ā(t1+Δ), ā(t2+Δ), ..., ā(tn+Δ)]. Take (āS, āD) as the training samples.
As a preferred example, the step 30) includes: the neural network model includes an input layer, a hidden layer and an output layer, where

Input layer: āS, that is, the normalized channel detection result of the source node, is the input vector. Suppose that the number of neurons in the input layer is q1.

Hidden layer: Suppose that the number of neurons in the hidden layer is q2, the input vector of the hidden layer is α, the output vector of the hidden layer is b, the threshold vector of the hidden layer is γ, and the connection matrix between the input layer and the hidden layer is V.

Output layer: The number of neurons in the output layer is 1, and the connection vector between the hidden layer and the output layer is ω. The input value of the output layer is β, the threshold of the output layer is θ, and the output of the output layer is the predicted value of āD which is obtained by the source node based on āS.

Through the neural network model, the source node uses its own channel detection results from ti to ti+4 to predict the channel detection result of the destination node at ti+4 + Δ.
As a preferred example, the step 40) includes: taking the normalized channel coefficient amplitude values (āS, āD) in step 20) as the training set, and training the neural network model of step 30).

The source node groups āS into n−4 groups: āS(1) = [ā(t1), ā(t2), ..., ā(t5)] is the first group, āS(2) = [ā(t2), ā(t3), ..., ā(t6)] is the second group, ..., āS(n−4) = [ā(t(n−4)), ā(t(n−3)), ..., ā(tn)] is the (n−4)-th group.

The source node inputs the above groups into the neural network model which is established in step 30). After the calculation of the hidden layer, the output results of the neural network model are denormalized to obtain the prediction vector âD = [â(t5+Δ), â(t6+Δ), ..., â(tn+Δ)] of the detection results of the destination node after t5.

The difference between the predicted value â(t5+Δ) and the true value a(t5+Δ) is the prediction error δ5, which is shown in equation (1):

δ5 = a(t5+Δ) − â(t5+Δ)    (1)

The source node uses the gradient descent algorithm to update the parameters V, γ, ω and θ of the neural network. The remaining groups are successively input into the neural network to obtain the predicted values and prediction errors of the detection results of the destination node at subsequent moments. After all the groups are input, the cumulative error E is shown in equation (2):

E = Σ(i=5 to n) δi    (2)

Repeat the above-mentioned training process until the decrease of the cumulative prediction error is less than 0.0001; then the training of the neural network is completed.
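The sliding-window grouping used in this step can be sketched in a few lines; the function name and the sample values below are illustrative, not taken from the patent.

```python
# Group a normalized detection sequence into overlapping windows of q1 = 5
# samples; an n-sample sequence yields n - 4 groups, as described above.
def make_groups(a_s_bar, window=5):
    return [a_s_bar[i:i + window] for i in range(len(a_s_bar) - window + 1)]

groups = make_groups([0.826, 0.685, 0.425, 0.463, 0.472, 0.256, 0.019])
# 7 samples -> 3 groups; the first group holds the first five values
```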
As a preferred example, the step 50) includes: the source node and the destination node alternately detect hSD(t) again, and obtain the amplitude value a(t) of hSD(t). The destination node quantizes the detection results aD which are obtained by itself to obtain the physical key KD of the destination node. The neural network which is trained by the source node through step 40) predicts aD based on aS, and quantizes the prediction result to obtain the physical key KS of the source node.
As a preferred example, the step 60) includes: the destination node takes KD as the input to obtain the Hash function value HD. The source node takes KS as the input to obtain the Hash function value HS. The destination node sends HD to the source node, then the source node checks whether HS is equal to HD. If they are equal, it indicates that the source node and the destination node have generated a consistent key; if they are not equal, it indicates that the key has inconsistent bits, and the method returns to step 50). If the re-comparison between HS and HD still fails, return to step 20).
As a preferred example, in step 30), q1 = 5.
As a preferred example, in step 30), q2 = 10.
Beneficial effects: Compared with the prior art, the invention has the following beneficial effects:
1. Online generation. The communication parties generate physical keys online based on the joint observation of the physical characteristics of the wireless channel, and no additional nodes are required for distribution management. In the present invention, the legitimate communication parties respectively detect the wireless channel between each other, and quantize the observed values of the channel to obtain the physical key. In the process of key generation, there is no need for the intervention and help of other nodes; instead, the reciprocity of the wireless channel is used to enable both parties to obtain a consistent key. In other words, the physical key can be obtained as long as both parties perform channel detection. It is not distributed by other institutions or nodes, but generated online by the communicating parties.
2. The key is automatically updated. The physical key is automatically updated with the dynamic changes of the wireless channel, and the security is good. The randomness of the physical key comes from the time-varying dynamic randomness of the wireless channel itself. Therefore, the results of each channel detection between the legitimate communication parties change randomly. The physical key quantized from the randomly changing channel detection results also changes dynamically. In other words, the characteristics of the wireless channel itself enable the physical key to be updated automatically.
3. Good consistency. Based on its own channel detection results, the source node predicts the
detection results of the destination node through the neural network, which improves the consistency of
the physical keys of both parties. Although the detection results of the source node and the destination node have a time delay, they are still correlated. In the invention, the source node trains the neural network with the training samples, so that it can predict the delayed detection value of the destination node according to its own detection values at the previous moments. The physical key obtained by quantizing the prediction result of the source node is closer to the physical key of the destination node than the physical key obtained by directly quantizing the detection result of the source node; that is, the consistency is better.
4. Brief Description of the Drawings
Fig. 1 is a wireless physical key generation model of an embodiment of the present invention. There is a source node S, a destination node D, and an eavesdropping node E, and all nodes work in TDD (Time Division Duplexing) mode. At time t, the channel coefficient between the source node S and the destination node D is recorded as hSD(t), and the magnitude of hSD(t) is denoted as a(t). The channel coefficient between the source node S and the eavesdropping node E is recorded as hSE(t), and the channel coefficient between the destination node D and the eavesdropping node E is recorded as hDE(t). The source node and the destination node obtain aS and aD respectively through channel detection.
Fig. 2 is a schematic diagram of the acquisition process of training samples in an embodiment of
the present invention, and the detection time of the destination node is lagging behind the source node
by Δ.
Fig. 3 is a diagram of a neural network model in an embodiment of the present invention. The input vector of the input layer is āS, and the input vector of the hidden layer is α. The output vector of the hidden layer is b, and the threshold vector of the hidden layer is γ. The connection matrix between the input layer and the hidden layer is V. The connection vector between the hidden layer and the output layer is ω. The input value of the output layer is β. The threshold of the output layer is θ, and the output of the output layer is the predicted value of āD which is obtained by the source node based on āS.
Fig. 4 is a schematic diagram of grouping and prediction of normalized detection results by the source node in an embodiment of the present invention.
Fig. 5 is a flow chart in an embodiment of the present invention.
Fig. 6 is a prediction diagram of the channel detection value in the embodiment of the present
invention.
Fig. 7 is a diagram of the consistency probability of a physical key generated by the source node
and the destination node in an embodiment of the present invention.
5. Specific Implementation Method
The following is a detailed description of the technical scheme of the invention in combination with
the attached drawings.
As shown in Fig.5, an embodiment of the invention is a high-consistency physical key generation
method based on a neural network, which includes the following steps:
Step 10): As shown in Fig.1, in a wireless physical key generation model, there is a source node S, a destination node D and an eavesdropping node E, and all nodes work in TDD mode. Suppose that at time t, the channel coefficient between the source node S and the destination node D is recorded as hSD(t). The channel coefficient between the source node S and the eavesdropping node E is recorded as hSE(t). The channel coefficient between the destination node D and the eavesdropping node E is recorded as hDE(t). Both the source node and the destination node can obtain the amplitude information a(t) of hSD(t) through channel detection, and quantize a(t) into bits respectively to generate the source node physical key KS and the destination node physical key KD. On the other hand, due to the spatial independence of the wireless channel, the eavesdropping node cannot obtain hSD(t); it can only obtain hSE(t) or hDE(t). This guarantees the security of the physical key.
Step 20): Obtain training samples. As shown in Fig.2, limited by the TDD mode, the source node S and the destination node D cannot send and receive signals at the same time. Therefore, the two nodes alternately carry out channel detection. Suppose that from t1 to tn, the amplitude values obtained by the source node S through channel detection are aS = [a(t1), a(t2), ..., a(tn)]. Each channel detection of the destination node D is delayed by Δ relative to the source node S, and the obtained amplitude values are aD = [a(t1+Δ), a(t2+Δ), ..., a(tn+Δ)]. Then, the destination node D sends aD to the source node S, and the source node S trains the parameters of the neural network. Before training, the source node S normalizes (aS, aD) to obtain (āS, āD), where āS = [ā(t1), ā(t2), ..., ā(tn)] and āD = [ā(t1+Δ), ā(t2+Δ), ..., ā(tn+Δ)]. Take (āS, āD) as the training samples.
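As a rough sketch of the sample preparation in this step: the patent does not fix a normalization method, so min-max scaling over the pooled detections is assumed here, and the amplitude values are illustrative.

```python
import numpy as np

def normalize_pair(a_s, a_d):
    # Min-max normalize both amplitude sequences with a shared scale, so the
    # source and destination detections remain directly comparable.
    pooled = np.concatenate([a_s, a_d])
    lo, hi = pooled.min(), pooled.max()
    return (a_s - lo) / (hi - lo), (a_d - lo) / (hi - lo)

a_s = np.array([2.71355, 2.26221, 1.42825, 1.55014, 1.57919])
a_d = np.array([2.61238, 1.79190, 1.38329, 1.66082, 1.29735])  # delayed by Delta
a_s_bar, a_d_bar = normalize_pair(a_s, a_d)
# (a_s_bar, a_d_bar) is the training sample pair; every entry lies in [0, 1]
```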
Step 30): Establish the neural network model. As shown in Fig.3, the neural network model includes an input layer, a hidden layer and an output layer.
Input layer: āS, that is, the normalized channel detection result of the source node, is the input vector, and the number of neurons in the input layer is q1. Preferably, q1 = 5. The detection values of the source node are input five at a time as one group.
Hidden layer: Suppose that the number of neurons in the hidden layer is q2, and the input vector of the hidden layer is α. The output vector of the hidden layer is b. The threshold vector of the hidden layer is γ, and the connection matrix between the input layer and the hidden layer is V. The element vij in matrix V is the connection weight between the i-th input neuron and the j-th hidden layer neuron. Preferably, q2 = 10.
Output layer: The number of neurons in the output layer is 1, and the connection vector between the hidden layer and the output layer is ω. The input value of the output layer is β. The threshold of the output layer is θ, and the output of the output layer is the predicted value of āD which is obtained by the source node based on āS.
Through the neural network model, the source node uses its own channel detection results from ti to ti+4 to predict the channel detection result of the destination node at ti+4 + Δ.
Step 40): Train parameters. The source node takes the normalized channel coefficient amplitude values (āS, āD) in step 20) as the training set, and trains the neural network model of step 30).
First, the source node groups āS into n−4 groups: āS(1) = [ā(t1), ā(t2), ..., ā(t5)] is the first group, āS(2) = [ā(t2), ā(t3), ..., ā(t6)] is the second group, ..., āS(n−4) = [ā(t(n−4)), ā(t(n−3)), ..., ā(tn)] is the (n−4)-th group. As shown in Fig. 4, in the method proposed in the present invention, the source node predicts the normalized channel detection result of the destination node at the time t(i+4) + Δ from the i-th group of normalized channel detection results. Subsequently, the parameters V, γ, ω and θ of the neural network are initialized randomly.
To start training, the source node inputs the first group āS(1) = [ā(t1), ā(t2), ..., ā(t5)] into the input layer of the neural network. Through the operation of the connection weight matrix, the input vector of the hidden layer is obtained as follows:

α(1) = āS(1) V

where α(1)j = Σi ā(ti) vij is the input of the j-th hidden layer neuron. With the threshold vector of the hidden layer γ, when the input is the first group, the output vector of the hidden layer b(1) is calculated by applying the activation function to the difference between the hidden layer input vector and the threshold vector, as follows:

b(1) = f(α(1) − γ)

where the activation function f is a sigmoid function, and its expression is as follows:

f(x) = 1 / (1 + exp(−x))

Subsequently, the output vector of the hidden layer is multiplied by the connection weight vector ω. When the first group is input, the input value β(1) of the output layer is:

β(1) = b(1) ω

With the threshold of the output layer θ, when the first group is input, the output y(1) of the output layer is the activation function value of the difference between β(1) and θ, that is:

y(1) = f(β(1) − θ)

y(1) is the predicted value of the normalized a(t5+Δ) obtained by the neural network when the source node inputs the first group.

Finally, â(t5+Δ) can be obtained by denormalizing y(1). Here â(t5+Δ) is the predicted value, based on the self-detection result āS(1) = [ā(t1), ā(t2), ..., ā(t5)], of the detection result a(t5+Δ) of the destination node at the time t5+Δ.

Hence, the difference between the predicted value â(t5+Δ) and the true value a(t5+Δ) is the prediction error δ5, which is shown in equation (1):

δ5 = a(t5+Δ) − â(t5+Δ)    (1)
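The forward pass defined by these equations can be sketched as follows. Only the shapes (q1 = 5, q2 = 10) follow the text; the parameter values here are random placeholders rather than the patent's.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(a_bar, V, gamma, omega, theta):
    alpha = a_bar @ V            # hidden-layer input: alpha = a_bar V
    b = sigmoid(alpha - gamma)   # hidden-layer output: b = f(alpha - gamma)
    beta = b @ omega             # output-layer input: beta = b . omega
    return sigmoid(beta - theta) # network output: y = f(beta - theta)

rng = np.random.default_rng(0)
V, gamma = rng.random((5, 10)), rng.random(10)
omega, theta = rng.random(10), 0.54644
y = forward(rng.random(5), V, gamma, omega, theta)
# y is the normalized prediction; it always lies in (0, 1) because of the sigmoid
```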
Then, the source node uses the gradient descent algorithm to update the parameters V, γ, ω and θ of the neural network. The remaining groups are successively input into the neural network to obtain the predicted values and prediction errors of the detection results of the destination node at subsequent moments. After all the groups are input, the cumulative error E is shown in equation (2):

E = Σ(i=5 to n) δi    (2)

Repeat the above-mentioned training process until the decrease of the cumulative prediction error (the difference between two successive cumulative prediction errors) is less than 0.0001. At that point, the prediction performance improvement is extremely small, and the training of the neural network is completed.
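A minimal end-to-end sketch of this training loop is given below, using plain per-sample gradient descent on the squared prediction error. The learning rate, epoch cap, and synthetic data are assumptions for illustration; the patent does not specify them.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(a_s_bar, a_d_bar, epochs=500, lr=0.5, tol=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    V, gamma = rng.random((5, 10)), rng.random(10)
    omega, theta = rng.random(10), rng.random()
    prev_err = np.inf
    for _ in range(epochs):
        err = 0.0
        for i in range(len(a_s_bar) - 4):
            x = a_s_bar[i:i + 5]                # one group of 5 detections
            b = sigmoid(x @ V - gamma)          # hidden-layer output
            y = sigmoid(b @ omega - theta)      # predicted normalized value
            delta = a_d_bar[i + 4] - y          # prediction error (cf. eq. 1)
            err += delta ** 2                   # accumulated squared error
            g_out = -delta * y * (1 - y)        # gradient at the output layer
            g_hid = g_out * omega * b * (1 - b) # backpropagated to hidden layer
            omega -= lr * g_out * b
            theta += lr * g_out
            V -= lr * np.outer(x, g_hid)
            gamma += lr * g_hid
        if prev_err - err < tol:                # stop when improvement stalls
            break
        prev_err = err
    return V, gamma, omega, theta, err

# Illustrative data: the "destination" sequence is a slightly shifted copy
t = np.linspace(0, 3, 40)
a_s_bar = 0.5 + 0.4 * np.sin(t)
a_d_bar = 0.5 + 0.4 * np.sin(t + 0.1)
*params, final_err = train(a_s_bar, a_d_bar)
```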
Step 50): Generate the key. The trained neural network is used to generate the physical key. The source node and the destination node alternately detect hSD(t) again, and obtain the amplitude value a(t) of hSD(t). The destination node quantizes the detection results aD which are obtained by itself to obtain the physical key KD of the destination node. The neural network which is trained by the source node through step 40) predicts aD based on aS, and quantizes the prediction result to obtain the source node physical key KS.
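The quantizer itself is left open by the text; a simple single-threshold scheme is sketched below as one possibility (amplitudes above the median map to bit 1), with illustrative inputs.

```python
import statistics

def quantize(amplitudes):
    # Map each amplitude to one key bit: 1 above the median, 0 otherwise.
    thresh = statistics.median(amplitudes)
    return ''.join('1' if a > thresh else '0' for a in amplitudes)

k_d = quantize([1.53835, 1.14169, 0.70377, 0.98483,
                0.42096, 0.99844, 1.95866, 1.85363])
# the destination node's key bits for these eight detections
```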
Step 60): Perform a consistency check. The destination node takes KD as the input to obtain the Hash function value HD. The source node takes KS as the input to obtain the Hash function value HS. The destination node sends HD to the source node, then the source node checks whether HS is equal to HD. If they are equal, it indicates that the source node and the destination node have generated a consistent key; if they are not equal, it indicates that the key has inconsistent bits, and the method returns to step 50). If the re-comparison between HS and HD still fails, return to step 20). If the recalculated HS and HD are still inconsistent after returning to step 20), the loop continues until HS is consistent with HD or the process is terminated manually.
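The consistency check can be sketched with a standard hash. SHA-256 is an assumed choice here, since the text only calls for some Hash function, and the key strings are illustrative.

```python
import hashlib

def digest(key_bits):
    # H = Hash(K): both nodes hash their keys before comparison, so the
    # destination node never reveals K_D itself when it sends H_D.
    return hashlib.sha256(key_bits.encode('ascii')).hexdigest()

h_s, h_d = digest('11000011'), digest('11000011')
consistent = (h_s == h_d)  # equal digests: the keys agree, no retry needed
```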
The present invention uses a neural network to process the detection results of the source node so that they approach the delayed detection results of the destination node, which improves the consistency of the physical keys generated by the two nodes. The invention uses the detection result samples to train the neural network, so that the source node can predict the delayed detection result of the destination node according to its own detection results. By quantizing the prediction result instead of its raw detection result, the source node can more closely approximate the bit quantization of the destination node, thereby effectively improving the consistency of the physical key.
An example is given below.
The channel amplitudes aS detected by the source node are:
2.71355 2.26221 1.42825 1.55014 1.57919 0.88455 0.12629 0.52005 1.38515 2.30015 ......
The normalized values āS of the above-mentioned channel amplitudes are:
0.82613 0.68539 0.42535 0.46336 0.47241 0.25581 0.01937 0.14215 0.41191 0.69723 ......
The channel amplitudes aD detected by the destination node are:
2.61238 1.79190 1.38329 1.66082 1.29735 0.44759 0.26060 0.89365 1.89715 2.48108 ......
The normalized values āD of the above-mentioned channel amplitudes are:
0.75954 0.51094 0.38714 0.47123 0.36110 0.10364 0.04698 0.23879 0.54283 0.71975 ......
The destination node sends its normalized detection values āD to the source node for neural network training. The parameters V, γ, ω and θ of the neural network are randomly initialized as follows:

V =
[0.64630 0.84909 0.66846 0.66693 0.41705 0.45474 0.55828 0.20567 0.67323 0.716677
 0.52120 0.37253 0.20678 0.93373 0.97179 0.24669 0.59887 0.89965 0.66428 0.28338
 0.37231 0.59318 0.65385 0.81095 0.98797 0.78442 0.14888 0.76259 0.12281 0.89620
 0.93713 0.87255 0.07205 0.48455 0.86415 0.88284 0.89971 0.88249 0.40732 0.82658
 0.82953 0.93350 0.40673 0.75675 0.38888 0.91371 0.45039 0.28495 0.27529 0.39003]

γ = [0.49790 0.69481 0.83437 0.60963 0.57474 0.32604 0.45642 0.71380 0.88441 0.72086],

ω = [0.01861 0.67478 0.43851 0.43782 0.11704 0.81468 0.32486 0.24623 0.34271 0.37569]',

θ = 0.54644.
The input vector of the neural network is the first group of the normalized detection values of the source node, namely,

āS(1) = [0.82613 0.68539 0.42535 0.46336 0.47241].

Through the operation of the connection weight matrix, the input vector of the hidden layer is:

α(1) = āS(1) V = [1.87563 2.05440 1.19761 2.11790 2.01495 1.71913 1.56466 1.65441 1.38249 1.73475]

Therefore, the output vector b(1) of the hidden layer is calculated as follows:

b(1) = f(α(1) − γ) = [0.79863 0.79569 0.58982 0.81880 0.80849 0.80108 0.75180 0.71922 0.62201 0.73378]

Subsequently, the output vector of the hidden layer is multiplied by the connection weight vector ω, and the input value of the output layer is obtained:

β(1) = b(1) ω = 2.82633

Therefore, the output of the output layer can be obtained as:

y(1) = f(β(1) − θ) = 0.90719
This is the normalized predicted value obtained by the neural network. By denormalizing y(1), we can get â(t5+Δ) = 3.09968. That is to say, based on the self-detection result āS(1) = [0.82613 0.68539 0.42535 0.46336 0.47241], the source node obtains the predicted value â(t5+Δ) = 3.09968 of the detection result a(t5+Δ) = 0.44759 of the destination node at the time t5+Δ. Therefore, the resulting prediction error δ5 is:

δ5 = a(t5+Δ) − â(t5+Δ) = 0.44759 − 3.09968 = −2.65209
The gradient descent method is used to update the neural network parameters, and the network is then
trained iteratively by inputting the subsequent groups of normalized detection values of the source
node.
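One such gradient-descent update for a 5-10-1 network of this shape can be sketched as follows. The squared-error loss, the learning rate eta, the illustrative target value, and the random initialization are assumptions made for illustration; only the layer sizes and the sigmoid activation are taken from the worked example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
q1, q2 = 5, 10                          # input and hidden layer sizes from the example
V = rng.random((q1, q2))                # input -> hidden weight matrix
gamma = rng.random(q2)                  # hidden thresholds
w = rng.random(q2)                      # hidden -> output weight vector
theta = rng.random()                    # output threshold
eta = 0.1                               # learning rate (assumed)

def forward(x):
    b = sigmoid(x @ V - gamma)          # hidden-layer output
    y = sigmoid(b @ w - theta)          # network output
    return b, y

def train_step(x, target):
    """One gradient-descent step on the squared error 0.5*(y - target)**2."""
    global theta
    b, y = forward(x)
    delta = y - target                  # prediction error (normalized domain)
    g = delta * y * (1.0 - y)           # output-layer gradient
    e = g * w * b * (1.0 - b)           # hidden-layer gradients
    w[:] -= eta * g * b
    V[:] -= eta * np.outer(x, e)
    gamma[:] += eta * e                 # thresholds move opposite to the weights
    theta += eta * g
    return delta

# Repeatedly fit the first input group toward an illustrative target value;
# in the method itself, each group is paired with the destination node's
# normalized detection value at the corresponding delayed moment.
x = np.array([0.82613, 0.68539, 0.42535, 0.46336, 0.47241])
errs = [abs(train_step(x, 0.44759)) for _ in range(200)]
```

The error magnitude shrinks over the iterations, which is the behavior the training loop relies on before moving to the next group.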
When the training is terminated, the parameters of the neural network are obtained as follows:
V = [0.72702 -0.55346 -0.38878 0.25975 0.18074 0.47022 0.89030 1.51106 1.05332 -0.41381;
     -0.71318 3.43155 0.20103 0.67692 -0.04801 0.25003 0.31852 -3.38256 0.79307 -0.96728;
     2.53214 -1.04743 -2.21002 -0.29479 0.47985 2.21901 0.41573 1.70148 -2.72307 4.38602;
     -0.79867 1.05862 1.54721 0.96769 -0.20128 -5.92383 0.35124 3.61540 0.30517 -1.46678;
     -2.20251 -0.80436 1.14768 -0.22858 0.83919 4.25558 0.68929 2.19633 2.77071 1.43558],
γ = [1.71148 2.24710 0.90296 1.33421 1.30246 1.38067 0.63635 0.85066 1.93470 2.14252],
ω = [3.12595 3.12881 2.89899 1.13569 0.31809 4.91568 0.22669 3.26872 3.61801 4.21793],
θ = 4.1809.
When the physical key is generated, as shown in Fig.6, the source node's detection value of the
channel amplitude is:
1.27060 1.46594 0.77329 0.89732 0.81821 0.37822
1.58067 2.05217 1.43653 0.79455 1.12711 0.71679
0.65033 1.88858 2.29474......
Based on the parameters of the neural network obtained by training, the normalization is removed
(that is, the outputs are denormalized), and the source node can calculate the predicted values as follows:
0.59406 0.58925 2.18644 1.96908 0.96366 0.99623
1.22656 0.50774 1.18771 2.32476 2.19536......
The detection value of the channel amplitude by the destination node is:
1.53835 1.14169 0.70377 0.98483 0.42096 0.99844
1.95866 1.85363 0.97691 0.97033 1.05380 0.25970
1.32505 2.23022 2.08048 ......
It should be noted that the source node uses its first five detection values to predict the fifth detection
value of the destination node. Therefore, the predicted values start from the fifth detection moment, and
no predicted values are obtained for the first four moments. For example, the fifth detection value of the
source node is 0.81821, the source node's predicted value based on the neural network is 0.59406, and
the fifth detection value of the destination node is 0.42096. Therefore, the predicted value of the source
node based on the neural network is closer to the detection value of the destination node than the source
node's own detection value is.
The prediction performance of this example is shown in Fig.6, which plots the detection values from the
fifth moment onward together with the corresponding predicted values. It can be seen from Fig.6 that the
predictions the source node makes from its own detection results approximate the destination node's
detection values more closely than its raw detections do; the mean square error is reduced from 0.105 to 0.014.
The training process uses 10,000 detection value samples. After the neural network has been trained on
these 10,000 samples, 100,000 new detection values are used to generate the key. In the generation
process, the source node uses the neural network to predict the detection values of the destination node
based on its own detection values, and then quantizes the prediction results, while the destination node
directly quantizes its own detection values. By comparing the quantization results of the source node and
the destination node, the probability of inconsistent physical key bits between the two nodes is obtained;
this is the dotted line in Fig.7. As shown in Fig.7, prediction by the neural network effectively reduces
the probability of inconsistent physical key bits, and thereby improves the consistency between the
physical key generated by the source node and that of the destination node.
The above shows and describes the basic principles, main features and advantages of the present
invention. Those skilled in the art should understand that the present invention is not limited by the
above-mentioned specific embodiments. The above specific embodiments and the description in the
specification only further illustrate the principle of the present invention. Various changes and
improvements may be made without departing from the spirit and scope of the present invention, and
such changes and improvements fall within the scope of the claimed invention. The
scope of the present invention is defined by the claims and their equivalents.

Claims (8)

1. A high-consistency physical key generation method based on neural network, characterized by the
following steps:
Step 10): In a wireless physical key generation model, there is a source node S, a destination node
D and an eavesdropping node E, and all nodes work in TDD mode. Suppose that at time t, the channel
coefficient between the source node S and the destination node D is recorded as h_SD(t), the channel
coefficient between the source node S and the eavesdropping node E is recorded as h_SE(t), and the
channel coefficient between the destination node D and the eavesdropping node E is recorded as h_DE(t).
Step 20): Obtain training samples.
Step 30): Establish the neural network model.
Step 40): Train parameters.
Step 50): Generate the secret key.
Step 60): Perform a consistency check.
2. According to claim 1, a high-consistency physical key generation method based on neural
network has the following features:
The step 20) includes: limited by the TDD mode, the source node S and the destination node D
cannot send and receive signals at the same time; therefore, the two nodes alternately carry out
channel detection. Suppose that from t1 to tn, the amplitude values obtained by the source node S through
channel detection are a_S = [a(t1), a(t2), ..., a(tn)]. Each channel detection of the destination node D
is delayed by Δ relative to the source node S, and the obtained amplitude values are
a_D = [a(t1+Δ), a(t2+Δ), ..., a(tn+Δ)]. Then, the destination node D sends a_D to the source node S,
and the source node S trains the parameters of the neural network. Before training the neural network, the
source node S normalizes (a_S, a_D) to obtain (ā_S, ā_D), where
ā_S = [ā(t1), ā(t2), ..., ā(tn)] and ā_D = [ā(t1+Δ), ā(t2+Δ), ..., ā(tn+Δ)]. Take (ā_S, ā_D) as the training
samples.
3. According to claim 2, a high-consistency physical key generation method based on neural
network has the following features:
The step 30) includes: the neural network model includes an input layer, a hidden layer and an output layer.
Input layer: ā_S, that is, the normalized channel detection result of the source node, is the input vector.
Suppose that the number of neurons in the input layer is q1.
Hidden layer: Suppose that the number of neurons in the hidden layer is q2. The input vector of the
hidden layer is a, the output vector of the hidden layer is b, the threshold vector of the hidden layer is γ,
and the connection matrix between the input layer and the hidden layer is V.
Output layer: The number of neurons in the output layer is 1, and the connection vector between the
hidden layer and the output layer is ω. The input value of the output layer is β, the threshold of the
output layer is θ, and the output of the output layer is the predicted value of ā_D, which is obtained by the
source node based on ā_S.
Through the neural network model, the source node uses its own channel detection results from t_i
to t_(i+4) to predict the channel detection result of the destination node at t_(i+4) + Δ.
4. According to claim 3, a high-consistency physical key generation method based on neural
network has the following features:
The step 40) includes: taking the normalized channel coefficient amplitude values (ā_S, ā_D) obtained in step
20) as the training set, and training the neural network model of step 30).
The source node groups ā_S into n-4 groups: ā_S(1) = [ā(t1), ā(t2), ..., ā(t5)] is the first group,
ā_S(2) = [ā(t2), ā(t3), ..., ā(t6)] is the second group, ..., and ā_S(n-4) = [ā(t_(n-4)), ā(t_(n-3)), ..., ā(t_n)] is the
(n-4)-th group.
The source node inputs the above groups into the neural network model which is established in step
30). After the calculation of the hidden layer, the output results of the neural network model are
denormalized to obtain the prediction vector â_D = [â_D(t5+Δ), â_D(t6+Δ), ..., â_D(t_n+Δ)] of the
detection results of the destination node after t5.
The difference between the predicted value â(t5+Δ) and the true value a(t5+Δ) is the
prediction error δ1, which is shown in equation (1):
δ1 = a(t5+Δ) - â(t5+Δ)   (1)
The source node uses the gradient descent algorithm to update the parameters V, γ, ω and θ of the
neural network. The remaining groups are successively input into the neural network to obtain the
predicted values and prediction errors of the detection results of the destination node at subsequent moments.
After all the groups are input, the cumulative error E is shown in equation (2):
E = Σ_(i=1)^(n-4) δ_i²   (2)
The above training process is repeated until the decrease of the cumulative prediction error
is less than 0.0001, at which point the training of the neural network is completed.
5. According to claim 4, a high-consistency physical key generation method based on neural
network has the following features:
The step 50) includes: the source node and the destination node alternately detect h_SD(t) again, and
obtain the amplitude values a(t) of h_SD(t). The destination node quantizes its own detection results
a_D to obtain the physical key K_D of the destination node. The neural network trained by the source
node in step 40) predicts a_D based on a_S, and the source node quantizes the prediction results to
obtain its physical key K_S.
6. According to claim 5, a high-consistency physical key generation method based on neural
network has the following features:
The step 60) includes: the destination node takes K_D as the input to obtain the hash function value
H_D. The source node takes K_S as the input to obtain the hash function value H_S. The destination node
sends H_D to the source node, and the source node checks whether H_S is equal to H_D. If they are equal,
the source node and the destination node have generated a consistent key; if they are not equal, the key
has inconsistent bits, and the method returns to step 50). If H_S and H_D are still not equal after the
re-comparison, the method returns to step 20).
7. According to claim 2, a high-consistency physical key generation method based on neural
network has the following features:
In step 20), q1 = 5.
8. According to claim 2, a high-consistency physical key generation method based on neural
network has the following features:
In step 20), q2 = 10.
AU2021104458A 2021-07-22 2021-07-22 a high-consistency physical key generation method based on neural network Ceased AU2021104458A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2021104458A AU2021104458A4 (en) 2021-07-22 2021-07-22 a high-consistency physical key generation method based on neural network

Publications (1)

Publication Number Publication Date
AU2021104458A4 true AU2021104458A4 (en) 2021-11-11

Family

ID=78480114




Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry