CN109302309B - Passive sensing indoor positioning method based on deep learning - Google Patents
- Publication number: CN109302309B (application CN201810977963.5A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04L41/044: Network management architectures or arrangements comprising hierarchical management structures
- H04L41/145: Network analysis or design involving simulating, designing, planning or modelling of a network
- G01S5/0252: Position-fixing using radio waves; radio frequency fingerprinting
- H04W4/023: Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds
- H04W4/33: Services specially adapted for particular environments, situations or purposes, for indoor environments, e.g. buildings
Abstract
The invention discloses a passive sensing indoor positioning method based on deep learning. The method first acquires and generates CSI amplitude matrices for every indoor position and constructs a deep neural network model that is divided into four restricted Boltzmann machines. The four restricted Boltzmann machines are then trained on the basis of the CSI amplitude matrices and the parameters obtained from training are iteratively optimized and updated, yielding the fingerprint of every indoor position; these fingerprints form the fingerprint database. Finally, indoor personnel are positioned on the basis of the obtained fingerprint database. The method has the advantages of a simple positioning process, high positioning accuracy and low cost.
Description
Technical Field
The invention relates to a passive sensing positioning method, in particular to a passive sensing indoor positioning method based on deep learning.
Background
With the rapid development of intelligent technology, all kinds of intelligent terminal devices have entered people's lives, and the intelligent control and monitoring methods that accompany these devices are developing just as quickly. Positioning is one such intelligent monitoring method: it is mainly used to monitor people or objects, has brought great changes to daily life, and is widely applied. Existing positioning methods fall into two categories according to where they are applied, namely indoor positioning methods and outdoor positioning methods. Outdoor positioning methods are already mature, while indoor positioning methods are generally used for high-precision positioning of people indoors, for example quickly finding a lost child or a companion in an unfamiliar large shopping mall or office building, police needing to quickly confirm the whereabouts of a suspect during an arrest, or remotely monitoring the real-time positions of the elderly and children at home. Because the application area is confined to an indoor space and the indoor environment is complex, indoor positioning methods face great challenges.
With the development of WiFi technology, channel state information (CSI) has been successfully applied to the field of indoor wireless positioning, and CSI-based passive sensing indoor positioning methods have received wide attention. The existing CSI-based passive sensing indoor positioning methods mainly comprise two methods. The first method first divides the room to be monitored evenly into L regions (20 < L < 30), selects the center of each region as its position, and records the positions of the L regions as S_1~S_L. A router without a password and connected to the network is configured in the room to be monitored, and a computer equipped with an Intel 5300 wireless network card, the Ubuntu 12.04 system and the CSI Tool is configured in the monitoring room. One tester then stands in turn at positions S_1~S_L while another tester obtains the test data at positions S_1~S_L through the computer, and the test data at each position serve as the reference fingerprint of that position, so that an indoor position fingerprint database is built. When a person is positioned indoors, the computer acquires test data at the current position in real time as the person moves, compares the test data at the current position with each reference fingerprint in the indoor position fingerprint database, and takes the position corresponding to the reference fingerprint with the smallest Euclidean distance to the current test data as the current positioning result. The specific procedure for acquiring the test data at position S_l (l = 1, 2, …, L) is as follows: one tester stands at position S_l while another tester collects N signal data packets by pinging the router from the computer at a rate of 50 times per second and stores them through the CSI Tool, where N is an integer greater than 800 and smaller than 1000. The matlab folder bundled with the CSI Tool is opened with matlab software; the read_bf_file function in the matlab folder reads the csi.dat file to obtain the N signal data packets, and the get_scaled_csi function then opens the N signal data packets one by one to obtain N CSI data matrices, each of which contains 30 rows and 3 columns of CSI data in complex form. The abs function is applied to the N CSI data matrices to obtain the corresponding CSI amplitude matrices, each containing 30 rows and 3 columns of CSI amplitude data. The average of the three CSI amplitude values in each row of each CSI amplitude matrix is computed, giving N CSI average amplitude matrices, each containing 30 rows and 1 column of average amplitude data. The sum of the squares of the 30 average amplitude values in each CSI average amplitude matrix is taken as the power of that matrix, giving the powers of the N CSI average amplitude matrices, and the average of these N powers is taken as the test data, i.e. the reference fingerprint, at position S_l.
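For illustration, the power-based reference fingerprint described above can be sketched as follows in Python (an illustrative reconstruction rather than code from the patent; the array shapes and the function name are assumptions):

```python
import numpy as np

def reference_fingerprint(csi_matrices):
    """csi_matrices: complex array of shape (N, 30, 3), one 30x3 CSI data matrix per packet.

    Returns the reference fingerprint of one position: the average, over the N packets,
    of the power of each packet's 30x1 average-amplitude matrix."""
    amplitudes = np.abs(csi_matrices)         # (N, 30, 3) CSI amplitude matrices
    avg_amplitude = amplitudes.mean(axis=2)   # (N, 30) row-wise average of the 3 columns
    power = (avg_amplitude ** 2).sum(axis=1)  # (N,) sum of squares of the 30 average amplitudes
    return power.mean()                       # average power over the N packets
```

Positioning with this first method then amounts to computing the same quantity from the live test data and picking the stored position whose fingerprint is closest to it.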
The second method constructs the position fingerprint database with the same steps as the first method, then uses the K-nearest-neighbour method (K denotes the K positions whose features are closest) to find the K position fingerprints with the smallest Euclidean distance to the current test data, labels the positions to which the CSI amplitude matrices of these K position fingerprints belong, and takes the position that occurs most often as the current positioning coordinate.
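A minimal sketch of the K-nearest-neighbour step of the second method (again illustrative only; the feature representation and the names are assumptions):

```python
import numpy as np
from collections import Counter

def knn_locate(test_feature, fingerprint_features, position_labels, k=5):
    """fingerprint_features: (M, D) array of stored fingerprint features;
    position_labels: length-M sequence of position indices;
    test_feature: (D,) feature vector measured at the current position.

    Returns the position label that occurs most often among the k nearest fingerprints."""
    distances = np.linalg.norm(fingerprint_features - test_feature, axis=1)  # Euclidean distances
    nearest = np.argsort(distances)[:k]                                      # k closest fingerprints
    votes = Counter(position_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]
```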
The first CSI-based passive sensing indoor positioning method performs positioning by directly comparing the test data at the current position with each reference fingerprint in the indoor position fingerprint database. Although the positioning process is simple, the diversity of the data is reduced, which leads to low positioning accuracy. The second CSI-based passive sensing indoor positioning method classifies the CSI data with the K-nearest-neighbour method; the amount of computation is small and the accuracy is better than that of the first method, but all training values need to be stored, which greatly increases the cost.
Disclosure of Invention
The invention aims to solve the technical problem of providing a passive sensing indoor positioning method based on deep learning, which has the advantages of simple positioning process, high positioning accuracy and lower cost.
The technical scheme adopted by the invention for solving the technical problems is as follows: a passive sensing indoor positioning method based on deep learning comprises the following steps:
firstly, data acquisition:
step 1: prepare a computer, insert a commercial Intel 5300 wireless network card into a wireless network card expansion slot of the computer host, install the Ubuntu 12.04 system and the Linux 802.11n CSI Tool on the computer host, and place the computer in the monitoring room;
step 2: divide the room to be positioned into L regions along the transverse direction and the longitudinal direction: the transverse direction is divided into L1 parts along its maximum length, the longitudinal direction is divided into L2 parts along its maximum length, L = L1 × L2, and L1 and L2 are each integers greater than 4 and smaller than 6. Select the center of each region as the position of that region; if a region is not centrally symmetric, take the intersection point of the perpendicular bisectors of its transverse edge and its longitudinal edge as its center, and record the position of the l-th region as S_l, l = 1, 2, …, L. Select any indoor point as the origin and construct a two-dimensional coordinate system of the indoor area with the transverse direction as the x-axis direction and the longitudinal direction as the y-axis direction; the coordinates of the center point of the l-th region S_l are obtained by measurement and are expressed as (x_l, y_l). A router without a password and connected to the network is placed at a certain position in the room to be positioned;
step 3: open the Ubuntu 12.04 system on the computer host and connect the commercial Intel 5300 wireless network card installed in the computer host to the router through the wireless network. One tester stands in turn at positions S_1~S_L, and the other tester obtains the test data at positions S_1~S_L in turn on the computer host. The specific process is as follows: while one tester stands at position S_l, the other tester collects N signal data packets at position S_l by pinging the router from the computer host at a rate of 50 times per second, takes these N signal data packets as the test data at position S_l, saves the test data at position S_l as a file with the suffix .dat, and names the file csil.dat, where N is an integer greater than 800 and smaller than 1000;
secondly, data processing: install matlab in the Ubuntu 12.04 system of the computer host, open the matlab folder bundled with the Linux 802.11n CSI Tool using matlab software, and process the test data at positions S_1~S_L acquired in the data acquisition step to obtain the CSI amplitude matrices at positions S_1~S_L. The specific process is as follows: read the csil.dat file with the read_bf_file function in the matlab folder to obtain the N signal data packets at position S_l, then open the N signal data packets at position S_l one by one with the get_scaled_csi function to obtain the N CSI data matrices at position S_l; each CSI data matrix at position S_l contains 30 rows and 3 columns of CSI data in complex form. Apply the abs function to each of the N CSI data matrices at position S_l to obtain the corresponding CSI amplitude matrices, giving the N CSI amplitude matrices at position S_l, each of which contains 30 rows and 3 columns of CSI amplitude data;
thirdly, constructing a deep neural network model: the deep neural network model comprises a visible layer, a first hidden layer, a second hidden layer, a third hidden layer and a fourth hidden layer arranged in sequence from top to bottom. The visible layer has K0 neural units, where K0 = N*30*3*12 and * denotes multiplication; the first hidden layer has K1 neural units, 300 < K1 < 500; the second hidden layer has K2 neural units, 200 < K2 < K1; the third hidden layer has K3 neural units, 100 < K3 < K2; the fourth hidden layer has K4 neural units, 50 < K4 < K3. Neural units within the same layer are not connected to one another, while every neural unit in one layer is connected to every neural unit in the adjacent layers. Each neural unit has two states, an activated state and a closed state: the state value of a neural unit in the activated state is 1, and the state value of a neural unit in the closed state is 0. The visible layer and the first hidden layer form a first restricted Boltzmann machine, the first hidden layer and the second hidden layer form a second restricted Boltzmann machine, the second hidden layer and the third hidden layer form a third restricted Boltzmann machine, and the third hidden layer and the fourth hidden layer form a fourth restricted Boltzmann machine;
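To make the layer structure concrete, the following sketch sets up the five layers and the four restricted Boltzmann machines with sizes chosen inside the stated ranges (all variable names and the tiny value of N are illustrative assumptions, not part of the invention):

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes within the stated ranges. N is kept tiny here only to keep the arrays small;
# in the method itself N is an integer between 800 and 1000, so K0 = N*30*3*12.
N = 2
layer_sizes = [N * 30 * 3 * 12, 400, 300, 200, 100]  # [K0, K1, K2, K3, K4]

# biases[0..4] correspond to the offsets a, b, c, d, e of the visible and four hidden layers;
# weights[i] has shape (K_{i+1}, K_i) and corresponds to w1..w4 of the four RBMs.
biases = [rng.uniform(0.0, 0.1, size=k) for k in layer_sizes]
weights = [rng.uniform(0.0, 0.1, size=(layer_sizes[i + 1], layer_sizes[i])) for i in range(4)]
```

Adjacent restricted Boltzmann machines share the offset vector of their common layer, which is why a single list of five bias vectors is used here rather than separate visible and hidden biases per machine.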
fourthly, extracting the fingerprint of position S_l, with the following specific process:
step C1: train the first restricted Boltzmann machine:
C1-1. Construct a matrix v storing the state quantities of the visible layer; construct a matrix ha storing the state quantities of the first hidden layer; construct a matrix a storing the offsets of the visible layer; construct a matrix b storing the offsets of the first hidden layer; construct a weight matrix w1 storing the connection weights between the first hidden layer and the visible layer. Here v = (v_1, v_2, v_3, …, v_K0)^T, where v_k0 denotes the state quantity of the k0-th neural unit in the visible layer, the superscript T denotes matrix transposition, and k0 = 1, 2, …, K0; ha = (ha_1, ha_2, ha_3, …, ha_K1)^T, where ha_k1 denotes the state quantity of the k1-th neural unit in the first hidden layer, k1 = 1, 2, …, K1; a = (a_1, a_2, a_3, …, a_K0)^T, where a_k0 denotes the offset of the k0-th neural unit in the visible layer and is initialized with a random function to a random number between 0 and 0.1; b = (b_1, b_2, b_3, …, b_K1)^T, where b_k1 denotes the offset of the k1-th neural unit in the first hidden layer and is initialized with a random function to a random number between 0 and 0.1; w1_{k1,k0} denotes the connection weight between the k1-th neural unit in the first hidden layer and the k0-th neural unit in the visible layer and is initialized with a random function to a random number between 0 and 0.1;
C1-2. Define the ordering of the CSI amplitude data in a CSI amplitude matrix: the CSI amplitude data of rows 1 to 30 in column 1 of the CSI amplitude matrix are taken as the 1st to 30th CSI amplitude data of the matrix, the CSI amplitude data of rows 1 to 30 in column 2 as the 31st to 60th CSI amplitude data, and the CSI amplitude data of rows 1 to 30 in column 3 as the 61st to 90th CSI amplitude data. Determine the 12-bit binary number corresponding to the q-th CSI amplitude data of the N'-th CSI amplitude matrix at position S_l as follows: first determine whether the q-th CSI amplitude data of the N'-th CSI amplitude matrix at position S_l is an integer or a decimal. If it is an integer, convert the integer into a binary number and append four 0s to the right of the obtained binary number to obtain a new binary number, then check whether the number of bits of the new binary number equals 12. If it equals 12, take the new binary number as the 12-bit binary number corresponding to the q-th CSI amplitude data of the N'-th CSI amplitude matrix at position S_l; if it is greater than 12, keep the 12 bits counted from right to left; if it is less than 12, pad the left side of the new binary number with 0s up to 12 bits. If the q-th CSI amplitude data of the N'-th CSI amplitude matrix at position S_l is a decimal, convert its integer part and its fractional part into binary numbers separately. If the number of bits of the binary number obtained from the integer part is less than 8, pad its high bits with 0s to make it 8 bits; if it is greater than 8, keep 8 bits from right to left and delete the other bits; if it equals 8, keep it unchanged. If the number of bits of the binary number obtained from the fractional part equals 4, keep it unchanged; if it is less than 4, pad its low bits with 0s to make it a 4-bit binary number; if it is greater than 4, keep 4 bits from left to right and delete the other bits. Then take the 8-bit binary number obtained from the integer part as the high 8 bits and the 4-bit binary number obtained from the fractional part as the low 4 bits, splice them into a 12-bit binary number, and take the spliced 12-bit binary number as the 12-bit binary number corresponding to the q-th CSI amplitude data of the N'-th CSI amplitude matrix at position S_l; N' = 1, 2, …, N; q = 1, 2, …, 90. Assign the value of the p-th bit (counted from left to right) of the 12-bit binary number corresponding to the q-th CSI amplitude data to v_{12*3*30*(N'-1)+(q-1)*12+p}, p = 1, 2, …, 12, thereby completing the initial assignment of v_1, v_2, v_3, …, v_K0;
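The 12-bit encoding of step C1-2 (8 bits for the integer part, 4 bits for the fractional part, with a pure integer padded by four zero bits on the right) can be sketched as follows; the helper names are assumptions, and the binary expansion of the fractional part follows the interpretation given above:

```python
def int_to_bits(value, n):
    """Binary form of a non-negative integer, kept to its rightmost n bits and left-padded with 0s."""
    return bin(value)[2:][-n:].rjust(n, "0")

def frac_to_bits(frac, n):
    """First n bits of the binary expansion of a fractional part (kept from the left)."""
    bits = ""
    while len(bits) < n:
        frac *= 2
        bits += "1" if frac >= 1.0 else "0"
        frac -= int(frac)
    return bits

def encode_amplitude(x):
    """12-character bit string for one CSI amplitude value, as described in step C1-2."""
    integer, frac = int(x), x - int(x)
    if frac == 0:  # integer case: binary value plus four 0s on the right, adjusted to 12 bits
        return (bin(integer)[2:] + "0000")[-12:].rjust(12, "0")
    return int_to_bits(integer, 8) + frac_to_bits(frac, 4)

def matrix_to_visible_bits(amp_matrix):
    """amp_matrix: 30x3 CSI amplitude matrix (list of lists).
    Returns the 90*12 bits (as 0/1 integers) in the ordering of step C1-2:
    column 1 rows 1-30, then column 2, then column 3, each value encoded into 12 bits."""
    values = [amp_matrix[r][c] for c in range(3) for r in range(30)]
    return [int(b) for x in values for b in encode_amplitude(x)]
```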
C1-3. Construct a first training matrix ha' storing the first training values of the state quantities of the neural units in the first hidden layer, ha' = (ha'_1, ha'_2, …, ha'_K1)^T, where ha'_k1 denotes the first training value of the state quantity of the k1-th neural unit in the first hidden layer; construct a matrix pa storing the first activation probabilities of the neural units in the first hidden layer, pa = (pa_1, pa_2, pa_3, …, pa_K1)^T, where pa_k1 denotes the first activation probability of the k1-th neural unit in the first hidden layer. The first training matrix ha' and the matrix pa are determined as follows:
C1-3-1. Denote the activation probability of the k1-th neural unit in the first hidden layer as Pr(k1) and compute it with formula (1):
Pr(k1) = 1 / (1 + exp(-(b_k1 + Σ_{k0=1}^{K0} w1_{k1,k0} * v_k0)))    (1)
where exp denotes the exponential function, Σ is the summation sign, b_k1 is the current value of the offset of the k1-th neural unit in the first hidden layer, w1_{k1,k0} is the current value of the connection weight between the k1-th neural unit in the first hidden layer and the k0-th neural unit in the visible layer, and v_k0 is the current value of the state quantity of the k0-th neural unit in the visible layer;
C1-3-2. Assign the current value of Pr(k1) to pa_k1. Generate a random number between 0 and 1 with a random function and compare the current value of Pr(k1) with the random number: if the current value of Pr(k1) is greater than the random number, assign 1 to ha'_k1; if the current value of Pr(k1) is not greater than the random number, assign 0 to ha'_k1;
C1-4. Construct a second training matrix v' = (v'_1, v'_2, …, v'_K0)^T, where v'_k0 is the training value of the state quantity of the k0-th neural unit in the visible layer. The training value v'_k0 is determined as follows:
C1-4-1. Denote the activation probability of the k0-th neural unit in the visible layer as Pr(k0) and compute it with formula (2):
Pr(k0) = 1 / (1 + exp(-(a_k0 + Σ_{k1=1}^{K1} w1_{k1,k0} * ha'_k1)))    (2)
where a_k0 is the current value of the offset of the k0-th neural unit in the visible layer, w1_{k1,k0} is the current value of the connection weight between the k1-th neural unit in the first hidden layer and the k0-th neural unit in the visible layer, and ha'_k1 is the first training value of the state quantity of the k1-th neural unit in the first hidden layer;
C1-4-2. Generate a random number between 0 and 1 with a random function and compare the current value of Pr(k0) with the random number: if the current value of Pr(k0) is greater than the random number, assign 1 to v'_k0; if the current value of Pr(k0) is not greater than the random number, assign 0 to v'_k0;
C1-5. Construct a third training matrix ha'' storing the second training values of the state quantities of the neural units in the first hidden layer, ha'' = (ha''_1, ha''_2, …, ha''_K1)^T, where ha''_k1 denotes the second training value of the state quantity of the k1-th neural unit in the first hidden layer; construct a matrix pa' = (pa'_1, pa'_2, …, pa'_K1)^T, where pa'_k1 denotes the second activation probability of the k1-th neural unit in the first hidden layer. The third training matrix ha'' and the matrix pa' are determined as follows:
C1-5-1. Update the activation probability Pr(k1) of the k1-th neural unit in the first hidden layer with formula (3):
Pr(k1) = 1 / (1 + exp(-(b_k1 + Σ_{k0=1}^{K0} w1_{k1,k0} * v'_k0)))    (3)
where b_k1 is the current value of the offset of the k1-th neural unit in the first hidden layer, w1_{k1,k0} is the current value of the connection weight between the k1-th neural unit in the first hidden layer and the k0-th neural unit in the visible layer, and v'_k0 is the training value of the state quantity of the k0-th neural unit in the visible layer;
C1-5-2. Assign the current value of Pr(k1) to pa'_k1. Generate a random number between 0 and 1 with a random function and compare the current value of Pr(k1) with the random number: if the current value of Pr(k1) is greater than the random number, assign 1 to ha''_k1 and simultaneously assign 1 to ha_k1; if the current value of Pr(k1) is not greater than the random number, assign 0 to ha''_k1 and simultaneously assign 0 to ha_k1;
C1-6. Assign the current value of w1_{k1,k0} plus 0.01*(pa_k1*v_k0 - pa'_k1*v'_k0) to w1_{k1,k0}, thereby updating w1_{k1,k0} and obtaining the updated weight matrix w1; assign the current value of a_k0 plus 0.01*(v_k0 - v'_k0) to a_k0, thereby updating a_k0 and obtaining the updated offset a of the visible layer; assign the current value of b_k1 plus 0.01*(pa_k1 - pa'_k1) to b_k1, thereby updating b_k1 and obtaining the updated offset b of the first hidden layer; here the values of pa_k1, v_k0, pa'_k1 and v'_k0 are all their current values;
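Steps C1-3 to C1-6 together form one contrastive-divergence (CD-1) update of the first restricted Boltzmann machine. The sketch below assumes that formulas (1) to (3) are the usual sigmoid activation probabilities implied by the surrounding definitions; the learning rate 0.01 is the value stated above, and all function names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(prob):
    # A unit is set to 1 when its activation probability exceeds a random number between 0 and 1.
    return (prob > rng.random(prob.shape)).astype(float)

def cd1_step(v, w, a, b, lr=0.01):
    """One CD-1 update for one restricted Boltzmann machine.

    v: visible state vector (K0,); w: weights (K1, K0); a: visible offsets; b: hidden offsets.
    Returns the updated (w, a, b) and the hidden sample ha used as input for the next RBM."""
    pa = sigmoid(b + w @ v)       # formula (1): first activation probabilities of the hidden layer
    ha_1 = sample(pa)             # ha' : first training values of the hidden state quantities
    pv = sigmoid(a + w.T @ ha_1)  # formula (2): activation probabilities of the visible layer
    v_1 = sample(pv)              # v'  : training values of the visible state quantities
    pa_1 = sigmoid(b + w @ v_1)   # formula (3): second activation probabilities of the hidden layer
    ha = sample(pa_1)             # ha'': second training values, also stored as the hidden state ha

    w += lr * (np.outer(pa, v) - np.outer(pa_1, v_1))  # step C1-6: weight update
    a += lr * (v - v_1)                                # step C1-6: visible offset update
    b += lr * (pa - pa_1)                              # step C1-6: hidden offset update
    return w, a, b, ha
```

Steps C2 to C4 repeat the same procedure for the remaining three restricted Boltzmann machines, with the hidden sample of the previous machine playing the role of the visible vector.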
step C2: train the second restricted Boltzmann machine:
C2-1. Construct a matrix hb storing the state quantities of the second hidden layer; construct a matrix c storing the offsets of the second hidden layer; construct a weight matrix w2 storing the connection weights between the first hidden layer and the second hidden layer. Here hb = (hb_1, hb_2, hb_3, …, hb_K2)^T, where hb_k2 denotes the state value of the k2-th neural unit in the second hidden layer, k2 = 1, 2, …, K2; c = (c_1, c_2, c_3, …, c_K2)^T, where c_k2 denotes the offset of the k2-th neural unit in the second hidden layer; w2_{k2,k1} denotes the connection weight between the k1-th neural unit in the first hidden layer and the k2-th neural unit in the second hidden layer; c_k2 and w2_{k2,k1} are each initialized with a random function to random values between 0 and 0.1;
C2-2. Construct a fourth training matrix hb' = (hb'_1, hb'_2, …, hb'_K2)^T, where hb'_k2 denotes the first training value of the state quantity of the k2-th neural unit in the second hidden layer; construct a matrix pb storing the first activation probabilities of the neural units in the second hidden layer, pb = (pb_1, pb_2, pb_3, …, pb_K2)^T, where pb_k2 denotes the first activation probability of the k2-th neural unit in the second hidden layer. The fourth training matrix hb' and the matrix pb are determined as follows:
C2-2-1. Denote the activation probability of the k2-th neural unit in the second hidden layer as Pr(k2) and compute it with formula (4):
Pr(k2) = 1 / (1 + exp(-(c_k2 + Σ_{k1=1}^{K1} w2_{k2,k1} * ha_k1)))    (4)
where c_k2 is the current value of the offset of the k2-th neural unit in the second hidden layer, w2_{k2,k1} is the current value of the connection weight between the k1-th neural unit in the first hidden layer and the k2-th neural unit in the second hidden layer, and ha_k1 is the current value of the state quantity of the k1-th neural unit in the first hidden layer;
C2-2-2. Assign the current value of Pr(k2) to pb_k2. Generate a random number between 0 and 1 with a random function and compare the current value of Pr(k2) with the random number: if the current value of Pr(k2) is greater than the random number, assign 1 to hb'_k2; if the current value of Pr(k2) is not greater than the random number, assign 0 to hb'_k2. This completes the assignment of the fourth training matrix hb';
C2-3. Construct a fifth training matrix ha''' storing the third training values of the state quantities of the neural units in the first hidden layer, ha''' = (ha'''_1, ha'''_2, …, ha'''_K1)^T, where ha'''_k1 denotes the third training value of the state quantity of the k1-th neural unit in the first hidden layer. The fifth training matrix ha''' is determined as follows:
C2-3-1. Update the activation probability Pr(k1) of the k1-th neural unit in the first hidden layer with formula (5):
Pr(k1) = 1 / (1 + exp(-(b_k1 + Σ_{k2=1}^{K2} w2_{k2,k1} * hb'_k2)))    (5)
where b_k1 is the current value of the offset of the k1-th neural unit in the first hidden layer, w2_{k2,k1} is the current value of the connection weight between the k1-th neural unit in the first hidden layer and the k2-th neural unit in the second hidden layer, and hb'_k2 is the first training value of the state quantity of the k2-th neural unit in the second hidden layer;
C2-3-2. Generate a random number between 0 and 1 with a random function and compare the current value of Pr(k1) with the random number: if the current value of Pr(k1) is greater than the random number, assign 1 to ha'''_k1; if the current value of Pr(k1) is not greater than the random number, assign 0 to ha'''_k1;
C2-4. Construct a sixth training matrix hb'' storing the second training values of the state quantities of the neural units in the second hidden layer, hb'' = (hb''_1, hb''_2, …, hb''_K2)^T, where hb''_k2 denotes the second training value of the state quantity of the k2-th neural unit in the second hidden layer; construct a matrix pb' = (pb'_1, pb'_2, …, pb'_K2)^T, where pb'_k2 denotes the second activation probability of the k2-th neural unit in the second hidden layer. The sixth training matrix hb'' and the matrix pb' are determined as follows:
C2-4-1. Update the activation probability Pr(k2) of the k2-th neural unit in the second hidden layer with formula (6):
Pr(k2) = 1 / (1 + exp(-(c_k2 + Σ_{k1=1}^{K1} w2_{k2,k1} * ha'''_k1)))    (6)
where c_k2 is the current value of the offset of the k2-th neural unit in the second hidden layer, w2_{k2,k1} is the current value of the connection weight between the k1-th neural unit in the first hidden layer and the k2-th neural unit in the second hidden layer, and ha'''_k1 is the third training value of the state quantity of the k1-th neural unit in the first hidden layer;
C2-4-2. Assign the current value of Pr(k2) to pb'_k2. Generate a random number between 0 and 1 with a random function and compare the current value of Pr(k2) with the random number: if the current value of Pr(k2) is greater than the random number, assign 1 to hb''_k2 and to hb_k2; if the current value of Pr(k2) is not greater than the random number, assign 0 to hb''_k2 and to hb_k2;
C2-5. Assign the current value of w2_{k2,k1} plus 0.01*(pb_k2*ha_k1 - pb'_k2*ha'''_k1) to w2_{k2,k1}, thereby updating w2_{k2,k1} and obtaining the updated weight matrix w2; assign the current value of b_k1 plus 0.01*(ha_k1 - ha'''_k1) to b_k1, thereby updating b_k1 and obtaining the updated offset b of the first hidden layer; assign the current value of c_k2 plus 0.01*(pb_k2 - pb'_k2) to c_k2, thereby updating c_k2 and obtaining the updated offset c of the second hidden layer; here the values of pb_k2, ha_k1, pb'_k2 and ha'''_k1 are all their current values;
step C3: train the third restricted Boltzmann machine:
C3-1. Construct a matrix hc storing the state quantities of the third hidden layer; construct a matrix d storing the offsets of the third hidden layer; construct a weight matrix w3 storing the connection weights between the second hidden layer and the third hidden layer. Here hc = (hc_1, hc_2, hc_3, …, hc_K3)^T, where hc_k3 denotes the state value of the k3-th neural unit in the third hidden layer, k3 = 1, 2, …, K3; d = (d_1, d_2, d_3, …, d_K3)^T, where d_k3 denotes the offset of the k3-th neural unit in the third hidden layer; w3_{k3,k2} denotes the connection weight between the k2-th neural unit in the second hidden layer and the k3-th neural unit in the third hidden layer; d_k3 and w3_{k3,k2} are each initialized with a random function to random values between 0 and 0.1;
C3-2. Construct a seventh training matrix hc' storing the first training values of the state quantities of the neural units in the third hidden layer, hc' = (hc'_1, hc'_2, …, hc'_K3)^T, where hc'_k3 denotes the first training value of the state quantity of the k3-th neural unit in the third hidden layer; construct a matrix pc storing the first activation probabilities of the neural units in the third hidden layer, pc = (pc_1, pc_2, pc_3, …, pc_K3)^T, where pc_k3 denotes the first activation probability of the k3-th neural unit in the third hidden layer. The seventh training matrix hc' and the matrix pc are determined as follows:
C3-2-1. Denote the activation probability of the k3-th neural unit in the third hidden layer as Pr(k3) and compute it with formula (7):
Pr(k3) = 1 / (1 + exp(-(d_k3 + Σ_{k2=1}^{K2} w3_{k3,k2} * hb_k2)))    (7)
where d_k3 is the current value of the offset of the k3-th neural unit in the third hidden layer, w3_{k3,k2} is the current value of the connection weight between the k2-th neural unit in the second hidden layer and the k3-th neural unit in the third hidden layer, and hb_k2 is the current value of the state value of the k2-th neural unit in the second hidden layer;
C3-2-2. Assign the current value of Pr(k3) to pc_k3, thereby obtaining the updated matrix pc. Generate a random number between 0 and 1 with a random function and compare the current value of Pr(k3) with the random number: if the current value of Pr(k3) is greater than the random number, assign 1 to hc'_k3; if the current value of Pr(k3) is not greater than the random number, assign 0 to hc'_k3;
C3-3. Construct an eighth training matrix hb''' storing the third training values of the state quantities of the neural units in the second hidden layer, hb''' = (hb'''_1, hb'''_2, …, hb'''_K2)^T, where hb'''_k2 denotes the third training value of the state quantity of the k2-th neural unit in the second hidden layer. The eighth training matrix hb''' is determined as follows:
C3-3-1. Update the activation probability Pr(k2) of the k2-th neural unit in the second hidden layer with formula (8):
Pr(k2) = 1 / (1 + exp(-(c_k2 + Σ_{k3=1}^{K3} w3_{k3,k2} * hc'_k3)))    (8)
where c_k2 is the current value of the offset of the k2-th neural unit in the second hidden layer, w3_{k3,k2} is the current value of the connection weight between the k2-th neural unit in the second hidden layer and the k3-th neural unit in the third hidden layer, and hc'_k3 is the first training value of the state quantity of the k3-th neural unit in the third hidden layer;
C3-3-2. Generate a random number between 0 and 1 with a random function and compare the current value of Pr(k2) with the random number: if the current value of Pr(k2) is greater than the random number, assign 1 to hb'''_k2; if the current value of Pr(k2) is not greater than the random number, assign 0 to hb'''_k2;
C3-4. Construct a ninth training matrix hc'' storing the second training values of the state quantities of the neural units in the third hidden layer, hc'' = (hc''_1, hc''_2, …, hc''_K3)^T, where hc''_k3 denotes the second training value of the state quantity of the k3-th neural unit in the third hidden layer; construct a matrix pc' = (pc'_1, pc'_2, …, pc'_K3)^T, where pc'_k3 denotes the second activation probability of the k3-th neural unit in the third hidden layer. The ninth training matrix hc'' and the matrix pc' are determined as follows:
C3-4-1. Update the activation probability Pr(k3) of the k3-th neural unit in the third hidden layer with formula (9):
Pr(k3) = 1 / (1 + exp(-(d_k3 + Σ_{k2=1}^{K2} w3_{k3,k2} * hb'''_k2)))    (9)
where d_k3 is the current value of the offset of the k3-th neural unit in the third hidden layer, w3_{k3,k2} is the current value of the connection weight between the k2-th neural unit in the second hidden layer and the k3-th neural unit in the third hidden layer, and hb'''_k2 is the third training value of the state quantity of the k2-th neural unit in the second hidden layer;
C3-4-2. Assign the current value of Pr(k3) to pc'_k3. Generate a random number between 0 and 1 with a random function and compare the current value of Pr(k3) with the random number: if the current value of Pr(k3) is greater than the random number, assign 1 to hc''_k3 and to hc_k3; if the current value of Pr(k3) is not greater than the random number, assign 0 to hc''_k3 and to hc_k3;
C3-5. Assign the current value of w3_{k3,k2} plus 0.01*(pc_k3*hb_k2 - pc'_k3*hb'''_k2) to w3_{k3,k2}, thereby updating w3_{k3,k2} and obtaining the updated weight matrix w3; assign the current value of c_k2 plus 0.01*(hb_k2 - hb'''_k2) to c_k2, thereby updating c_k2 and obtaining the updated offset c of the second hidden layer; assign the current value of d_k3 plus 0.01*(pc_k3 - pc'_k3) to d_k3, thereby updating d_k3 and obtaining the updated offset d of the third hidden layer; here the values of pc_k3, hb_k2, pc'_k3 and hb'''_k2 are all their current values;
step C4: train the fourth restricted Boltzmann machine:
C4-1. Construct a matrix hd storing the state quantities of the fourth hidden layer; construct a matrix e storing the offsets of the fourth hidden layer; construct a weight matrix w4 storing the connection weights between the third hidden layer and the fourth hidden layer. Here hd = (hd_1, hd_2, hd_3, …, hd_K4)^T, where hd_k4 denotes the state value of the k4-th neural unit in the fourth hidden layer, k4 = 1, 2, …, K4; e = (e_1, e_2, e_3, …, e_K4)^T, where e_k4 denotes the offset of the k4-th neural unit in the fourth hidden layer; w4_{k4,k3} denotes the connection weight between the k3-th neural unit in the third hidden layer and the k4-th neural unit in the fourth hidden layer; e_k4 and w4_{k4,k3} are each initialized with a random function to random values between 0 and 0.1;
C4-2. Construct a tenth training matrix hd' storing the first training values of the state quantities of the neural units in the fourth hidden layer, hd' = (hd'_1, hd'_2, …, hd'_K4)^T, where hd'_k4 denotes the first training value of the state quantity of the k4-th neural unit in the fourth hidden layer; construct a matrix pd storing the first activation probabilities of the neural units in the fourth hidden layer, pd = (pd_1, pd_2, pd_3, …, pd_K4)^T, where pd_k4 denotes the first activation probability of the k4-th neural unit in the fourth hidden layer. The tenth training matrix hd' and the matrix pd are determined as follows:
C4-2-1. Denote the activation probability of the k4-th neural unit in the fourth hidden layer as Pr(k4) and compute it with formula (10):
Pr(k4) = 1 / (1 + exp(-(e_k4 + Σ_{k3=1}^{K3} w4_{k4,k3} * hc_k3)))    (10)
where e_k4 is the current value of the offset of the k4-th neural unit in the fourth hidden layer, w4_{k4,k3} is the current value of the connection weight between the k3-th neural unit in the third hidden layer and the k4-th neural unit in the fourth hidden layer, and hc_k3 is the current value of the state quantity of the k3-th neural unit in the third hidden layer;
C4-2-2. Assign the current value of Pr(k4) to pd_k4. Generate a random number between 0 and 1 with a random function and compare the current value of Pr(k4) with the random number: if the current value of Pr(k4) is greater than the random number, assign 1 to hd'_k4; if the current value of Pr(k4) is not greater than the random number, assign 0 to hd'_k4;
C4-3. Construct an eleventh training matrix hc''' storing the third training values of the state quantities of the neural units in the third hidden layer, hc''' = (hc'''_1, hc'''_2, …, hc'''_K3)^T, where hc'''_k3 denotes the third training value of the state quantity of the k3-th neural unit in the third hidden layer. The eleventh training matrix hc''' is determined as follows:
C4-3-1. Update the activation probability Pr(k3) of the k3-th neural unit in the third hidden layer with formula (11):
Pr(k3) = 1 / (1 + exp(-(d_k3 + Σ_{k4=1}^{K4} w4_{k4,k3} * hd'_k4)))    (11)
where d_k3 is the current value of the offset of the k3-th neural unit in the third hidden layer, w4_{k4,k3} is the current value of the connection weight between the k3-th neural unit in the third hidden layer and the k4-th neural unit in the fourth hidden layer, and hd'_k4 is the first training value of the state quantity of the k4-th neural unit in the fourth hidden layer;
C4-3-2. Generate a random number between 0 and 1 with a random function and compare the current value of Pr(k3) with the random number: if the current value of Pr(k3) is greater than the random number, assign 1 to hc'''_k3; if the current value of Pr(k3) is not greater than the random number, assign 0 to hc'''_k3;
C4-4. Construct a twelfth training matrix hd'' storing the second training values of the state quantities of the neural units in the fourth hidden layer, hd'' = (hd''_1, hd''_2, …, hd''_K4)^T, where hd''_k4 denotes the second training value of the state quantity of the k4-th neural unit in the fourth hidden layer; construct a matrix pd' = (pd'_1, pd'_2, …, pd'_K4)^T, where pd'_k4 denotes the second activation probability of the k4-th neural unit in the fourth hidden layer. The twelfth training matrix hd'' and the matrix pd' are determined as follows:
C4-4-1. Update the activation probability Pr(k4) of the k4-th neural unit in the fourth hidden layer with formula (12):
Pr(k4) = 1 / (1 + exp(-(e_k4 + Σ_{k3=1}^{K3} w4_{k4,k3} * hc'''_k3)))    (12)
where e_k4 is the current value of the offset of the k4-th neural unit in the fourth hidden layer, w4_{k4,k3} is the current value of the connection weight between the k3-th neural unit in the third hidden layer and the k4-th neural unit in the fourth hidden layer, and hc'''_k3 is the third training value of the state quantity of the k3-th neural unit in the third hidden layer;
C4-4-2. Assign the current value of Pr(k4) to pd'_k4. Generate a random number between 0 and 1 with a random function and compare the current value of Pr(k4) with the random number: if the current value of Pr(k4) is greater than the random number, assign 1 to hd''_k4 and to hd_k4; if the current value of Pr(k4) is not greater than the random number, assign 0 to hd''_k4 and to hd_k4;
C4-5. Assign the current value of w4_{k4,k3} plus 0.01*(pd_k4*hc_k3 - pd'_k4*hc'''_k3) to w4_{k4,k3}, thereby updating w4_{k4,k3} and obtaining the updated weight matrix w4; assign the current value of d_k3 plus 0.01*(hc_k3 - hc'''_k3) to d_k3, thereby updating d_k3 and obtaining the updated offset d of the third hidden layer; assign the current value of e_k4 plus 0.01*(pd_k4 - pd'_k4) to e_k4, thereby updating e_k4 and obtaining the updated offset e of the fourth hidden layer; here the values of pd_k4, hc_k3, pd'_k4 and hc'''_k3 are all their current values;
step C5: set an iteration variable Y and initialize it so that its initial value is 1;
step C6: perform the Y-th iterative update of the activation probabilities of the neural units in the visible layer, the first hidden layer, the second hidden layer, the third hidden layer and the fourth hidden layer. The specific process is as follows:
C6-1. Update the activation probability Pr(k1) of the k1-th neural unit in the first hidden layer with formula (13):
Pr(k1) = 1 / (1 + exp(-(b_k1 + Σ_{k0=1}^{K0} w1_{k1,k0} * v_k0)))    (13)
where b_k1 is the current value of the offset of the k1-th neural unit in the first hidden layer, w1_{k1,k0} is the current value of the connection weight between the k1-th neural unit in the first hidden layer and the k0-th neural unit in the visible layer, and v_k0 is the current value of the state quantity of the k0-th neural unit in the visible layer;
C6-2. Generate a random number between 0 and 1 with a random function and compare the current value of Pr(k1) with the random number: if the current value of Pr(k1) is greater than the random number, assign 1 to ha_k1, thereby updating ha_k1; if the current value of Pr(k1) is not greater than the random number, assign 0 to ha_k1, thereby updating ha_k1;
C6-3. Update the activation probability Pr(k2) of the k2-th neural unit in the second hidden layer with formula (14):
Pr(k2) = 1 / (1 + exp(-(c_k2 + Σ_{k1=1}^{K1} w2_{k2,k1} * ha_k1)))    (14)
where c_k2 is the current value of the offset of the k2-th neural unit in the second hidden layer, w2_{k2,k1} is the current value of the connection weight between the k1-th neural unit in the first hidden layer and the k2-th neural unit in the second hidden layer, and ha_k1 is the current value of the state quantity of the k1-th neural unit in the first hidden layer;
C6-4. Generate a random number between 0 and 1 with a random function and compare the current value of Pr(k2) with the random number: if the current value of Pr(k2) is greater than the random number, assign 1 to hb_k2, thereby updating hb_k2; if the current value of Pr(k2) is not greater than the random number, assign 0 to hb_k2, thereby updating hb_k2;
C6-5. Update the activation probability Pr(k3) of the k3-th neural unit in the third hidden layer with formula (15):
Pr(k3) = 1 / (1 + exp(-(d_k3 + Σ_{k2=1}^{K2} w3_{k3,k2} * hb_k2)))    (15)
where d_k3 is the current value of the offset of the k3-th neural unit in the third hidden layer, w3_{k3,k2} is the current value of the connection weight between the k2-th neural unit in the second hidden layer and the k3-th neural unit in the third hidden layer, and hb_k2 is the current value of the state quantity of the k2-th neural unit in the second hidden layer;
C6-6. Generate a random number between 0 and 1 with a random function and compare the current value of Pr(k3) with the random number: if the current value of Pr(k3) is greater than the random number, assign 1 to hc_k3, thereby updating hc_k3; if the current value of Pr(k3) is not greater than the random number, assign 0 to hc_k3, thereby updating hc_k3;
C6-7. Update the activation probability Pr(k4) of the k4-th neural unit in the fourth hidden layer with formula (16):
Pr(k4) = 1 / (1 + exp(-(e_k4 + Σ_{k3=1}^{K3} w4_{k4,k3} * hc_k3)))    (16)
where e_k4 is the current value of the offset of the k4-th neural unit in the fourth hidden layer, w4_{k4,k3} is the current value of the connection weight between the k3-th neural unit in the third hidden layer and the k4-th neural unit in the fourth hidden layer, and hc_k3 is the current value of the state quantity of the k3-th neural unit in the third hidden layer;
C6-8. Generate a random number between 0 and 1 with a random function and compare the current value of Pr(k4) with the random number: if the current value of Pr(k4) is greater than the random number, assign 1 to hd_k4, thereby updating hd_k4; if the current value of Pr(k4) is not greater than the random number, assign 0 to hd_k4, thereby updating hd_k4;
C6-9. Update the activation probability Pr(k3) of the k3-th neural unit in the third hidden layer again with formula (17):
Pr(k3) = 1 / (1 + exp(-(d_k3 + Σ_{k4=1}^{K4} w4_{k4,k3} * hd_k4)))    (17)
where d_k3 is the current value of the offset of the k3-th neural unit in the third hidden layer, w4_{k4,k3} is the current value of the connection weight between the k3-th neural unit in the third hidden layer and the k4-th neural unit in the fourth hidden layer, and hd_k4 is the current value of the state quantity of the k4-th neural unit in the fourth hidden layer;
C6-10. Generate a random number between 0 and 1 with a random function and compare the current value of Pr(k3) with the random number: if the current value of Pr(k3) is greater than the random number, assign 1 to hc_k3, thereby updating hc_k3; if the current value of Pr(k3) is not greater than the random number, assign 0 to hc_k3, thereby updating hc_k3;
C6-11. Update the activation probability Pr(k2) of the k2-th neural unit in the second hidden layer again with formula (18):
Pr(k2) = 1 / (1 + exp(-(c_k2 + Σ_{k3=1}^{K3} w3_{k3,k2} * hc_k3)))    (18)
where c_k2 is the current value of the offset of the k2-th neural unit in the second hidden layer, w3_{k3,k2} is the current value of the connection weight between the k2-th neural unit in the second hidden layer and the k3-th neural unit in the third hidden layer, and hc_k3 is the current value of the state quantity of the k3-th neural unit in the third hidden layer;
C6-12. Generate a random number between 0 and 1 with a random function and compare the current value of Pr(k2) with the random number: if the current value of Pr(k2) is greater than the random number, assign 1 to hb_k2, thereby updating hb_k2; if the current value of Pr(k2) is not greater than the random number, assign 0 to hb_k2, thereby updating hb_k2;
C6-13. Update the activation probability Pr(k1) of the k1-th neural unit in the first hidden layer again with formula (19):
Pr(k1) = 1 / (1 + exp(-(b_k1 + Σ_{k2=1}^{K2} w2_{k2,k1} * hb_k2)))    (19)
where b_k1 is the current value of the offset of the k1-th neural unit in the first hidden layer, w2_{k2,k1} is the current value of the connection weight between the k1-th neural unit in the first hidden layer and the k2-th neural unit in the second hidden layer, and hb_k2 is the current value of the state quantity of the k2-th neural unit in the second hidden layer;
C6-14. Generate a random number between 0 and 1 with a random function and compare the current value of Pr(k1) with the random number: if the current value of Pr(k1) is greater than the random number, assign 1 to ha_k1, thereby updating ha_k1; if the current value of Pr(k1) is not greater than the random number, assign 0 to ha_k1, thereby updating ha_k1;
C6-15. Update the activation probability Pr(k0) of the k0-th neural unit in the visible layer with formula (20):
Pr(k0) = 1 / (1 + exp(-(a_k0 + Σ_{k1=1}^{K1} w1_{k1,k0} * ha_k1)))    (20)
where a_k0 is the current value of the offset of the k0-th neural unit in the visible layer, w1_{k1,k0} is the current value of the connection weight between the k1-th neural unit in the first hidden layer and the k0-th neural unit in the visible layer, and ha_k1 is the current value of the state quantity of the k1-th neural unit in the first hidden layer;
C6-16. Generate a random number between 0 and 1 with a random function and compare the current value of Pr(k0) with the random number: if the current value of Pr(k0) is greater than the random number, assign 1 to v'_k0, thereby updating v'_k0; if the current value of Pr(k0) is not greater than the random number, assign 0 to v'_k0, thereby updating v'_k0;
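Step C6 amounts to one bottom-up pass followed by one top-down pass through the stacked layers, resampling each layer's states from its activation probabilities. The sketch below again assumes the sigmoid form for formulas (13) to (20) and reuses the weights/biases layout of the earlier structure sketch (all names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
sample = lambda p: (p > rng.random(p.shape)).astype(float)

def up_down_pass(v, weights, biases):
    """v: visible state vector; weights: [w1..w4]; biases: [a, b, c, d, e].

    Upward pass (formulas 13-16): resample h1, h2, h3, h4 from the layer below.
    Downward pass (formulas 17-20): resample h3, h2, h1 and the visible training values v'.
    Returns [v', h1, h2, h3, h4]."""
    states = [v]
    for i in range(4):                                              # bottom-up
        states.append(sample(sigmoid(biases[i + 1] + weights[i] @ states[i])))
    for i in range(3, -1, -1):                                      # top-down
        states[i] = sample(sigmoid(biases[i] + weights[i].T @ states[i + 1]))
    return states
```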
step C7: create N reconstructed amplitude matrices for storing the reconstructed CSI amplitude data, each of which can store 30 rows and 3 columns of reconstructed CSI amplitude data; the ordering of the 30 rows and 3 columns of reconstructed CSI amplitude data stored in each reconstructed amplitude matrix is defined in the same way as for the CSI amplitude matrices. Combine v'_t to v'_{t+11} in the order v'_t v'_{t+1} v'_{t+2} v'_{t+3} v'_{t+4} … v'_{t+9} v'_{t+10} v'_{t+11} into a 12-bit binary number, where t = 12n + 1, n = 0, 1, 2, …, K0/12 - 1. The 1st to 8th bits (from left to right) of this 12-bit binary number are converted to the corresponding decimal number and taken as the integer part of the reconstructed CSI amplitude data, and the 9th to 12th bits are converted to the corresponding decimal number and taken as the fractional part of the reconstructed CSI amplitude data; the decimal number corresponding to the integer part and the decimal number corresponding to the fractional part are combined into the decimal number corresponding to the 12-bit binary number. This decimal number is stored in the N'-th reconstructed amplitude matrix as its (INT((t-1)/12) + 1 - 90*(N'-1))-th reconstructed CSI amplitude data, where INT is the rounding-down integer function. In this way N reconstructed amplitude matrices, each storing 30 rows and 3 columns of reconstructed CSI amplitude data, are obtained;
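Step C7 inverts the 12-bit encoding of step C1-2: every group of twelve visible training values is read back as an 8-bit integer part plus a 4-bit fractional part. A sketch consistent with the encoding sketch given earlier (the names and the binary-fraction reading of the low 4 bits are assumptions):

```python
def decode_bits(bits12):
    """bits12: twelve 0/1 values (v'_t .. v'_{t+11}).
    The high 8 bits are read as an unsigned integer, the low 4 bits as a binary fraction."""
    integer = sum(int(b) << (7 - i) for i, b in enumerate(bits12[:8]))
    fraction = sum(int(b) * 2.0 ** -(i + 1) for i, b in enumerate(bits12[8:]))
    return integer + fraction

def reconstruct_matrices(v_prime, n_packets):
    """v_prime: flat 0/1 vector of length n_packets*90*12 (the updated visible training values).
    Returns n_packets reconstructed 30x3 amplitude matrices, undoing the column-wise
    ordering (column 1 rows 1-30, column 2, column 3) used during encoding."""
    matrices = []
    for n in range(n_packets):
        values = [decode_bits(v_prime[(n * 90 + q) * 12:(n * 90 + q + 1) * 12]) for q in range(90)]
        matrices.append([[values[c * 30 + r] for c in range(3)] for r in range(30)])
    return matrices
```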
step C8: compute the Euclidean distance between the N'-th reconstructed amplitude matrix and the N'-th CSI amplitude matrix; accumulate the Euclidean distances from the distance between the 1st reconstructed amplitude matrix and the 1st CSI amplitude matrix to the distance between the N-th reconstructed amplitude matrix and the N-th CSI amplitude matrix, take the resulting sum as the output error, and denote the output error as Φ;
step C9: update the offsets of the visible layer, the first hidden layer, the second hidden layer, the third hidden layer and the fourth hidden layer, as well as the weight matrices w1, w2, w3 and w4. The specific process is as follows:
C9-1. Construct a matrix δa = (δa_1, δa_2, …, δa_K0) storing the residual terms of the neural units in the visible layer, where δa_k0 is the residual term of the k0-th neural unit in the visible layer. Compute the residual term δa_k0 of the k0-th neural unit in the visible layer with formula (21):
δa_k0 = -(v'_k0 - v_k0) * v'_k0 * (1 - v'_k0)    (21)
where v'_k0 is the current value of the training value of the state quantity of the k0-th neural unit in the visible layer and v_k0 is the current value of the state quantity of the k0-th neural unit in the visible layer;
C9-2, constructing a matrix δb storing the residual terms of the neural units in the first hidden layer, δb = (δb_1, δb_2, …, δb_K1), wherein δb_k1 is the residual term of the k1-th neural unit in the first hidden layer; calculating the residual term δb_k1 of the k1-th neural unit in the first hidden layer by adopting formula (22):

δb_k1 = (Σ_{k0=1}^{K0} w1_{k1,k0}·δa_k0) · ha_k1 · (1 - ha_k1)    (22)

wherein w1_{k1,k0} is the current value of the connection weight between the k0-th neural unit in the visible layer and the k1-th neural unit in the first hidden layer, δa_k0 is the current value of the residual term of the k0-th neural unit in the visible layer, and ha_k1 is the current value of the state quantity of the k1-th neural unit in the first hidden layer;
C9-3, constructing a matrix δc storing the residual terms of the neural units in the second hidden layer, δc = (δc_1, δc_2, …, δc_K2), wherein δc_k2 is the residual term of the k2-th neural unit in the second hidden layer; calculating the residual term δc_k2 of the k2-th neural unit in the second hidden layer by adopting formula (23):

δc_k2 = (Σ_{k1=1}^{K1} w2_{k1,k2}·δb_k1) · hb_k2 · (1 - hb_k2)    (23)

wherein w2_{k1,k2} is the current value of the connection weight between the k1-th neural unit in the first hidden layer and the k2-th neural unit in the second hidden layer, δb_k1 is the current value of the residual term of the k1-th neural unit in the first hidden layer, and hb_k2 is the current value of the state quantity of the k2-th neural unit in the second hidden layer;
C9-4, constructing a matrix δd storing the residual terms of the neural units in the third hidden layer, δd = (δd_1, δd_2, …, δd_K3), wherein δd_k3 is the residual term of the k3-th neural unit in the third hidden layer; calculating the residual term δd_k3 of the k3-th neural unit in the third hidden layer by adopting formula (24):

δd_k3 = (Σ_{k2=1}^{K2} w3_{k2,k3}·δc_k2) · hc_k3 · (1 - hc_k3)    (24)

wherein w3_{k2,k3} is the current value of the connection weight between the k2-th neural unit in the second hidden layer and the k3-th neural unit in the third hidden layer, δc_k2 is the current value of the residual term of the k2-th neural unit in the second hidden layer, and hc_k3 is the current value of the state quantity of the k3-th neural unit in the third hidden layer;
C9-5, constructing a matrix δe storing the residual terms of the neural units in the fourth hidden layer, δe = (δe_1, δe_2, …, δe_K4), wherein δe_k4 is the residual term of the k4-th neural unit in the fourth hidden layer; calculating the residual term δe_k4 of the k4-th neural unit in the fourth hidden layer by adopting formula (25):

δe_k4 = (Σ_{k3=1}^{K3} w4_{k3,k4}·δd_k3) · hd_k4 · (1 - hd_k4)    (25)

wherein w4_{k3,k4} is the current value of the connection weight between the k3-th neural unit in the third hidden layer and the k4-th neural unit in the fourth hidden layer, δd_k3 is the current value of the residual term of the k3-th neural unit in the third hidden layer, and hd_k4 is the current value of the state quantity of the k4-th neural unit in the fourth hidden layer;
C9-6, adding 0.5·δd_k3·hd_k4 to the current value of w4_{k3,k4} and assigning the result to w4_{k3,k4} to update w4_{k3,k4}; adding 0.5·δc_k2·hc_k3 to the current value of w3_{k2,k3} and assigning the result to w3_{k2,k3} to update w3_{k2,k3}; adding 0.5·δb_k1·hb_k2 to the current value of w2_{k1,k2} and assigning the result to w2_{k1,k2} to update w2_{k1,k2}; adding 0.5·δa_k0·ha_k1 to the current value of w1_{k1,k0} and assigning the result to w1_{k1,k0} to update w1_{k1,k0}; adding 0.5·δe_k4 to the current value of e_k4 and assigning the result to e_k4 to update e_k4; adding 0.5·δd_k3 to the current value of d_k3 and assigning the result to d_k3 to update d_k3; adding 0.5·δc_k2 to the current value of c_k2 and assigning the result to c_k2 to update c_k2; adding 0.5·δb_k1 to the current value of b_k1 and assigning the result to b_k1 to update b_k1; adding 0.5·δa_k0 to the current value of a_k0 and assigning the result to a_k0 to update a_k0; wherein the values of δd_k3, hd_k4, δc_k2, hc_k3, δb_k1, hb_k2, δa_k0, ha_k1 and δe_k4 are respectively their current values;
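Step C9 behaves like one backpropagation-style correction with step size 0.5. A vectorised sketch under the formulas (21)-(25) as reconstructed above (weight shapes and variable names such as W1 and ha are my assumptions, not the patent's symbols):

```python
import numpy as np

def step_c9_update(v, v_rec, ha, hb, hc, hd, W1, W2, W3, W4, a, b, c, d, e, lr=0.5):
    """Step C9 sketch. Shapes assumed: W1 (K1,K0), W2 (K2,K1), W3 (K3,K2), W4 (K4,K3);
    v/v_rec of length K0; ha..hd the current hidden state vectors."""
    da = -(v_rec - v) * v_rec * (1.0 - v_rec)        # formula (21)
    db = (W1 @ da) * ha * (1.0 - ha)                 # formula (22)
    dc = (W2 @ db) * hb * (1.0 - hb)                 # formula (23)
    dd = (W3 @ dc) * hc * (1.0 - hc)                 # formula (24)
    de = (W4 @ dd) * hd * (1.0 - hd)                 # formula (25)
    # C9-6: each weight gains 0.5 * (residual of its lower layer) * (state of its upper layer)
    W1 += lr * np.outer(ha, da)                      # w1[k1,k0] += 0.5 * δa[k0] * ha[k1]
    W2 += lr * np.outer(hb, db)
    W3 += lr * np.outer(hc, dc)
    W4 += lr * np.outer(hd, dd)
    # C9-6: each offset gains 0.5 times its own layer's residual
    a += lr * da; b += lr * db; c += lr * dc; d += lr * dd; e += lr * de
    return da, db, dc, dd, de
```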
step C10: judging whether the output error Φ is less than 5 and whether Y is equal to 10000; if at least one of the two conditions is satisfied, taking the current values of w1, w2, w3, w4 and a, b, c, d, e as the fingerprint of S_l; if neither of the two conditions is satisfied, adding 1 to the current value of Y to update the value of Y, and returning to step C6 to step C9 for the next iteration until at least one of the two conditions is satisfied, thereby obtaining the fingerprint of S_l;
fifthly, positioning indoor personnel, and the specific process is as follows:
fifthly-1, acquiring N signal data packets of the current position of the indoor person to be positioned in real time through a computer in the monitoring room, and saving the acquired N signal data packets of the current position of the indoor person as data to be detected as a file named CSI.dat;
fifthly-2, obtaining N CSI amplitude matrixes of the current position according to the N signal data packets of the current position by the same method, and calculating to obtain the mean, the variance sigma and the standard deviation std of the N CSI amplitude matrixes of the current position;
fifthly-3, setting a coefficient of variation, recording the coefficient of variation as λ, and calculating the coefficient of variation λ by using formula (26):
fifthly-4, determining the 12-bit binary number corresponding to the m-th CSI amplitude data of the N'''-th CSI amplitude matrix among the N CSI amplitude matrices of the current position, the specific process being as follows: judging whether the m-th CSI amplitude data of the N'''-th CSI amplitude matrix among the N CSI amplitude matrices of the current position is an integer or a decimal. If it is an integer, converting it into a binary number and supplementing 4 zeros to the right of the obtained binary number to obtain a new binary number; then judging whether the number of bits of the obtained new binary number is equal to 12: if it is equal to 12, taking the new binary number as the 12-bit binary number corresponding to the m-th CSI amplitude data of the N'''-th CSI amplitude matrix; if it is greater than 12, selecting 12 bits from right to left as the 12-bit binary number corresponding to the m-th CSI amplitude data; if it is less than 12, supplementing 0 to the left of the new binary number until the number of bits is 12 and taking the result as the 12-bit binary number corresponding to the m-th CSI amplitude data of the N'''-th CSI amplitude matrix. If the m-th CSI amplitude data of the N'''-th CSI amplitude matrix is a decimal, converting its integer part and its fractional part into binary numbers respectively; if the number of bits of the binary number obtained from the integer part is less than 8, supplementing 0 at its high bits to make it an 8-bit binary number; if the number of bits is greater than 8, retaining 8 bits from right to left and deleting the other bits to obtain an 8-bit binary number; if the number of bits is equal to 8, keeping the binary number unchanged; if the number of bits of the binary number obtained from the fractional part is equal to 4, keeping it unchanged; if it is less than 4, supplementing 0 at its low bits to make it a 4-bit binary number; if it is greater than 4, retaining 4 bits from left to right and deleting the other bits to obtain a 4-bit binary number; then splicing the 8-bit binary number obtained from the integer part as the upper 8 bits and the 4-bit binary number obtained from the fractional part as the lower 4 bits into a 12-bit binary number, and taking the spliced 12-bit binary number as the 12-bit binary number corresponding to the m-th CSI amplitude data of the N'''-th CSI amplitude matrix among the N CSI amplitude matrices of the current position; N''' = 1, 2, …, N; m = 1, 2, …, 90; assigning the value of the x-th bit, counted from left to right, of the 12-bit binary number corresponding to the m-th CSI amplitude data to v_{12*3*30*(N'''-1)+(m-1)*12+x}, thereby updating v_1, v_2, v_3, …, v_K0, wherein x = 1, 2, …, 12;
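The 8-bit-integer / 4-bit-fraction encoding used here (and earlier in step C1-2) can be sketched as follows; the handling of the fractional digits reflects one reading of the text and should be treated as an assumption, not the authoritative rule:

```python
def encode_amplitude(value):
    """Map one CSI amplitude value to its 12-bit binary string
    (8 integer bits on the left, 4 fraction bits on the right)."""
    if float(value).is_integer():
        bits = format(int(value), "b") + "0000"                  # integer case: append four zeros on the right
        return bits[-12:].rjust(12, "0")                         # keep 12 bits from the right, pad on the left
    int_bits = format(int(value), "b")[-8:].rjust(8, "0")        # integer part forced to 8 bits
    frac_digits = str(float(value)).split(".")[1]                # fractional digits, e.g. "73" for 25.73
    frac_bits = format(int(frac_digits), "b")[:4].ljust(4, "0")  # fractional part forced to 4 bits
    return int_bits + frac_bits

# filling the visible layer for the m-th amplitude of the N'''-th matrix of the current position:
# bits = encode_amplitude(amplitude); for x in 1..12:
#     v[12*3*30*(N3 - 1) + (m - 1)*12 + (x - 1)] = int(bits[x - 1])   # N3 stands in for N'''
print(encode_amplitude(25.73), encode_amplitude(42))
```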
fifthly-5, recording the probability of obtaining v at position S_l as Pr(v_l) and determining Pr(v_l), the specific process being as follows:
fifthly-5-1, taking the fingerprint w1, w2, w3, w4 and a, b, c, d, e of S_l as the current values, recalculating the output error according to the methods of step C6, step C7 and step C8, and recording the output error calculated at this time as Φ1;
fifthly-6, recording the probability of the indoor position Lo_l as Pr(Lo_l); since the indoor position probability is taken to be uniformly distributed, Pr(Lo_l) = 1/L; recording the probability that the indoor person to be positioned is located at position S_l as Pr(S_l), and calculating Pr(S_l) by adopting formula (28):
fifthly-7, recording the coordinates of the current position of the indoor person to be positioned as (x_l', y_l'), and calculating (x_l', y_l') by using formulas (29) and (30):
The (x_l', y_l') calculated above are the positioning coordinates of the indoor person to be positioned.
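Formulas (26)-(30) do not survive in this text, so the sketch below only illustrates one common way such a step is realised: turn each position's recomputed error Φ1 into a likelihood, combine it with the uniform prior Pr(Lo_l) = 1/L, normalise to obtain Pr(S_l), and take the probability-weighted combination of the reference coordinates. The exponential likelihood and the weighted combination are assumptions, not the patent's actual formulas.

```python
import numpy as np

def locate(phi1, coords, lam=1.0):
    """phi1: length-L array, the output error recomputed with each fingerprint S_l.
    coords: (L, 2) array of the reference coordinates (x_l, y_l).
    lam: spread parameter of the assumed likelihood. Returns the estimated (x, y)."""
    phi1 = np.asarray(phi1, dtype=float)
    likelihood = np.exp(-phi1 / lam)               # assumed form of Pr(v_l): smaller error, larger probability
    prior = np.full(phi1.shape, 1.0 / phi1.size)   # Pr(Lo_l) = 1/L, uniform
    posterior = likelihood * prior
    posterior /= posterior.sum()                   # Pr(S_l)
    x, y = (posterior[:, None] * np.asarray(coords, dtype=float)).sum(axis=0)
    return float(x), float(y)

# toy example with L = 4 reference positions
print(locate([12.0, 3.5, 7.0, 20.0], [(0, 0), (0, 2), (2, 0), (2, 2)]))
```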
Compared with the prior art, the invention has the advantages that a deep neural network model with a visible layer and four hidden layers is constructed by utilizing deep learning technology and is used for extracting the fingerprint of each position in the room to be measured so as to establish a fingerprint database. When the fingerprint of each position is extracted, the deep neural network model is divided into four restricted Boltzmann machines, and the four restricted Boltzmann machines are trained respectively to obtain the weight matrices and the offset matrices; the obtained weight matrices and offset matrices are then iteratively updated and optimized by means of the residual terms until the optimization termination condition is met, so that weight matrices and offset matrices of higher precision are obtained for the deep neural network model, which improves the precision. The four weight matrices of the four trained restricted Boltzmann machines and the offset matrices of the visible layer and the four hidden layers are stored as the fingerprint of the current position; the space occupied by these data is far smaller than the space occupied by the acquired data, so the cost is low. After the fingerprints of all positions have been extracted, a database is established for positioning indoor personnel; the fingerprints in the fingerprint database can be used directly when indoor personnel are positioned, so the positioning process is simple. By using the fingerprint of position S_l to recalculate the output error, the probability Pr(v_l) of obtaining v at position S_l is computed from the recalculated output error; then, according to the calculated probability Pr(v_l), the probability that the indoor person to be positioned is located at position S_l is obtained, and finally the positioning coordinates of the indoor person to be positioned are obtained, so the resulting positioning precision is high.
Detailed Description
The present invention will be described in further detail with reference to examples.
Example (b): a passive sensing indoor positioning method based on deep learning comprises the following steps:
firstly, data acquisition:
step 1: preparing a computer, inserting the commercial wireless network card intel 5300 into a wireless network card expansion slot of a computer host, installing an ubuntu12.04 system and a Linux 802.11n CSI Tool in the computer host, and placing the computer in a monitoring room;
step 2: dividing the room to be positioned into L areas along the transverse direction and the longitudinal direction, wherein the transverse direction is divided into L1 parts along its maximum length and the longitudinal direction is divided into L2 parts along its maximum length, so that L areas are obtained, L = L1 × L2, and L1 and L2 are each integers larger than 4 and smaller than 6; the center of each area is selected as the position of that area, and if an area is not centrosymmetric, the intersection point of the perpendicular bisector of its transverse side line and the perpendicular bisector of its longitudinal side line is taken as its center; the position of the l-th area is recorded as S_l, l = 1, 2, …, L; selecting any indoor point as the origin, constructing a two-dimensional coordinate system of the indoor area with the transverse direction as the x-axis direction and the longitudinal direction as the y-axis direction, and obtaining through measurement the coordinates of the central point L_l of the l-th area S_l, expressed as (x_l, y_l); a router which is not provided with a password and is connected to the network is arranged at a certain position in the room to be positioned;
step 3: opening the ubuntu12.04 system in the computer host, connecting the commercial wireless network card intel 5300 installed in the computer host with the router through the wireless network, letting one tester stand in turn at positions S_1~S_L, and letting another tester obtain the test data of positions S_1~S_L in turn at the computer host, the specific process being as follows: while one tester stands at position S_l, the other tester pings the router from the computer host at a rate of 50 times per second and collects N signal data packets of position S_l as the test data of position S_l; the test data of position S_l are saved as a file with the suffix name dat, and the file is named csil.dat, wherein N is an integer greater than 800 and less than 1000;
processing the data: installing matlab in the ubuntu12.04 system of the computer host, opening the matlab folder provided with the Linux 802.11n CSI Tool by using the matlab software, and processing the test data of positions S_1~S_L acquired in the data acquisition step respectively to obtain the CSI amplitude matrices of positions S_1~S_L, the specific process being as follows: reading the csil.dat file by adopting the read_bf_file function in the matlab folder to acquire the N signal data packets of position S_l; opening the N signal data packets of position S_l respectively by adopting the get_scaled_csi function to obtain N CSI data matrices of position S_l, each CSI data matrix of position S_l containing 30 rows and 3 columns of CSI data in complex form; calculating the CSI amplitude matrices corresponding to the N CSI data matrices of position S_l respectively by adopting the abs function to obtain N CSI amplitude matrices of position S_l, each CSI amplitude matrix of position S_l containing 30 rows and 3 columns of CSI amplitude data;
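This processing step amounts to taking the modulus of each complex CSI entry. The patent uses the CSI Tool's MATLAB helpers (read_bf_file, get_scaled_csi, abs); once the packets have been parsed into complex 30 x 3 matrices, the equivalent operation looks like the NumPy sketch below (the parsed_packets input is an assumption standing in for the CSI Tool output):

```python
import numpy as np

def amplitude_matrices(parsed_packets):
    """parsed_packets: an iterable of N complex CSI matrices of shape (30, 3)
    (30 subcarriers x 3 antennas). Returns an (N, 30, 3) array of CSI amplitude data."""
    csi = np.stack([np.asarray(p, dtype=complex) for p in parsed_packets])
    return np.abs(csi)  # modulus of every complex CSI entry, like MATLAB's abs()

# quick self-check with synthetic data standing in for one position's packets
fake_packets = [np.random.randn(30, 3) + 1j * np.random.randn(30, 3) for _ in range(5)]
print(amplitude_matrices(fake_packets).shape)  # (5, 30, 3)
```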
constructing a deep neural network model: the neural network model comprises a visible layer, a first hidden layer, a second hidden layer, a third hidden layer and a fourth hidden layer which are arranged in sequence from top to bottom, wherein the visible layer has K0 neural units, K0 = N*30*3*12, and * represents the multiplication symbol; the first hidden layer has K1 neural units, 300 < K1 < 500; the second hidden layer has K2 neural units, 200 < K2 < K1; the third hidden layer has K3 neural units, 100 < K3 < K2; the fourth hidden layer has K4 neural units, 50 < K4 < K3; the neural units in the same layer are not connected with each other, all neural units of two adjacent layers are connected with each other, and each neural unit has two states, an activated state and a closed state, the state value of a neural unit in the activated state being 1 and the state value of a neural unit in the closed state being 0; the visible layer and the first hidden layer form the first restricted Boltzmann machine, the first hidden layer and the second hidden layer form the second restricted Boltzmann machine, the second hidden layer and the third hidden layer form the third restricted Boltzmann machine, and the third hidden layer and the fourth hidden layer form the fourth restricted Boltzmann machine;
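For concreteness, the sizes implied by this step can be written down directly; the specific K1-K4 values below are merely examples inside the stated ranges, not values fixed by the text:

```python
# Example sizes only; K1-K4 are free choices inside the stated ranges.
N = 900                                # packets per position (800 < N < 1000)
K0 = N * 30 * 3 * 12                   # visible units: one per bit of the encoded amplitude data
K1, K2, K3, K4 = 400, 300, 200, 100    # 300<K1<500, 200<K2<K1, 100<K3<K2, 50<K4<K3

# the four stacked restricted Boltzmann machines and the parameters each one owns
rbms = [("RBM1 (visible, hidden1)", K0, K1),
        ("RBM2 (hidden1, hidden2)", K1, K2),
        ("RBM3 (hidden2, hidden3)", K2, K3),
        ("RBM4 (hidden3, hidden4)", K3, K4)]
for name, lower, upper in rbms:
    print(f"{name}: weight matrix {upper} x {lower}, offsets {lower} + {upper}")
```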
extracting the fingerprint of position S_l, the specific process being as follows:
step C1, training the first limited Boltzmann machine:
C1-1, constructing a matrix storing the state quantities of the visible layer, recorded as v; constructing a matrix storing the state quantities of the first hidden layer, recorded as ha; constructing a matrix storing the offsets of the visible layer, recorded as a; constructing a matrix storing the offsets of the first hidden layer, recorded as b; constructing a weight matrix storing the connection weights between the first hidden layer and the visible layer, recorded as w1; wherein v = (v_1, v_2, v_3, …, v_K0)^T, v_k0 represents the state quantity of the k0-th neural unit in the visible layer, the superscript T represents the transpose of the matrix, k0 = 1, 2, …, K0; ha = (ha_1, ha_2, ha_3, …, ha_K1)^T, ha_k1 represents the state quantity of the k1-th neural unit in the first hidden layer, k1 = 1, 2, …, K1; a = (a_1, a_2, a_3, …, a_K0)^T, a_k0 represents the offset of the k0-th neural unit in the visible layer, and a_k0 is initialized with a random function so that a_k0 is a random number between 0 and 0.1; b = (b_1, b_2, b_3, …, b_K1)^T, b_k1 represents the offset of the k1-th neural unit in the first hidden layer, and b_k1 is initialized with a random function so that b_k1 is a random number between 0 and 0.1; w1_{k1,k0} represents the connection weight between the k1-th neural unit in the first hidden layer and the k0-th neural unit in the visible layer, and w1_{k1,k0} is initialized with a random function so that w1_{k1,k0} is a random number between 0 and 0.1;
C1-2, defining the ordering of the CSI amplitude data in a CSI amplitude matrix: taking the CSI amplitude data of the 1st column, rows 1 to 30, of a CSI amplitude matrix as the 1st to 30th CSI amplitude data of that matrix, taking the CSI amplitude data of the 2nd column, rows 1 to 30, as the 31st to 60th CSI amplitude data of that matrix, and taking the CSI amplitude data of the 3rd column, rows 1 to 30, as the 61st to 90th CSI amplitude data of that matrix; determining the 12-bit binary number corresponding to the q-th CSI amplitude data of the N'-th CSI amplitude matrix of position S_l, the specific process being as follows: judging whether the q-th CSI amplitude data of the N'-th CSI amplitude matrix of position S_l is an integer or a decimal; if it is an integer, converting the integer into a binary number and supplementing 4 zeros to the right of the obtained binary number to obtain a new binary number, then judging whether the number of bits of the obtained new binary number is equal to 12: if it is equal to 12, taking the new binary number as the 12-bit binary number corresponding to the q-th CSI amplitude data of the N'-th CSI amplitude matrix of position S_l; if it is greater than 12, selecting 12 bits from right to left as the 12-bit binary number corresponding to the q-th CSI amplitude data of the N'-th CSI amplitude matrix of position S_l; if it is less than 12, supplementing 0 to the left of the new binary number until the number of bits is 12 and then taking it as the 12-bit binary number corresponding to the q-th CSI amplitude data of the N'-th CSI amplitude matrix of position S_l; if the q-th CSI amplitude data of the N'-th CSI amplitude matrix of position S_l is a decimal, converting its integer part and its fractional part into binary numbers respectively; if the number of bits of the binary number obtained from the integer part is less than 8, supplementing 0 at its high bits to make it an 8-bit binary number; if the number of bits is greater than 8, retaining 8 bits from right to left and deleting the other bits to obtain an 8-bit binary number; if the number of bits is equal to 8, keeping the binary number unchanged; if the number of bits of the binary number obtained from the fractional part is equal to 4, keeping the binary number unchanged; if it is less than 4, supplementing 0 at its low bits to make it a 4-bit binary number; if it is greater than 4, retaining 4 bits from left to right and deleting the other bits to obtain a 4-bit binary number; then splicing the 8-bit binary number obtained from the integer part as the upper 8 bits and the 4-bit binary number obtained from the fractional part as the lower 4 bits into a 12-bit binary number, and taking the spliced 12-bit binary number as the 12-bit binary number corresponding to the q-th CSI amplitude data of the N'-th CSI amplitude matrix of position S_l; N' = 1, 2, …, N; q = 1, 2, …, 90; assigning the value of the p-th bit, counted from left to right, of the 12-bit binary number corresponding to the q-th CSI amplitude data to v_{12*3*30*(N'-1)+(q-1)*12+p}, p = 1, 2, …, 12, thereby completing the initial assignment of v_1, v_2, v_3, …, v_K0;
C1-3, constructing a first training matrix ha' storing the first training values of the state quantities of the neural units in the first hidden layer, ha' = (ha'_1, ha'_2, …, ha'_K1)^T, wherein ha'_k1 represents the first training value of the state quantity of the k1-th neural unit in the first hidden layer; constructing a matrix pa storing the first activation probabilities of the neural units in the first hidden layer, pa = (pa_1, pa_2, pa_3, …, pa_K1)^T, wherein pa_k1 represents the first activation probability of the k1-th neural unit in the first hidden layer; determining the first training matrix ha' and the matrix pa, the specific process being as follows:
C1-3-1, recording the activation probability of the k1-th neural unit in the first hidden layer as Pr(k1), and calculating Pr(k1) by adopting formula (1):

Pr(k1) = 1 / (1 + exp(-(b_k1 + Σ_{k0=1}^{K0} w1_{k1,k0}·v_k0)))    (1)

wherein exp denotes the exponential function, Σ is the accumulation symbol, b_k1 is the current value of the offset of the k1-th neural unit in the first hidden layer, w1_{k1,k0} is the current value of the connection weight between the k1-th neural unit in the first hidden layer and the k0-th neural unit in the visible layer, and v_k0 is the current value of the state quantity of the k0-th neural unit in the visible layer;
C1-3-2, assigning the current value of Pr(k1) to pa_k1, generating a random number between 0 and 1 by using a random function, and comparing the current value of Pr(k1) with the random number; if the current value of Pr(k1) is greater than the random number, assigning 1 to ha'_k1; if the current value of Pr(k1) is not greater than the random number, assigning 0 to ha'_k1;
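Sub-steps C1-3-1 and C1-3-2 are the standard restricted-Boltzmann-machine move: compute a sigmoid activation probability per hidden unit, then draw a uniform random number and set the unit to 1 or 0. A vectorised sketch, assuming the logistic form implied by the exp in formula (1) (function and variable names are mine):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_first_hidden(v, W1, b, rng):
    """C1-3 sketch: pa[k1] = Pr(k1) = sigmoid(b[k1] + sum_k0 W1[k1, k0] * v[k0]);
    ha'[k1] = 1 if Pr(k1) is greater than a fresh uniform random number, else 0."""
    pa = sigmoid(b + W1 @ v)                                  # first activation probabilities
    ha_prime = (pa > rng.random(pa.shape)).astype(float)      # stochastic binary training values
    return pa, ha_prime

# toy demonstration (sizes shrunk drastically; the real K0 is N*30*3*12)
rng = np.random.default_rng(0)
K0, K1 = 24, 8
v = rng.integers(0, 2, K0).astype(float)          # a binary visible vector
W1 = rng.uniform(0.0, 0.1, (K1, K0))              # weights initialised between 0 and 0.1
b = rng.uniform(0.0, 0.1, K1)                     # first-hidden-layer offsets
pa, ha_prime = sample_first_hidden(v, W1, b, rng)
print(pa.round(3), ha_prime)
```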
C1-4, constructing a second training matrix v', v' = (v'_1, v'_2, …, v'_K0)^T, wherein v'_k0 is the training value of the state quantity of the k0-th neural unit in the visible layer; determining the training value v'_k0 of the state quantity of the k0-th neural unit in the visible layer, the specific process being as follows:
C1-4-1, recording the activation probability of the k0-th neural unit in the visible layer as Pr(k0), and calculating Pr(k0) by adopting formula (2):

Pr(k0) = 1 / (1 + exp(-(a_k0 + Σ_{k1=1}^{K1} w1_{k1,k0}·ha'_k1)))    (2)

wherein a_k0 is the current value of the offset of the k0-th neural unit in the visible layer, w1_{k1,k0} is the current value of the connection weight between the k1-th neural unit in the first hidden layer and the k0-th neural unit in the visible layer, and ha'_k1 is the first training value of the state quantity of the k1-th neural unit in the first hidden layer;
C1-4-2, generating a random number between 0 and 1 by using a random function and comparing the current value of Pr(k0) with the random number; if the current value of Pr(k0) is greater than the random number, assigning 1 to v'_k0; if the current value of Pr(k0) is not greater than the random number, assigning 0 to v'_k0;
C1-5, constructing a third training matrix ha'' storing the second training values of the state quantities of the neural units in the first hidden layer, ha'' = (ha''_1, ha''_2, …, ha''_K1)^T, wherein ha''_k1 represents the second training value of the state quantity of the k1-th neural unit in the first hidden layer; constructing a matrix pa' of the second activation probabilities of the neural units in the first hidden layer, pa' = (pa'_1, pa'_2, …, pa'_K1)^T, wherein pa'_k1 represents the second activation probability of the k1-th neural unit in the first hidden layer; determining the third training matrix ha'' and the matrix pa', the specific process being as follows:
C1-5-1, updating the activation probability Pr(k1) of the k1-th neural unit in the first hidden layer by adopting formula (3):

Pr(k1) = 1 / (1 + exp(-(b_k1 + Σ_{k0=1}^{K0} w1_{k1,k0}·v'_k0)))    (3)

wherein b_k1 is the current value of the offset of the k1-th neural unit in the first hidden layer, w1_{k1,k0} is the current value of the connection weight between the k1-th neural unit in the first hidden layer and the k0-th neural unit in the visible layer, and v'_k0 is the training value of the state quantity of the k0-th neural unit in the visible layer;
C1-5-2, assigning the current value of Pr(k1) to pa'_k1, generating a random number between 0 and 1 by adopting a random function, and comparing the current value of Pr(k1) with the random number; if the current value of Pr(k1) is greater than the random number, assigning 1 to ha''_k1 and simultaneously assigning 1 to ha_k1; if the current value of Pr(k1) is not greater than the random number, assigning 0 to ha''_k1 and simultaneously assigning 0 to ha_k1;
C1-6, adding 0.01·(pa_k1·v_k0 - pa'_k1·v'_k0) to the current value of w1_{k1,k0} and assigning the result to w1_{k1,k0} to update w1_{k1,k0}, thereby obtaining the updated weight matrix w1; adding 0.01·(v_k0 - v'_k0) to the current value of a_k0 and assigning the result to a_k0 to update a_k0, thereby obtaining the updated offset a of the visible layer; adding 0.01·(pa_k1 - pa'_k1) to the current value of b_k1 and assigning the result to b_k1 to update b_k1, thereby obtaining the updated offset b of the first hidden layer, wherein the values of pa_k1, v_k0, pa'_k1 and v'_k0 are all their current values;
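Sub-step C1-6 is a contrastive-divergence-style update with learning rate 0.01: the positive statistics pa·v from the data are compared with the negative statistics pa'·v' from the one-step reconstruction. A sketch under the same naming assumptions as above:

```python
import numpy as np

def cd1_update(v, v_prime, pa, pa_prime, W1, a, b, lr=0.01):
    """C1-6 sketch:
    W1[k1,k0] += lr * (pa[k1]*v[k0] - pa'[k1]*v'[k0])
    a[k0]     += lr * (v[k0] - v'[k0])
    b[k1]     += lr * (pa[k1] - pa'[k1])"""
    W1 += lr * (np.outer(pa, v) - np.outer(pa_prime, v_prime))
    a += lr * (v - v_prime)
    b += lr * (pa - pa_prime)
    return W1, a, b
```

Steps C2, C3 and C4 apply the same pattern one layer up each time, with the previous RBM's hidden states playing the role of the visible data.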
step C2: training a second limited Boltzmann machine:
C2-1, constructing a matrix storing the state quantities of the second hidden layer, recorded as hb; constructing a matrix storing the offsets of the second hidden layer, recorded as c; constructing a weight matrix storing the connection weights between the first hidden layer and the second hidden layer, recorded as w2; hb = (hb_1, hb_2, hb_3, …, hb_K2)^T, hb_k2 represents the state quantity of the k2-th neural unit in the second hidden layer, k2 = 1, 2, …, K2; c = (c_1, c_2, c_3, …, c_K2)^T, c_k2 represents the offset of the k2-th neural unit in the second hidden layer; w2_{k1,k2} represents the connection weight between the k1-th neural unit in the first hidden layer and the k2-th neural unit in the second hidden layer; c_k2 and w2_{k1,k2} are respectively initialized with a random function so that c_k2 and w2_{k1,k2} are each random numbers between 0 and 0.1;
c2-2. construct a fourth training matrix hb ', hb ' ═ hb '1,hb′2,…,hb′K2)T,hb′k2A first training value representing a state quantity of a k2 th neural unit in the second hidden layer; constructing a matrix pb storing first activation probabilities for the k2 th neural unit in the second hidden layer (pb ═ pb)1,pb2,pb3,...,pbK2)T,pbk2Representing the probability of the first activation of the k2 th neural unit in the second hidden layer; determining a fourth training matrix hb' of the first training values of the state quantities of the k2 th neural unit in the second hidden layer and a matrix pb of the first activation probabilities of the k2 th neural unit in the second hidden layer, in particularThe process is as follows:
c2-2-1, recording the activation probability of the k2 th nerve unit in the second hidden layer as Pr (k2), and calculating by adopting a formula (4) to obtain Pr (k 2):
wherein, ck2The current value of the offset for the k2 th neural unit in the second hidden layer,is the current value, ha, of the connection weight between the k1 th neuron in the first hidden layer and the k2 th neuron in the second hidden layerk1Is the current value of the state quantity of the k1 th neural unit in the first hidden layer;
c2-2-2, assigning the current value of Pr (k2) to pbk2Generating a random number between 0 and 1 by using a random function, comparing the current value of Pr (k2) with the random number, and if the current value of Pr (k2) is greater than the random number, assigning 1 to hb'k2(ii) a If the current value of Pr (k2) is not greater than the random number, then 0 is assigned to hb'k2(ii) a Completing the assignment of a fourth training matrix hb';
c2-3. constructing a fifth training matrix ha '", ha'" ═ ha '"of (ha'" ', of the third training values of the state quantities of the k1 th neural unit in the first hidden layer'1,ha″′2,…,ha″′K1)T,ha″′k1A third training value representing a state quantity of a k1 th neural unit in the first hidden layer; determining a fifth training matrix ha' ″ storing a third training value of the state quantity of the k1 th neural unit in the first hidden layer by the following specific process:
c2-3-1, the activation probability Pr (k1) of the k1 th nerve cell in the first hidden layer is updated by adopting the formula (5):
wherein, bk1The current value of the offset for the k1 th neural unit in the first hidden layer,is the current value of the connection weight, hb ', between the k1 th neural unit in the first hidden layer and the k2 th neural unit in the second hidden layer'k2A first training value of state quantity of a k2 th neural unit in a second hidden layer;
c2-3-2, generating a random number between 0 and 1 by using a random function, comparing the current value of Pr (k1) with the random number, and if the current value of Pr (k1) is greater than the random number, assigning 1 to ha ″'k1(ii) a If the current value of Pr (k1) is not greater than the random number, then a value of 0 is assigned to ha ″'k1;
C2-4, constructing a sixth training matrix hb ", hb" ═ hb ″ (hb ″) storing second training values of state quantities of the k2 th neural unit in the second hidden layer1,hb″2,…,hb″K2)T,hb″k2A second training value representing a state quantity of a k2 th neural unit in the second hidden layer; constructing a matrix pb ', pb ' ═ b (pb '1,pb′2,…,pb′K2)T,pb′k2The method comprises the following steps of representing the second activation probability of the k2 th neural unit in the second hidden layer, determining a sixth training matrix hb 'of a second training value for storing the state quantity of the k2 th neural unit in the second hidden layer and a matrix pb' of the second activation probability of the k2 th neural unit in the second hidden layer, wherein the specific process is as follows:
c2-4-1, the activation probability Pr (k2) of the k2 th nerve cell in the second hidden layer is updated by adopting the formula (6):
wherein, ck2The current value of the offset for the k2 th neural unit in the second hidden layer,is the k1 th god in the first hidden layerVia the current value of the connection weight between the cell and the k2 th neural cell in the second hidden layer, ha'k1A third training value of state quantities of a k1 th neural unit in the first hidden layer;
c2-4-2 assigning the current value of Pr (k2) to pb'k2Generating a random number between 0 and 1 by adopting a random function, comparing Pr (k2) with the current value with the random number, and if the current value of Pr (k2) is greater than the random number, assigning 1 to hb ″', respectivelyk2And hbk2If the current value of Pr (k2) is not greater than the random number, then 0 is assigned to hb ″', respectivelyk2And hbk2;
C2-5, adding 0.01·(pb_k2·ha_k1 - pb'_k2·ha'''_k1) to the current value of w2_{k1,k2} and assigning the result to w2_{k1,k2} to update w2_{k1,k2}, thereby obtaining the updated weight matrix w2; adding 0.01·(ha_k1 - ha'''_k1) to the current value of b_k1 and assigning the result to b_k1 to update b_k1, thereby obtaining the updated offset b of the first hidden layer; adding 0.01·(pb_k2 - pb'_k2) to the current value of c_k2 and assigning the result to c_k2 to update c_k2, thereby obtaining the updated offset c of the second hidden layer; wherein the values of pb_k2, ha_k1, pb'_k2 and ha'''_k1 are respectively their current values;
step C3: training a third restricted boltzmann machine:
C3-1, constructing a matrix storing the state quantities of the third hidden layer, recorded as hc; constructing a matrix storing the offsets of the third hidden layer, recorded as d; constructing a weight matrix storing the connection weights between the second hidden layer and the third hidden layer, recorded as w3; hc = (hc_1, hc_2, hc_3, …, hc_K3)^T, hc_k3 represents the state quantity of the k3-th neural unit in the third hidden layer, k3 = 1, 2, …, K3; d = (d_1, d_2, d_3, …, d_K3)^T, d_k3 represents the offset of the k3-th neural unit in the third hidden layer; w3_{k2,k3} represents the connection weight between the k2-th neural unit in the second hidden layer and the k3-th neural unit in the third hidden layer; d_k3 and w3_{k2,k3} are respectively initialized with a random function so that d_k3 and w3_{k2,k3} are each random numbers between 0 and 0.1;
c3-2. construct a seventh training matrix hc ', hc' ═ hc 'of the first training values of the state quantities of the k3 th neural unit in the third hidden layer'1,hc′2,…,hc′K3)T,hc′k3A first training value representing a state quantity of a k3 th neural unit in the third hidden layer; constructing a matrix pc storing the probability of first activation of the k3 th neural unit in the third hidden layer (pc ═ c)1,pc2,pc3,...,pcK3)T,pck3Represents the first activation probability of the k3 th neural unit in the third hidden layer; determining a seventh training matrix hc' storing the first training value of the state quantity of the k3 th neural unit in the third hidden layer and a matrix pc storing the first activation probability of the k3 th neural unit in the third hidden layer, wherein the specific process is as follows:
c3-2-1. excitation of the k3 th neural unit in the third hidden layerThe activity probability is recorded as Pr (k3), and Pr (k3) is calculated by adopting formula (7):
wherein d isk3The current value of the offset for the k3 th neural unit in the third hidden layer,for the current value of the connection weight, hb, between the k2 th neuron in the second hidden layer and the k3 th neuron in the third hidden layerk2Is the current value of the state value of the k2 th neural unit in the second hidden layer;
c3-2-2, assigning the current value of Pr (k3) to pck3Obtaining an updated matrix pc, generating a random number between 0 and 1 by using a random function, comparing the current value of Pr (k3) with the random number, and if the current value of Pr (k3) is greater than the random number, assigning 1 to hc'k3(ii) a If the current value of Pr (k3) is not greater than the random number, then 0 is assigned to hc'k3;
C3-3 constructing an eighth training matrix hb '", hb'" 'of (hb' "') that deposits a third training value for the state quantity of the k2 th neural unit in the second hidden layer'1,hb″′2,…,hb″′K2)T,hb″′k2And determining an eighth training matrix hb' ″ for storing the third training value of the state quantity of the k2 th neural unit in the second hidden layer by using the third training value representing the state quantity of the k2 th neural unit in the second hidden layer, which comprises the following specific processes:
c3-3-1, updating the activation probability Pr (k2) of the k2 th neural unit in the second hidden layer by using the formula (8):
wherein, ck2The current value of the offset for the k2 th neural unit in the second hidden layer,is the current value of the connection weight, hc 'between the k2 th neural unit in the second hidden layer and the k3 th neural unit in the third hidden layer'k3A first training value of state quantity of a k3 th neural unit in a third hidden layer;
c3-3-2, generating a random number between 0 and 1 by using a random function, comparing the current value of Pr (k2) with the random number, and if the current value of Pr (k2) is greater than the random number, assigning 1 to hb ″'k2(ii) a If the current value of Pr (k2) is not greater than the random number, then 0 is assigned to hb ″'k2;
C3-4, constructing a ninth training matrix hc ", hc ═ hc ″ (hc ″") that stores the second training values of the state quantities of the k3 th neural unit in the third hidden layer1,hc″2,…,hc″K3)T,hc″k3A second training value representing a state quantity of a k3 th neural unit in the third hidden layer; constructing a matrix pc ', pc ═ c (pc'1,pc′2,…,pc′K3)T,pc′k3Represents the second activation probability of the k3 th neural unit in the third hidden layer; the ninth training matrix hc ″ storing the second training value of the state quantity of the k3 th neural unit in the third hidden layer and the matrix pc' of the second activation probability of the k3 th neural unit in the third hidden layer are determined by the following specific processes:
c3-4-1, recording the activation probability of the k3 th nerve unit in the third hidden layer as Pr (k3), and calculating by adopting the formula (9) to obtain Pr (k 3):
wherein d isk3The current value of the offset for the k3 th neural unit in the third hidden layer,is the current value of the connection weight, hb ″, between the k2 th neural unit in the second hidden layer and the k3 th neural unit in the third hidden layer'k2A third training value which is the state quantity of the k2 th neural unit in the second hidden layer;
c3-4-2 assigning the current value of Pr (k3) to pc'k3Generating a random number between 0 and 1 by adopting a random function, comparing the current value of Pr (k3) with the random number, and if the current value of Pr (k3) is greater than the random number, assigning 1 to hc ″', respectivelyk3And hc andk3(ii) a If the current value of Pr (k3) is not greater than the random number, then 0 is assigned to hc ″', respectivelyk3And hc andk3;
C3-5, adding 0.01·(pc_k3·hb_k2 - pc'_k3·hb'''_k2) to the current value of w3_{k2,k3} and assigning the result to w3_{k2,k3} to update w3_{k2,k3}, thereby obtaining the updated weight matrix w3; adding 0.01·(hb_k2 - hb'''_k2) to the current value of c_k2 and assigning the result to c_k2 to update c_k2, thereby obtaining the updated offset c of the second hidden layer; adding 0.01·(pc_k3 - pc'_k3) to the current value of d_k3 and assigning the result to d_k3 to update d_k3, thereby obtaining the updated offset d of the third hidden layer; wherein the values of pc_k3, hb_k2, pc'_k3 and hb'''_k2 are respectively their current values;
step C4: training a fourth restricted boltzmann machine:
C4-1, constructing a matrix storing the state quantities of the fourth hidden layer, recorded as hd; constructing a matrix storing the offsets of the fourth hidden layer, recorded as e; constructing a weight matrix storing the connection weights between the third hidden layer and the fourth hidden layer, recorded as w4; hd = (hd_1, hd_2, hd_3, …, hd_K4)^T, hd_k4 represents the state quantity of the k4-th neural unit in the fourth hidden layer, k4 = 1, 2, …, K4; e = (e_1, e_2, e_3, …, e_K4)^T, e_k4 represents the offset of the k4-th neural unit in the fourth hidden layer; w4_{k3,k4} represents the connection weight between the k3-th neural unit in the third hidden layer and the k4-th neural unit in the fourth hidden layer; e_k4 and w4_{k3,k4} are respectively initialized with a random function so that e_k4 and w4_{k3,k4} are each random numbers between 0 and 0.1;
c4-2. construct a tenth training matrix hd 'that stores the first training values of the state quantities of the k4 th neural unit in the fourth hidden layer, hd ═ hd'1,hd′2,…,hd′K4)T,hd′k4A first training value representing a state quantity of a k4 th neural unit in the fourth hidden layer; constructing a matrix pd storing the probability of the first activation of the k4 th neural unit in the fourth hidden layer (pd ═ pd)1,pd2,pd3,...,pdK4)T, pdk4Represents the probability of the first activation of the k4 th neural unit in the fourth hidden layer; the specific process of determining the tenth training matrix hd' storing the first training value of the state quantity of the k4 th neural unit in the fourth hidden layer and the matrix pd storing the first activation probability of the k4 th neural unit in the fourth hidden layer is as follows:
c4-2-1, recording the activation probability of the k4 nerve unit in the fourth hidden layer as Pr (k4), and calculating by adopting a formula (10)
Wherein e isk4The current value of the offset for the k4 th neural unit of the fourth hidden layer,is the current value of the connection weight, hc, between the k3 th neuron in the third hidden layer and the k4 th neuron in the fourth hidden layerk3The current value of the state quantity of the k3 th neural unit in the third hidden layer;
c4-2-2, assigning the current value of Pr (k4) to pdk4Generating a random number between 0 and 1 by using a random function, comparing the current value of Pr (k4) with the random number, and if Pr (k4) is greater than the random number, assigning 1 to hd'k4Assigning 0 to hd 'if the current value of Pr (k4) is not greater than the random number'k4;
C4-3. constructing an eleventh training matrix hc "storing third training values for the state quantities of the k3 th neural unit in the third hidden layer′, hc″′=(hc″′1,hc″′2,…,hc″′K3)T,hc″′k3And determining an eleventh training matrix hc' for storing the third training value of the state quantity of the k3 th neural unit in the third hidden layer by using the third training value representing the state quantity of the k3 th neural unit in the third hidden layer, wherein the specific process is as follows:
c4-3-1, the activation probability Pr (k3) of the k3 th nerve cell in the third hidden layer is updated by adopting the formula (11):
wherein d isk3The current value of the offset for the k3 th neural unit in the third hidden layer,is the current value of the connection weight, hd ', between the k3 th neural unit in the third hidden layer and the k4 th neural unit in the fourth hidden layer'k4For the k4 th nerve sheet in the fourth hidden layerA first training value of a state quantity of the element;
c4-3-2, generating a random number between 0 and 1 by using a random function, comparing the current value of Pr (k3) with the random number, and if the current value of Pr (k3) is greater than the random number, assigning 1 to hc ″'k3If the current value of Pr (k3) is not greater than the random number, then 0 is assigned to hc'k3;
C4-4, constructing a twelfth training matrix hd ", hd" ═ hd ″, which stores the second training values of the state quantities of the k4 th neural unit in the fourth hidden layer1,hd″2,…,hd″K4)T,hd″k4A second training value representing a state quantity of a k4 th neural unit in the fourth hidden layer; constructing a matrix pd ', pd' ═ of (pd ') of the second activation probability of the k4 th neural unit in the fourth hidden layer'1,pd′2,…,pd′K4)T, pd′k4Represents the second activation probability of the k4 th neural unit in the fourth hidden layer; determining a twelfth training matrix hd 'storing a second training value of the state quantity of the k4 th neural unit in the fourth hidden layer and a matrix pd' of a second activation probability of the k4 th neural unit in the fourth hidden layer, which comprises the following specific processes:
c4-4-1, updating the activation probability Pr (k4) of the k4 th neural unit in the fourth hidden layer by adopting the formula (12):
wherein e isk4The current value of the offset for the k4 th neural unit of the fourth hidden layer,is the current value, hc ″, of the connection weight between the k3 th neural unit in the third hidden layer and the k4 th neural unit in the fourth hidden layer'k3A third training value which is the state quantity of the k3 th neural unit in the third hidden layer;
C4-4-2, assigning the current value of Pr(k4) to pd'_k4, generating a random number between 0 and 1 by using a random function, and comparing the current value of Pr(k4) with the random number; if the current value of Pr(k4) is greater than the random number, assigning 1 to hd''_k4 and to hd_k4 respectively; if the current value of Pr(k4) is not greater than the random number, assigning 0 to hd''_k4 and to hd_k4 respectively;
C4-5, adding 0.01·(pd_k4·hc_k3 - pd'_k4·hc'''_k3) to the current value of w4_{k3,k4} and assigning the result to w4_{k3,k4} to update w4_{k3,k4}, thereby obtaining the updated weight matrix w4; adding 0.01·(hc_k3 - hc'''_k3) to the current value of d_k3 and assigning the result to d_k3 to update d_k3, thereby obtaining the updated offset d of the third hidden layer; adding 0.01·(pd_k4 - pd'_k4) to the current value of e_k4 and assigning the result to e_k4 to update e_k4, thereby obtaining the updated offset e of the fourth hidden layer; wherein the values of pd_k4, hc_k3, pd'_k4 and hc'''_k3 are respectively their current values;
step C5: setting an iteration variable Y, and initializing the iteration variable Y to enable the initial value of the iteration variable Y to be 1;
and step C6, carrying out the Y-th iteration updating on the activation probability of each nerve unit in the visible layer, the first hidden layer, the second hidden layer, the third hidden layer and the fourth hidden layer, wherein the specific process is as follows:
c6-1, updating the activation probability Pr (k1) of the k1 th nerve cell in the first hidden layer by adopting the formula (13):
wherein, bk1The current value of the offset for the k1 th neural unit in the first hidden layer,for the current value of the connection weight between the k1 th neuron in the first hidden layer and the k0 th neuron in the visible layer, vk0The current value of the state quantity of the k0 th nerve unit in the visible layer;
c6-2, generating a random number between 0 and 1 by using a random function and comparing the current value of Pr(k1) with the random number; if the current value of Pr(k1) is larger than the random number, assigning 1 to ha_k1 to update ha_k1; if the current value of Pr(k1) is not greater than the random number, assigning 0 to ha_k1 to update ha_k1;
c6-3, updating the activation probability Pr (k2) of the k2 neural unit in the second hidden layer by adopting the formula (14):
wherein, ck2The current value of the offset for the k2 th neural unit in the second hidden layer,is the current value, ha, of the connection weight between the k1 th neuron in the first hidden layer and the k2 th neuron in the second hidden layerk1Is the current value of the state quantity of the k1 th neural unit in the first hidden layer;
c6-4, generating a random number between 0 and 1 by using a random function and comparing the current value of Pr(k2) with the random number; if the current value of Pr(k2) is larger than the random number, assigning 1 to hb_k2 to update hb_k2; if the current value of Pr(k2) is not greater than the random number, assigning 0 to hb_k2 to update hb_k2;
c6-5, updating the activation probability Pr (k3) of the k3 neural unit in the third hidden layer by adopting the formula (15):
wherein d isk3The current value of the offset for the k3 th neural unit in the third hidden layer,for the current value of the connection weight, hb, between the k2 th neuron in the second hidden layer and the k3 th neuron in the third hidden layerk2Is the current value of the state quantity of the k2 th nerve unit in the second hidden layer;
c6-6, generating a random number between 0 and 1 by using a random function and comparing the current value of Pr(k3) with the random number; if the current value of Pr(k3) is larger than the random number, assigning 1 to hc_k3 to update hc_k3; if the current value of Pr(k3) is not greater than the random number, assigning 0 to hc_k3 to update hc_k3;
c6-7, updating the activation probability Pr (k4) of the k4 neural cell in the fourth hidden layer by adopting the formula (16):
wherein e isk4The current value of the offset for the k4 th neural unit in the fourth hidden layer,is the current value of the connection weight, hc, between the k3 th neuron in the third hidden layer and the k4 th neuron in the fourth hidden layerk3The current value of the state quantity of the k3 th neural unit in the third hidden layer;
c6-8, generating a random number between 0 and 1 by using a random function and comparing the current value of Pr(k4) with the random number; if the current value of Pr(k4) is larger than the random number, assigning 1 to hd_k4 to update hd_k4; if the current value of Pr(k4) is not greater than the random number, assigning 0 to hd_k4 to update hd_k4;
c6-9, updating the activation probability Pr (k3) of the k3 th neural cell in the third hidden layer again by adopting the formula (17):
wherein d isk3The current value of the offset for the k3 th neural unit in the third hidden layer,as the current value of the connection weight, hd, between the k3 th neuron in the third hidden layer and the k4 th neuron in the fourth hidden layerk4Is the current value of the state quantity of the k4 th neural unit in the fourth hidden layer;
c6-10, generating a random number between 0 and 1 by using a random function and comparing the current value of Pr(k3) with the random number; if the current value of Pr(k3) is larger than the random number, assigning 1 to hc_k3 to update hc_k3; if the current value of Pr(k3) is not greater than the random number, assigning 0 to hc_k3 to update hc_k3;
c6-11, updating the activation probability Pr (k2) of the k2 neural cell in the second hidden layer again by adopting the formula (18):
wherein, ck2The current value of the offset for the k2 th neural unit in the second hidden layer,is the current value of the connection weight, hc, between the k2 th neuron in the second hidden layer and the k3 th neuron in the third hidden layerk3Is as followsThe current value of the state quantity of the k3 th neural unit in the three hidden layers;
c6-12, generating a random number between 0 and 1 by using a random function and comparing the current value of Pr(k2) with the random number; if the current value of Pr(k2) is larger than the random number, assigning 1 to hb_k2 to update hb_k2; if the current value of Pr(k2) is not greater than the random number, assigning 0 to hb_k2 to update hb_k2;
c6-13, updating the activation probability Pr (k1) of the k1 th neural cell in the first hidden layer again by adopting the formula (19):
wherein, bk1The current value of the offset for the k1 th neural unit in the first hidden layer,for the current value of the connection weight, hb, between the k1 th neuron in the first hidden layer and the k2 th neuron in the second hidden layerk2Is the current value of the state quantity of the k2 th nerve unit in the second hidden layer;
c6-14, generating a random number between 0 and 1 by using a random function and comparing the current value of Pr(k1) with the random number; if the current value of Pr(k1) is larger than the random number, assigning 1 to ha_k1 to update ha_k1; if the current value of Pr(k1) is not greater than the random number, assigning 0 to ha_k1 to update ha_k1;
c6-15, updating the activation probability Pr(k0) of the k0-th neural unit in the visible layer by adopting formula (20):

Pr(k0) = 1 / (1 + exp(-(a_k0 + Σ_{k1=1}^{K1} w1_{k1,k0}·ha_k1)))    (20)

wherein a_k0 is the current value of the offset of the k0-th neural unit in the visible layer, w1_{k1,k0} is the current value of the connection weight between the k1-th neural unit in the first hidden layer and the k0-th neural unit in the visible layer, and ha_k1 is the current value of the state quantity of the k1-th neural unit in the first hidden layer;
c6-16, generating a random number between 0 and 1 by adopting a random function and comparing the current value of Pr(k0) with the random number; if the current value of Pr(k0) is greater than the random number, assigning 1 to v'_k0 to update v'_k0; if the current value of Pr(k0) is not greater than the random number, assigning 0 to v'_k0 to update v'_k0;
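Step C6 drives the trained stack once upward (visible to hidden 4) and once back down (hidden 4 to visible), sampling each layer stochastically, to produce the reconstruction v' used in steps C7-C9. A compact sketch of that pass, with the same weight-shape assumptions as the earlier sketches and the logistic form assumed for formulas (13)-(20):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bernoulli(p, rng):
    """Set a unit to 1 when its activation probability exceeds a fresh uniform random number."""
    return (p > rng.random(p.shape)).astype(float)

def step_c6_pass(v, W1, W2, W3, W4, a, b, c, d, e, rng):
    """One C6 iteration: sample upward through the four hidden layers, then back down to v'.
    Weight shapes assumed: W1 (K1,K0), W2 (K2,K1), W3 (K3,K2), W4 (K4,K3)."""
    ha = bernoulli(sigmoid(b + W1 @ v), rng)          # C6-1 / C6-2
    hb = bernoulli(sigmoid(c + W2 @ ha), rng)         # C6-3 / C6-4
    hc = bernoulli(sigmoid(d + W3 @ hb), rng)         # C6-5 / C6-6
    hd = bernoulli(sigmoid(e + W4 @ hc), rng)         # C6-7 / C6-8
    hc = bernoulli(sigmoid(d + W4.T @ hd), rng)       # C6-9 / C6-10
    hb = bernoulli(sigmoid(c + W3.T @ hc), rng)       # C6-11 / C6-12
    ha = bernoulli(sigmoid(b + W2.T @ hb), rng)       # C6-13 / C6-14
    v_prime = bernoulli(sigmoid(a + W1.T @ ha), rng)  # C6-15 / C6-16
    return ha, hb, hc, hd, v_prime
```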
step C7: creating N reconstructed amplitude matrices for storing reconstructed CSI amplitude data, each reconstructed amplitude matrix storing 30 rows and 3 columns of reconstructed CSI amplitude data, the ordering of the 30 rows and 3 columns of reconstructed CSI amplitude data stored in each reconstructed amplitude matrix being defined in the same way as for the CSI amplitude matrices; combining v'_t to v'_{t+11} in the order v'_t v'_{t+1} v'_{t+2} v'_{t+3} v'_{t+4} … v'_{t+9} v'_{t+10} v'_{t+11} into a 12-bit binary number, wherein t = 12·n + 1 and n = 0, 1, 2, …, K0/12 - 1; converting the 1st to 8th bits of the 12-bit binary number, counted from left to right, into the corresponding decimal number and taking it as the integer part of the reconstructed CSI amplitude data, converting the 9th to 12th bits into the corresponding decimal number and taking it as the fractional part of the reconstructed CSI amplitude data, and combining the decimal number corresponding to the integer part with the decimal number corresponding to the fractional part into the decimal number corresponding to the 12-bit binary number; taking this decimal number as the (INT((t-1)/12)+1-90·(N'-1))-th reconstructed CSI amplitude data of the N'-th reconstructed amplitude matrix and storing it in the N'-th reconstructed amplitude matrix, wherein INT is the integer (rounding-down) function, thereby obtaining N reconstructed amplitude matrices each storing 30 rows and 3 columns of reconstructed CSI amplitude data;
step C8: calculating the Euclidean distance between the N'-th reconstructed amplitude matrix and the N'-th CSI amplitude matrix, accumulating the Euclidean distances from that between the 1st reconstructed amplitude matrix and the 1st CSI amplitude matrix to that between the N-th reconstructed amplitude matrix and the N-th CSI amplitude matrix, taking the obtained sum as the output error, and recording the output error as Φ;
step C9: updating the offsets of the visible layer, the first hidden layer, the second hidden layer, the third hidden layer and the fourth hidden layer as well as w1, w2, w3 and w4 respectively, the specific process being as follows:
C9-1, constructing a matrix δa storing the residual terms of the neural units in the visible layer, δa = (δa_1, δa_2, …, δa_K0), wherein δa_k0 is the residual term of the k0-th neural unit in the visible layer; calculating the residual term δa_k0 of the k0-th neural unit in the visible layer by adopting formula (21):

δa_k0 = -(v'_k0 - v_k0) · v'_k0 · (1 - v'_k0)    (21)

wherein v'_k0 is the current value of the training value of the state quantity of the k0-th neural unit in the visible layer and v_k0 is the current value of the state quantity of the k0-th neural unit in the visible layer;
C9-2, constructing a matrix δb storing the residual terms of the neural units in the first hidden layer, δb = (δb_1, δb_2, …, δb_K1), wherein δb_k1 is the residual term of the k1-th neural unit in the first hidden layer; calculating the residual term δb_k1 of the k1-th neural unit in the first hidden layer by adopting formula (22):

δb_k1 = (Σ_{k0=1}^{K0} w1_{k1,k0}·δa_k0) · ha_k1 · (1 - ha_k1)    (22)

wherein w1_{k1,k0} is the current value of the connection weight between the k0-th neural unit in the visible layer and the k1-th neural unit in the first hidden layer, δa_k0 is the current value of the residual term of the k0-th neural unit in the visible layer, and ha_k1 is the current value of the state quantity of the k1-th neural unit in the first hidden layer;
C9-3, constructing a matrix δc storing the residual terms of the neural units in the second hidden layer, δc = (δc_1, δc_2, …, δc_K2), wherein δc_k2 is the residual term of the k2-th neural unit in the second hidden layer; calculating the residual term δc_k2 of the k2-th neural unit in the second hidden layer by adopting formula (23):

δc_k2 = (Σ_{k1=1}^{K1} w2_{k1,k2}·δb_k1) · hb_k2 · (1 - hb_k2)    (23)

wherein w2_{k1,k2} is the current value of the connection weight between the k1-th neural unit in the first hidden layer and the k2-th neural unit in the second hidden layer, δb_k1 is the current value of the residual term of the k1-th neural unit in the first hidden layer, and hb_k2 is the current value of the state quantity of the k2-th neural unit in the second hidden layer;
c9-4. construct a matrix δ d storing the residual terms of the k3 th neural unit in the third hidden layer (δ d ═ d)1,δd2,…,δdK3),δdk3Residual terms for the k3 th neural unit in the third hidden layer; calculating a residual error term delta d of the k3 th nerve unit in the third hidden layer by adopting a formula (24)k3:
Wherein,is the current value of the connection weight, deltac, between the k2 th neuron in the second hidden layer and the k3 th neuron in the third hidden layerk2Is a second hidden layerCurrent value of residual term, hc, of the k2 th neural unit in (A)k3The current value of the state quantity of the k3 th neural unit in the third hidden layer;
c9-5. construct a matrix δ e storing the residual terms of the k4 th neural unit in the fourth hidden layer (δ e ═ e1,δe2,…,δeK4),δek4Calculating a residual term delta e of the k4 neural unit in the fourth hidden layer by adopting a formula (25) for the residual term of the k4 neural unit in the fourth hidden layerk4:
Wherein,is the current value of the connection weight, δ d, between the k3 th neuron in the third hidden layer and the k4 th neuron in the fourth hidden layerk3Is the current value of the residual term, hd, for the k3 th neural unit in the third hidden layerk4Is the current value of the state quantity of the k4 th neural unit in the fourth hidden layer;
c9-6. adding 0.5*δd_{k3}*hd_{k4} to the current value of w4_{k3,k4} and assigning the result to w4_{k3,k4}, thereby updating w4_{k3,k4}; adding 0.5*δc_{k2}*hc_{k3} to the current value of w3_{k2,k3} and assigning the result to w3_{k2,k3}, thereby updating w3_{k2,k3}; adding 0.5*δb_{k1}*hb_{k2} to the current value of w2_{k1,k2} and assigning the result to w2_{k1,k2}, thereby updating w2_{k1,k2}; adding 0.5*δa_{k0}*ha_{k1} to the current value of w1_{k0,k1} and assigning the result to w1_{k0,k1}, thereby updating w1_{k0,k1}; adding 0.5*δe_{k4} to the current value of e_{k4} and assigning the result to e_{k4}, thereby updating e_{k4}; adding 0.5*δd_{k3} to the current value of d_{k3} and assigning the result to d_{k3}, thereby updating d_{k3}; adding 0.5*δc_{k2} to the current value of c_{k2} and assigning the result to c_{k2}, thereby updating c_{k2}; adding 0.5*δb_{k1} to the current value of b_{k1} and assigning the result to b_{k1}, thereby updating b_{k1}; adding 0.5*δa_{k0} to the current value of a_{k0} and assigning the result to a_{k0}, thereby updating a_{k0}; wherein the values of δd_{k3}, hd_{k4}, δc_{k2}, hc_{k3}, δb_{k1}, hb_{k2}, δa_{k0}, ha_{k1} and δe_{k4} are their respective current values;
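A vectorised sketch of step C9, under the assumption that W1..W4, a..e, ha..hd, v and v2 (standing for v') hold the current matrices and vectors, and using formulas (21)-(25) as written above:

    % Sketch of the step C9 fine-tuning update with step size 0.5.
    da = -(v2 - v) .* v2 .* (1 - v2);        % formula (21), visible-layer residuals
    db = (W1.' * da) .* ha .* (1 - ha);      % formula (22), first hidden layer
    dc = (W2.' * db) .* hb .* (1 - hb);      % formula (23), second hidden layer
    dd = (W3.' * dc) .* hc .* (1 - hc);      % formula (24), third hidden layer
    de = (W4.' * dd) .* hd .* (1 - hd);      % formula (25), fourth hidden layer
    W4 = W4 + 0.5 * (dd * hd.');   W3 = W3 + 0.5 * (dc * hc.');
    W2 = W2 + 0.5 * (db * hb.');   W1 = W1 + 0.5 * (da * ha.');
    e  = e + 0.5 * de;   d = d + 0.5 * dd;   c = c + 0.5 * dc;
    b  = b + 0.5 * db;   a = a + 0.5 * da;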
step C10: judging whether the output error φ is less than 5 or Y is equal to 10000; if at least one of the two conditions is satisfied, taking the current values of w1, w2, w3, w4, a, b, c, d and e as the fingerprint of S_l; if neither condition is satisfied, updating the value of Y by adding 1 to the current value of Y, and returning to step C6 to step C9 for the next iteration until at least one of the two conditions is satisfied, thereby obtaining the fingerprint of S_l;
fifthly, positioning indoor personnel, and the specific process is as follows:
fifthly-1, acquiring N signal data packets of the current position of the indoor person to be positioned in real time through the computer in the monitoring room, and saving the acquired N signal data packets of the current position of the indoor person, as the data to be detected, in a file named CSI.dat;
fifthly-2, obtaining N CSI amplitude matrices of the current position from the N signal data packets of the current position by the same method, and calculating the mean, the variance σ and the standard deviation std of the N CSI amplitude matrices of the current position;
fifthly-3, setting a coefficient of variation, recording the coefficient of variation as λ, and calculating the coefficient of variation λ by formula (26):
fifthly-4, determining the 12-bit binary number corresponding to the m-th CSI amplitude data of the N'''-th CSI amplitude matrix in the N CSI amplitude matrices of the current position, the specific process being as follows: judging whether the m-th CSI amplitude data of the N'''-th CSI amplitude matrix in the N CSI amplitude matrices of the current position is an integer or a decimal; if it is an integer, converting it into a binary number and supplementing four 0s to the right of the obtained binary number to obtain a new binary number, then judging whether the number of bits of the obtained new binary number is equal to 12: if equal to 12, taking the new binary number as the 12-bit binary number corresponding to the m-th CSI amplitude data of the N'''-th CSI amplitude matrix in the N CSI amplitude matrices of the current position; if greater than 12, taking the 12 bits counted from right to left as the 12-bit binary number corresponding to the m-th CSI amplitude data; if less than 12, supplementing 0s to the left of the new binary number up to 12 bits and taking the result as the 12-bit binary number corresponding to the m-th CSI amplitude data; if the m-th CSI amplitude data of the N'''-th CSI amplitude matrix in the N CSI amplitude matrices of the current position is a decimal, converting its integer part and its fractional part into binary numbers respectively: if the number of bits of the binary number obtained by converting the integer part is less than 8, supplementing 0s at the high-order bits to make it an 8-bit binary number; if greater than 8, retaining the 8 bits counted from right to left and deleting the other bits to obtain an 8-bit binary number; if equal to 8, keeping it unchanged; if the number of bits of the binary number obtained by converting the fractional part is equal to 4, keeping it unchanged; if less than 4, supplementing 0s at the low-order bits to make it a 4-bit binary number; if greater than 4, retaining the 4 bits counted from left to right and deleting the other bits to obtain a 4-bit binary number; splicing the 8-bit binary number obtained after processing the integer part as the high 8 bits and the 4-bit binary number obtained after processing the fractional part as the low 4 bits into a 12-bit binary number, and taking the spliced 12-bit binary number as the 12-bit binary number corresponding to the m-th CSI amplitude data of the N'''-th CSI amplitude matrix in the N CSI amplitude matrices of the current position; N''' = 1, 2, ..., N; m = 1, 2, ..., 90; assigning the value of the x-th bit, counted from left to right, of the 12-bit binary number corresponding to the m-th CSI amplitude data to v_{12*3*30*(N'''-1)+(m-1)*12+x}, thereby updating v_1, v_2, v_3, ..., v_{K0}, wherein x = 1, 2, ..., 12;
fifthly-5, recording the probability of obtaining v at the position S_l as Pr(v_l), and determining Pr(v_l), the specific process being as follows:
fifthly-5-1, taking the fingerprint w1, w2, w3, w4 and a, b, c, d, e of S_l as the current values, recalculating the output error according to the methods of step C6, step C7 and step C8, and recording the output error calculated at this time as φ1;
fifthly-6, recording the probability of the indoor position as Pr(Lo_l) and letting the indoor position probability be uniformly distributed, Pr(Lo_l) = 1/L; recording the probability that the indoor person to be positioned is located at the position S_l as Pr(S_l), and calculating Pr(S_l) by formula (28):
fifthly-7, recording the coordinates of the current position of the indoor person to be positioned as (x_l', y_l'), and calculating (x_l', y_l') by formulas (29) and (30):
fifthly-8, the (x_l', y_l') obtained by this calculation are the positioning coordinates of the indoor person to be positioned.
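Formulas (26)-(30) are not reproduced in this text. Purely as a hypothetical sketch of how steps fifthly-3 to fifthly-8 could fit together, assuming a likelihood that decays with the recomputed output error φ1 scaled by the coefficient of variation λ, a uniform prior Pr(Lo_l) = 1/L, and posterior-weighted coordinates; the variable names phi1, xs, ys, mean_all and std_all are placeholders, not from the patent:

    % Hypothetical localization sketch; the exact forms of formulas (26)-(30)
    % are not given here and this is only one plausible reading.
    lambda = std_all / mean_all;             % coefficient of variation of the N current matrices
    Pv = exp(-lambda * phi1);                % assumed likelihood Pr(v_l); phi1 is Lx1, one error per fingerprint
    Ps = (Pv .* (1/L)) ./ sum(Pv .* (1/L));  % posterior Pr(S_l) with uniform prior Pr(Lo_l) = 1/L
    xq = sum(Ps .* xs);                      % weighted x coordinate of the person to be positioned
    yq = sum(Ps .* ys);                      % weighted y coordinate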
Claims (1)
1. A passive sensing indoor positioning method based on deep learning is characterized by comprising the following steps:
firstly, data acquisition:
step 1: preparing a computer, inserting the commercial wireless network card Intel 5300 into a wireless network card expansion slot of the computer host, installing the ubuntu 12.04 system and the Linux 802.11n CSI Tool on the computer host, and placing the computer in the monitoring room;
step 2: dividing the room to be positioned into L areas along the transverse direction and the longitudinal direction, wherein the room is divided evenly into L1 parts along the maximum length in the transverse direction and into L2 parts along the maximum length in the longitudinal direction, thereby obtaining L areas, L = L1 × L2, and L1 and L2 are each integers greater than 4 and less than 6; selecting the center of each area as the position of that area, and if an area is not centrosymmetric, taking the intersection point of the perpendicular bisectors of its transverse and longitudinal edges as the center; recording the position of the l-th area as S_l, l = 1, 2, ..., L; selecting any indoor point as the origin, constructing a two-dimensional coordinate system of the indoor area with the transverse direction as the x-axis direction and the longitudinal direction as the y-axis direction, and obtaining by measurement the coordinates of the center point L_l of the l-th area S_l, expressed as (x_l, y_l); arranging a router which is not provided with a password and is connected to the network at a certain position in the room to be positioned;
step 3: opening the ubuntu 12.04 system on the computer host, connecting the commercial wireless network card Intel 5300 installed in the computer host with the router through the wireless network, making a tester stand in turn at the positions S_1 to S_L, and having another tester acquire the test data of the positions S_1 to S_L in turn at the computer host, the specific process being as follows: while one tester stands at the position S_l, the other tester pings the router from the computer host at a rate of 50 times per second and collects N signal data packets of the position S_l as the test data of the position S_l, and saves the test data of the position S_l as a file with the suffix name dat, the file being named csil, wherein N is an integer greater than 800 and less than 1000;
processing data: installing matlab in the ubuntu 12.04 system of the computer host, opening the matlab folder provided with the Linux 802.11n CSI Tool using the matlab software, and processing the test data of the positions S_1 to S_L acquired in step ① respectively to obtain the CSI amplitude matrices of the positions S_1 to S_L, the specific process being as follows: reading the csil.dat file with the read_bf_file function in the matlab folder to acquire the N signal data packets of the position S_l; opening each of the N signal data packets of the position S_l with the get_scaled_csi function to obtain N CSI data matrices of the position S_l, wherein each CSI data matrix of the position S_l contains 30 rows and 3 columns of CSI data in complex form; calculating the CSI amplitude matrices corresponding to the N CSI data matrices of the position S_l respectively with the abs function to obtain N CSI amplitude matrices of the position S_l, wherein each CSI amplitude matrix of the position S_l contains 30 rows and 3 columns of CSI amplitude data;
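For illustration only, a minimal matlab sketch of this step using the read_bf_file and get_scaled_csi functions supplied with the Linux 802.11n CSI Tool, assuming a 1 transmit × 3 receive antenna configuration so that each packet yields a 30x3 amplitude matrix:

    % Sketch: build the N 30x3 CSI amplitude matrices for one position S_l.
    csi_trace = read_bf_file('csil.dat');    % one entry per collected signal data packet
    N = numel(csi_trace);
    amp = cell(N, 1);                        % amp{n} is the n-th 30x3 CSI amplitude matrix
    for n = 1:N
        csi = get_scaled_csi(csi_trace{n});  % complex CSI, Ntx x Nrx x 30
        H   = squeeze(csi(1, :, :));         % 3 x 30 slice for the first transmit antenna
        amp{n} = abs(H).';                   % 30 rows (subcarriers) x 3 columns (antennas)
    end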
constructing a deep neural network model: the neural network model comprises a visible layer, a first hidden layer, a second hidden layer, a third hidden layer and a fourth hidden layer which are arranged in sequence from top to bottom, wherein the visible layer has K0 neural units, K0 = N*30*3*12, and * represents the multiplication symbol; the first hidden layer has K1 neural units, 300 < K1 < 500; the second hidden layer has K2 neural units, 200 < K2 < K1; the third hidden layer has K3 neural units, 100 < K3 < K2; the fourth hidden layer has K4 neural units, 50 < K4 < K3; neural units in the same layer are not connected with each other, all neural units in two adjacent layers are connected with each other, and each neural unit has two states: an activated state and a closed state, the state value of a neural unit in the activated state being 1 and the state value of a neural unit in the closed state being 0; the visible layer and the first hidden layer form a first limited Boltzmann machine, the first hidden layer and the second hidden layer form a second limited Boltzmann machine, the second hidden layer and the third hidden layer form a third limited Boltzmann machine, and the third hidden layer and the fourth hidden layer form a fourth limited Boltzmann machine;
extracting the position SlThe fingerprint of (2) is prepared by the following specific processes:
step C1: training the first limited Boltzmann machine:
c1-1, constructing a matrix storing the state quantities of the visible layer, recorded as v; constructing a matrix storing the state quantities of the first hidden layer, recorded as ha; constructing a matrix storing the offsets of the visible layer, recorded as a; constructing a matrix storing the offsets of the first hidden layer, recorded as b; constructing a weight matrix storing the connection weights between the first hidden layer and the visible layer, recorded as w1; wherein v = (v_1, v_2, v_3, ..., v_{K0})^T, v_{k0} represents the state quantity of the k0-th neural unit in the visible layer, the superscript T represents the transpose of a matrix, k0 = 1, 2, ..., K0; ha = (ha_1, ha_2, ha_3, ..., ha_{K1})^T, ha_{k1} represents the state quantity of the k1-th neural unit in the first hidden layer, k1 = 1, 2, ..., K1; a = (a_1, a_2, a_3, ..., a_{K0})^T, a_{k0} represents the offset of the k0-th neural unit in the visible layer, and a_{k0} is initialized with a random function so that a_{k0} is a random number between 0 and 0.1; b = (b_1, b_2, b_3, ..., b_{K1})^T, b_{k1} represents the offset of the k1-th neural unit in the first hidden layer, and b_{k1} is initialized with a random function so that b_{k1} is a random number between 0 and 0.1; w1_{k0,k1} represents the connection weight between the k1-th neural unit in the first hidden layer and the k0-th neural unit in the visible layer, and w1_{k0,k1} is initialized with a random function so that w1_{k0,k1} is a random number between 0 and 0.1;
c1-2, defining the ordering of the CSI amplitude data in a CSI amplitude matrix: taking the CSI amplitude data in rows 1 to 30 of column 1 of the CSI amplitude matrix as the 1st to 30th CSI amplitude data of the matrix, the CSI amplitude data in rows 1 to 30 of column 2 as the 31st to 60th CSI amplitude data of the matrix, and the CSI amplitude data in rows 1 to 30 of column 3 as the 61st to 90th CSI amplitude data of the matrix; determining the 12-bit binary number corresponding to the q-th CSI amplitude data of the N'-th CSI amplitude matrix of the position S_l, the specific process being as follows: judging whether the q-th CSI amplitude data of the N'-th CSI amplitude matrix of the position S_l is an integer or a decimal; if the q-th CSI amplitude data of the N'-th CSI amplitude matrix of the position S_l is an integer, converting the integer into a binary number and supplementing four 0s to the right of the obtained binary number to obtain a new binary number, then judging whether the number of bits of the obtained new binary number is equal to 12: if equal to 12, taking the new binary number as the 12-bit binary number corresponding to the q-th CSI amplitude data of the N'-th CSI amplitude matrix of the position S_l; if greater than 12, taking the 12 bits counted from right to left as the 12-bit binary number corresponding to the q-th CSI amplitude data of the N'-th CSI amplitude matrix of the position S_l; if less than 12, supplementing 0s to the left of the new binary number up to 12 bits and taking the result as the 12-bit binary number corresponding to the q-th CSI amplitude data of the N'-th CSI amplitude matrix of the position S_l; if the q-th CSI amplitude data of the N'-th CSI amplitude matrix of the position S_l is a decimal, converting its integer part and its fractional part into binary numbers respectively: if the number of bits of the binary number obtained by converting the integer part is less than 8, supplementing 0s at the high-order bits to make it an 8-bit binary number; if greater than 8, retaining the 8 bits counted from right to left and deleting the other bits to obtain an 8-bit binary number; if equal to 8, keeping it unchanged; if the number of bits of the binary number obtained by converting the fractional part is equal to 4, keeping it unchanged; if less than 4, supplementing 0s at the low-order bits to make it a 4-bit binary number; if greater than 4, retaining the 4 bits counted from left to right and deleting the other bits to obtain a 4-bit binary number; splicing the 8-bit binary number obtained after processing the integer part as the high 8 bits and the 4-bit binary number obtained after processing the fractional part as the low 4 bits into a 12-bit binary number, and taking the spliced 12-bit binary number as the 12-bit binary number corresponding to the q-th CSI amplitude data of the N'-th CSI amplitude matrix of the position S_l; N' = 1, 2, ..., N; q = 1, 2, ..., 90;
assigning the value of the p-th bit, counted from left to right, of the 12-bit binary number corresponding to the q-th CSI amplitude data to v_{12*3*30*(N'-1)+(q-1)*12+p}, p = 1, 2, ..., 12, thereby completing the initial assignment of v_1, v_2, v_3, ..., v_{K0};
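For illustration only, the amplitude-to-12-bit encoding of step C1-2 can be sketched as below; encode_amplitude is an assumed name, and reading the first four bits of the binary fraction as floor(16 × fraction) is one consistent way to realise "converting the fractional part into a binary number":

    % Sketch of the step C1-2 encoding rule (8-bit integer part, 4-bit fractional part).
    function bits = encode_amplitude(a)
        ip = floor(a);  fp = a - ip;
        if fp == 0                               % integer amplitude
            s = [dec2bin(ip) '0000'];            % append four 0s on the right
            if numel(s) > 12, s = s(end-11:end);             % keep the rightmost 12 bits
            else, s = [repmat('0', 1, 12 - numel(s)) s];     % pad 0s on the left to 12 bits
            end
        else                                     % decimal amplitude
            hi = dec2bin(ip);                    % integer part
            if numel(hi) > 8, hi = hi(end-7:end);            % keep the rightmost 8 bits
            else, hi = [repmat('0', 1, 8 - numel(hi)) hi];   % pad 0s at the high-order bits
            end
            lo = dec2bin(floor(fp * 16), 4);     % first four bits of the binary fraction
            lo = lo(1:4);                        % keep the leftmost 4 bits
            s  = [hi lo];                        % high 8 bits = integer part, low 4 bits = fraction
        end
        bits = s - '0';                          % 1x12 vector of 0/1 state values
    end

One packet's q-th amplitude then fills twelve consecutive visible units, for example v(12*3*30*(Np-1)+(q-1)*12+(1:12)) = encode_amplitude(ampdata), with Np the packet index (a hypothetical usage, not from the patent).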
c1-3. constructing a first training matrix ha' storing the first training values of the state quantities of the neural units in the first hidden layer, ha' = (ha'_1, ha'_2, ..., ha'_{K1})^T, wherein ha'_{k1} represents the first training value of the state quantity of the k1-th neural unit in the first hidden layer; constructing a matrix pa storing the first activation probabilities of the neural units in the first hidden layer, pa = (pa_1, pa_2, pa_3, ..., pa_{K1})^T, wherein pa_{k1} represents the first activation probability of the k1-th neural unit in the first hidden layer; determining the first training matrix ha' of the first training values of the state quantities of the neural units in the first hidden layer and the matrix pa of the first activation probabilities of the neural units in the first hidden layer, the specific process being as follows:
c1-3-1, recording the activation probability of the k1-th neural unit in the first hidden layer as Pr(k1), and calculating Pr(k1) by formula (1):
wherein exp denotes the exponential function, Σ is the accumulation sign, b_{k1} is the current value of the offset of the k1-th neural unit in the first hidden layer, w1_{k0,k1} is the current value of the connection weight between the k1-th neural unit in the first hidden layer and the k0-th neural unit in the visible layer, and v_{k0} is the current value of the state quantity of the k0-th neural unit in the visible layer;
c1-3-2, assigning the current value of Pr(k1) to pa_{k1}, generating a random number between 0 and 1 with a random function, and comparing the current value of Pr(k1) with the random number: if the current value of Pr(k1) is greater than the random number, assigning 1 to ha'_{k1}; if the current value of Pr(k1) is not greater than the random number, assigning 0 to ha'_{k1};
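Formula (1) itself is not reproduced above; assuming it is the standard restricted-Boltzmann-machine conditional implied by the exp and summation described there, step C1-3 can be sketched as follows (v is K0x1, b is K1x1, W1 is the K0xK1 matrix w1):

    % Sketch of step C1-3: first activation probabilities and first training values
    % of the first hidden layer (assumed logistic form of formula (1)).
    pa  = 1 ./ (1 + exp(-(b + W1.' * v)));   % pa_{k1} = Pr(k1)
    ha1 = double(pa > rand(size(pa)));       % ha'_{k1}: 1 if Pr(k1) exceeds the random number, else 0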
C1-4. constructing a second training matrix v', v' = (v'_1, v'_2, ..., v'_{K0})^T, wherein v'_{k0} is the training value of the state quantity of the k0-th neural unit in the visible layer; determining the training value v'_{k0} of the state quantity of the k0-th neural unit in the visible layer, the specific process being as follows:
c1-4-1, recording the activation probability of the k0-th neural unit in the visible layer as Pr(k0), and calculating Pr(k0) by formula (2):
wherein a_{k0} is the current value of the offset of the k0-th neural unit in the visible layer, w1_{k0,k1} is the current value of the connection weight between the k1-th neural unit in the first hidden layer and the k0-th neural unit in the visible layer, and ha'_{k1} is the first training value of the state quantity of the k1-th neural unit in the first hidden layer;
c1-4-2, generating a random number between 0 and 1 with a random function, and comparing the current value of Pr(k0) with the random number: if the current value of Pr(k0) is greater than the random number, assigning 1 to v'_{k0}; if the current value of Pr(k0) is not greater than the random number, assigning 0 to v'_{k0};
C1-5, constructing a third training matrix ha'' storing the second training values of the state quantities of the neural units in the first hidden layer, ha'' = (ha''_1, ha''_2, ..., ha''_{K1})^T, wherein ha''_{k1} represents the second training value of the state quantity of the k1-th neural unit in the first hidden layer; constructing a matrix pa' storing the second activation probabilities of the neural units in the first hidden layer, pa' = (pa'_1, pa'_2, ..., pa'_{K1})^T, wherein pa'_{k1} represents the second activation probability of the k1-th neural unit in the first hidden layer; determining the third training matrix ha'' of the second training values of the state quantities of the neural units in the first hidden layer and the matrix pa' of the second activation probabilities of the neural units in the first hidden layer, the specific process being as follows:
c1-5-1, updating the activation probability Pr(k1) of the k1-th neural unit in the first hidden layer by formula (3):
wherein b_{k1} is the current value of the offset of the k1-th neural unit in the first hidden layer, w1_{k0,k1} is the current value of the connection weight between the k1-th neural unit in the first hidden layer and the k0-th neural unit in the visible layer, and v'_{k0} is the training value of the state quantity of the k0-th neural unit in the visible layer;
c1-5-2, assigning the current value of Pr(k1) to pa'_{k1}, generating a random number between 0 and 1 with a random function, and comparing the current value of Pr(k1) with the random number: if the current value of Pr(k1) is greater than the random number, assigning 1 to ha''_{k1} and at the same time assigning 1 to ha_{k1}; if the current value of Pr(k1) is not greater than the random number, assigning 0 to ha''_{k1} and at the same time assigning 0 to ha_{k1};
C1-6. adding 0.01*(pa_{k1}*v_{k0} - pa'_{k1}*v'_{k0}) to the current value of w1_{k0,k1} and assigning the result to w1_{k0,k1}, thereby updating w1_{k0,k1} and obtaining the updated weight matrix w1; adding 0.01*(v_{k0} - v'_{k0}) to the current value of a_{k0} and assigning the result to a_{k0}, thereby updating a_{k0} and obtaining the updated offsets a of the visible layer; adding 0.01*(pa_{k1} - pa'_{k1}) to the current value of b_{k1} and assigning the result to b_{k1}, thereby updating b_{k1} and obtaining the updated offsets b of the first hidden layer, wherein the values of pa_{k1}, v_{k0}, pa'_{k1} and v'_{k0} are their respective current values;
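A vectorised sketch of the step C1-6 update, with v2 and pa2 as assumed names for v' and pa':

    % Sketch: one contrastive-divergence update of the first limited Boltzmann machine.
    W1 = W1 + 0.01 * (v * pa.' - v2 * pa2.');  % w1_{k0,k1} += 0.01*(pa_{k1}*v_{k0} - pa'_{k1}*v'_{k0})
    a  = a  + 0.01 * (v - v2);                 % updated offsets of the visible layer
    b  = b  + 0.01 * (pa - pa2);               % updated offsets of the first hidden layer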
step C2: training the second limited Boltzmann machine:
c2-1, constructing a matrix storing the state quantities of the second hidden layer, recorded as hb; constructing a matrix storing the offsets of the second hidden layer, recorded as c; constructing a weight matrix storing the connection weights between the first hidden layer and the second hidden layer, recorded as w2; hb = (hb_1, hb_2, hb_3, ..., hb_{K2})^T, hb_{k2} represents the state value of the k2-th neural unit in the second hidden layer, k2 = 1, 2, ..., K2; c = (c_1, c_2, c_3, ..., c_{K2})^T, c_{k2} represents the offset of the k2-th neural unit in the second hidden layer;
w2_{k1,k2} represents the connection weight between the k1-th neural unit in the first hidden layer and the k2-th neural unit in the second hidden layer, and c_{k2} and w2_{k1,k2} are initialized with random functions so that c_{k2} and w2_{k1,k2} are each random values between 0 and 0.1;
c2-2. construct a fourth training matrix hb ', hb ' ═ hb '1,hb′2,…,hb′K2)T,hb′k2A first training value representing a state quantity of a k2 th neural unit in the second hidden layer; constructing a matrix pb storing first activation probabilities for the k2 th neural unit in the second hidden layer (pb ═ pb)1,pb2,pb3,...,pbK2)T,pbk2Representing the probability of the first activation of the k2 th neural unit in the second hidden layer; determining a fourth training matrix hb' of the first training value of the state quantity of the k2 th neural unit in the second hidden layer and a matrix pb of the first activation probability of the k2 th neural unit in the second hidden layer, specifically comprising the following steps:
c2-2-1, recording the activation probability of the k2 th nerve unit in the second hidden layer as Pr (k2), and calculating by adopting a formula (4) to obtain Pr (k 2):
wherein,ck2The current value of the offset for the k2 th neural unit in the second hidden layer,is the current value, ha, of the connection weight between the k1 th neuron in the first hidden layer and the k2 th neuron in the second hidden layerk1Is the current value of the state quantity of the k1 th neural unit in the first hidden layer;
c2-2-2, assigning the current value of Pr (k2) to pbk2Generating a random number between 0 and 1 by using a random function, comparing the current value of Pr (k2) with the random number, and if the current value of Pr (k2) is greater than the random number, assigning 1 to hb'k2(ii) a If the current value of Pr (k2) is not greater than the random number, then 0 is assigned to hb'k2(ii) a Completing the assignment of a fourth training matrix hb';
c2-3. constructing a fifth training matrix ha''' storing the third training values of the state quantities of the neural units in the first hidden layer, ha''' = (ha'''_1, ha'''_2, ..., ha'''_{K1})^T, wherein ha'''_{k1} represents the third training value of the state quantity of the k1-th neural unit in the first hidden layer; determining the fifth training matrix ha''' storing the third training values of the state quantities of the neural units in the first hidden layer, the specific process being as follows:
c2-3-1, the activation probability Pr (k1) of the k1 th nerve cell in the first hidden layer is updated by adopting the formula (5):
wherein, bk1The current value of the offset for the k1 th neural unit in the first hidden layer,for the current value of the connection weight between the k1 th neuron in the first hidden layer and the k2 th neuron in the second hidden layer,hb′k2a first training value of state quantity of a k2 th neural unit in a second hidden layer;
c2-3-2, generating a random number between 0 and 1 by using a random function, comparing the current value of Pr (k1) with the random number, and if the current value of Pr (k1) is greater than the random number, assigning 1 to ha ″'k1(ii) a If the current value of Pr (k1) is not greater than the random number, then a value of 0 is assigned to ha ″'k1;
C2-4, constructing a sixth training matrix hb', hb ″ (hb ″ ") storing second training values of state quantities of the k2 th neural unit in the second hidden layer1,hb″2,…,hb″K2)T,hb″k2A second training value representing a state quantity of a k2 th neural unit in the second hidden layer; constructing a matrix pb ', pb ' ═ b (pb '1,pb′2,…,pb′K2)T,pb′k2The method comprises the following steps of representing the second activation probability of the k2 th neural unit in the second hidden layer, determining a sixth training matrix hb 'of a second training value for storing the state quantity of the k2 th neural unit in the second hidden layer and a matrix pb' of the second activation probability of the k2 th neural unit in the second hidden layer, wherein the specific process is as follows:
c2-4-1, the activation probability Pr (k2) of the k2 th nerve cell in the second hidden layer is updated by adopting the formula (6):
wherein, ck2The current value of the offset for the k2 th neural unit in the second hidden layer,is the current value of the connection weight, ha ″, between the k1 th neural unit in the first hidden layer and the k2 th neural unit in the second hidden layer'k1A third state quantity of the k1 th neural unit in the first hidden layerA secondary training value;
c2-4-2 assigning the current value of Pr (k2) to pb'k2Generating a random number between 0 and 1 by adopting a random function, comparing Pr (k2) with the current value with the random number, and if the current value of Pr (k2) is greater than the random number, assigning 1 to hb ″', respectivelyk2And hbk2If the current value of Pr (k2) is not greater than the random number, then 0 is assigned to hb ″', respectivelyk2And hbk2;
C2-5. adding 0.01*(pb_{k2}*ha_{k1} - pb'_{k2}*ha'''_{k1}) to the current value of w2_{k1,k2} and assigning the result to w2_{k1,k2}, thereby updating w2_{k1,k2} and obtaining the updated weight matrix w2; adding 0.01*(ha_{k1} - ha'''_{k1}) to the current value of b_{k1} and assigning the result to b_{k1}, thereby updating b_{k1} and obtaining the updated offsets b of the first hidden layer; adding 0.01*(pb_{k2} - pb'_{k2}) to the current value of c_{k2} and assigning the result to c_{k2}, thereby updating c_{k2} and obtaining the updated offsets c of the second hidden layer; wherein the values of pb_{k2}, ha_{k1}, pb'_{k2} and ha'''_{k1} are their respective current values;
step C3: training the third limited Boltzmann machine:
c3-1, constructing a matrix storing the state quantities of the third hidden layer, recorded as hc; constructing a matrix storing the offsets of the third hidden layer, recorded as d; constructing a weight matrix storing the connection weights between the second hidden layer and the third hidden layer, recorded as w3; hc = (hc_1, hc_2, hc_3, ..., hc_{K3})^T, hc_{k3} represents the state value of the k3-th neural unit in the third hidden layer, k3 = 1, 2, ..., K3; d = (d_1, d_2, d_3, ..., d_{K3})^T, d_{k3} represents the offset of the k3-th neural unit in the third hidden layer; w3_{k2,k3} represents the connection weight between the k2-th neural unit in the second hidden layer and the k3-th neural unit in the third hidden layer, and d_{k3} and w3_{k2,k3} are initialized with random functions so that d_{k3} and w3_{k2,k3} are each random values between 0 and 0.1;
c3-2. construct a seventh training matrix hc ', hc' ═ hc 'of the first training values of the state quantities of the k3 th neural unit in the third hidden layer'1,hc′2,…,hc′K3)T,hc′k3A first training value representing a state quantity of a k3 th neural unit in the third hidden layer; constructing a matrix pc storing the probability of first activation of the k3 th neural unit in the third hidden layer (pc ═ c)1,pc2,pc3,...,pcK3)T,Pck3Represents the first activation probability of the k3 th neural unit in the third hidden layer; determining a seventh training matrix hc' storing the first training value of the state quantity of the k3 th neural unit in the third hidden layer and a matrix pc storing the first activation probability of the k3 th neural unit in the third hidden layer, wherein the specific process is as follows:
c3-2-1, recording the activation probability of the k3 th nerve unit in the third hidden layer as Pr (k3), and calculating by adopting a formula (7) to obtain Pr (k 3):
wherein d isk3The current value of the offset for the k3 th neural unit in the third hidden layer,for the current value of the connection weight, hb, between the k2 th neuron in the second hidden layer and the k3 th neuron in the third hidden layerk2Is the current value of the state value of the k2 th neural unit in the second hidden layer;
c3-2-2, assigning the current value of Pr (k3) to pck3Obtaining an updated matrix pc, generating a random number between 0 and 1 by using a random function, comparing the current value of Pr (k3) with the random number, and if the current value of Pr (k3) is greater than the random number, assigning 1 to hc'k3(ii) a If the current value of Pr (k3) is not greater than the random number, then 0 is assigned to hc'k3;
C3-3, constructing an eighth training matrix hb''' storing the third training values of the state quantities of the neural units in the second hidden layer, hb''' = (hb'''_1, hb'''_2, ..., hb'''_{K2})^T, wherein hb'''_{k2} represents the third training value of the state quantity of the k2-th neural unit in the second hidden layer; determining the eighth training matrix hb''' storing the third training values of the state quantities of the neural units in the second hidden layer, the specific process being as follows:
c3-3-1, updating the activation probability Pr (k2) of the k2 th neural unit in the second hidden layer by using the formula (8):
wherein, ck2The current value of the offset for the k2 th neural unit in the second hidden layer,is the current value of the connection weight, hc 'between the k2 th neural unit in the second hidden layer and the k3 th neural unit in the third hidden layer'k3A first training value of state quantity of a k3 th neural unit in a third hidden layer;
c3-3-2, generating a random number between 0 and 1 by using a random function, comparing the current value of Pr (k2) with the random number, and if the current value of Pr (k2) is greater than the random number, assigning 1 to hb ″'k2(ii) a If the current value of Pr (k2) is not greater than the random number, then 0 is assigned to hb ″'k2;
C3-4, constructing a ninth training matrix hc ", hc ═ hc ″ (hc ″") that stores the second training values of the state quantities of the k3 th neural unit in the third hidden layer1,hc″2,…,hc″K3)T,hc″k3A second training value representing a state quantity of a k3 th neural unit in the third hidden layer; constructing a matrix pc ', pc ═ c (pc'1,pc′2,…,pc′K3)T,pc′k3Represents the second activation probability of the k3 th neural unit in the third hidden layer; the ninth training matrix hc 'for storing the second training value of the state quantity of the k3 th neural unit in the third hidden layer and the matrix pc' for the second activation probability of the k3 th neural unit in the third hidden layer are determined by the following specific processes:
c3-4-1, recording the activation probability of the k3 th nerve unit in the third hidden layer as Pr (k3), and calculating by adopting the formula (9) to obtain Pr (k 3):
wherein d isk3The current value of the offset for the k3 th neural unit in the third hidden layer,is the current value of the connection weight, hb ″, between the k2 th neural unit in the second hidden layer and the k3 th neural unit in the third hidden layer'k2A third training value which is the state quantity of the k2 th neural unit in the second hidden layer;
c3-4-2 assigning the current value of Pr (k3) to pc'k3Generating a random number between 0 and 1 by adopting a random function, comparing the current value of Pr (k3) with the random number, and if the current value of Pr (k3) is greater than the random number, assigning 1 to hc ″', respectivelyk3And hc andk3(ii) a If the current value of Pr (k3) is not greater than the random number, then 0 is assigned to hc ″', respectivelyk3And hc andk3;
c3-5. adding 0.01*(pc_{k3}*hb_{k2} - pc'_{k3}*hb'''_{k2}) to the current value of w3_{k2,k3} and assigning the result to w3_{k2,k3}, thereby updating w3_{k2,k3} and obtaining the updated weight matrix w3; adding 0.01*(hb_{k2} - hb'''_{k2}) to the current value of c_{k2} and assigning the result to c_{k2}, thereby updating c_{k2} and obtaining the updated offsets c of the second hidden layer; adding 0.01*(pc_{k3} - pc'_{k3}) to the current value of d_{k3} and assigning the result to d_{k3}, thereby updating d_{k3} and obtaining the updated offsets d of the third hidden layer, wherein the values of pc_{k3}, hb_{k2}, pc'_{k3} and hb'''_{k2} are their respective current values;
step C4: training the fourth limited Boltzmann machine:
c4-1, constructing a matrix storing the state quantities of the fourth hidden layer, recorded as hd; constructing a matrix storing the offsets of the fourth hidden layer, recorded as e; constructing a weight matrix storing the connection weights between the third hidden layer and the fourth hidden layer, recorded as w4; hd = (hd_1, hd_2, hd_3, ..., hd_{K4})^T, hd_{k4} represents the state value of the k4-th neural unit in the fourth hidden layer, k4 = 1, 2, ..., K4; e = (e_1, e_2, e_3, ..., e_{K4})^T, e_{k4} represents the offset of the k4-th neural unit in the fourth hidden layer; w4_{k3,k4} represents the connection weight between the k3-th neural unit in the third hidden layer and the k4-th neural unit in the fourth hidden layer, and e_{k4} and w4_{k3,k4} are initialized with random functions so that e_{k4} and w4_{k3,k4} are each random values between 0 and 0.1;
c4-2. construct a tenth training matrix hd 'that stores the first training values of the state quantities of the k4 th neural unit in the fourth hidden layer, hd ═ hd'1,hd′2,…,hd′k4)T,hd′k4A first training value representing a state quantity of a k4 th neural unit in the fourth hidden layer; constructing a matrix pd storing the probability of the first activation of the k4 th neural unit in the fourth hidden layer (pd ═ pd)1,pd2,pd3,...,pdK4)T,pdk4Represents the probability of the first activation of the k4 th neural unit in the fourth hidden layer; the specific process of determining the tenth training matrix hd' storing the first training value of the state quantity of the k4 th neural unit in the fourth hidden layer and the matrix pd storing the first activation probability of the k4 th neural unit in the fourth hidden layer is as follows:
c4-2-1, recording the activation probability of the k4 th nerve unit in the fourth hidden layer as Pr (k4), and calculating by adopting a formula (10) to obtain Pr (k 4):
wherein e isk4The current value of the offset for the k4 th neural unit of the fourth hidden layer,is the current value of the connection weight, hc, between the k3 th neuron in the third hidden layer and the k4 th neuron in the fourth hidden layerk3The current value of the state quantity of the k3 th neural unit in the third hidden layer;
c4-2-2, assigning the current value of Pr (k4) to pdk4Generating a random number between 0 and 1 by using a random function, comparing the current value of Pr (k4) with the random number, and if Pr (k4) is greater than the random number, assigning 1 to hd'k4Assigning 0 to hd 'if the current value of Pr (k4) is not greater than the random number'k4;
C4-3, constructing an eleventh training matrix hc''' storing the third training values of the state quantities of the neural units in the third hidden layer, hc''' = (hc'''_1, hc'''_2, ..., hc'''_{K3})^T, wherein hc'''_{k3} represents the third training value of the state quantity of the k3-th neural unit in the third hidden layer; determining the eleventh training matrix hc''' storing the third training values of the state quantities of the neural units in the third hidden layer, the specific process being as follows:
c4-3-1, the activation probability Pr (k3) of the k3 th nerve cell in the third hidden layer is updated by adopting the formula (11):
wherein d isk3The current value of the offset for the k3 th neural unit in the third hidden layer,is the current value of the connection weight, hd ', between the k3 th neural unit in the third hidden layer and the k4 th neural unit in the fourth hidden layer'k4A first training value of state quantity of a k4 th neural unit in a fourth hidden layer;
c4-3-2, generating a random number between 0 and 1 by using a random function, comparing the current value of Pr (k3) with the random number, and if the current value of Pr (k3) is greater than the random number, assigning 1 to hc ″'k3If the current value of Pr (k3) is not greater than the random number, then 0 is assigned to hc'k3;
C4-4, constructing a twelfth training matrix hd ″ storing second training values of state quantities of the k4 th neural unit in the fourth hidden layer, hd ″ (hd ″)1,hd″2,…,hd″K4)T,hd″k4A second training value representing a state quantity of a k4 th neural unit in the fourth hidden layer; constructing a matrix pd ', pd' ═ of (pd ') of the second activation probability of the k4 th neural unit in the fourth hidden layer'1,pd′2,…,pd′K4)T,pd′k4Represents the second activation probability of the k4 th neural unit in the fourth hidden layer; determining a twelfth training matrix hd ' ' storing the second training value of the state quantity of the k4 th neural unit in the fourth hidden layer and a matrix pd ' of the second activation probability of the k4 th neural unit in the fourth hidden layer, which comprises the following specific processes:
c4-4-1, updating the activation probability Pr (k4) of the k4 th neural unit in the fourth hidden layer by adopting the formula (12):
wherein e isk4The current value of the offset for the k4 th neural unit of the fourth hidden layer,is the current value, hc ″, of the connection weight between the k3 th neural unit in the third hidden layer and the k4 th neural unit in the fourth hidden layer'k3A third training value which is the state quantity of the k3 th neural unit in the third hidden layer;
c4-4-2, assigning the current value of Pr(k4) to pd'_{k4}, generating a random number between 0 and 1 with a random function, and comparing the current value of Pr(k4) with the random number: if the current value of Pr(k4) is greater than the random number, assigning 1 to hd''_{k4} and to hd_{k4} respectively; if the current value of Pr(k4) is not greater than the random number, assigning 0 to hd''_{k4} and to hd_{k4} respectively;
c4-5. adding 0.01*(pd_{k4}*hc_{k3} - pd'_{k4}*hc'''_{k3}) to the current value of w4_{k3,k4} and assigning the result to w4_{k3,k4}, thereby updating w4_{k3,k4} and obtaining the updated weight matrix w4; adding 0.01*(hc_{k3} - hc'''_{k3}) to the current value of d_{k3} and assigning the result to d_{k3}, thereby updating d_{k3} and obtaining the updated offsets d of the third hidden layer; adding 0.01*(pd_{k4} - pd'_{k4}) to the current value of e_{k4} and assigning the result to e_{k4}, thereby updating e_{k4} and obtaining the updated offsets e of the fourth hidden layer; wherein the values of pd_{k4}, hc_{k3}, pd'_{k4} and hc'''_{k3} are their respective current values;
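Steps C2, C3 and C4 repeat the pattern of step C1 for the remaining limited Boltzmann machines; as an illustrative sketch, the whole per-machine procedure can be wrapped in one helper (the function name and the logistic form of the non-reproduced probability formulas are assumptions):

    % Sketch: one-step training of a limited Boltzmann machine between a lower layer
    % (states x, offsets blo) and an upper layer (offsets bup), weights W (lower x upper).
    function [W, blo, bup, hup] = rbm_step(x, W, blo, bup, lr)
        sig = @(z) 1 ./ (1 + exp(-z));
        p1  = sig(bup + W.' * x);                        % first activation probabilities of the upper layer
        h1  = double(p1 > rand(size(p1)));               % first training values of the upper-layer states
        xr  = double(sig(blo + W * h1) > rand(size(x))); % training values of the lower layer
        p2  = sig(bup + W.' * xr);                       % second activation probabilities
        hup = double(p2 > rand(size(p2)));               % second training values, kept as the layer states
        W   = W   + lr * (x * p1.' - xr * p2.');         % weight update with learning rate lr = 0.01
        blo = blo + lr * (x - xr);                       % lower-layer offsets
        bup = bup + lr * (p1 - p2);                      % upper-layer offsets
    end

Called in sequence, for example [W1,a,b,ha] = rbm_step(v, W1, a, b, 0.01); [W2,b,c,hb] = rbm_step(ha, W2, b, c, 0.01); and similarly for w3 and w4 (a hypothetical usage, not from the patent).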
step C5: setting an iteration variable Y, and initializing the iteration variable Y to enable the initial value of the iteration variable Y to be 1;
step C6: carrying out the Y-th iteration updating on the activation probability of each nerve unit in the visible layer, the first hidden layer, the second hidden layer, the third hidden layer and the fourth hidden layer, wherein the specific process is as follows:
c6-1, updating the activation probability Pr (k1) of the k1 th nerve cell in the first hidden layer by adopting the formula (13):
wherein, bk1The current value of the offset for the k1 th neural unit in the first hidden layer,for the current value of the connection weight between the k1 th neuron in the first hidden layer and the k0 th neuron in the visible layer, vk0The current value of the state quantity of the k0 th nerve unit in the visible layer;
c6-2, generating a random number between 0 and 1 by using a random function, comparing the current value of Pr (k1) with the random number, and if the current value of Pr (k1) is larger than the random number, assigning 1 to hak1Pair hak1Updating is carried out; if the current value of Pr (k1) is not greater than the random number, then a value of 0 is assigned to hak1Pair hak1Updating is carried out;
c6-3, updating the activation probability Pr (k2) of the k2 neural unit in the second hidden layer by adopting the formula (14):
wherein, ck2The current value of the offset for the k2 th neural unit in the second hidden layer,between the k1 th nerve cell in the first hidden layer and the k2 th nerve cell in the second hidden layerCurrent value of the connection weight of hak1Is the current value of the state quantity of the k1 th neural unit in the first hidden layer;
c6-4, generating a random number between 0 and 1 by using a random function, comparing the current value of Pr (k2) with the random number, and if the current value of Pr (k2) is larger than the random number, assigning 1 to hbk2To hbk2Updating is carried out; if the current value of Pr (k2) is not greater than the random number, then a value of 0 is assigned to hbk2To hbk2Updating is carried out;
c6-5, updating the activation probability Pr (k3) of the k3 neural unit in the third hidden layer by adopting the formula (15):
wherein d isk3The current value of the offset for the k3 th neural unit in the third hidden layer,for the current value of the connection weight, hb, between the k2 th neuron in the second hidden layer and the k3 th neuron in the third hidden layerk2Is the current value of the state quantity of the k2 th nerve unit in the second hidden layer;
c6-6, generating a random number between 0 and 1 by using a random function, comparing the current value of Pr (k3) with the random number, and if the current value of Pr (k3) is larger than the random number, assigning 1 to hck3To hck3Updating is carried out; if the current value of Pr (k3) is not greater than the random number, then a value of 0 is assigned to hck3To hck3Updating is carried out;
c6-7, updating the activation probability Pr (k4) of the k4 neural cell in the fourth hidden layer by adopting the formula (16):
wherein e isk4The current value of the offset for the k4 th neural unit in the fourth hidden layer,is the current value of the connection weight, hc, between the k3 th neuron in the third hidden layer and the k4 th neuron in the fourth hidden layerk3The current value of the state quantity of the k3 th neural unit in the third hidden layer;
c6-8, generating a random number between 0 and 1 by using a random function, comparing the current value of Pr (k4) with the random number, and if the current value of Pr (k4) is larger than the random number, assigning 1 to hdk4To hdk4Updating is carried out; if the current value of Pr (k4) is not greater than the random number, then a value of 0 is assigned to hdk4To hdk4Updating is carried out;
c6-9, updating the activation probability Pr (k3) of the k3 th neural cell in the third hidden layer again by adopting the formula (17):
wherein d isk3The current value of the offset for the k3 th neural unit in the third hidden layer,as the current value of the connection weight, hd, between the k3 th neuron in the third hidden layer and the k4 th neuron in the fourth hidden layerk4Is the current value of the state quantity of the k4 th neural unit in the fourth hidden layer;
c6-10, generating a random number between 0 and 1 by using a random function, comparing the current value of Pr (k3) with the random number, and if the current value of Pr (k3) is larger than the random number, assigning 1 to hck3To hck3Updating is carried out; if the current value of Pr (k3) is not greater than the random number, then a value of 0 is assigned to hck3To hck3Updating is carried out;
c6-11, updating the activation probability Pr (k2) of the k2 neural cell in the second hidden layer again by adopting the formula (18):
wherein, ck2The current value of the offset for the k2 th neural unit in the second hidden layer,is the current value of the connection weight, hc, between the k2 th neuron in the second hidden layer and the k3 th neuron in the third hidden layerk3The current value of the state quantity of the k3 th neural unit in the third hidden layer;
c6-12, generating a random number between 0 and 1 by using a random function, comparing the current value of Pr (k2) with the random number, and if the current value of Pr (k2) is larger than the random number, assigning 1 to hbk2To hbk2Updating is carried out; if the current value of Pr (k2) is not greater than the random number, then a value of 0 is assigned to hbk2To hbk2Updating is carried out;
c6-13, updating the activation probability Pr (k1) of the k1 th neural cell in the first hidden layer again by adopting the formula (19):
wherein, bk1The current value of the offset for the k1 th neural unit in the first hidden layer,for the current value of the connection weight, hb, between the k1 th neuron in the first hidden layer and the k2 th neuron in the second hidden layerk2Is the current value of the state quantity of the k2 th nerve unit in the second hidden layer;
c6-14, generating a random number between 0 and 1 by using a random function, and carrying out the current value of Pr (k1) and the random numberIn contrast, if the current value of Pr (k1) is greater than the random number, then a value of 1 is assigned to hak1Pair hak1Updating is carried out; if the current value of Pr (k1) is not greater than the random number, then a value of 0 is assigned to hak1Pair hak1Updating is carried out;
c6-15, updating the activation probability Pr (k0) of the k0 th nerve cell in the visible layer by adopting the formula (20):
wherein, ak0The current value of the offset for the k0 th neural cell in the visible layer,is the current value of the connection weight, ha, between the k1 th neuron in the first hidden layer and the k0 th neuron in the visible layerk1Is the current value of the state quantity of the k1 th neural unit in the first hidden layer;
c6-16: generating a random number between 0 and 1 by adopting a random function, comparing the current value of Pr (k0) with the random number, and if the current value of Pr (k0) is greater than the random number, assigning 1 to v'k0To v'k0Updating is carried out; if the current value of Pr (k0) is not greater than the random number, then 0 is assigned to v'k0To v'k0Updating is carried out;
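A compact sketch of the step C6 up-and-down pass; the bodies of formulas (13)-(20) are not reproduced above, so the logistic form is assumed, and smp denotes the comparison of a probability with a fresh random number:

    % Sketch of one step-C6 iteration: propagate states up to the fourth hidden layer
    % and back down to the visible layer.
    sig = @(z) 1 ./ (1 + exp(-z));
    smp = @(p) double(p > rand(size(p)));
    ha = smp(sig(b + W1.' * v));    % C6-1 / C6-2
    hb = smp(sig(c + W2.' * ha));   % C6-3 / C6-4
    hc = smp(sig(d + W3.' * hb));   % C6-5 / C6-6
    hd = smp(sig(e + W4.' * hc));   % C6-7 / C6-8
    hc = smp(sig(d + W4 * hd));     % C6-9 / C6-10: third hidden layer re-sampled from the fourth
    hb = smp(sig(c + W3 * hc));     % C6-11 / C6-12
    ha = smp(sig(b + W2 * hb));     % C6-13 / C6-14
    v2 = smp(sig(a + W1 * ha));     % C6-15 / C6-16: training values v'_{k0} of the visible layer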
step C7: creating N reconstructed amplitude matrices for storing the reconstructed CSI amplitude data, wherein each reconstructed amplitude matrix stores 30 rows and 3 columns of reconstructed CSI amplitude data, and the ordering of the 30 rows and 3 columns of reconstructed CSI amplitude data stored in each reconstructed amplitude matrix is defined in the same way as for the CSI amplitude matrix; combining v'_t to v'_{t+11} into a 12-bit binary number in the order v'_t v'_{t+1} v'_{t+2} v'_{t+3} v'_{t+4} ... v'_{t+9} v'_{t+10} v'_{t+11}, wherein t = 12n+1 and n = 0, 1, 2, ..., K0/12-1; taking the 1st to 8th bits, counted from left to right, of the 12-bit binary number as the integer part of the reconstructed CSI amplitude data and calculating the corresponding decimal number, taking the 9th to 12th bits as the fractional part of the reconstructed CSI amplitude data and calculating the corresponding decimal number, and combining the decimal number corresponding to the integer part, as the integer part, with the decimal number corresponding to the fractional part, as the fractional part, into the decimal number corresponding to the 12-bit binary number; taking this decimal number as the (INT((t-1)/12)+1-90*(N''-1))-th reconstructed CSI amplitude data of the N''-th reconstructed amplitude matrix and storing it in the N''-th reconstructed amplitude matrix, wherein INT is the integer (rounding-down) function, thereby obtaining N reconstructed amplitude matrices each storing 30 rows and 3 columns of reconstructed CSI amplitude data;
step C8: calculating the Euclidean distance between the N''-th reconstructed amplitude matrix and the N''-th CSI amplitude matrix, N'' = 1, 2, ..., N; accumulating the Euclidean distances from the Euclidean distance between the 1st reconstructed amplitude matrix and the 1st CSI amplitude matrix to the Euclidean distance between the N-th reconstructed amplitude matrix and the N-th CSI amplitude matrix, taking the obtained sum as the output error, and recording the output error as φ;
step C9: updating the offsets of the visible layer, the first hidden layer, the second hidden layer, the third hidden layer and the fourth hidden layer, and the weight matrices w1, w2, w3 and w4, respectively, the specific process being as follows:
c9-1. constructing a matrix δa storing the residual terms of the neural units in the visible layer, δa = (δa_1, δa_2, ..., δa_{K0}), wherein δa_{k0} is the residual term of the k0-th neural unit in the visible layer; calculating the residual term δa_{k0} of the k0-th neural unit in the visible layer by formula (21):
δa_{k0} = -(v'_{k0} - v_{k0}) * v'_{k0} * (1 - v'_{k0})    (21)
wherein v'_{k0} is the current training value of the state quantity of the k0-th neural unit in the visible layer, and v_{k0} is the current value of the state quantity of the k0-th neural unit in the visible layer;
c9-2. constructing a matrix δb storing the residual terms of the neural units in the first hidden layer, δb = (δb_1, δb_2, ..., δb_{K1}), wherein δb_{k1} is the residual term of the k1-th neural unit in the first hidden layer; calculating the residual term δb_{k1} of the k1-th neural unit in the first hidden layer by formula (22):
δb_{k1} = (Σ_{k0=1}^{K0} w1_{k0,k1} * δa_{k0}) * ha_{k1} * (1 - ha_{k1})    (22)
wherein w1_{k0,k1} is the current value of the connection weight between the k0-th neural unit in the visible layer and the k1-th neural unit in the first hidden layer, δa_{k0} is the current value of the residual term of the k0-th neural unit in the visible layer, and ha_{k1} is the current value of the state quantity of the k1-th neural unit in the first hidden layer;
c9-3. constructing a matrix δc storing the residual terms of the neural units in the second hidden layer, δc = (δc_1, δc_2, ..., δc_{K2}), wherein δc_{k2} is the residual term of the k2-th neural unit in the second hidden layer; calculating the residual term δc_{k2} of the k2-th neural unit in the second hidden layer by formula (23):
δc_{k2} = (Σ_{k1=1}^{K1} w2_{k1,k2} * δb_{k1}) * hb_{k2} * (1 - hb_{k2})    (23)
wherein w2_{k1,k2} is the current value of the connection weight between the k1-th neural unit in the first hidden layer and the k2-th neural unit in the second hidden layer, δb_{k1} is the current value of the residual term of the k1-th neural unit in the first hidden layer, and hb_{k2} is the current value of the state quantity of the k2-th neural unit in the second hidden layer;
c9-4. constructing a matrix δd storing the residual terms of the neural units in the third hidden layer, δd = (δd_1, δd_2, ..., δd_{K3}), wherein δd_{k3} is the residual term of the k3-th neural unit in the third hidden layer; calculating the residual term δd_{k3} of the k3-th neural unit in the third hidden layer by formula (24):
δd_{k3} = (Σ_{k2=1}^{K2} w3_{k2,k3} * δc_{k2}) * hc_{k3} * (1 - hc_{k3})    (24)
wherein w3_{k2,k3} is the current value of the connection weight between the k2-th neural unit in the second hidden layer and the k3-th neural unit in the third hidden layer, δc_{k2} is the current value of the residual term of the k2-th neural unit in the second hidden layer, and hc_{k3} is the current value of the state quantity of the k3-th neural unit in the third hidden layer;
c9-5. constructing a matrix δe storing the residual terms of the neural units in the fourth hidden layer, δe = (δe_1, δe_2, ..., δe_{K4}), wherein δe_{k4} is the residual term of the k4-th neural unit in the fourth hidden layer; calculating the residual term δe_{k4} of the k4-th neural unit in the fourth hidden layer by formula (25):
δe_{k4} = (Σ_{k3=1}^{K3} w4_{k3,k4} * δd_{k3}) * hd_{k4} * (1 - hd_{k4})    (25)
wherein w4_{k3,k4} is the current value of the connection weight between the k3-th neural unit in the third hidden layer and the k4-th neural unit in the fourth hidden layer, δd_{k3} is the current value of the residual term of the k3-th neural unit in the third hidden layer, and hd_{k4} is the current value of the state quantity of the k4-th neural unit in the fourth hidden layer;
c9-6. adding 0.5*δd_{k3}*hd_{k4} to the current value of w4_{k3,k4} and assigning the result to w4_{k3,k4}, thereby updating w4_{k3,k4}; adding 0.5*δc_{k2}*hc_{k3} to the current value of w3_{k2,k3} and assigning the result to w3_{k2,k3}, thereby updating w3_{k2,k3}; adding 0.5*δb_{k1}*hb_{k2} to the current value of w2_{k1,k2} and assigning the result to w2_{k1,k2}, thereby updating w2_{k1,k2}; adding 0.5*δa_{k0}*ha_{k1} to the current value of w1_{k0,k1} and assigning the result to w1_{k0,k1}, thereby updating w1_{k0,k1}; adding 0.5*δe_{k4} to the current value of e_{k4} and assigning the result to e_{k4}, thereby updating e_{k4}; adding 0.5*δd_{k3} to the current value of d_{k3} and assigning the result to d_{k3}, thereby updating d_{k3}; adding 0.5*δc_{k2} to the current value of c_{k2} and assigning the result to c_{k2}, thereby updating c_{k2}; adding 0.5*δb_{k1} to the current value of b_{k1} and assigning the result to b_{k1}, thereby updating b_{k1}; adding 0.5*δa_{k0} to the current value of a_{k0} and assigning the result to a_{k0}, thereby updating a_{k0}; wherein the values of δd_{k3}, hd_{k4}, δc_{k2}, hc_{k3}, δb_{k1}, hb_{k2}, δa_{k0}, ha_{k1} and δe_{k4} are their respective current values;
step C10: judging whether the output error φ is less than 5 or Y is equal to 10000; if at least one of the two conditions is satisfied, taking the current values of w1, w2, w3, w4, a, b, c, d and e as the fingerprint of S_l; if neither condition is satisfied, updating the value of Y by adding 1 to the current value of Y, and returning to step C6 to step C9 for the next iteration until at least one of the two conditions is satisfied, thereby obtaining the fingerprint of S_l;
fifthly, positioning the indoor personnel, wherein the specific process is as follows:
fifthly-1, acquiring N signal data packets of the current position of the indoor person to be positioned in real time through the computer in the monitoring room, and saving the acquired N signal data packets of the current position as the data to be detected in a file named CSI.dat;
fifthly-2, obtaining N CSI amplitude matrices of the current position from the N signal data packets of the current position by the same method, and calculating the mean, the variance σ and the standard deviation std of the N CSI amplitude matrices of the current position;
fifthly-3, setting a coefficient of variation, recording the coefficient of variation as λ, and calculating the coefficient of variation λ by adopting formula (26):
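Assuming formula (26) is the standard definition of the coefficient of variation, namely the ratio of the standard deviation to the mean of the N CSI amplitude matrices (an assumed form), it would read:

```latex
\lambda \;=\; \frac{\mathrm{std}}{\mathrm{mean}} \tag{26}
```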
fifthly-4, determining a 12-bit binary number corresponding to the m-th CSI amplitude data of the N″-th CSI amplitude matrix in the N CSI amplitude matrices at the current position, wherein the specific process is as follows: judging whether the m-th CSI amplitude data of the N″-th CSI amplitude matrix in the N CSI amplitude matrices at the current position is an integer or a decimal; if the m-th CSI amplitude data is an integer, converting the m-th CSI amplitude data into a binary number and supplementing four 0s to the right of the obtained binary number to obtain a new binary number, then judging whether the number of bits of the new binary number is equal to 12: if it is equal to 12, taking the new binary number as the 12-bit binary number corresponding to the m-th CSI amplitude data of the N″-th CSI amplitude matrix; if it is greater than 12, selecting 12 bits from right to left as the 12-bit binary number; and if it is less than 12, supplementing 0s to the left of the new binary number until the number of bits is 12 and taking the result as the 12-bit binary number; if the m-th CSI amplitude data of the N″-th CSI amplitude matrix is a decimal, converting the integer part and the decimal part of the m-th CSI amplitude data into binary numbers respectively: if the number of bits of the binary number obtained by converting the integer part is less than 8, supplementing 0s at the high bits to make it an 8-bit binary number; if it is greater than 8, keeping 8 bits from right to left and deleting the remaining bits to obtain an 8-bit binary number; if it is equal to 8, keeping it unchanged; if the number of bits of the binary number obtained by converting the decimal part is equal to 4, keeping it unchanged; if it is less than 4, supplementing 0s to the right of it until the number of bits is 4; if it is greater than 4, keeping 4 bits from left to right and deleting the remaining bits to obtain a 4-bit binary number; then taking the 8-bit binary number obtained from the integer part as the high 8 bits and splicing the 4-bit binary number obtained from the decimal part as the low 4 bits to obtain a 12-bit binary number, and taking the spliced 12-bit binary number as the 12-bit binary number corresponding to the m-th CSI amplitude data of the N″-th CSI amplitude matrix in the N CSI amplitude matrices at the current position; N″ = 1, 2, …, N; m = 1, 2, …, 90; assigning the value of the x-th bit, counted from left to right, of the 12-bit binary number of the m-th amplitude data to v_{12*3*30*(N″-1)+(m-1)*12+x}, so as to update v_1, v_2, v_3, …, v_K0, wherein x = 1, 2, …, 12;
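A short Python sketch of the 12-bit encoding described in this step; the function and variable names are illustrative, and the expansion of the decimal part into binary digits by successive doubling (taking the first four digits) is an assumption, since the passage does not spell out that conversion:

```python
import numpy as np

def amp_to_12bit(value):
    """Encode one CSI amplitude value as a 12-bit string.

    Integers: binary form with four '0's appended on the right, then trimmed or
    padded to 12 bits.  Non-integers: 8-bit integer part (high bits) spliced with
    a 4-bit fractional part (low bits).
    """
    if float(value).is_integer():
        bits = bin(int(value))[2:] + '0000'      # append four zeros on the right
        if len(bits) > 12:
            bits = bits[-12:]                    # keep 12 bits from right to left
        else:
            bits = bits.zfill(12)                # pad zeros on the left to 12 bits
        return bits
    int_part, frac_part = int(value), value - int(value)
    int_bits = bin(int_part)[2:][-8:].zfill(8)   # 8-bit integer part
    frac_bits = ''
    for _ in range(4):                           # first four fractional binary digits (assumed)
        frac_part *= 2
        frac_bits += str(int(frac_part))
        frac_part -= int(frac_part)
    return int_bits + frac_bits                  # high 8 bits + low 4 bits

def fill_input_vector(amplitude_matrices):
    """Build the input vector v from N CSI amplitude matrices (90 values each)."""
    N = len(amplitude_matrices)
    v = np.zeros(12 * 90 * N, dtype=np.uint8)
    for n, matrix in enumerate(amplitude_matrices, start=1):      # n = 1..N
        for m, amp in enumerate(np.ravel(matrix), start=1):       # m = 1..90
            bits = amp_to_12bit(amp)
            for x in range(1, 13):                                # x = 1..12, left to right
                v[12 * 90 * (n - 1) + (m - 1) * 12 + x - 1] = int(bits[x - 1])
    return v
```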
fifthly-5, recording the probability of obtaining v at the position S_l as Pr(v|l), wherein the specific process of determining Pr(v|l) comprises the following steps:
fifthly-5-1, taking the fingerprint of S_l, namely w1, w2, w3, w4 and a, b, c, d, e, as the current values, recalculating the output error according to the methods of step C6, step C7 and step C8, and recording the output error calculated at this time as Φ1;
fifthly-5-2, calculating Pr(v|l) by adopting formula (27):
fifthly-6, recording the prior probability of the indoor position as Pr(Lo_l); letting the indoor position probability be uniformly distributed, namely Pr(Lo_l) = 1/L; recording the probability that the indoor person to be positioned is located at the position S_l as Pr(S_l), and calculating Pr(S_l) by adopting formula (28):
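Assuming formula (28) is the Bayes combination of Pr(v|l) with the uniform prior Pr(Lo_l) = 1/L (an assumed form), it would read:

```latex
\Pr(S_l) \;=\; \frac{\Pr(v \mid l)\,\Pr(Lo_l)}{\sum_{l'=1}^{L} \Pr(v \mid l')\,\Pr(Lo_{l'})} \tag{28}
```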
fifthly-7, recording the coordinates of the current position of the indoor person to be positioned as (x_l′, y_l′), and calculating (x_l′, y_l′) by adopting formulas (29) and (30):
fifthly-8, taking the calculated (x_l′, y_l′) as the positioning coordinates of the indoor person to be positioned.
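Taking steps fifthly-6 to fifthly-8 together, and assuming that formulas (29) and (30) are the Pr(S_l)-weighted averages of the known reference-position coordinates (these forms, like the Bayes form of formula (28) above, are assumptions), a minimal sketch is:

```python
import numpy as np

def estimate_position(likelihoods, positions):
    """Sketch of steps fifthly-6 to fifthly-8 under the stated assumptions.

    likelihoods: length-L array of Pr(v|l), one value per reference position S_l
    positions:   (L, 2) array of the reference coordinates (x_l, y_l)
    """
    L = len(likelihoods)
    prior = np.full(L, 1.0 / L)                   # Pr(Lo_l) = 1/L (uniform indoor prior)
    posterior = likelihoods * prior
    posterior /= posterior.sum()                  # assumed form of formula (28)
    x_est = float(posterior @ positions[:, 0])    # assumed form of formula (29)
    y_est = float(posterior @ positions[:, 1])    # assumed form of formula (30)
    return x_est, y_est
```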
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810977963.5A CN109302309B (en) | 2018-08-27 | 2018-08-27 | Passive sensing indoor positioning method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109302309A (en) | 2019-02-01 |
CN109302309B (en) | 2021-07-30 |
Family
ID=65165470
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810977963.5A Active CN109302309B (en) | 2018-08-27 | 2018-08-27 | Passive sensing indoor positioning method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109302309B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110972056B (en) * | 2019-11-08 | 2020-09-29 | 宁波大学 | UWB indoor positioning method based on machine learning |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107832834A (en) * | 2017-11-13 | 2018-03-23 | Hefei University of Technology | Construction method of a WIFI indoor positioning fingerprint database based on a generative adversarial network
Non-Patent Citations (1)
Title |
---|
Research on WLAN Passive Positioning Algorithm Based on Heuristic Probabilistic Neural Network; Zhou Xiangdong; China Master's Theses Full-text Database, Information Science and Technology; 2018-04-30; full text *
Also Published As
Publication number | Publication date |
---|---|
CN109302309A (en) | 2019-02-01 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |