CN110401978B - Indoor positioning method based on neural network and particle filter multi-source fusion - Google Patents

Indoor positioning method based on neural network and particle filter multi-source fusion

Info

Publication number
CN110401978B
Authority
CN
China
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201910657419.7A
Other languages
Chinese (zh)
Other versions
CN110401978A (en)
Inventor
鲍亚川
杨再秀
李敏
向才炳
刘嘉钰
程可欣
卢小峰
Current Assignee
CETC 54 Research Institute
Original Assignee
CETC 54 Research Institute
Application filed by CETC 54 Research Institute
Priority to CN201910657419.7A
Publication of CN110401978A
Application granted
Publication of CN110401978B
Status: Expired - Fee Related
Anticipated expiration

Links

Images

Classifications

    • G PHYSICS
      • G01 MEASURING; TESTING
        • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
          • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
            • G01C 21/10 ... by using measurements of speed or acceleration
              • G01C 21/12 ... executed aboard the object being navigated; Dead reckoning
                • G01C 21/16 ... by integrating acceleration or speed, i.e. inertial navigation
                  • G01C 21/165 ... combined with non-inertial navigation instruments
            • G01C 21/20 Instruments for performing navigational calculations
              • G01C 21/206 ... specially adapted for indoor navigation
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00 Computing arrangements based on biological models
            • G06N 3/02 Neural networks
              • G06N 3/04 Architecture, e.g. interconnection topology
                • G06N 3/044 Recurrent networks, e.g. Hopfield networks
                • G06N 3/045 Combinations of networks
              • G06N 3/08 Learning methods
                • G06N 3/084 Backpropagation, e.g. using gradient descent
    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04B TRANSMISSION
          • H04B 17/00 Monitoring; Testing
            • H04B 17/30 Monitoring; Testing of propagation channels
              • H04B 17/309 Measuring or estimating channel quality parameters
                • H04B 17/318 Received signal strength
        • H04W WIRELESS COMMUNICATION NETWORKS
          • H04W 64/00 Locating users or terminals or network equipment for network management purposes, e.g. mobility management
            • H04W 64/006 ... with additional information processing, e.g. for direction or speed determination


Abstract

The invention discloses an indoor positioning method based on multi-source fusion of a neural network and particle filtering, which mainly solves the problems of low accuracy and long positioning time in existing indoor positioning technology. The method is implemented in the following steps: (1) generating sample sets; (2) constructing and training a deep belief network (DBN); (3) constructing and training a recurrent neural network (RNN); (4) performing online initial positioning of an indoor communication mobile device; (5) acquiring the final positioning result of the indoor communication mobile device. The invention trains the neural networks on collected sample sets to achieve online initial positioning of an indoor communication mobile device, and a particle filter fuses the two initial positioning results, thereby improving positioning accuracy and shortening positioning time.

Description

Indoor positioning method based on neural network and particle filter multi-source fusion
Technical Field
The invention belongs to the field of communication technology, and further relates to an indoor positioning method based on multi-source fusion of a neural network and particle filtering within network communication technology. The invention uses an indoor positioning system to estimate the two-dimensional position of an indoor communication mobile device.
Background
Many strategies are applied to indoor positioning, of which positioning based on WiFi signals and positioning based on ZigBee signals are the most common. WiFi-based positioning most commonly builds a fingerprint database from the acquired Received Signal Strength Indication (RSSI) and uses it for positioning. In recent years, some scholars have proposed positioning methods that perform Pedestrian Dead Reckoning (PDR) using an Inertial Measurement Unit (IMU), such as the acceleration sensor and direction sensor carried by an intelligent terminal. These indoor positioning methods each have advantages and disadvantages; fusing the positioning technologies sensibly can greatly improve the overall performance of the system and make indoor positioning results more accurate.
The patent document "an indoor positioning method based on particle filter algorithm" (application number: 201810361226.2, application publication number: CN 108632761a) applied by the university of west ann traffic discloses an indoor positioning method based on particle filter algorithm. The method comprises the steps of firstly fusing pedestrian navigation information, WiFi signal intensity information and a geomagnetic signal three-dimensional sequence in N steps after a user starts to walk, determining an initial point area through WiFi, accurately positioning the geomagnetic to determine an initial position coordinate, and then mutually checking and improving the positioning robustness through the independence of positioning results of a nearest neighbor matching algorithm based on WiFi, a particle filter algorithm based on WiFi and PDR and a particle filter algorithm based on the geomagnetic and PDR. However, the method still has the disadvantages that when the WiFi positioning method is used to predict the two-dimensional position of the indoor communication mobile device, the location fingerprint database needs to be searched online, which results in long positioning time and inaccurate positioning result, and thus the method may cause a large error in the final positioning result.
The patent document "Adaptive Kalman filtering method oriented to WiFi/PDR indoor fusion positioning" (application number: 2017102909741, application publication number: CN107426687A), filed by Chongqing University of Posts and Telecommunications, discloses an adaptive Kalman filtering method for WiFi/PDR indoor fusion positioning. In this method, a user holds the terminal device in the target area and receives the RSSI from each AP; the user's position is obtained with a weighted least squares method under a log-normal path-loss model, the user's position is simultaneously obtained through the estimation model of a PDR algorithm, and adaptive Kalman filtering then repeatedly fuses the propagation-model positioning information with the PDR positioning information to obtain the user's position. The disadvantage of this method is that Kalman filtering suits linear systems and Gaussian noise environments rather than nonlinear systems with uncertain noise; since the indoor environment of an indoor mobile communication device is a nonlinear system with uncertain noise, a large error is introduced into the positioning result and accuracy decreases.
Disclosure of Invention
The invention aims to provide an indoor positioning method based on multi-source fusion of a neural network and particle filtering aiming at the defects in the prior art, so as to solve the problems of low positioning accuracy and long positioning time consumption of communication mobile equipment in an indoor area.
The idea for realizing this purpose is to use a trained deep belief network DBN and a trained recurrent neural network RNN to make two preliminary position predictions for an indoor communication mobile device, and to use a particle filter to fuse the two predicted positions, thereby positioning the device.
The specific steps for realizing the purpose of the invention are as follows:
(1) generating a sample set:
(1a) deploying at least 5 ZigBee routing access points (APs) in the indoor area where the indoor communication mobile device to be positioned is located, dividing the indoor area into at least 300 square grids to form sample acquisition points, and forming a sample set A from the position of each sample acquisition point and the received signal strength indication (RSSI) of each AP acquired at that position;
(1b) holding the indoor communication mobile device and, in the indoor area where the device to be positioned is located, collecting each walking position and the movement deflection angle between two consecutive walking positions, and forming a sample set B from each walking position x, the next walking position y, and the movement deflection angle θ between the two, in the form (x, θ, y);
(2) constructing a Deep Belief Network (DBN):
building a 6-layer deep belief network DBN whose overall structure is, in order: input layer → first hidden layer → second hidden layer → third hidden layer → fourth hidden layer → BP layer, with 95, 20, 15, 10, 2 and 2 neurons in the respective layers; the input layer is the visible layer of the 1st restricted Boltzmann machine RBM and the first hidden layer is the hidden layer of the 1st RBM; the first and second hidden layers are respectively the visible and hidden layers of the 2nd RBM; the second and third hidden layers are respectively the visible and hidden layers of the 3rd RBM; the third and fourth hidden layers are respectively the visible and hidden layers of the 4th RBM; a back-propagation BP layer serves as the output layer;
(3) training the deep belief network DBN:
(3a) inputting sample set A into the deep belief network DBN, taking the received signal strength indication RSSI in sample set A as the initial value of the visible layer v of the restricted Boltzmann machine RBM, and forward-training the restricted Boltzmann machines RBM in the deep belief network DBN one by one with the contrastive divergence method CD-1, to obtain the weights and bias values of the deep belief network DBN;
(3b) taking the difference between the position coordinates output after forward training of the deep belief network DBN and the position coordinates of the sample acquisition points in sample set A as the error information, propagating this error downward from the back-propagation BP layer to each layer of the deep belief network DBN, and reverse-training the deep belief network DBN with the small batch gradient descent method MBGD to update the weights and bias values of the network, stopping the training when the value of the loss function reaches its minimum, to obtain the trained deep belief network DBN;
(4) constructing a Recurrent Neural Network (RNN):
constructing a 3-layer recurrent neural network RNN whose overall structure is, in order: input layer → hidden layer → output layer, with 3, 15 and 2 neurons in the respective layers; neurons within a layer are mutually independent, neurons between layers are fully connected, the activation function between the input layer and the hidden layer is the tanh function, and the activation function between the hidden layer and the output layer is the softmax function;
(5) training the recurrent neural network RNN:
(5a) inputting sample set B into the recurrent neural network RNN, taking the first position information in sample set B as the initial state value of the RNN hidden layer and the first deflection-angle information in sample set B as the initial state value of the RNN input layer, and forward-training the recurrent neural network RNN;
(5b) reverse-training the recurrent neural network RNN, updating each parameter value of the RNN with the backpropagation-through-time algorithm BPTT until

$$\sum_t \frac{\partial L_t}{\partial \theta_\rho^t} = 0$$

and then stopping the update to obtain the trained recurrent neural network RNN, where Σ denotes a summation operation, ∂ denotes the derivation operation, $L_t$ denotes the network loss at time t, and $\theta_\rho^t$ denotes the value of the ρ-th parameter before the update at time t;
(6) performing online initial positioning of an indoor communication mobile device:
(6a) inputting the received signal strength indication RSSI measured by the indoor communication mobile device in real time into the trained deep belief network DBN, and outputting the first predicted position of the indoor communication mobile device;
(6b) inputting the initial position of the indoor communication mobile device and the motion deflection angle acquired by the inertial measurement unit IMU in the device into the trained recurrent neural network RNN, and outputting the second predicted position of the indoor communication mobile device;
(7) acquiring a final positioning result of an indoor communication mobile device:
(7a) inputting the first predicted position and the second predicted position of the indoor communication mobile device into a particle filter, and randomly generating 300 particles obeying a normal distribution around the second predicted position;
(7b) calculating the weight of each particle according to the following formula:

$$w_\zeta = \frac{1}{\sqrt{(x_1 - x_\zeta)^2 + (y_1 - y_\zeta)^2}}$$

where $w_\zeta$ denotes the weight of the ζ-th particle, ζ = 1, 2, …, 300, $\sqrt{\;}$ denotes the square-root operation, $x_1$ and $y_1$ respectively denote the abscissa and ordinate of the first predicted position of the indoor communication mobile device output by the deep belief network, and $x_\zeta$ and $y_\zeta$ respectively denote the abscissa and ordinate of the position of the ζ-th particle;
(7c) calculating the indoor position of the indoor communication mobile device according to the following formulas:

$$\hat{x} = \frac{\sum_{\zeta=1}^{300} w_\zeta x_\zeta}{\sum_{\zeta=1}^{300} w_\zeta}, \qquad \hat{y} = \frac{\sum_{\zeta=1}^{300} w_\zeta y_\zeta}{\sum_{\zeta=1}^{300} w_\zeta}$$

where $\hat{x}$ and $\hat{y}$ respectively denote the abscissa and ordinate of the indoor position of the indoor communication mobile device.
Compared with the prior art, the invention has the following advantages:
First, the invention constructs and trains a deep belief network DBN and a recurrent neural network RNN, and uses the trained networks to predict the position of the indoor communication mobile device. This overcomes the long positioning time and inaccurate results caused in the prior art by searching a location fingerprint database online when predicting the two-dimensional position of the device, so the invention improves indoor positioning accuracy while shortening positioning time.
Second, because the invention fuses the two predicted positions with a particle filter algorithm, it overcomes the error caused in the prior art by the nonlinearity and uncertain noise of the indoor environment in which the indoor mobile communication device is located, so the invention positions the device more accurately.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a block diagram of a deep belief network DBN constructed in accordance with the present invention;
FIG. 3 is a block diagram of a recurrent neural network RNN constructed in accordance with the present invention;
FIG. 4 is a simulation of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The specific steps implemented by the present invention are further described with reference to fig. 1.
Step 1, generating a sample set.
At least 5 ZigBee routing access points (APs) are deployed in the indoor area where the indoor communication mobile device to be positioned is located; the indoor area is divided into at least 300 square grids to form sample acquisition points, and a sample set A is formed from the position of each sample acquisition point and the received signal strength indication (RSSI) of each AP acquired at that position.
Holding the indoor communication mobile device, each walking position and the movement deflection angle between two consecutive walking positions are collected in the indoor area where the device to be positioned is located, and a sample set B is formed from each walking position x, the next walking position y, and the movement deflection angle θ between the two, in the form (x, θ, y).
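For concreteness, the following Python sketch shows one way the two sample sets could be organized in memory. The dimensions follow this description (a 95-dimensional RSSI vector matching the DBN input layer below, and (x, θ, y) triples for sample set B); the function names and array layout are illustrative assumptions, not part of the patent.

```python
import numpy as np

# Illustrative sketch (assumed layout): sample set A pairs each grid point's
# 2-D position with the RSSI vector measured from the ZigBee APs at that
# point; sample set B holds walking triples (x, theta, y).
NUM_GRID_POINTS = 300   # at least 300 square grids
RSSI_DIM = 95           # matches the 95-neuron DBN input layer below

def build_sample_set_a(rssi_scans, grid_positions):
    """rssi_scans: (300, 95) RSSI readings; grid_positions: (300, 2) coords."""
    assert rssi_scans.shape == (NUM_GRID_POINTS, RSSI_DIM)
    assert grid_positions.shape == (NUM_GRID_POINTS, 2)
    return list(zip(rssi_scans, grid_positions))

def build_sample_set_b(walk_positions, deflection_angles):
    """Form (x, theta, y) triples: the position at step k, the deflection
    angle between steps k and k+1, and the position at step k+1."""
    return [(walk_positions[k], deflection_angles[k], walk_positions[k + 1])
            for k in range(len(walk_positions) - 1)]
```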
Step 2, constructing the deep belief network DBN.
A 6-layer deep belief network DBN is built whose overall structure is, in order: input layer → first hidden layer → second hidden layer → third hidden layer → fourth hidden layer → BP layer, with 95, 20, 15, 10, 2 and 2 neurons in the respective layers. The input layer is the visible layer of the 1st restricted Boltzmann machine RBM and the first hidden layer is the hidden layer of the 1st RBM; the first and second hidden layers are respectively the visible and hidden layers of the 2nd RBM; the second and third hidden layers are respectively the visible and hidden layers of the 3rd RBM; the third and fourth hidden layers are respectively the visible and hidden layers of the 4th RBM. A back-propagation BP layer serves as the output layer.
Neurons between the visible layer and the hidden layer of each restricted Boltzmann machine are fully connected; neurons within the same layer are not connected.
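As a rough sketch of this layer layout, the stacked RBMs can be held as four weight matrices with their visible- and hidden-layer biases, plus the BP output layer. The initial values follow the text (weights 1, biases 0, as stated in step 3 below); everything else here is an illustrative assumption.

```python
import numpy as np

# Sketch of the 95 -> 20 -> 15 -> 10 -> 2 RBM stack plus the 2-neuron BP layer.
# Per the text, connection weights start at 1 and biases at 0.
LAYER_SIZES = [95, 20, 15, 10, 2]
rbm_w = [np.ones((LAYER_SIZES[g + 1], LAYER_SIZES[g])) for g in range(4)]  # w[j, i]
rbm_a = [np.zeros(LAYER_SIZES[g]) for g in range(4)]        # visible biases a_i
rbm_b = [np.zeros(LAYER_SIZES[g + 1]) for g in range(4)]    # hidden biases b_j
bp_w = np.ones((2, 2))   # BP output layer: 2 inputs -> 2 position coordinates
bp_b = np.zeros(2)
```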
Step 3, training the deep belief network DBN.
Sample set A is input into the deep belief network DBN, the received signal strength indication RSSI in sample set A is taken as the initial value of the visible layer v of the restricted Boltzmann machine RBM, and the restricted Boltzmann machines RBM in the deep belief network DBN are forward-trained one by one using contrastive divergence with one-step Gibbs sampling (CD-1), to obtain the weights and bias values of the deep belief network DBN.
The steps of the contrastive divergence method CD-1 with one-step Gibbs sampling are described below.
Step 1, calculating the energy of each restricted Boltzmann machine RBM in the deep belief network DBN according to the following formula:

$$E_\gamma(v, h \mid \theta) = -\sum_{i=1}^{I} a_i v_i - \sum_{j=1}^{J} b_j h_j - \sum_{i=1}^{I} \sum_{j=1}^{J} v_i w_{ji} h_j$$

where $E_\gamma$ denotes the energy of the γ-th restricted Boltzmann machine RBM, γ = 1, 2, 3, 4; $(v, h \mid \theta)$ denotes a configuration of the visible-layer state value matrix v and hidden-layer state value matrix h of the RBM under parameter set θ; $v = [v_i]_{1\times m}$, where $[\;]_{1\times m}$ denotes a matrix of 1 row and m columns, m equal to the number of values of i; $h = [h_j]_{1\times n}$, where $[\;]_{1\times n}$ denotes a matrix of 1 row and n columns, n equal to the number of values of j; $\theta = \{w_{ji}, a_i, b_j\}$ denotes the parameter set, where $w_{ji}$ denotes the connection weight between the j-th hidden-layer neuron and the i-th visible-layer neuron with initial value 1, $a_i$ denotes the bias of the i-th visible-layer neuron with initial value 0, and $b_j$ denotes the bias of the j-th hidden-layer neuron with initial value 0; I denotes the total number of visible-layer nodes, Σ denotes a summation operation, $v_i$ denotes the state value of the i-th visible-layer neuron, initialized to the received signal strength indication RSSI values in sample set A, J denotes the total number of hidden-layer nodes, and $h_j$ denotes the state value of the j-th hidden-layer neuron.
Step 2, calculating the joint probability distribution of the visible-layer and hidden-layer state value matrices of each restricted Boltzmann machine RBM according to the following formula:

$$p_\gamma(v, h \mid \theta) = \frac{e^{-E_\gamma(v, h \mid \theta)}}{\sum_{v, h} e^{-E_\gamma(v, h \mid \theta)}}$$

where $p_\gamma(v, h \mid \theta)$ denotes the joint probability distribution of the visible-layer and hidden-layer state value matrices of the γ-th restricted Boltzmann machine RBM.
Step 3, calculating the probability that each neuron of each restricted Boltzmann machine RBM hidden layer is activated according to the following formula:

$$p_{\gamma j}(h_{\gamma j} = 1 \mid v) = \frac{1}{1 + e^{-\left(b_j + \sum_i v_i w_{ji}\right)}}$$

where $p_{\gamma j}(h_{\gamma j} = 1 \mid v)$ denotes the probability that the j-th neuron of the hidden layer of the γ-th restricted Boltzmann machine RBM is activated, and $h_{\gamma j} = 1$ denotes that the j-th neuron of that hidden layer is in the activated state.
Step 4, calculating the probability that each neuron of each restricted Boltzmann machine RBM visible layer is activated according to the following formula:

$$p_{\gamma i}(v_{\gamma i} = 1 \mid h) = \frac{1}{1 + e^{-\left(a_i + \sum_j w_{ji} h_j\right)}}$$

where $p_{\gamma i}(v_{\gamma i} = 1 \mid h)$ denotes the probability that the i-th neuron of the visible layer of the γ-th restricted Boltzmann machine RBM is activated, and $v_{\gamma i} = 1$ denotes that the i-th neuron of the visible layer is in the activated state.
Step 5, sampling a new state value of the i-th neuron of the visible layer of the γ-th restricted Boltzmann machine RBM from the probability $p_{\gamma i}(v_{\gamma i} = 1 \mid h)$ that the neuron is activated:

$$\tilde{v}_{\gamma i} \sim p_{\gamma i}(v_{\gamma i} = 1 \mid h)$$

where $\tilde{v}_{\gamma i}$ denotes the extracted (sampled) state value of the i-th neuron of the visible layer of the γ-th restricted Boltzmann machine RBM.
Step 6, updating the probability that each neuron of each restricted Boltzmann machine RBM hidden layer is activated according to the following formula:

$$\tilde{p}_{\gamma j}(\tilde{h}_{\gamma j} = 1 \mid \tilde{v}) = \frac{1}{1 + e^{-\left(b_j + \sum_i \tilde{v}_{\gamma i} w_{ji}\right)}}$$

where $\tilde{p}_{\gamma j}(\tilde{h}_{\gamma j} = 1 \mid \tilde{v})$ denotes the updated probability that the j-th neuron of the hidden layer of the γ-th restricted Boltzmann machine RBM is activated, and $\tilde{h}_{\gamma j}$ denotes the activated state of the j-th neuron of the updated hidden layer.
Step 7, calculating the updated parameter values of each restricted Boltzmann machine RBM in the deep belief network DBN according to the following formulas:

$$W_{\gamma z} = W_\gamma + \tau\left[P_\gamma(h_{\gamma j} = 1 \mid v_\gamma)\, v_\gamma^{T} - \tilde{P}_\gamma(\tilde{h}_{\gamma j} = 1 \mid \tilde{v}_\gamma)\, \tilde{v}_\gamma^{T}\right]$$

$$a_{\gamma z} = a_\gamma + \tau\left(v_\gamma - \tilde{v}_\gamma\right)$$

$$b_{\gamma z} = b_\gamma + \tau\left[P_\gamma(h_{\gamma j} = 1 \mid v_\gamma) - \tilde{P}_\gamma(\tilde{h}_{\gamma j} = 1 \mid \tilde{v}_\gamma)\right]$$

where $W_{\gamma z}$ denotes the updated connection weight matrix between the hidden-layer and visible-layer neurons of the γ-th restricted Boltzmann machine RBM, $W_\gamma$ denotes the initial connection weight matrix between those neurons, τ denotes the learning rate with initial value 0.05, $P_\gamma(h_{\gamma j} = 1 \mid v_\gamma)$ denotes the matrix of activation probabilities of all hidden-layer neurons of the γ-th RBM, $v_\gamma$ denotes the matrix of initial state values of all visible-layer neurons of the γ-th RBM, T denotes the transpose operation, $\tilde{P}_\gamma(\tilde{h}_{\gamma j} = 1 \mid \tilde{v}_\gamma)$ denotes the matrix of updated activation probabilities of all hidden-layer neurons of the γ-th RBM, $\tilde{v}_\gamma$ denotes the matrix of updated state values of all visible-layer neurons of the γ-th RBM, $a_{\gamma z}$ and $a_\gamma$ respectively denote the updated and initial bias matrices of all visible-layer neurons of the γ-th RBM, and $b_{\gamma z}$ and $b_\gamma$ respectively denote the updated and initial bias matrices of all hidden-layer neurons of the γ-th RBM.
The difference between the position coordinates output after forward training of the deep belief network DBN and the position coordinates of the sample acquisition points in sample set A is taken as the error information, which is propagated downward from the back-propagation BP layer to each layer of the deep belief network DBN; the network is then reverse-trained with the small batch gradient descent method MBGD to update its weights and bias values, and training stops when the value of the loss function reaches its minimum, giving the trained deep belief network DBN.
The loss function is defined as follows:

$$L = \frac{1}{2}\left\lVert \hat{y} - y \right\rVert_2^2$$

where $\lVert\cdot\rVert_2$ denotes the 2-norm operation, $\hat{y}$ denotes the position coordinates output by the forward-trained deep belief network DBN, and $y$ denotes the actual position coordinates of the sample acquisition points in sample set A.
The steps of the small batch gradient descent method MBGD are described as follows:
step 1, calculating the sensitivity of each neuron in each layer of the deep belief network DBN according to the following formula:
Figure BDA0002137271380000082
wherein the content of the first and second substances,klsensitivity of the ith neuron of the kth layer representing a deep confidence network DBN, k being 1,2klRepresents the output of the ith neuron of the k layer of the deep confidence network DBN after forward training,
Figure BDA0002137271380000083
the output of the ith neuron, which represents the k-th layer of the deep belief network DBN, when the loss function reaches a minimum.
Step 2, calculating the sensitivity of each neuron in each hidden layer of the deep belief network DBN according to the following formula:

$$\delta_{hj} = y_{hj}\left(1 - y_{hj}\right)\sum_i w_{hji}\, \delta_{(h+1)i}$$

where $\delta_{hj}$ denotes the sensitivity of the j-th neuron of the h-th hidden layer of the deep belief network DBN, h = 1, 2, 3, $y_{hj}$ denotes the forward-trained output of the j-th neuron of the h-th hidden layer, Σ denotes a summation operation, $w_{hji}$ denotes the forward-trained connection weight between the j-th node of the h-th hidden layer and the i-th neuron of the next layer, and $\delta_{(h+1)i}$ denotes the sensitivity of the i-th neuron of the (h+1)-th layer.
Step 3, updating the network parameters of the deep belief network DBN according to the following formulas:

$$w_{kji}^{z} = w_{kji} + \eta\, y_{kj}\, \delta_{(k+1)i}$$

$$b_{kj}^{z} = b_{kj} + \eta\, \delta_{kj}$$

where $w_{kji}^{z}$ denotes the updated connection weight between the j-th neuron of the k-th layer of the deep belief network DBN and the i-th neuron of the next layer, $w_{kji}$ denotes that connection weight before the update, η denotes the learning rate with initial value 0.1, $y_{kj}$ denotes the forward-trained output of the j-th neuron of the k-th layer, $\delta_{(k+1)i}$ denotes the sensitivity of the i-th neuron of the (k+1)-th layer, $b_{kj}^{z}$ denotes the updated bias of the j-th neuron of the k-th layer, $b_{kj}$ denotes that bias before the update, and $\delta_{kj}$ denotes the sensitivity of the j-th neuron of the k-th layer.
Step 4, constructing the recurrent neural network RNN.
A 3-layer recurrent neural network RNN is constructed whose overall structure is, in order: input layer → hidden layer → output layer, with 3, 15 and 2 neurons in the respective layers. Neurons within a layer are mutually independent and neurons between layers are fully connected; the activation function between the input layer and the hidden layer is the tanh function, and the activation function between the hidden layer and the output layer is the softmax function.
Step 5, training the recurrent neural network RNN.
Sample set B is input into the recurrent neural network RNN; the first position information in sample set B is taken as the initial state value of the RNN hidden layer, the first deflection-angle information in sample set B is taken as the initial state value of the RNN input layer, and the recurrent neural network RNN is forward-trained.
The steps of the forward training of the recurrent neural network RNN are described below.
Step 1, calculating the state value of each neuron in the RNN hidden layer using the following formula:

$$e_{t\kappa} = \tanh\left(r_{\lambda\kappa}\, x_t + s\, e_{(t-1)\kappa} + c_\kappa\right)$$

where $e_{t\kappa}$ denotes the state value of the κ-th neuron of the hidden layer of the recurrent neural network RNN at time t, tanh denotes the activation function between the input layer and the hidden layer, κ = 1, 2, …, 15, $r_{\lambda\kappa}$ denotes the connection weight between the λ-th neuron of the RNN input layer and the κ-th neuron of the hidden layer, $x_t$ denotes the input value of the RNN at time t, s denotes the product factor between the state value of the κ-th neuron of the RNN at time t−1 and its state value at time t, $e_{(t-1)\kappa}$ denotes the state value of the κ-th neuron of the hidden layer at time t−1, and $c_\kappa$ denotes the bias of the κ-th neuron of the RNN hidden layer, with value 1.

Step 2, calculating the output value of the recurrent neural network RNN using the following formula:

$$f_t = \mathrm{softmax}\left(e_{t\kappa} + d_\beta\right)$$

where $f_t$ denotes the output value of the RNN at time t, softmax denotes the activation function between the hidden layer and the output layer, and $d_\beta$ denotes the bias of the β-th neuron of the RNN output layer, with value 1, β = 1, 2.
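The two formulas above can be sketched as follows. Because the listed RNN parameters include no hidden-to-output weight matrix, the sketch feeds the first two hidden-state components directly into the softmax with the output bias; that reading of the output formula, like the variable names, is an assumption.

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)
    return np.exp(z) / np.sum(np.exp(z))

def rnn_forward(x_seq, r, s, c, d, e0):
    """Forward pass per the two formulas above.
    x_seq: (T, 3) inputs; r: (3, 15) input-to-hidden weights; s: scalar
    product factor; c: (15,) hidden biases; d: (2,) output biases;
    e0: (15,) initial hidden state (first position info of sample set B)."""
    e_prev, outputs = e0, []
    for x_t in x_seq:
        e_t = np.tanh(x_t @ r + s * e_prev + c)   # hidden state at time t
        f_t = softmax(e_t[:2] + d)                # output at time t (see note)
        outputs.append(f_t)
        e_prev = e_t
    return np.array(outputs)
```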
The recurrent neural network RNN is reverse-trained: each parameter value of the RNN is updated with the backpropagation-through-time algorithm BPTT until

$$\sum_t \frac{\partial L_t}{\partial \theta_\rho^t} = 0$$

and the update is then stopped, giving the trained recurrent neural network RNN, where Σ denotes a summation operation, ∂ denotes the derivation operation, $L_t$ denotes the network loss at time t, and $\theta_\rho^t$ denotes the value of the ρ-th parameter before the update at time t.
The backpropagation-through-time algorithm BPTT updates each parameter of the recurrent neural network RNN as follows:

$$\theta_\rho^{z} = \theta_\rho - \sum_t \frac{\partial L_t}{\partial \theta_\rho^t}$$

where $\theta_\rho^{z}$ denotes the updated value of the ρ-th parameter of the recurrent neural network RNN, ρ = 1, 2, 3, 4, the four parameters respectively being the connection weight between the λ-th neuron of the RNN input layer and the κ-th neuron of the hidden layer, the product factor between the state value of the κ-th neuron of the RNN at time t−1 and its state value at time t, the bias (with value 1) of the κ-th neuron of the RNN hidden layer, and the bias (with value 1) of the β-th neuron of the RNN output layer; $\theta_\rho$ denotes the value of the ρ-th parameter before the update, Σ denotes a summation operation, ∂ denotes the derivation operation, $L_t$ denotes the network loss at time t, and $\theta_\rho^t$ denotes the value of the ρ-th parameter before the update at time t.
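For intuition, the summed derivative in the update above unrolls through time by the chain rule. Taking the product factor s as an example and writing $L_t$ for the loss at time t, a sketch of the expansion (my derivation from the state equation $e_{t\kappa} = \tanh(r_{\lambda\kappa} x_t + s\,e_{(t-1)\kappa} + c_\kappa)$, not a formula stated in the patent) is:

```latex
\frac{\partial L_t}{\partial s}
  = \frac{\partial L_t}{\partial e_{t\kappa}}
    \sum_{k=1}^{t}\left(\prod_{m=k+1}^{t}
      \frac{\partial e_{m\kappa}}{\partial e_{(m-1)\kappa}}\right)
    \frac{\partial e_{k\kappa}}{\partial s}\bigg|_{\text{local}},
\qquad
\frac{\partial e_{m\kappa}}{\partial e_{(m-1)\kappa}} = s\left(1 - e_{m\kappa}^{2}\right),
\qquad
\frac{\partial e_{k\kappa}}{\partial s}\bigg|_{\text{local}} = \left(1 - e_{k\kappa}^{2}\right) e_{(k-1)\kappa}.
```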
Step 6, performing online initial positioning of an indoor communication mobile device.
The received signal strength indication RSSI measured by the indoor communication mobile device in real time is input into the trained deep belief network DBN, which outputs the first predicted position of the device.
The initial position of the indoor communication mobile device and the motion deflection angle acquired by the inertial measurement unit IMU in the device are input into the trained recurrent neural network RNN, which outputs the second predicted position of the device.
Step 7, acquiring the final positioning result of the indoor communication mobile device.
The first predicted position and the second predicted position of the indoor communication mobile device are input into a particle filter, and 300 particles obeying a normal distribution are randomly generated around the second predicted position.
The weight of each particle is calculated according to the following formula:

$$w_\zeta = \frac{1}{\sqrt{(x_1 - x_\zeta)^2 + (y_1 - y_\zeta)^2}}$$

where $w_\zeta$ denotes the weight of the ζ-th particle, ζ = 1, 2, …, 300, $\sqrt{\;}$ denotes the square-root operation, $x_1$ and $y_1$ respectively denote the abscissa and ordinate of the first predicted position of the indoor communication mobile device output by the deep belief network, and $x_\zeta$ and $y_\zeta$ respectively denote the abscissa and ordinate of the position of the ζ-th particle.
The indoor position of the indoor communication mobile device is calculated according to the following formulas:

$$\hat{x} = \frac{\sum_{\zeta=1}^{300} w_\zeta x_\zeta}{\sum_{\zeta=1}^{300} w_\zeta}, \qquad \hat{y} = \frac{\sum_{\zeta=1}^{300} w_\zeta y_\zeta}{\sum_{\zeta=1}^{300} w_\zeta}$$

where $\hat{x}$ and $\hat{y}$ respectively denote the abscissa and ordinate of the indoor position of the indoor communication mobile device.
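Putting step 7 together, a minimal NumPy sketch of the particle-filter fusion follows; the particle spread `sigma` is an assumed parameter (the patent states only that the particles obey a normal distribution), and the weights are normalized before the weighted average.

```python
import numpy as np

def fuse_positions(first_pred, second_pred, n_particles=300, sigma=1.0):
    """Fuse the DBN's first predicted position with the RNN's second one.
    Particles are scattered normally around the second prediction, weighted
    by inverse distance to the first prediction (the formula above), and
    averaged with normalized weights."""
    x1, y1 = first_pred
    particles = np.random.normal(loc=second_pred, scale=sigma,
                                 size=(n_particles, 2))
    dist = np.sqrt((x1 - particles[:, 0]) ** 2 + (y1 - particles[:, 1]) ** 2)
    w = 1.0 / np.maximum(dist, 1e-9)   # inverse-distance particle weights
    w /= w.sum()                       # normalize so the weights sum to 1
    return w @ particles               # weighted mean = fused indoor position

# Example usage: fused = fuse_positions(dbn_position, rnn_position)
```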
The effect of the present invention is further explained below in combination with simulation experiments.
1. Simulation experiment conditions:
the hardware platform of the simulation experiment of the invention is as follows: the processor is an Intel i7-6700 CPU, the main frequency is 3.4GHz, and the memory is 8 GB.
The software platform of the simulation experiment is: Windows 10 operating system, Python 3.6, and MATLAB R2014b.
The sample set A and sample set B used in the simulation experiment were collected in a laboratory in the main building of Xidian University in March 2019; each sample set contains 300 samples.
2. Simulation content and result analysis:
the simulation experiment of the invention is to respectively carry out forward training and backward training on a deep belief network DBN, carry out backward training on a recurrent neural network and fuse two predicted positions of indoor communication mobile equipment by adopting the invention and four prior arts (a contrast divergence method CD-1, a small batch gradient descent method MBGD, a back propagation algorithm BPTT along with time and a particle filter algorithm PF).
The four prior-art techniques adopted in the simulation experiment are as follows:
the prior art contrast and Divergence method CD-1 learning method refers to a restricted Boltzmann machine RBM fast learning algorithm, which is called contrast and Divergence method CD-1 learning method for short, and is proposed by Hinton et al in Training Products of expertts by Minimizing contrast and Divergence dictionary [ J ]. Neural Computation,2002,14(8):1771 and 1800 ].
The small batch gradient descent method MBGD is a neural network learning algorithm proposed by Okamoto et al. in "A learning algorithm with a gradient mutation and a learning rate adaptation for the mini-batch type learning, 2017 56th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), Kanazawa, 2017, pp. 811-816".
The backpropagation-through-time algorithm BPTT is a learning algorithm for recurrent neural networks proposed by Sivakumar et al. in "A modified BPTT learning algorithm for objective learning in block-diagonal recurrent neural networks [C]. IEEE Canadian Conference on Electrical & Computer Engineering, IEEE, 1997".
The particle filter algorithm PF is a data fusion algorithm proposed by Masiero et al. in "A Particle Filter for Smartphone-Based Indoor Pedestrian Navigation [J]. Micromachines, 2014, 5(4): 1012-".
In the indoor area where the mobile communication device is located, the device is positioned 50 times with the proposed indoor positioning method based on multi-source fusion of a neural network and particle filtering and with the other indoor positioning methods; the distance between each positioning result and the actual position is computed, and the cumulative probability of the average positioning error of each method is calculated, yielding FIG. 4.
FIG. 4 compares the cumulative distribution function (CDF) of the average positioning error of the positioning method of the present invention with those of the other positioning methods. The abscissa of FIG. 4 is the average positioning error and the ordinate is the probability. The curve marked with "x" is the simulation curve of the proposed indoor positioning method based on multi-source fusion of a neural network and particle filtering; the curve marked with "Δ" is that of the fingerprint positioning method based on the deep belief network DBN; the curve marked with "γ" is that of the conventional fingerprint positioning method; the curve marked with "○" is that of the pedestrian dead reckoning PDR positioning method based on the recurrent neural network RNN; and the curve marked with "□" is that of the conventional pedestrian dead reckoning PDR positioning method. As FIG. 4 shows, the probability that the average positioning error of the proposed multi-source fusion positioning method is less than 0.5 m is about 78%, and the probability that it is less than 1 m is about 98%, a higher accuracy than the other positioning methods.
The time consumed in the positioning process by the proposed indoor positioning method based on fusion of a neural network and a particle filter and by the other positioning methods, output with the simulation software MATLAB R2014b, is listed in Table 1.
Table 1. Comparison of positioning time of the present invention and other methods in simulation

Positioning method | Actual location estimation time (s)
Traditional fingerprint positioning method | 8
Pedestrian dead reckoning (PDR) positioning method | 0.5
Fingerprint positioning method based on the deep belief network DBN | 1.2
PDR positioning method based on the recurrent neural network RNN | 0.7
Multi-source fusion positioning method based on neural network and particle filter | 2.3
As can be seen from Table 1, compared with the traditional fingerprint positioning method, the time consumed by the proposed indoor positioning method based on fusion of a neural network and a particle filter is shortened by about 71%; the method has lower computational complexity and better real-time performance.
The above simulation experiments show that the trained deep belief network DBN improves the accuracy of traditional fingerprint positioning while shortening the positioning time, the trained recurrent neural network RNN improves the accuracy of the pedestrian dead reckoning PDR method, and fusing the predicted positions of the two positioning methods with particle filtering further improves the accuracy of positioning the indoor mobile communication device.

Claims (3)

1. An indoor positioning method based on multi-source fusion of a neural network and particle filtering, characterized by constructing and training a deep belief network DBN, constructing and training a recurrent neural network RNN, and initially positioning an indoor communication mobile device with the trained deep belief network DBN and the trained recurrent neural network RNN respectively to obtain two predicted positions, the method comprising the following specific steps:
(1) generating a sample set:
(1a) deploying at least 5 ZigBee routing access points (APs) in the indoor area where the indoor communication mobile device to be positioned is located, dividing the indoor area into at least 300 square grids to form sample acquisition points, and forming a sample set A from the position of each sample acquisition point and the received signal strength indication (RSSI) of each AP acquired at that position;
(1b) holding the indoor communication mobile device and, in the indoor area where the device to be positioned is located, collecting each walking position and the movement deflection angle between two consecutive walking positions, and forming a sample set B from each walking position x, the next walking position y, and the movement deflection angle θ between the two, in the form (x, θ, y);
(2) constructing a Deep Belief Network (DBN):
building a 6-layer deep belief network DBN whose overall structure is, in order: input layer → first hidden layer → second hidden layer → third hidden layer → fourth hidden layer → BP layer, with 95, 20, 15, 10, 2 and 2 neurons in the respective layers; the input layer is the visible layer of the 1st restricted Boltzmann machine RBM and the first hidden layer is the hidden layer of the 1st RBM; the first and second hidden layers are respectively the visible and hidden layers of the 2nd RBM; the second and third hidden layers are respectively the visible and hidden layers of the 3rd RBM; the third and fourth hidden layers are respectively the visible and hidden layers of the 4th RBM; a back-propagation BP layer serves as the output layer;
(3) training the deep belief network DBN:
(3a) inputting sample set A into the deep belief network DBN, taking the received signal strength indication RSSI in sample set A as the initial value of the visible layer v of the restricted Boltzmann machine RBM, and forward-training the restricted Boltzmann machines RBM in the deep belief network DBN one by one with the contrastive divergence method CD-1, to obtain the weights and bias values of the deep belief network DBN;
the contrastive divergence method CD-1 comprises the following steps:
firstly, calculating the energy of each restricted Boltzmann machine RBM in the deep belief network DBN according to the following formula:

$$E_\gamma(v, h \mid \theta) = -\sum_{i=1}^{I} a_i v_i - \sum_{j=1}^{J} b_j h_j - \sum_{i=1}^{I} \sum_{j=1}^{J} v_i w_{ji} h_j$$

wherein $E_\gamma$ denotes the energy of the γ-th restricted Boltzmann machine RBM, γ = 1, 2, 3, 4; $(v, h \mid \theta)$ denotes a configuration of the visible-layer state value matrix v and hidden-layer state value matrix h of the RBM under parameter set θ; $v = [v_i]_{1\times m}$, where $[\;]_{1\times m}$ denotes a matrix of 1 row and m columns, m equal to the number of values of i; $h = [h_j]_{1\times n}$, where $[\;]_{1\times n}$ denotes a matrix of 1 row and n columns, n equal to the number of values of j; $\theta = \{w_{ji}, a_i, b_j\}$ denotes the parameter set, where $w_{ji}$ denotes the connection weight between the j-th hidden-layer neuron and the i-th visible-layer neuron with initial value 1, $a_i$ denotes the bias of the i-th visible-layer neuron with initial value 0, and $b_j$ denotes the bias of the j-th hidden-layer neuron with initial value 0; I denotes the total number of visible-layer nodes, Σ denotes a summation operation, $v_i$ denotes the state value of the i-th visible-layer neuron, initialized to the received signal strength indication RSSI values in sample set A, J denotes the total number of hidden-layer nodes, and $h_j$ denotes the state value of the j-th hidden-layer neuron;
secondly, calculating the joint probability distribution of the visible-layer and hidden-layer state value matrices of each restricted Boltzmann machine RBM according to the following formula:

$$p_\gamma(v, h \mid \theta) = \frac{e^{-E_\gamma(v, h \mid \theta)}}{\sum_{v, h} e^{-E_\gamma(v, h \mid \theta)}}$$

wherein $p_\gamma(v, h \mid \theta)$ denotes the joint probability distribution of the visible-layer and hidden-layer state value matrices of the γ-th restricted Boltzmann machine RBM;
thirdly, calculating the probability that each neuron of each restricted Boltzmann machine RBM hidden layer is activated according to the following formula:

$$p_{\gamma j}(h_{\gamma j} = 1 \mid v) = \frac{1}{1 + e^{-\left(b_j + \sum_i v_i w_{ji}\right)}}$$

wherein $p_{\gamma j}(h_{\gamma j} = 1 \mid v)$ denotes the probability that the j-th neuron of the hidden layer of the γ-th restricted Boltzmann machine RBM is activated, and $h_{\gamma j} = 1$ denotes that the j-th neuron of that hidden layer is in the activated state;
fourthly, calculating the probability that each neuron of each restricted Boltzmann machine RBM visible layer is activated according to the following formula:

$$p_{\gamma i}(v_{\gamma i} = 1 \mid h) = \frac{1}{1 + e^{-\left(a_i + \sum_j w_{ji} h_j\right)}}$$

wherein $p_{\gamma i}(v_{\gamma i} = 1 \mid h)$ denotes the probability that the i-th neuron of the visible layer of the γ-th restricted Boltzmann machine RBM is activated, and $v_{\gamma i} = 1$ denotes that the i-th neuron of the visible layer is in the activated state;
fifthly, sampling a new state value of the i-th neuron of the visible layer of the γ-th restricted Boltzmann machine RBM from the probability $p_{\gamma i}(v_{\gamma i} = 1 \mid h)$ that the neuron is activated:

$$\tilde{v}_{\gamma i} \sim p_{\gamma i}(v_{\gamma i} = 1 \mid h)$$

wherein $\tilde{v}_{\gamma i}$ denotes the extracted (sampled) state value of the i-th neuron of the visible layer of the γ-th restricted Boltzmann machine RBM;
sixthly, updating the probability that each neuron of each restricted Boltzmann machine RBM hidden layer is activated according to the following formula:

$$\tilde{p}_{\gamma j}(\tilde{h}_{\gamma j} = 1 \mid \tilde{v}) = \frac{1}{1 + e^{-\left(b_j + \sum_i \tilde{v}_{\gamma i} w_{ji}\right)}}$$

wherein $\tilde{p}_{\gamma j}(\tilde{h}_{\gamma j} = 1 \mid \tilde{v})$ denotes the updated probability that the j-th neuron of the hidden layer of the γ-th restricted Boltzmann machine RBM is activated, and $\tilde{h}_{\gamma j}$ denotes the activated state of the j-th neuron of the updated hidden layer;
seventhly, calculating the updated parameter values of each restricted Boltzmann machine RBM in the deep belief network DBN according to the following formulas:

$$W_{\gamma z} = W_\gamma + \tau\left[P_\gamma(h_{\gamma j} = 1 \mid v_\gamma)\, v_\gamma^{T} - \tilde{P}_\gamma(\tilde{h}_{\gamma j} = 1 \mid \tilde{v}_\gamma)\, \tilde{v}_\gamma^{T}\right]$$

$$a_{\gamma z} = a_\gamma + \tau\left(v_\gamma - \tilde{v}_\gamma\right)$$

$$b_{\gamma z} = b_\gamma + \tau\left[P_\gamma(h_{\gamma j} = 1 \mid v_\gamma) - \tilde{P}_\gamma(\tilde{h}_{\gamma j} = 1 \mid \tilde{v}_\gamma)\right]$$

wherein $W_{\gamma z}$ denotes the updated connection weight matrix between the hidden-layer and visible-layer neurons of the γ-th restricted Boltzmann machine RBM, $W_\gamma$ denotes the initial connection weight matrix between those neurons, τ denotes the learning rate with initial value 0.05, $P_\gamma(h_{\gamma j} = 1 \mid v_\gamma)$ denotes the matrix of activation probabilities of all hidden-layer neurons of the γ-th RBM, $v_\gamma$ denotes the matrix of initial state values of all visible-layer neurons of the γ-th RBM, T denotes the transpose operation, $\tilde{P}_\gamma(\tilde{h}_{\gamma j} = 1 \mid \tilde{v}_\gamma)$ denotes the matrix of updated activation probabilities of all hidden-layer neurons of the γ-th RBM, $\tilde{v}_\gamma$ denotes the matrix of updated state values of all visible-layer neurons of the γ-th RBM, $a_{\gamma z}$ and $a_\gamma$ respectively denote the updated and initial bias matrices of all visible-layer neurons of the γ-th RBM, and $b_{\gamma z}$ and $b_\gamma$ respectively denote the updated and initial bias matrices of all hidden-layer neurons of the γ-th RBM;
(3b) taking the difference between the position coordinates output after forward training of the deep belief network DBN and the position coordinates of the sample acquisition points in sample set A as the error information, propagating this error downward from the back-propagation BP layer to each layer of the deep belief network DBN, and reverse-training the deep belief network DBN with the small batch gradient descent method MBGD to update the weights and bias values of the network, stopping the training when the value of the loss function reaches its minimum, to obtain the trained deep belief network DBN;
the small batch gradient descent method MBGD comprises the following steps:
first, calculating the sensitivity of each neuron in each layer of the deep belief network DBN according to the following formula:

$$\delta_{kl} = y_{kl}\left(1 - y_{kl}\right)\left(\hat{y}_{kl} - y_{kl}\right)$$

wherein $\delta_{kl}$ denotes the sensitivity of the l-th neuron of the k-th layer of the deep belief network DBN, k = 1, 2, …, 6, $y_{kl}$ denotes the forward-trained output of the l-th neuron of the k-th layer, and $\hat{y}_{kl}$ denotes the output of the l-th neuron of the k-th layer when the loss function reaches its minimum;
secondly, calculating the sensitivity of each neuron in each hidden layer of the deep belief network DBN according to the following formula:

$$\delta_{hj} = y_{hj}\left(1 - y_{hj}\right)\sum_i w_{hji}\, \delta_{(h+1)i}$$

wherein $\delta_{hj}$ denotes the sensitivity of the j-th neuron of the h-th hidden layer of the deep belief network DBN, h = 1, 2, 3, $y_{hj}$ denotes the forward-trained output of the j-th neuron of the h-th hidden layer, $w_{hji}$ denotes the forward-trained connection weight between the j-th node of the h-th hidden layer and the i-th neuron of the next layer, and $\delta_{(h+1)i}$ denotes the sensitivity of the i-th neuron of the (h+1)-th layer;
thirdly, the network parameters of the deep belief network DBN are updated according to the following formulas:

w′_{kji} = w_{kji} + η · y_{kj} · δ_{(k+1)i}

b′_{kj} = b_{kj} + η · δ_{kj}

where w′_{kji} denotes the updated connection weight between the j-th neuron of the k-th layer of the DBN and the i-th neuron of the next layer, w_{kji} denotes that connection weight before the update, η denotes the learning rate with an initial value of 0.1, y_{kj} denotes the output of the j-th neuron of the k-th layer after forward training, δ_{(k+1)i} denotes the sensitivity of the i-th neuron of the (k+1)-th layer, b′_{kj} denotes the updated bias value of the j-th neuron of the k-th layer, and b_{kj} denotes the bias value of the j-th neuron of the k-th layer before the update;
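For illustration only, a minimal NumPy sketch of the three MBGD steps above (output-layer sensitivity, hidden-layer sensitivities, parameter update) on a single sample; the sigmoid activations and the list-of-matrices layout are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mbgd_step(ws, bs, x, y_target, eta=0.1):
    """One fine-tuning step following the three formulas above.

    ws[k] : assumed weight matrix from layer k to layer k+1
    bs[k] : assumed bias vector of layer k+1
    """
    # Forward pass, keeping every layer's output y_k.
    ys = [x]
    for W, b in zip(ws, bs):
        ys.append(sigmoid(ys[-1] @ W + b))
    # Output-layer sensitivity: delta = y(1 - y)(y_hat - y).
    deltas = [ys[-1] * (1 - ys[-1]) * (y_target - ys[-1])]
    # Hidden-layer sensitivities: delta_hj = y(1 - y) * sum_i w_hji * delta_(h+1)i.
    for k in range(len(ws) - 1, 0, -1):
        deltas.insert(0, ys[k] * (1 - ys[k]) * (ws[k] @ deltas[0]))
    # Updates: w' = w + eta * y_kj * delta_(k+1)i ; b' = b + eta * delta.
    for k in range(len(ws)):
        ws[k] += eta * np.outer(ys[k], deltas[k])
        bs[k] += eta * deltas[k]
    return ws, bs
```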
(4) constructing a Recurrent Neural Network (RNN):
constructing a 3-layer recurrent neural network RNN whose overall structure is, in order, input layer → hidden layer → output layer, with 3, 15, and 2 neurons in the respective layers; neurons within a layer are independent of one another, neurons between layers are fully connected, the activation function between the input layer and the hidden layer is the tanh function, and the activation function between the hidden layer and the output layer is the softmax function;
(5) training the recurrent neural network RNN:
(5a) inputting sample set B into the recurrent neural network RNN, taking the first position information in sample set B as the initial state value of the RNN hidden layer and the first deflection-angle information in sample set B as the initial state value of the RNN input layer, and forward-training the RNN;
the forward training of the recurrent neural network RNN is carried out according to the following steps:
first, the state value of each neuron in the RNN hidden layer is calculated using the following formula:

e_{tκ} = tanh(r_{λκ} x_t + s_κ e_{(t−1)κ} + c_κ)

where e_{tκ} denotes the state value of the κ-th neuron of the hidden layer of the RNN at time t, tanh denotes the activation function between the input layer and the hidden layer, κ = 1, 2, …, 15, r_{λκ} denotes the connection weight between the λ-th neuron of the input layer and the κ-th neuron of the hidden layer of the RNN, x_t denotes the input value of the RNN at time t, s_κ denotes the product factor between the state value of the κ-th neuron of the RNN at time t−1 and its state value at time t, e_{(t−1)κ} denotes the state value of the κ-th neuron of the hidden layer at time t−1, and c_κ denotes the bias value of the κ-th neuron of the RNN hidden layer, with value 1;
secondly, the output value of the RNN is calculated using the following formula:

f_t = softmax(e_{tκ} + d_β)

where f_t denotes the output value of the RNN at time t, softmax denotes the activation function between the hidden layer and the output layer, and d_β denotes the bias value of the β-th neuron of the output layer of the RNN, with value 1, β = 1, 2;
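For illustration only, a sketch of the forward recursion above for the 3-15-2 network. Because the claim writes f_t = softmax(e_{tκ} + d_β) without naming a hidden-to-output weight, the fixed 15→2 projection proj used here is an assumption introduced only so the shapes agree.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def rnn_forward(xs, r, s, c, d, e0):
    """Forward pass of the 3-15-2 RNN described above.

    xs : sequence of 3-dim input vectors (deflection-angle information)
    r  : 3x15 input-to-hidden weights; s : 15-dim recurrent product factors
    c  : 15-dim hidden biases (value 1); d : 2-dim output biases (value 1)
    e0 : initial 15-dim hidden state (the first position information)
    """
    proj = np.eye(2, 15)  # assumed fixed 15->2 projection, not named in the claim
    e, outputs = e0, []
    for x in xs:
        e = np.tanh(x @ r + s * e + c)         # e_tk = tanh(r x_t + s e_(t-1)k + c_k)
        outputs.append(softmax(proj @ e + d))  # f_t = softmax(e_tk + d_b)
    return outputs
```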
(5b) reversely training the recurrent neural network RNN and updating each parameter value of the RNN with the back-propagation-through-time algorithm BPTT until

Σ_t ∂L_t/∂θ_ρ^{(t)} = 0,

at which point updating stops and the trained RNN is obtained, where Σ denotes the summation operation, ∂ denotes the derivation operation, L_t denotes the loss of the RNN at time t, and θ_ρ^{(t)} denotes the value of the ρ-th parameter before updating at time t;
each parameter of the recurrent neural network RNN is updated as follows:

θ′_ρ = θ_ρ − η Σ_t ∂L_t/∂θ_ρ^{(t)}

where θ′_ρ denotes the updated value of the ρ-th parameter of the RNN, η denotes the learning rate, and ρ = 1, 2, 3, 4 corresponds respectively to the connection weight r_{λκ} between the λ-th neuron of the input layer and the κ-th neuron of the hidden layer of the RNN, the product factor s_κ between the state value of the κ-th neuron of the RNN at time t−1 and its state value at time t, the bias value c_κ of the κ-th hidden-layer neuron with value 1, and the bias value d_β of the β-th output-layer neuron with value 1;
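For illustration only, a sketch of one BPTT update of the four parameter groups (ρ = 1..4: r, s, c, d) using PyTorch autograd; the squared-error loss against the target positions, the learning rate eta, and the fixed projection proj (as in the forward sketch above) are assumptions.

```python
import torch

def bptt_step(xs, targets, r, s, c, d, e0, eta=0.01):
    """One BPTT update: theta' = theta - eta * sum_t dL_t/d(theta_rho)."""
    proj = torch.eye(2, 15)  # assumed fixed 15->2 projection
    e, loss = e0, torch.tensor(0.0)
    for x, t in zip(xs, targets):
        e = torch.tanh(x @ r + s * e + c)
        f = torch.softmax(proj @ e + d, dim=0)
        loss = loss + torch.sum((f - t) ** 2)  # assumed squared-error loss, summed over t
    loss.backward()  # accumulates sum_t dL_t/d(theta) in each .grad
    with torch.no_grad():
        for p in (r, s, c, d):
            p -= eta * p.grad
            p.grad.zero_()

# Usage with the shapes from step (4): r 3x15, s and c 15-dim, d 2-dim.
r = torch.randn(3, 15, requires_grad=True)
s = torch.randn(15, requires_grad=True)
c = torch.ones(15, requires_grad=True)
d = torch.ones(2, requires_grad=True)
xs = [torch.randn(3) for _ in range(5)]
targets = [torch.randn(2) for _ in range(5)]
bptt_step(xs, targets, r, s, c, d, e0=torch.zeros(15))
```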
(6) performing online initial positioning of an indoor communication mobile device:
(6a) inputting the received signal strength indication RSSI information measured in real time by the indoor communication mobile device into the trained deep belief network DBN, which outputs the first predicted position of the indoor communication mobile device;
(6b) inputting the initial position of the indoor communication mobile device and the motion deflection angle collected by the inertial measurement unit IMU in the device into the trained recurrent neural network RNN, which outputs the second predicted position of the indoor communication mobile device;
(7) acquiring a final positioning result of an indoor communication mobile device:
(7a) inputting the first predicted position and the second predicted position of the indoor communication mobile device into a particle filter, and randomly generating 300 particles that obey a normal distribution around the second predicted position;
(7b) the weight of each particle is calculated according to the following formula:

w_ζ = 1 / √((x_1 − x_ζ)² + (y_1 − y_ζ)²)

where w_ζ denotes the weight of the ζ-th particle, ζ = 1, 2, …, 300, √ denotes the square-root operation, x_1 and y_1 respectively denote the abscissa and ordinate of the first predicted position of the indoor communication mobile device output by the deep belief network, and x_ζ and y_ζ respectively denote the abscissa and ordinate of the position of the ζ-th particle;
(7c) the indoor position of the indoor communication mobile device is calculated according to the following formulas:

x̂ = Σ_{ζ=1}^{300} w_ζ x_ζ / Σ_{ζ=1}^{300} w_ζ

ŷ = Σ_{ζ=1}^{300} w_ζ y_ζ / Σ_{ζ=1}^{300} w_ζ

where x̂ and ŷ respectively denote the abscissa and ordinate of the indoor position of the indoor communication mobile device.
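For illustration only, a NumPy sketch of steps (7a)-(7c): particles drawn from a normal distribution around the second prediction, inverse-distance weights measured against the first prediction, and a weighted average of the particle coordinates. The particle spread sigma and the weight normalization are assumed details not fixed by the claim.

```python
import numpy as np

def fuse_positions(pos_dbn, pos_rnn, n_particles=300, sigma=1.0, seed=0):
    """Particle-filter fusion of the two predicted positions.

    pos_dbn : (x1, y1), first predicted position from the DBN
    pos_rnn : (x2, y2), second predicted position from the RNN
    sigma   : assumed spread of the particle cloud
    """
    rng = np.random.default_rng(seed)
    # (7a) 300 normally distributed particles around the second prediction.
    particles = rng.normal(loc=pos_rnn, scale=sigma, size=(n_particles, 2))
    # (7b) weight = 1 / distance to the first prediction.
    dist = np.sqrt(((particles - np.asarray(pos_dbn)) ** 2).sum(axis=1))
    w = 1.0 / np.maximum(dist, 1e-9)  # guard against a zero distance
    # (7c) weighted average of the particle coordinates.
    w = w / w.sum()
    return tuple(w @ particles)

# Example: fuse a DBN fix at (3.0, 4.2) with an RNN fix at (3.4, 4.0).
print(fuse_positions((3.0, 4.2), (3.4, 4.0)))
```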
2. The indoor positioning method based on neural network and particle filter multi-source fusion of claim 1, wherein in the restricted Boltzmann machines in step (2), the neurons of the visible layer and the neurons of the hidden layer are fully connected to each other, and there are no connections between neurons within the same layer.
3. The indoor positioning method based on neural network and particle filter multi-source fusion of claim 1, wherein the loss function in step (3b) is expressed as follows:

loss = ‖ŷ − y‖_2²

where ‖·‖_2 denotes the 2-norm operation, ŷ denotes the position coordinates output by the forward-trained deep belief network DBN, and y denotes the actual position coordinates of the sampling points in sample set A.
CN201910657419.7A 2019-07-19 2019-07-19 Indoor positioning method based on neural network and particle filter multi-source fusion Expired - Fee Related CN110401978B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910657419.7A CN110401978B (en) 2019-07-19 2019-07-19 Indoor positioning method based on neural network and particle filter multi-source fusion

Publications (2)

Publication Number Publication Date
CN110401978A (en) 2019-11-01
CN110401978B (en) 2020-10-09

Family

ID=68324842

Country Status (1)

Country Link
CN (1) CN110401978B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111524349B (en) * 2020-04-14 2021-05-25 长安大学 Context feature injected multi-scale traffic flow prediction model establishing method and using method
CN112073902A (en) * 2020-08-25 2020-12-11 中国电子科技集团公司第五十四研究所 Multi-mode indoor positioning method
CN112146660B (en) * 2020-09-25 2022-05-03 电子科技大学 Indoor map positioning method based on dynamic word vector
CN113008226B (en) * 2021-02-09 2022-04-01 杭州电子科技大学 Geomagnetic indoor positioning method based on gated cyclic neural network and particle filtering
CN113074718B (en) * 2021-04-27 2024-03-29 广东电网有限责任公司清远供电局 Positioning method, device, equipment and storage medium
CN113420720B (en) * 2021-07-21 2024-01-09 中通服咨询设计研究院有限公司 High-precision low-delay large-scale indoor stadium crowd distribution calculation method
CN114040347A (en) * 2021-10-29 2022-02-11 中国石油大学(华东) Signal fingerprint positioning method based on deep confidence network

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN102768361A (en) * 2012-07-09 2012-11-07 东南大学 GPS/INS combined positioning method based on genetic particle filtering and fuzzy neural network
CN105911518A (en) * 2016-03-31 2016-08-31 山东大学 Robot positioning method
CN108053423A (en) * 2017-12-05 2018-05-18 中国农业大学 A kind of multiple target animal tracking method and device
CN108769969B (en) * 2018-06-20 2021-10-15 吉林大学 RFID indoor positioning method based on deep belief network
CN109151727B (en) * 2018-07-28 2020-11-10 天津大学 WLAN fingerprint positioning database construction method based on improved DBN

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN105957086A (en) * 2016-05-09 2016-09-21 西北工业大学 Remote sensing image change detection method based on optimized neural network model
CN106412826A (en) * 2016-09-07 2017-02-15 清华大学 Indoor positioning method and positioning device based on multi-source information fusion
CN107798296A (en) * 2017-09-28 2018-03-13 江南大学 A kind of quick motion gesture recognition methods applied to complex background scene
CN108692701A (en) * 2018-05-28 2018-10-23 佛山市南海区广工大数控装备协同创新研究院 Mobile robot Multi-sensor Fusion localization method based on particle filter

Similar Documents

Publication Publication Date Title
CN110401978B (en) Indoor positioning method based on neural network and particle filter multi-source fusion
CN109492822B (en) Air pollutant concentration time-space domain correlation prediction method
CN107396322B (en) Indoor positioning method based on path matching and coding-decoding cyclic neural network
CN106714110A (en) Auto building method and system of Wi-Fi position fingerprint map
CN107293115B (en) Traffic flow prediction method for microscopic simulation
CN109145464B (en) Structural damage identification method integrating multi-target ant lion optimization and trace sparse regularization
CN105263113A (en) Wi-Fi location fingerprint map building method and system based on crowd-sourcing
CN111310965A (en) Aircraft track prediction method based on LSTM network
CN108362289B (en) Mobile intelligent terminal PDR positioning method based on multi-sensor fusion
CN111461187B (en) Intelligent building settlement detection system
CN112365708B (en) Scenic spot traffic volume prediction model establishing and predicting method based on multi-graph convolution network
CN115951014A (en) CNN-LSTM-BP multi-mode air pollutant prediction method combining meteorological features
Kadir et al. Wheat yield prediction: Artificial neural network based approach
CN105678417A (en) Prediction method and device for tunnel face water inflow of construction tunnel
CN108416458A (en) A kind of tunnel rich water rock mass Synthetic Geological Prediction Ahead of Construction method based on BP neural network
CN114723188A (en) Water quality prediction method, device, computer equipment and storage medium
Cui et al. Improved genetic algorithm to optimize the Wi-Fi indoor positioning based on artificial neural network
Khassanov et al. Finer-level sequential wifi-based indoor localization
CN114154401A (en) Soil erosion modulus calculation method and system based on machine learning and observation data
CN113640712B (en) Prediction method for vertical component of vertical induction magnetic field of ship
CN111623797B (en) Step number measuring method based on deep learning
CN107273692B (en) Distributed fusion method of random set theory with limited sensor sensing capability
CN117272202A (en) Dam deformation abnormal value identification method and system
CN114386672B (en) Environment big data Internet of things intelligent detection system
CN108668254B (en) WiFi signal characteristic area positioning method based on improved BP neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201009