CN117098224A - Indoor positioning and navigation method based on WiFi fingerprint and inertial sensor information fusion - Google Patents

Indoor positioning and navigation method based on WiFi fingerprint and inertial sensor information fusion

Info

Publication number
CN117098224A
Authority
CN
China
Prior art keywords
data
wifi
positioning
fingerprint
wifi fingerprint
Prior art date
Legal status
Pending
Application number
CN202310786281.7A
Other languages
Chinese (zh)
Inventor
孙方敏
周攀
李烨
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202310786281.7A
Publication of CN117098224A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 64/00 Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • H04W 64/006 Locating users or terminals or network equipment for network management purposes, e.g. mobility management with additional information processing, e.g. for direction or speed determination
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/10 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 by using measurements of speed or acceleration
    • G01C 21/12 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/20 Instruments for performing navigational calculations
    • G01C 21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 5/00 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S 5/02 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
    • G01S 5/0252 Radio frequency fingerprinting
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 5/00 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S 5/02 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
    • G01S 5/0257 Hybrid positioning
    • G01S 5/0263 Hybrid positioning by combining or switching between positions derived from two or more separate positioning systems
    • G01S 5/0264 Hybrid positioning by combining or switching between positions derived from two or more separate positioning systems at least one of the systems being a non-radio wave positioning system
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02 Services making use of location information
    • H04W 4/024 Guidance services
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 Services specially adapted for particular environments, situations or purposes
    • H04W 4/33 Services specially adapted for particular environments, situations or purposes for indoor environments, e.g. buildings
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 84/00 Network topologies
    • H04W 84/02 Hierarchically pre-organised networks, e.g. paging networks, cellular networks, WLAN [Wireless Local Area Network] or WLL [Wireless Local Loop]
    • H04W 84/10 Small scale networks; Flat hierarchical networks
    • H04W 84/12 WLAN [Wireless Local Area Networks]

Abstract

The invention discloses an indoor positioning and navigation method based on the fusion of WiFi fingerprint and inertial sensor information. The method comprises the following steps: collecting WiFi fingerprint data of a target device and inputting the WiFi fingerprint data into a trained fingerprint positioning model to obtain a global coordinate observation value; acquiring acceleration and angular velocity signals of the target device with an inertial sensor and inputting them into a trained trajectory tracking model to obtain a relative displacement observation value, which reflects the movement track of the target device; and fusing the global coordinate observation value and the relative displacement observation value with an extended Kalman filter to obtain fused position information as the positioning result of the target device. The method effectively fuses the inertial sensor information integrated in a smart device with the received WiFi fingerprint information, improving the accuracy and robustness of indoor positioning.

Description

Indoor positioning and navigation method based on WiFi fingerprint and inertial sensor information fusion
Technical Field
The invention relates to the technical field of computer application, in particular to an indoor positioning and navigation method based on WiFi fingerprint and inertial sensor information fusion.
Background
With the development of the social economy, daily travel increasingly relies on positioning services, and indoor positioning technology plays an important role in emergency guidance and rescue, smart homes, people-flow monitoring, parking-lot vehicle positioning, patient guidance in hospitals, and the like. WiFi and inertial sensors (or inertial odometers) are rapidly becoming mainstream indoor positioning technologies for wearable device platforms owing to their advantages of simple deployment and low cost.
In the prior art, indoor positioning technologies can be classified into two types according to their usage conditions: wireless positioning technologies that rely on external devices and inertial positioning technologies that do not. Wireless positioning relies on signals transmitted by external sources, commonly WiFi, Bluetooth, ultra-wideband (UWB), radio frequency identification (RFID), ultrasonic waves, infrared rays and the like; the signals are received and processed by a terminal carried by the user, and the user's current position is calculated by a specific algorithm, so external transmitting equipment must be deployed in the indoor environment in advance. Inertial positioning based on inertial sensors achieves high accuracy over short periods but is susceptible to error accumulation during long-time operation, which severely degrades positioning accuracy. Current inertial positioning mainly comprises inertial navigation systems, inertial dead reckoning and deep-learning-based inertial odometers.
An indoor positioning technology that adopts a single wireless or inertial-sensor method has obvious advantages and disadvantages, and fusing several technologies can largely overcome their respective defects and improve positioning accuracy; conventional fusion methods mainly include extended Kalman filtering and particle filtering. For example, the literature (Poulose A, Han D S. Hybrid indoor localization using IMU sensors and smartphone camera [J]. Sensors, 2019, 19(23): 5084.) proposes a fusion positioning system for a smartphone camera and inertial sensors based on linear Kalman filtering, with an average positioning error of 0.069 meters. As another example, the literature (Yu N, Zhan X, Zhao S, et al. A precise dead-reckoning algorithm based on Bluetooth and multiple sensors [J]. IEEE Internet of Things Journal, 2017, 5(1): 336-351.) uses a Kalman filter to fuse Bluetooth and multiple inertial sensor data, with a positioning error within 0.8 meters. However, cameras bring problems of high energy consumption and privacy security to smart devices, and wireless positioning technologies such as Bluetooth require a large amount of additional external equipment to be deployed in the indoor environment, so fusing WiFi and inertial sensors has obvious advantages.
Although indoor positioning technology plays an important role in emergency evacuation and rescue, augmented reality, patient care and the like, GPS signals are greatly attenuated in indoor environments and cannot provide accurate positioning, so researchers have proposed various methods based on WiFi, inertial sensors, infrared, ultrasonic, Bluetooth, RFID, cameras and other technologies. Each technology has advantages and disadvantages in different application scenarios, and considerations such as deployment cost, positioning accuracy, stability and safety affect their development.
For example, both ultrasonic and infrared positioning require dedicated signal-receiving equipment that smartphones do not integrate, which to some extent prevents these two methods from being applied on a large scale. Bluetooth, RFID and cameras are available on smartphones, but Bluetooth and RFID have short working distances and require more hardware to be deployed, while cameras raise issues of privacy and high power consumption. Smartphones already integrate gyroscopes, accelerometers, magnetometers and WiFi signal-receiving modules; given the popularity of smartphones and the large number of routers already installed in public buildings, WiFi and inertial-sensor positioning has a non-negligible inherent advantage.
WiFi-based indoor positioning methods mainly comprise geometric methods and fingerprint methods. Geometric methods use triangulation based on angle of arrival, time of flight or time of arrival to achieve positioning. Fingerprinting is more accurate than the geometric approach, and common fingerprint features include RSSI, CSI and the signal-to-noise ratio. Compared with RSSI, CSI is more stable and more accurate, but CSI can only be acquired with specific signal-acquisition software, which limits its application.
Classical localization algorithms based on IMUs (inertial measurement units, typically comprising three single-axis accelerometers, gyroscopes and magnetometers) include physics-based methods (e.g. double integration of the IMU) and heuristic methods. The IMUs carried by current smartphones are miniaturized and low-cost, with relatively large measurement errors, and physics-based algorithms amplify these errors rapidly. In addition, the way a user carries the phone in real scenarios is unconstrained, so heuristic algorithms have certain limitations. With the development of deep learning, data-driven deep inertial odometers are attracting more attention; they derive results directly from inertial sensor signals by exploiting the powerful feature-extraction and data-fitting capabilities of deep learning, and have obvious advantages over classical algorithms.
In addition, the advantages and disadvantages of any single indoor positioning technology are obvious, and fusing multi-modal sensors can largely compensate for their respective defects. Fusing WiFi and the IMU can yield consistent and more accurate positioning results over long periods. However, most current research fuses IMU-based classical algorithms (such as PDR) with WiFi positioning, and a fusion scheme between a deep inertial odometer and WiFi is lacking.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an indoor positioning and navigation method based on WiFi fingerprint and inertial sensor information fusion. The method comprises the following steps:
collecting WiFi fingerprint data of target equipment, inputting the WiFi fingerprint data into a trained fingerprint positioning model, and obtaining a global coordinate observation value;
acquiring acceleration signals and angular velocity signals of target equipment by using an inertial sensor, and inputting the acceleration signals and the angular velocity signals into a trained track tracking model to obtain a relative displacement observed value, wherein the relative displacement observed value reflects the moving track of the target equipment;
and fusing the global coordinate observation value and the relative displacement observation value by using an extended Kalman filter to obtain fused position information as a positioning result of the target equipment.
Compared with the prior art, the indoor positioning and navigation method based on WiFi fingerprint and inertial sensor information fusion addresses the defects of WiFi fingerprint positioning and the deep inertial odometer when used independently: it effectively fuses the inertial sensor information integrated in a portable smart device with the received WiFi fingerprint information, improving the accuracy and robustness of indoor positioning.
Other features of the present invention and its advantages will become apparent from the following detailed description of exemplary embodiments of the invention, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow chart of an indoor positioning and navigation method based on WiFi fingerprint and inertial sensor information fusion according to one embodiment of the invention;
fig. 2 is an RSSI histogram according to one embodiment of the invention;
FIG. 3 is a schematic diagram of AP coverage and AP location estimation according to one embodiment of the present invention;
FIG. 4 is a WiFi fingerprint data enhancement schematic diagram in accordance with an embodiment of the invention;
FIG. 5 is a schematic diagram of a denoising self-encoder and CNN regression network according to one embodiment of the present invention;
FIG. 6 is a data processing flow diagram according to one embodiment of the invention;
FIG. 7 is a schematic diagram of a data partitioning method according to one embodiment of the present invention;
FIG. 8 is a schematic diagram of a deep inertial odometer network architecture, according to one embodiment of the invention;
FIG. 9 is a schematic diagram of a fused positioning system according to one embodiment of the present invention;
FIG. 10 is a schematic diagram of CDF curves for different algorithms according to one embodiment of the invention;
FIG. 11 is a graph showing trace comparisons in a sample for different algorithms according to one embodiment of the invention;
FIG. 12 is a schematic view of various initial heading angle offset angle evaluations according to one embodiment of the invention;
FIG. 13 is a schematic diagram of a comparison of different initial heading angle offset angle trajectories according to an embodiment of the invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless it is specifically stated otherwise.
The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of exemplary embodiments may have different values.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
According to the invention, a deep learning method is adopted to study WiFi fingerprint positioning and deep inertial odometry respectively, and the positioning results are further fused based on extended Kalman filtering, thereby improving the stability and accuracy of smart device positioning. Smart devices (or target devices) include smartphones, tablet computers and the like, and a smartphone is used as an example in the following description.
Referring to fig. 1, the provided indoor positioning and navigation method based on the fusion of WiFi fingerprint and inertial sensor information comprises the following steps:
Step S110, acquiring a WiFi fingerprint data set and an inertial navigation data set.
Hereinafter, RSSI is used as an example of fingerprint data (also referred to as WiFi fingerprint data), but it should be understood that CSI or other indicators may also be employed as fingerprint data. The RSSI of an AP (access point) is typically between -110 dBm and 0 dBm.
For example, a WiFi fingerprint dataset is constructed based on the UJIIndoorLoc dataset, which provides a standard benchmark for different WiFi positioning algorithms and contains fingerprint data for three buildings; its test samples are not published. In one embodiment, the training samples are partitioned into training and validation sets at an 8:2 ratio, and all published validation samples are used as the test set. The acquisition interval between the training set and the test set is 10 days. This dataset is labeled Dataset1 herein and is mainly used to verify the performance of the provided WiFi fingerprint positioning algorithm. The WiFi fingerprint dataset contains the coordinates of the reference points (RPs) covered by each access point, and the RP coordinates are used as labels.
For example, the inertial navigation dataset adopts the RoNIN dataset, whose total acquisition time exceeds 42.7 hours and whose data comprise inertial sensor data acquired by smartphones and ground-truth 3D motion data output by Tango devices. The subjects carried the smartphones as naturally as in daily activities; only about 50% of the data is published by RoNIN for security and privacy reasons. This dataset is labeled Dataset2 herein and is mainly used to verify the performance of the provided deep inertial odometer.
In addition, a local dataset was constructed; for example, WiFi signals and IMU signals in a real scene were acquired on a campus. The whole acquisition process is divided into two stages. In the first stage, only WiFi signals are acquired and used to train the proposed WiFi fingerprint localization network model; RP position coordinates are provided by a laser range finder, 190 RPs in total are selected, the acquisition duration of each RP is about 25-30 seconds, and the sampling frequency is set to 1 Hz. In the second stage, the WiFi signal and the IMU signal are collected synchronously, and the gold-standard data are derived from AREngine (similar to Tango). This dataset is labeled Dataset3 herein and is mainly used to verify the localization performance of the provided models and of the fusion system.
Step S120, preprocessing the WiFi fingerprint data set and training a fingerprint positioning model.
After construction of the fingerprint data set and before inputting the fingerprint data into the fingerprint positioning model, further processing of the fingerprint data is required so that the model can converge and achieve better positioning accuracy.
In one embodiment, the WiFi fingerprint dataset preprocessing process includes the steps of:
step S121, noise is added to the fingerprint data.
WiFi signals are randomly blocked by the human body and vary over time, which means that the historical data forming the WiFi fingerprint library can differ considerably from the data acquired in the online stage; therefore noise is first added to the WiFi fingerprints to improve the robustness of the model. The noise types include, for example, masking noise and Gaussian noise. Masking noise randomly sets part of the fingerprint data to -110 dBm to simulate the signal changes caused by random blocking by the human body; the masking probability is set to 0.3. Gaussian noise follows a normal distribution and is used to simulate the signal changes caused by small fluctuations of the signal over time.
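As an illustration only, the following NumPy sketch applies the two noise types described above to a single RSSI fingerprint vector; the -110 dBm floor and the 0.3 masking probability come from the text, while the Gaussian standard deviation and the clipping range are assumed illustrative values.

```python
import numpy as np

NO_SIGNAL_DBM = -110.0  # value used in the text for "no signal detected"

def add_fingerprint_noise(rssi, mask_prob=0.3, gauss_std=2.0, rng=None):
    """Augment one RSSI fingerprint vector with masking and Gaussian noise.

    rssi      : 1-D array of RSSI values in dBm.
    mask_prob : probability of masking each AP (0.3 in the text).
    gauss_std : std-dev of Gaussian noise in dB (illustrative assumption).
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = rssi.astype(float).copy()

    # Masking noise: randomly drop APs to mimic occlusion by the human body.
    mask = rng.random(noisy.shape) < mask_prob
    noisy[mask] = NO_SIGNAL_DBM

    # Gaussian noise: mimic small temporal fluctuations; only perturb
    # APs that were actually detected.
    detected = noisy > NO_SIGNAL_DBM
    noisy[detected] += rng.normal(0.0, gauss_std, size=detected.sum())

    return np.clip(noisy, NO_SIGNAL_DBM, 0.0)

# Example: one fingerprint over 6 APs.
fp = np.array([-52.0, -110.0, -71.0, -88.0, -110.0, -60.0])
print(add_fingerprint_noise(fp))
```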
Step S122, normalization processing is performed on the fingerprint data.
Fig. 2 shows the RSSI distribution of a WiFi fingerprint in a laboratory fingerprint library. It can be seen that 140 of the 151 APs have no detected signal strength (which can be set to -110 dBm); the large number of repeated constant values means the WiFi fingerprint data suffer from a data-sparsity problem, which can affect the performance of the model.
In addition, as can be seen from Fig. 2, the magnitude of the RSSI is typically between -110 dBm and 0 dBm, and feeding it into the model directly makes convergence difficult, so the fingerprint data are first normalized, which is expressed as:
RSSI'_ij = (RSSI_ij - Min_RSSI) / (0 - Min_RSSI)   (1)
where RSSI_ij represents the signal strength value of the i-th AP received at the j-th RP, and Min_RSSI represents the smallest RSSI value in the fingerprint database. After the data are normalized, the distribution of the WiFi fingerprint data becomes unbiased with low variance, which accelerates the training of the model.
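A minimal sketch of the normalization described above, assuming the RSSI values are scaled to roughly [0, 1] using the database minimum as the lower bound and 0 dBm as the practical upper bound; the function name and defaults are illustrative.

```python
import numpy as np

def normalize_rssi(fingerprints, min_rssi=None):
    """Min-max normalize RSSI fingerprints to roughly [0, 1].

    fingerprints : array of shape (num_samples, num_aps), in dBm.
    min_rssi     : smallest RSSI in the fingerprint database; if None it
                   is taken from the data itself (e.g. -110 dBm).
    """
    fingerprints = np.asarray(fingerprints, dtype=float)
    if min_rssi is None:
        min_rssi = fingerprints.min()
    # Assumed scaling: (RSSI - Min_RSSI) / (0 - Min_RSSI), with 0 dBm as
    # the practical upper bound of RSSI mentioned in the text.
    return (fingerprints - min_rssi) / (0.0 - min_rssi)
```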
Step S123, enhancing the fingerprint data based on the AP ordering method.
To address the data-sparsity problem of WiFi fingerprints, an AP-ordering-based method is provided for data enhancement, so that the WiFi fingerprint data become globally sparse but locally dense; the data enhancement is illustrated on Dataset1. In general, the data augmentation process includes extracting a coarse location for each AP and reordering the WiFi fingerprint data in a set direction based on these coarse locations.
Specifically, signal propagation theory tells us that the signal coverage of each AP is limited, and the fingerprint library contains the coordinate information of all RPs. Let the i-th AP be AP_i, and find from the fingerprint library the RPs at which AP_i is detected, noted as:
RP^i = {RP^i_1, RP^i_2, ..., RP^i_K}   (2)
where RP^i_k represents the k-th RP at which the i-th AP can be detected. The corresponding RP coordinate list is then obtained from the fingerprint library and marked as:
Pos^i = {(x^i_1, y^i_1), (x^i_2, y^i_2), ..., (x^i_K, y^i_K)}   (3)
where (x^i_k, y^i_k) represents the coordinates of the k-th RP at which the i-th AP can be detected. Finally, the coarse position of AP_i is calculated with equation (4) as the mean of these coordinates:
Pos_{AP_i} = (1/K) * Σ_{k=1..K} (x^i_k, y^i_k)   (4)
Fig. 3 shows the RP ranges within which four APs can be detected. Some APs have wide coverage and some have limited coverage, but the coordinates of the coverage area can be used to describe the approximate location of the AP. Fig. 3 also shows the approximate location coordinates of all APs calculated by the above algorithm, which are distributed relatively uniformly across the data collection site.
After the coarse AP position coordinates are calculated, the APs are sorted, for example, in the northwest-southeast direction (top-left corner to bottom-right corner in Fig. 3), so that the original WiFi fingerprint becomes data carrying AP spatial information. In this data enhancement scheme, the position of each access point (AP) is roughly estimated from the position information of the reference points (RPs) covered by the WiFi signals it transmits in the fingerprint database, and the arrangement of the fingerprint data is changed accordingly, which effectively improves WiFi positioning accuracy in multi-building scenarios and reduces the complexity of data preprocessing. Further, in order to process the fingerprint data with two-dimensional convolution, in one embodiment the WiFi fingerprint data are reshaped from a 1×m vector into a two-dimensional form. Fig. 4 illustrates the process of enhancing the raw WiFi fingerprint data; as can be seen from Fig. 4, two similar WiFi fingerprints become much easier to distinguish after processing, and the originally globally sparse data also become locally dense.
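The following sketch illustrates one possible implementation of the AP-ordering enhancement: the coarse AP position is taken as the centroid of the RPs at which the AP is detected, and the fingerprint columns are reordered roughly from northwest to southeast. The exact sort key used by the invention is not specified, so the key below (ascending x, then descending y) is an assumption.

```python
import numpy as np

NO_SIGNAL_DBM = -110.0

def coarse_ap_positions(fingerprints, rp_coords):
    """Estimate a coarse (x, y) position for every AP as the centroid of
    the reference points (RPs) at which that AP is detected.

    fingerprints : (num_rps, num_aps) RSSI matrix in dBm.
    rp_coords    : (num_rps, 2) RP coordinates.
    """
    positions = np.zeros((fingerprints.shape[1], 2))
    for i in range(fingerprints.shape[1]):
        detected = fingerprints[:, i] > NO_SIGNAL_DBM
        if detected.any():
            positions[i] = rp_coords[detected].mean(axis=0)
    return positions

def reorder_by_ap_position(fingerprints, ap_positions):
    """Reorder the AP columns roughly northwest-to-southeast.

    Assumed sort key: ascending x, then descending y (one plausible
    reading of 'top-left corner to bottom-right corner').
    """
    order = np.lexsort((-ap_positions[:, 1], ap_positions[:, 0]))
    return fingerprints[:, order], order
```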
In step S124, the coordinates of RP as the label are normalized.
Preferably, the RP coordinates used as labels also need to be normalized, expressed as:
Pos'_j = (Pos_j - Min_Pos) / (Max_Pos - Min_Pos)   (5)
where Pos_j represents the position coordinates (X_j, Y_j) of the j-th RP, and Max_Pos and Min_Pos represent the largest and smallest position coordinates among all RPs, respectively. In addition, for Dataset1, as can be seen from Fig. 3, the entire positioning area is tilted, and the tilt angle can be eliminated by a coordinate rotation.
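A short sketch of the label normalization and the tilt-removing rotation mentioned above; the rotation angle is site-specific and assumed to be known.

```python
import numpy as np

def normalize_labels(rp_coords):
    """Min-max normalize RP coordinates used as regression labels."""
    rp_coords = np.asarray(rp_coords, dtype=float)
    min_pos = rp_coords.min(axis=0)
    max_pos = rp_coords.max(axis=0)
    return (rp_coords - min_pos) / (max_pos - min_pos), (min_pos, max_pos)

def rotate_coords(rp_coords, angle_rad):
    """Rotate the positioning area by -angle_rad to remove the tilt seen
    in Dataset1 (the angle itself is site-specific and assumed known)."""
    c, s = np.cos(-angle_rad), np.sin(-angle_rad)
    R = np.array([[c, -s], [s, c]])
    return np.asarray(rp_coords, dtype=float) @ R.T
```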
Various types of neural network models may be employed as the fingerprint localization model. For example, considering that the historical data forming the WiFi fingerprint library may differ significantly from the data collected in the online stage, in one embodiment the fingerprint localization model comprises a denoising auto-encoder and a CNN regression network, whose overall structure is shown in Fig. 5. Fig. 5(a) is the denoising auto-encoder, which generally comprises an encoder and a decoder and is used to extract robust features from noisy WiFi fingerprints; its input is 3×m picture data composed of WiFi fingerprints. The encoder comprises two convolution layers and a pooling layer and compresses the data into a lower-dimensional feature map; the decoder comprises an up-sampling layer and a convolution layer and reconstructs the features extracted by the encoder into 3×m WiFi fingerprint data. Fig. 5(b) is the regression network model used to regress the WiFi positioning coordinates; its input is the feature map extracted by the encoder. The regression network comprises three convolution layers, two pooling layers and a fully connected module: the three convolution layers extract the spatial features of the WiFi fingerprint picture, and the fully connected module regresses the positioning result. The regression network model captures the nonlinear relationship between the WiFi fingerprint data and the position coordinates.
In one embodiment, the parameter settings of the denoising auto-encoder and regression network model (or regression module) are described in Table 1.
Table 1: model detail parameters
In Table 1, MLP is a fully connected module composed of multi-layer perceptrons and used to regress the positioning result. For example, the fully connected module contains 6 fully connected layers with 128, 256, 128, 64, 32 and 2 hidden neurons respectively; every fully connected layer except the last is followed by a BN layer and uses the tanh activation function, in order to suppress overfitting and introduce nonlinearity into the model. The dimensions in Table 1 are based on Dataset2; for Dataset1 the output dimension is larger because it contains a greater number of APs.
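For illustration, a PyTorch sketch of a denoising auto-encoder plus CNN regression network of the kind described above is given below. The layer counts and the 128-256-128-64-32-2 MLP head follow the description; channel counts, kernel sizes, the pooling configuration and the adaptive pooling that fixes the MLP input size are assumptions, since Table 1 is not reproduced here.

```python
import torch
import torch.nn as nn

class DenoisingAutoEncoder(nn.Module):
    """Encoder (2 conv + pool) / decoder (upsample + conv) over a 1x3xM
    fingerprint 'image'. Channel counts and kernel sizes are assumed."""
    def __init__(self, channels=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 2)),        # compress along the AP axis
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=(1, 2), mode="nearest"),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z                    # reconstruction + feature map

class FingerprintRegressor(nn.Module):
    """3 conv layers, 2 pooling layers and an MLP head (128-256-128-64-32-2),
    BN + tanh after every FC layer except the last, as described above."""
    def __init__(self, in_channels=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 2)),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 2)),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 8)),            # fixed-size feature map (assumption)
        )
        dims, layers, prev = [128, 256, 128, 64, 32, 2], [], 64 * 8
        for i, d in enumerate(dims):
            layers.append(nn.Linear(prev, d))
            if i < len(dims) - 1:
                layers += [nn.BatchNorm1d(d), nn.Tanh()]
            prev = d
        self.mlp = nn.Sequential(*layers)

    def forward(self, feat):
        x = self.features(feat)
        return self.mlp(x.flatten(1))                # (batch, 2) coordinates

# Usage: x is a batch of 3 x 150 fingerprint "images" (1 input channel).
x = torch.randn(4, 1, 3, 150)
dae, reg = DenoisingAutoEncoder(), FingerprintRegressor()
recon, z = dae(x)
coords = reg(z)
print(recon.shape, coords.shape)
```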
In summary, to address the problems that WiFi data fluctuate over time and are easily blocked by the human body, a regression network model based on a denoising auto-encoder and a convolutional neural network is provided; the denoising auto-encoder extracts robust features from noisy WiFi fingerprints through unsupervised training, so that the WiFi fingerprint positioning model has better noise resistance.
Step S130, preprocessing the inertial navigation data set and training a track tracking model.
As shown in fig. 6, the preprocessing process of the inertial navigation data set mainly includes linear interpolation, coordinate conversion, data synchronization, data segmentation and other processes on the inertial sensor data and the gold standard data, where the RoNIN public data set has been subjected to interpolation and synchronization operations, and no further processing is required.
Data interpolation: the sensor may briefly lose connection during data acquisition, resulting in missing data or abnormal values; in addition, the acquisition frequencies of the inertial sensor data and of AREngine are 50 Hz and 30 Hz respectively, while the model input frequency is 200 Hz, so both data streams need interpolation.
Coordinate conversion: the inertial sensor data coordinates need to be rotated several times. The interpolated inertial sensor data are [a, ω], where a denotes the acceleration signal and ω the angular velocity signal. Let q be the first quaternion output by the rotation-vector sensor; then [q a q*, q ω q*] adjusts the initial attitude of the sensor into the geographic coordinate system. The quaternion Δq of each sampling point is calculated using the angular velocity data, and the inertial sensor data are further rotated into Δq [q a q*, q ω q*] Δq*, so that the data of each sampling point can be roughly converted into the geographic coordinate system. It is worth noting that rotating the coordinates in this way does not rely entirely on the rotation-vector sensor, whose output is affected to a large extent by magnetic materials in the indoor environment.
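A rough sketch of this two-step rotation using SciPy quaternions is shown below; the first-order integration of the angular velocity used to form Δq, the 200 Hz sampling interval and the frame in which the gyroscope is integrated are assumptions, not details taken from the text.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def rotate_imu_to_world(acc, gyro, q0, dt=1.0 / 200.0):
    """Rotate body-frame accelerometer/gyroscope samples into an
    (approximate) geographic frame, following the two-step rotation
    described above.

    acc, gyro : (n, 3) arrays of acceleration / angular velocity.
    q0        : first rotation-vector-sensor quaternion as [x, y, z, w].
    dt        : sampling interval (assumed 200 Hz model input rate).
    """
    base = R.from_quat(q0)                  # q ( . ) q*  -- initial attitude
    acc_w = base.apply(acc)
    gyro_w = base.apply(gyro)

    # Incremental rotation Δq of each sample, obtained here by simple
    # first-order integration of the angular velocity (an assumption).
    out_a, out_w = np.empty_like(acc_w), np.empty_like(gyro_w)
    delta = R.identity()
    for k in range(len(acc_w)):
        out_a[k] = delta.apply(acc_w[k])    # Δq (q a q*) Δq*
        out_w[k] = delta.apply(gyro_w[k])   # Δq (q ω q*) Δq*
        delta = delta * R.from_rotvec(gyro_w[k] * dt)
    return out_a, out_w
```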
The data segmentation method is shown in Fig. 7, which depicts the preprocessed inertial sensor data (IMU) and the gold-standard data (ground truth). For a real track, the window length is set to 200 samples (a duration of 1 second); time t_i corresponds to point B and time t_{i+200} to point C, and the coordinates of B and C are differenced to obtain the average velocities v̄_x and v̄_y on the two-dimensional plane, which serve as the label values. In addition to intercepting the inertial sensor data segment between t_i and t_{i+200}, the segment between t_{i-α} and t_i is also intercepted as input to the model, where α is set to 200 and β to 0. Each segmented sample therefore comprises a 400×6 inertial sensor data sequence and a 2×1 average-velocity label vector; the data sequence is fed into the network model and the label vector is used as the reference for the final average velocity.
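A minimal sketch of this segmentation, assuming a 200 Hz stream that is already synchronized with the ground-truth positions; the sliding stride is not specified in the text and is an assumed parameter.

```python
import numpy as np

def make_samples(imu, positions, window=200, alpha=200, stride=10):
    """Cut the synchronized IMU stream into training samples.

    imu       : (N, 6) preprocessed [acc, gyro] at 200 Hz.
    positions : (N, 2) ground-truth planar coordinates on the same timeline.
    Each sample is an (alpha + window) x 6 IMU sequence ending at t_{i+window},
    labelled with the average velocity between t_i and t_{i+window}.
    """
    fs = 200.0                                  # Hz; 200 samples = 1 second
    xs, ys = [], []
    for i in range(alpha, len(imu) - window, stride):
        seq = imu[i - alpha : i + window]       # 400 x 6 input sequence
        disp = positions[i + window] - positions[i]
        ys.append(disp / (window / fs))         # 2 x 1 average-velocity label
        xs.append(seq)
    return np.stack(xs), np.stack(ys)

# Example with synthetic data: 10 s of 200 Hz IMU and a drifting position.
imu = np.random.randn(2000, 6)
pos = np.cumsum(np.random.randn(2000, 2) * 0.01, axis=0)
X, Y = make_samples(imu, pos)
print(X.shape, Y.shape)   # (160, 400, 6) (160, 2)
```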
The input of the trajectory tracking model is the preprocessed inertial sensor sequence and its output is a velocity vector, as given by equation (6); the dimensions of a and ω are n×3, where n equals the number of sampling points from point A to point D. The model output is an estimate of the average velocity, while the reference value is calculated from the ground-truth track data.
In one embodiment, the overall structure of the constructed trajectory tracking model (or deep inertial odometer network) is shown in Fig. 8. The model contains two branches, each comprising a one-dimensional convolution layer, a CBAM (Convolutional Block Attention Module) and a bidirectional LSTM (BiLSTM). The model input is the preprocessed triaxial acceleration and triaxial angular velocity data with dimension b×6×400, where b denotes the batch size. The data are fed into the one-dimensional convolution layers of the two branches; the two convolution kernels have different sizes, set to 3 and 7 respectively, so that local dependencies among variables and short-term features in the time dimension can be extracted; the number of convolution kernels is set to 64 and the stride to 2. Each convolution layer is followed by a batch normalization layer (BN), a ReLU layer and a max-pooling layer (MaxPool), which accelerate convergence, introduce nonlinearity, extract salient features and reduce dimensionality; the stride of the max-pooling layer is 2 and the data dimension becomes b×64×100. A CBAM attention module is then used on each branch to refine the input features; CBAM combines channel attention and spatial attention, enhancing useful information in the spatial and channel dimensions while suppressing irrelevant features, and the two generated attention maps are multiplied with the original input features. Next, a bidirectional LSTM is used on each branch to extract long-term features in the time dimension, with 2 layers, 64 hidden units per layer and dropout set to 0.4 to prevent overfitting; the input data dimension is first adjusted to b×100×64 and the output dimension is b×100×128. A temporal attention module (TAM) then extracts useful features in the time dimension from the many-to-many output of the bidirectional LSTM: the feature vector of the last time step is multiplied by the transpose of the entire feature map to obtain b×100×1 time-step scores, the softmax function computes a weight for each time step, the weights are multiplied with the feature map and accumulated along the time dimension, and the output dimension is b×128. The outputs of the two branches are concatenated and the data dimension becomes b×256. Finally, three fully connected layers serve as the regression module, with 512, 512 and 2 hidden neurons respectively; every fully connected layer except the last uses a ReLU layer and dropout set to 0.5.
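For illustration, a simplified PyTorch sketch of such a two-branch network is given below. The kernel sizes (3 and 7), 64 convolution channels, stride 2, max pooling, the two-layer BiLSTM with 64 hidden units and dropout 0.4, the temporal attention over the BiLSTM output and the 512-512-2 regression head follow the description; the CBAM block is reduced here to a simple channel-attention gate for brevity, so this is a sketch of the idea rather than the exact model.

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Weight BiLSTM outputs over time using the last time step as the query."""
    def forward(self, h):                                    # h: (batch, T, C)
        scores = torch.bmm(h, h[:, -1:, :].transpose(1, 2))  # (batch, T, 1)
        w = torch.softmax(scores, dim=1)
        return (w * h).sum(dim=1)                            # (batch, C)

class Branch(nn.Module):
    """One branch: Conv1d -> BN/ReLU/MaxPool -> channel gate -> BiLSTM -> TAM."""
    def __init__(self, kernel_size):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(6, 64, kernel_size, stride=2, padding=kernel_size // 2),
            nn.BatchNorm1d(64), nn.ReLU(), nn.MaxPool1d(2),
        )
        # Simplified channel-attention gate standing in for CBAM.
        self.gate = nn.Sequential(nn.AdaptiveAvgPool1d(1),
                                  nn.Conv1d(64, 64, 1), nn.Sigmoid())
        self.lstm = nn.LSTM(64, 64, num_layers=2, dropout=0.4,
                            batch_first=True, bidirectional=True)
        self.tam = TemporalAttention()

    def forward(self, x):                        # x: (batch, 6, 400)
        f = self.conv(x)                         # (batch, 64, 100)
        f = f * self.gate(f)                     # channel re-weighting
        h, _ = self.lstm(f.transpose(1, 2))      # (batch, 100, 128)
        return self.tam(h)                       # (batch, 128)

class DeepInertialOdometry(nn.Module):
    def __init__(self):
        super().__init__()
        self.b1, self.b2 = Branch(3), Branch(7)
        self.head = nn.Sequential(
            nn.Linear(256, 512), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(512, 512), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(512, 2),
        )

    def forward(self, x):                        # x: (batch, 6, 400)
        return self.head(torch.cat([self.b1(x), self.b2(x)], dim=1))

model = DeepInertialOdometry()
print(model(torch.randn(8, 6, 400)).shape)       # torch.Size([8, 2])
```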
In summary, to address the gait false detections and missed detections that the conventional pedestrian dead reckoning (PDR) algorithm is prone to, a deep inertial odometer network model is provided; the model uses inertial sensor data of a fixed window length as input and does not require the gait-detection step of PDR. The model comprises two branches, each of which uses a one-dimensional convolution layer and a bidirectional long short-term memory network (BiLSTM) to extract the short-term and long-term dependencies of the inertial sensor data sequence in the time dimension; moreover, the convolution kernels of the two branches have different sizes, so feature information at different scales can be extracted. This structural design keeps the parameter count minimal while ensuring accuracy, making the model more suitable for deployment on embedded devices with limited memory.
The present invention is not limited to the specific structure of the model. For example, a different number of convolutional layers, pooling layers, fully-connected layers, etc., or different activation functions or pooling may be employed. In addition, the input dimension of the model can be set according to actual needs.
Step S140, for the target device, a global coordinate observation value is obtained with the WiFi fingerprint positioning model and a relative displacement observation value is obtained with the trajectory tracking model; the two are then fused by extended Kalman filtering to obtain the positioning result.
FIG. 9 is a schematic diagram of the fusion positioning process, also referred to as the fusion positioning system for clarity and described in terms of functional modules: fingerprint positioning is represented by the WiFi fingerprint positioning module, positioning based on inertial sensor data is represented by the inertial odometer module (or inertial positioning module), and the fusion of their outputs is represented by the fusion module, which produces the final positioning result.
In one embodiment, a fusion algorithm of Extended Kalman Filter (EKF) is adopted in the fusion module to fuse the global coordinate observed value output by the WiFi fingerprint positioning module and the relative displacement observed value output by the inertial odometer module.
Specifically, the deep-learning-based inertial odometer module outputs the average velocity in two perpendicular directions on the two-dimensional horizontal plane per unit time, and a complete walking track can be obtained by accumulating its outputs, but the initial heading of the track is unknown. For example, the initial heading may be roughly corrected using the first output of the rotation-vector sensor built into the smartphone, but that sensor may be affected by magnetic materials or have design defects, resulting in an angle ψ between the estimated heading and the true heading. The system state vector selected by the invention comprises the pedestrian position coordinates x, y and the angle ψ:
X = [x  y  ψ]^T   (7)
The observation vector is set as:
Z = [x_wifi  y_wifi  s_imu  t_imu]^T   (8)
where x_wifi and y_wifi are the coordinates output by the WiFi fingerprint positioning module, and s_imu and t_imu are obtained by accumulating the relative displacement on the two-dimensional plane output by the deep inertial odometer module. The system state equation propagates the pedestrian position by rotating the relative displacement output by the inertial odometer through the heading-offset angle ψ, with W_{k-1} denoting the process noise; the Jacobian matrix of the state equation and the observation matrix are obtained by linearizing the state and observation equations used by the filter. The covariance matrix Q corresponding to the process noise W_{k-1} and the covariance matrix corresponding to the observation noise V_k are built from the variances of the WiFi positioning in the x and y directions and the variances of the relative displacement of the inertial positioning module. The initial value of the state covariance matrix P is not critical, since it converges toward the actual state after a few iterations; for example, the state covariance matrix is initialized as a diagonal matrix.
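Because the exact state and observation equations are not reproduced above, the following sketch shows one plausible EKF formulation consistent with the description: the odometer displacement is rotated by the heading offset ψ in the prediction step and the WiFi coordinates serve as the measurement. The noise magnitudes and the way the accumulated displacement enters the filter are assumptions.

```python
import numpy as np

class WifiImuEKF:
    """Minimal EKF over the state X = [x, y, psi]: pedestrian position plus
    the angle between the odometer's initial heading and the true heading.

    Assumed formulation: each odometer increment (ds, dt) is rotated by psi
    in the prediction step, and the WiFi fix (x_wifi, y_wifi) is the
    measurement. Noise magnitudes below are illustrative.
    """
    def __init__(self, x0, y0, psi0=0.0, sigma_imu=0.05, sigma_wifi=3.0):
        self.X = np.array([x0, y0, psi0], dtype=float)
        self.P = np.eye(3)                       # initial value is not critical
        self.Q = np.diag([sigma_imu**2, sigma_imu**2, 1e-4])
        self.R = np.diag([sigma_wifi**2, sigma_wifi**2])
        self.H = np.array([[1.0, 0.0, 0.0],
                           [0.0, 1.0, 0.0]])     # WiFi observes x and y

    def predict(self, ds, dt_disp):
        """ds, dt_disp: relative displacement output by the inertial odometer."""
        x, y, psi = self.X
        c, s = np.cos(psi), np.sin(psi)
        self.X = np.array([x + c * ds - s * dt_disp,
                           y + s * ds + c * dt_disp,
                           psi])
        # Jacobian of the state equation with respect to [x, y, psi].
        F = np.array([[1.0, 0.0, -s * ds - c * dt_disp],
                      [0.0, 1.0,  c * ds - s * dt_disp],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, x_wifi, y_wifi):
        z = np.array([x_wifi, y_wifi])
        y_res = z - self.H @ self.X
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.X = self.X + K @ y_res
        self.P = (np.eye(3) - K @ self.H) @ self.P

# Usage: call predict() at the odometer rate and update() whenever a
# WiFi fingerprint fix becomes available.
ekf = WifiImuEKF(x0=0.0, y0=0.0)
ekf.predict(0.8, 0.1)
ekf.update(0.9, 0.05)
print(ekf.X)
```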
in summary, considering that the dead reckoning method based on the IMU can only predict a relatively accurate track, the positioning function is realized by relying on a magnetometer to perform initial heading calibration once, and the magnetometer is easily interfered by indoor magnetic materials, so that calibration fails. Aiming at the problem, the invention provides an extended Kalman filtering-based fusion positioning system which is robust to initial heading errors, and the system uses the prediction results of a fingerprint positioning model and a track tracking model as observation values, and the system state is set as the angle difference between the initial heading and the real heading of a pedestrian position and an inertial odometer. Through verification, the fusion positioning system can provide accurate positioning results in a local data set.
To further verify the performance of the invention, the cumulative distribution function (CDF) of the positioning error and the average positioning error are used for evaluation. The CDF describes the probability that a random variable takes a value less than or equal to a given value: the x-axis of the CDF curve represents the positioning error, and the y-axis represents the cumulative probability for each value on the x-axis, i.e., the probability that the error is less than or equal to that value. The faster the CDF curve rises, the smaller the positioning error for most samples, and vice versa.
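A small helper for computing such an empirical CDF from a list of per-sample positioning errors (the error values below are made up for illustration):

```python
import numpy as np

def cdf_curve(errors):
    """Return (x, y) points of the empirical CDF of a list of positioning
    errors: y[i] is the fraction of samples with error <= x[i]."""
    x = np.sort(np.asarray(errors, dtype=float))
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y

errs = [1.2, 0.8, 3.5, 2.1, 4.9, 1.7]
x, y = cdf_curve(errs)
print(np.interp(2.0, x, y))   # fraction of samples with error <= 2 m
```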
The deep inertial odometer network model is obtained by training on the RoNIN public dataset; the indoor positioning regression network model of the WiFi fingerprint positioning module is obtained by training on the autonomously acquired dataset; and the fusion positioning module uses the autonomously acquired dataset as test data to measure the performance of the fusion positioning system. Fig. 10 shows the cumulative distribution functions of the positioning error for the WiFi fingerprint positioning module, the inertial odometer module and the EKF fusion module (corresponding to the invention). As can be seen from Fig. 10, for both the WiFi fingerprint and the inertial odometer the probability of a positioning error within 2 meters is below 25%, the probability within 4 meters is below 60%, and the probability of an error greater than 5 meters exceeds 30%; for the EKF fusion, the probability of an error within 2 meters is about 40%, within 4 meters about 85%, and greater than 5 meters within 6%. The EKF fusion therefore achieves markedly higher positioning accuracy than either single technique and attains the best positioning performance.
Table 2 lists the average, minimum and maximum positioning errors of the three positioning techniques. Since the initial coordinates of the inertial odometer module and the EKF fusion module are set to the real coordinates of the pedestrian, the minimum positioning error is not reported for them. The average positioning error of the EKF fusion positioning method is 2.53 meters, an improvement of about 34% and 42% over WiFi fingerprint positioning and the inertial odometer alone, and the fusion method also greatly reduces the maximum positioning error. The average positioning error of the WiFi fingerprint is smaller, but its minimum and maximum positioning errors of 0.09 meters and 15.01 meters indicate the instability of WiFi positioning, which is prone to position jumps and discontinuous results. The maximum error of the inertial odometer module is 14.98 meters; the reason is that errors accumulate when the inertial odometer reconstructs the track and the initial heading estimated with the rotation-vector sensor has a large error. Fig. 11 shows the positioning results of one data sequence computed by the different modules.
Table 2: positioning error comparison of different algorithms
As can be seen from fig. 11, the WiFi fingerprint positioning results are more scattered and discontinuous, but are distributed substantially around the real track; the track calculated by the inertial odometer module is actually very similar to the real track in shape, which shows that the depth inertial odometer network model has higher performance, but the whole track is seriously deviated from the real track due to larger error between the initial heading and the real heading output by the rotation vector sensor, and the average positioning error is about 8.8 meters. As can be seen from FIG. 11, the overlap ratio of the EKF fusion positioning result and the real track is high, the average positioning error is about 1.6 meters, and is improved by 81.8% compared with the inertial odometer, so that the influence of the initial heading error on the positioning performance can be effectively reduced by the EKF fusion system.
To further illustrate the robustness of the EKF fusion system to the initial heading error of the inertial odometer module, different offset angles are added to its initial heading, ranging from -180 degrees to 180 degrees with a step of 10 degrees, the counter-clockwise direction being positive; the experimental results are shown in Fig. 12. As can be seen from Fig. 12, as the offset angle increases the error of the inertial odometer module first decreases and then increases, with the best result at an offset of about -20 degrees, and its positioning error varies greatly. The WiFi fingerprint positioning module is unaffected by the offset angle and remains at a stable value. The EKF fusion module achieves its best positioning between offset angles of -140 degrees and 140 degrees, with a positioning error smaller than that of WiFi fingerprint positioning overall; when the offset angle is greater than 140 degrees or less than -140 degrees, the EKF fusion performance falls below that of WiFi fingerprint positioning, mainly because the EKF fusion system needs time to adjust and essentially loses performance in the early stage of positioning. Fig. 13 compares the tracks of one data sequence with the heading angle offset clockwise by 30, 60, 120 and 180 degrees; the four tracks calculated by the EKF fusion system differ only slightly at the beginning and are essentially identical afterwards.
In summary, the invention effectively fuses the inertial sensor information integrated in a portable smart device with the received WiFi fingerprint information, improving the accuracy and robustness of indoor positioning, and has the following advantages over the prior art:
1) For indoor positioning based on WiFi fingerprints, the WiFi fingerprint data are rearranged by extracting the rough position information of the APs to reduce the influence of the high sparsity of WiFi fingerprints. In addition, a regression network model based on a denoising auto-encoder and a convolutional neural network is provided to address the problems that WiFi data fluctuate over time and are easily blocked by the human body; the model extracts position-related features that are insensitive to noise from noisy WiFi fingerprint data, improving robustness to noisy data.
2) For the deep-learning-based deep inertial odometry method, inertial sensor data with a fixed window length are used as input, avoiding the gait false detections and missed detections of traditional PDR. The model uses two branches with different convolution kernel sizes to extract feature information at different scales, each branch containing a convolution layer and a bidirectional long short-term memory network to extract spatial and temporal information.
3) A WiFi fingerprint and inertial odometer fusion positioning method based on extended Kalman filtering is provided; by setting the fusion system state to the WiFi-fingerprint-based pedestrian position and the angle difference between the initial heading of the inertial odometer and the true heading, the influence of the initial heading error of the inertial odometer on the fusion system is effectively eliminated.
The present invention may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: portable computer disks, hard disks, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static Random Access Memory (SRAM), portable compact disk read-only memory (CD-ROM), digital Versatile Disks (DVD), memory sticks, floppy disks, mechanical coding devices, punch cards or in-groove structures such as punch cards or grooves having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present invention may be assembly instructions, instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, c++, python, and the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information for computer readable program instructions, which can execute the computer readable program instructions.
Various aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are all equivalent.
The foregoing description of embodiments of the invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvements in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (10)

1. An indoor positioning and navigation method based on WiFi fingerprint and inertial sensor information fusion comprises the following steps:
collecting WiFi fingerprint data of target equipment, inputting the WiFi fingerprint data into a trained fingerprint positioning model, and obtaining a global coordinate observation value;
acquiring acceleration signals and angular velocity signals of target equipment by using an inertial sensor, and inputting the acceleration signals and the angular velocity signals into a trained track tracking model to obtain a relative displacement observed value, wherein the relative displacement observed value reflects the moving track of the target equipment;
and fusing the global coordinate observation value and the relative displacement observation value by using an extended Kalman filter to obtain fused position information as a positioning result of the target equipment.
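By way of illustration only, the Python sketch below shows how the three steps of claim 1 could be wired together. The `fingerprint_model`, `tracking_model` and `fuse` callables are hypothetical stand-ins for the trained models and the extended Kalman filter detailed in the later claims, not the patented implementation.

```python
import numpy as np

def fuse_position(wifi_rssi, imu_window, fingerprint_model, tracking_model, fuse):
    """One positioning step of the claim-1 pipeline, with hypothetical callables
    standing in for the trained models and the EKF of the later claims."""
    x_wifi, y_wifi = fingerprint_model(wifi_rssi)   # global coordinate observation
    ds_x, ds_y = tracking_model(imu_window)         # relative displacement observation
    return fuse((x_wifi, y_wifi), (ds_x, ds_y))     # fused position estimate

if __name__ == "__main__":
    # Stand-in models for demonstration: the fingerprint "model" returns a fixed
    # fix, the tracking "model" a fixed step, and the "fusion" simply averages
    # the WiFi fix with the dead-reckoned position.
    last = np.array([1.0, 2.0])
    wifi_model = lambda rssi: (1.2, 2.1)
    imu_model = lambda window: (0.3, 0.1)
    fuse = lambda wifi, step: 0.5 * (np.array(wifi) + (last + np.array(step)))
    print(fuse_position(np.zeros(8), np.zeros((100, 6)), wifi_model, imu_model, fuse))
```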
2. The method of claim 1, wherein the fingerprint positioning model comprises a denoising autoencoder and a regression network model, wherein the denoising autoencoder is provided with an encoder and a decoder, the encoder is used for extracting a feature map from the WiFi fingerprint data, and the decoder reconstructs the feature map into full-size fingerprint data; the regression network model takes the feature map extracted by the encoder as input and obtains a fingerprint positioning result through regression calculation.
3. The method of claim 2, wherein the encoder comprises two convolutional layers and a pooling layer, and the decoder comprises an upsampling layer and a convolutional layer; the regression network model comprises three convolutional layers, two pooling layers and a fully-connected module, wherein the three convolutional layers are used for extracting spatial features of the WiFi fingerprint data, and the fully-connected module performs regression calculation to obtain the fingerprint positioning result; the fully-connected module comprises a plurality of fully-connected layers, and each fully-connected layer except the last one is followed by a batch normalization layer and a tanh activation function.
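The following PyTorch sketch illustrates one possible realization of the denoising autoencoder and regression network of claims 2 and 3. The framework, the 16×16 arrangement of the fingerprint vector, the channel counts and the kernel sizes are all illustrative assumptions; the claims do not specify them.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Encoder: two conv layers + one pooling layer; decoder: upsampling + conv
    (claim 3).  The 16x16 fingerprint grid and channel counts are assumptions."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                             # 16x16 -> 8x8 feature map
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2),                 # 8x8 -> 16x16
            nn.Conv2d(32, 1, kernel_size=3, padding=1),  # reconstruct full-size fingerprint
        )

    def forward(self, x):
        feat = self.encoder(x)
        return self.decoder(feat), feat


class RegressionNet(nn.Module):
    """Three conv layers, two pooling layers and a fully-connected module whose
    hidden layer is followed by BatchNorm + tanh (claim 3)."""
    def __init__(self, in_ch=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 2 * 2, 64), nn.BatchNorm1d(64), nn.Tanh(),
            nn.Linear(64, 2),                            # (x, y) coordinates
        )

    def forward(self, feat):
        return self.fc(self.features(feat))


if __name__ == "__main__":
    dae, reg = DenoisingAutoencoder(), RegressionNet()
    fp = torch.randn(4, 1, 16, 16)                       # a batch of fingerprint "images"
    recon, feat = dae(fp)
    print(recon.shape, reg(feat).shape)                  # (4,1,16,16) (4,2)
```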
4. The method according to claim 1, wherein the trajectory tracking model comprises two branches, each branch being provided with a one-dimensional convolutional layer, a Convolutional Block Attention Module (CBAM) and a bidirectional LSTM; the input data of the trajectory tracking model are the tri-axial acceleration data and tri-axial angular velocity data acquired by the inertial sensor, which are fed into the one-dimensional convolutional layers of the two branches to extract local dependencies between variables and short-term features in the time dimension, each convolutional layer being followed by a batch normalization layer, a ReLU layer and a max pooling layer; on each branch, the input features are refined using the CBAM, which combines a channel attention mechanism and a spatial attention mechanism to enhance useful information in the spatial and channel dimensions; on each branch, long-term features in the time dimension are extracted using the bidirectional LSTM; next, a temporal attention module extracts useful features in the time dimension from the many-to-many output of the bidirectional LSTM; and regression calculation is performed using a plurality of fully-connected layers to obtain the relative displacement.
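The sketch below outlines one way the two-branch trajectory tracking model of claim 4 might be assembled in PyTorch. The channel counts, kernel sizes, LSTM width, the compact 1-D CBAM variant and the softmax-based temporal attention pooling are illustrative assumptions rather than the patented architecture.

```python
import torch
import torch.nn as nn

class CBAM1d(nn.Module):
    """Compact 1-D CBAM: channel attention followed by temporal (spatial) attention."""
    def __init__(self, channels, reduction=4, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv1d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                        # x: (B, C, T)
        avg, mx = self.mlp(x.mean(dim=2)), self.mlp(x.amax(dim=2))
        x = x * torch.sigmoid(avg + mx).unsqueeze(2)            # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)     # (B, 2, T)
        return x * torch.sigmoid(self.spatial(s))               # temporal attention

class Branch(nn.Module):
    """1-D conv + BN + ReLU + max pooling, CBAM, bidirectional LSTM, then a
    temporal-attention pooling over the many-to-many LSTM output."""
    def __init__(self, in_ch=3, conv_ch=32, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_ch, conv_ch, kernel_size=5, padding=2),
            nn.BatchNorm1d(conv_ch), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.cbam = CBAM1d(conv_ch)
        self.lstm = nn.LSTM(conv_ch, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)    # temporal attention scores

    def forward(self, x):                        # x: (B, 3, T)
        h = self.cbam(self.conv(x))              # (B, C, T/2)
        out, _ = self.lstm(h.transpose(1, 2))    # (B, T/2, 2*hidden)
        w = torch.softmax(self.score(out), dim=1)
        return (w * out).sum(dim=1)              # attention-weighted summary

class TrajectoryNet(nn.Module):
    """Two branches (accelerometer, gyroscope) concatenated and regressed by
    fully-connected layers to a 2-D relative displacement."""
    def __init__(self, hidden=64):
        super().__init__()
        self.acc_branch, self.gyr_branch = Branch(), Branch()
        self.head = nn.Sequential(nn.Linear(4 * hidden, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, acc, gyr):                 # each: (B, 3, T)
        return self.head(torch.cat([self.acc_branch(acc), self.gyr_branch(gyr)], dim=1))

if __name__ == "__main__":
    net = TrajectoryNet()
    acc, gyr = torch.randn(2, 3, 200), torch.randn(2, 3, 200)
    print(net(acc, gyr).shape)                   # torch.Size([2, 2])
```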
5. The method of claim 1, wherein the fingerprint positioning model is trained according to the steps of:
acquiring a WiFi fingerprint data set, wherein the WiFi fingerprint data set comprises the position coordinates of reference points within the coverage area of the access points, and the position coordinates of the reference points are used as labels;
adding noise to the WiFi fingerprint data, wherein the added noise comprises masking noise and Gaussian noise, the masking noise randomly sets part of the data to a fixed value to simulate signal changes caused by random occlusion around the target device, and the Gaussian noise is noise following a normal distribution;
normalizing the noise-augmented WiFi fingerprint data;
calculating a coarse position for each access point, and reordering the access points along a predetermined direction based on the coarse positions;
normalizing the position coordinates of the reference points serving as labels, thereby obtaining an enhanced WiFi fingerprint data set;
training the fingerprint positioning model using the enhanced WiFi fingerprint data set.
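A minimal NumPy sketch of the noise augmentation and normalization steps of claim 5 follows. The mask ratio, the Gaussian standard deviation, the -100 dBm fill value and the min-max normalization are illustrative assumptions; the access-point reordering appears separately in the claim-7 sketch below.

```python
import numpy as np

def augment_fingerprints(rssi, mask_ratio=0.1, sigma=2.0, fill_value=-100.0, seed=0):
    """Add masking noise and Gaussian noise to a matrix of RSSI fingerprints
    (rows = samples, columns = access points).  Parameter values are assumptions."""
    rng = np.random.default_rng(seed)
    noisy = rssi + rng.normal(0.0, sigma, size=rssi.shape)   # Gaussian noise
    mask = rng.random(rssi.shape) < mask_ratio               # masking noise:
    noisy[mask] = fill_value                                 # simulate blocked APs
    return noisy

def min_max_normalize(x):
    """Scale data (fingerprints or label coordinates) into [0, 1] column-wise."""
    lo, hi = x.min(axis=0), x.max(axis=0)
    return (x - lo) / np.where(hi - lo == 0, 1.0, hi - lo)

if __name__ == "__main__":
    rssi = np.random.uniform(-90, -30, size=(5, 8))          # 5 samples x 8 APs
    labels = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [2.0, 2.5], [4.0, 1.0]])
    print(min_max_normalize(augment_fingerprints(rssi)).shape)
    print(min_max_normalize(labels))
```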
6. The method of claim 1, wherein the trajectory tracking model is trained according to the steps of:
acquiring an inertial navigation data set, wherein the inertial navigation data set comprises inertial sensor data and corresponding 3D motion tracks, and the inertial sensor data comprise acceleration signals and angular velocity signals;
performing linear interpolation, coordinate conversion and data segmentation on the inertial sensor data to obtain inertial sensor data sequences;
performing linear interpolation and data segmentation on the 3D motion tracks to obtain average speed label vectors;
inputting the inertial sensor data sequences into the trajectory tracking model and training it using the average speed label vectors as reference labels for the average speed.
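The NumPy sketch below illustrates the data preparation of claim 6: linear interpolation onto a uniform time grid, fixed-length segmentation, and average-speed labels derived from the ground-truth trajectory. The 100 Hz rate, the 1 s window, the use of a planar average-velocity vector as the label, and the omission of the coordinate conversion step are illustrative simplifications.

```python
import numpy as np

def resample(timestamps, values, rate_hz=100.0):
    """Linearly interpolate a multi-column signal onto a uniform time grid."""
    t_uniform = np.arange(timestamps[0], timestamps[-1], 1.0 / rate_hz)
    cols = [np.interp(t_uniform, timestamps, values[:, i]) for i in range(values.shape[1])]
    return t_uniform, np.stack(cols, axis=1)

def segment_with_speed_labels(t, imu, positions, window_s=1.0, rate_hz=100.0):
    """Cut the resampled IMU stream into fixed-length windows and label each
    window with the average planar velocity from the ground-truth trajectory."""
    n = int(window_s * rate_hz)
    windows, labels = [], []
    for start in range(0, len(t) - n, n):
        end = start + n
        windows.append(imu[start:end])
        disp = positions[end - 1, :2] - positions[start, :2]   # planar displacement
        labels.append(disp / window_s)                         # average velocity vector
    return np.stack(windows), np.stack(labels)

if __name__ == "__main__":
    t_raw = np.sort(np.random.uniform(0, 10, 500))
    imu_raw = np.random.randn(500, 6)            # 3-axis accel + 3-axis gyro
    pos_raw = np.cumsum(np.random.randn(500, 3) * 0.01, axis=0)
    t, imu = resample(t_raw, imu_raw)
    _, pos = resample(t_raw, pos_raw)
    X, y = segment_with_speed_labels(t, imu, pos)
    print(X.shape, y.shape)                      # e.g. (9, 100, 6) (9, 2)
```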
7. The method of claim 5, wherein the coarse position of the access point is calculated according to the steps of:
denoting the i-th access point in the WiFi fingerprint data set as $AP_i$, and finding from the WiFi fingerprint data set the reference points (RP) covered by the signal of $AP_i$, expressed as:
$RP_i = \{rp_i^1, \; rp_i^2, \; \dots, \; rp_i^{K_i}\}$
wherein $rp_i^k$ represents the k-th RP covered by the signal of $AP_i$ and $K_i$ is the number of such RPs;
acquiring from the WiFi fingerprint data set the corresponding list of RP position coordinates, expressed as:
$Pos_i = \{pos_i^1, \; pos_i^2, \; \dots, \; pos_i^{K_i}\}$
wherein $pos_i^k$ represents the position coordinates of $rp_i^k$;
calculating the coarse position coordinates of $AP_i$ as the mean of the covered RP coordinates:
$pos_i = \frac{1}{K_i} \sum_{k=1}^{K_i} pos_i^k$
wherein $pos_i$ represents the coarse position of the i-th access point $AP_i$.
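A minimal sketch of the coarse access-point position computation of claim 7, assuming an RSSI matrix with one row per reference point and one column per access point. The -95 dBm detection threshold and the final reordering along the x axis are illustrative assumptions.

```python
import numpy as np

def coarse_ap_positions(rssi, rp_coords, detected_threshold=-95.0):
    """For each access point, average the coordinates of the reference points
    whose fingerprints contain a usable signal from that AP."""
    coarse = np.zeros((rssi.shape[1], 2))
    for i in range(rssi.shape[1]):
        covered = rssi[:, i] > detected_threshold       # RPs covered by AP_i's signal
        if covered.any():
            coarse[i] = rp_coords[covered].mean(axis=0) # centroid of covered RPs
    return coarse

if __name__ == "__main__":
    rssi = np.random.uniform(-100, -40, size=(20, 5))   # 20 RPs x 5 APs
    rp_coords = np.random.uniform(0, 30, size=(20, 2))
    aps = coarse_ap_positions(rssi, rp_coords)
    order = np.argsort(aps[:, 0])                       # reorder APs along one direction
    print(aps[order])
```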
8. The method of claim 6, wherein the fused location information is obtained according to the steps of:
selecting a state vector describing the state of the target device, comprising the position coordinates x, y and a heading angle ψ, expressed as:
$X = [x \;\; y \;\; \psi]^T$
setting an observation vector, expressed as:
$Z = [x_{wifi} \;\; y_{wifi} \;\; s_{imu} \;\; t_{imu}]^T$
wherein $x_{wifi}$ and $y_{wifi}$ are the coordinates obtained by the fingerprint positioning model, and $s_{imu}$ and $t_{imu}$ are the accumulated relative displacement on the two-dimensional plane output by the trajectory tracking model; the state equation at time k is:
$X_k = f(X_{k-1}) + W_{k-1}$
wherein $W_{k-1}$ is the process noise at time k-1, and the Jacobian matrix of the state equation is expressed as:
$F_{k-1} = \left.\frac{\partial f}{\partial X}\right|_{X = X_{k-1}}$
the observation matrix is expressed as:
$H_k = \left.\frac{\partial h}{\partial X}\right|_{X = X_k}$
wherein $h(\cdot)$ is the observation function mapping the state vector to the observation vector; the linearized state equation and observation equation are expressed as:
$X_k \approx F_{k-1} X_{k-1} + W_{k-1}, \qquad Z_k \approx H_k X_k + V_k$
wherein $V_k$ is the observation noise; the covariance matrix Q corresponding to $W_{k-1}$ and the covariance matrix R corresponding to the observation noise $V_k$ are expressed as:
$Q = E[W_{k-1} W_{k-1}^T], \qquad R = \mathrm{diag}(\sigma_{x,wifi}^2, \; \sigma_{y,wifi}^2, \; \sigma_s^2, \; \sigma_t^2)$
wherein $\sigma_{x,wifi}^2$ and $\sigma_{y,wifi}^2$ are the fingerprint positioning variances, $\sigma_s^2$ and $\sigma_t^2$ are the variances of the relative displacement, and an initial state covariance matrix P is set.
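The NumPy sketch below shows one common way to realize the claim-8 fusion with an extended Kalman filter. Because the patent's exact process and observation models are not reproduced in the text, this simplified variant treats the IMU-derived step length and heading change as control inputs and the WiFi fix as the measurement; the noise covariances and initial state are illustrative.

```python
import numpy as np

class SimpleEKF:
    """State X = [x, y, psi]^T.  Prediction uses an IMU-derived step length s and
    heading change dpsi; the update uses the WiFi fingerprint fix (x_wifi, y_wifi).
    A simplified arrangement of the claim-8 fusion, not the patented matrices."""
    def __init__(self, x0, P0, Q, R):
        self.X, self.P, self.Q, self.R = np.array(x0, float), np.array(P0, float), Q, R

    def predict(self, s, dpsi):
        x, y, psi = self.X
        psi_new = psi + dpsi
        self.X = np.array([x + s * np.cos(psi_new), y + s * np.sin(psi_new), psi_new])
        # Jacobian of the motion model with respect to the state.
        F = np.array([[1, 0, -s * np.sin(psi_new)],
                      [0, 1,  s * np.cos(psi_new)],
                      [0, 0,  1]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, x_wifi, y_wifi):
        H = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])              # WiFi observes position only
        z = np.array([x_wifi, y_wifi])
        innovation = z - H @ self.X
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)          # Kalman gain
        self.X = self.X + K @ innovation
        self.P = (np.eye(3) - K @ H) @ self.P
        return self.X

if __name__ == "__main__":
    ekf = SimpleEKF(x0=[0, 0, 0], P0=np.eye(3),
                    Q=np.diag([0.05, 0.05, 0.01]), R=np.diag([4.0, 4.0]))
    ekf.predict(s=0.7, dpsi=0.05)                    # one step from the IMU branch
    print(ekf.update(x_wifi=0.9, y_wifi=0.2))        # fused position after a WiFi fix
```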
9. A computer readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
10. A computer device comprising a memory and a processor, the memory storing a computer program executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 8 when executing the computer program.
CN202310786281.7A 2023-06-29 2023-06-29 Indoor positioning and navigation method based on WiFi fingerprint and inertial sensor information fusion Pending CN117098224A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310786281.7A CN117098224A (en) 2023-06-29 2023-06-29 Indoor positioning and navigation method based on WiFi fingerprint and inertial sensor information fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310786281.7A CN117098224A (en) 2023-06-29 2023-06-29 Indoor positioning and navigation method based on WiFi fingerprint and inertial sensor information fusion

Publications (1)

Publication Number Publication Date
CN117098224A true CN117098224A (en) 2023-11-21

Family

ID=88778441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310786281.7A Pending CN117098224A (en) 2023-06-29 2023-06-29 Indoor positioning and navigation method based on WiFi fingerprint and inertial sensor information fusion

Country Status (1)

Country Link
CN (1) CN117098224A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination