CN112653991A - WLAN indoor positioning method of TebNet neural network model based on deep learning - Google Patents
- Publication number
- CN112653991A (application CN202011531295.7A)
- Authority
- CN
- China
- Prior art keywords
- data
- tebnet
- neural network
- indoor positioning
- deep learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/021—Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/30—Services specially adapted for particular environments, situations or purposes
- H04W4/33—Services specially adapted for particular environments, situations or purposes for indoor environments, e.g. buildings
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W64/00—Locating users or terminals or network equipment for network management purposes, e.g. mobility management
Abstract
A WLAN indoor positioning method based on a deep-learning TebNet neural network model, relating to the field of indoor positioning. In the offline stage, exploratory data analysis (EDA) is performed: the matplotlib data-analysis tool is used to explore the patterns and distribution of the data, and feature engineering is used to select and synthesize features to generate statistical features; the data are then trained with methods such as cross-validation to finally obtain a positioning prediction model. The invention can reinforce its learning as data accumulate, and has stronger environmental adaptivity and positioning accuracy.
Description
Technical Field
The invention relates to the field of WLAN indoor positioning, in particular to a WLAN indoor positioning method.
Background
In recent years, breakthroughs in communication technology and smart devices have accelerated the development of services derived from intelligent mobile terminals, and at the same time people's demand for stable and effective positioning services keeps growing. Statistics show that people spend most of their time indoors, yet a traditional GPS positioning system cannot provide reliable positioning in indoor scenes with heavy building occlusion, so indoor positioning has become an increasingly popular research direction in recent years.
Because the number of Wi-Fi base stations in people's living environment grows by the day and the equipment is easy to deploy and low in cost, Wi-Fi indoor positioning offers convenient data acquisition, a simple positioning model, and freedom from cumulative error; however, multipath effects and the attenuation of Wi-Fi signals passing through objects generally keep the accuracy of Wi-Fi positioning systems low. Most mainstream Wi-Fi indoor positioning research currently adopts location-fingerprint matching: in the offline stage, the RSSI of each AP (access point) is collected at reference points and stored, together with the reference-point coordinates, as fingerprints in a fingerprint database; in the online stage, real-time position information is obtained by matching the user's real-time location fingerprint against the offline database. However, the workload of collecting offline fingerprint data is enormous: in principle every position a user may occupy must be surveyed in advance and stored in the fingerprint database, so positioning systems built this way are severely limited in space and hard to extend and popularize.
At present, deep neural networks have achieved great success on images, text and audio, but tabular datasets remain the stronghold of boosted-tree models: in numerous data-mining competitions, XGBoost and LightGBM have become the first choice among many algorithms thanks to their good fitting of hyperplane boundaries, interpretability, and fast training on tabular data. For a conventional DNN, however, simply stacking network layers easily over-parameterizes the model, so DNN performance on tabular datasets is often unsatisfactory. In August 2019, the TabNet network proposed by Arık et al. retained the end-to-end and representation-learning features of DNNs while gaining the interpretability of tree models and sparse feature selection, and it is becoming a first choice for tabular-data tasks.
In view of the problems of existing Wi-Fi indoor positioning systems and the performance breakthrough of this new tabular model, the invention provides a WLAN indoor positioning method based on a deep-learning TebNet neural network model.
Disclosure of Invention
A WLAN indoor positioning method based on a deep learning TebNet neural network model is characterized by comprising the following steps:
(1) establishing an indoor coordinate system according to a reference image of the area to be positioned, setting reference points at fixed intervals, recording the coordinates of each reference point, and collecting RSSI data, base-station MAC addresses, and CSI amplitude and phase information.
(2) Using the matplotlib tool to generate the statistical-data-analysis heat map shown in FIG. 2, from which missing values (black areas) and abnormal values (white areas) of the statistical data are obtained; performing data preprocessing with the k-nearest-neighbor algorithm and median filtering; the preprocessed statistical data form the training set of location-fingerprint data.
(3) Preprocessing the statistical data: estimating missing values with a distance-based filling method, the k-nearest-neighbor method, according to Euclidean and Mahalanobis distance functions; detecting abnormal values with the Grubbs test, labeling them, and repeating until no abnormal value remains; the preprocessed fingerprint dataset is set as the training set.
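A minimal sketch of these two preprocessing steps, using only NumPy and SciPy; `knn_impute` and `grubbs_outlier` are illustrative helper names, the imputer uses Euclidean distance only (a Mahalanobis variant would weight the metric by the inverse covariance), and the neighbor count and significance level are assumptions:

```python
import numpy as np
from scipy import stats

def knn_impute(X, k=3):
    """Fill NaNs in each row with the column means of the k nearest
    complete rows (Euclidean distance over the observed columns)."""
    X = X.astype(float).copy()
    complete = X[~np.isnan(X).any(axis=1)]
    for r in range(X.shape[0]):
        miss = np.isnan(X[r])
        if not miss.any():
            continue
        d = np.linalg.norm(complete[:, ~miss] - X[r, ~miss], axis=1)
        nearest = complete[np.argsort(d)[:k]]
        X[r, miss] = nearest[:, miss].mean(axis=0)
    return X

def grubbs_outlier(x, alpha=0.05):
    """Two-sided Grubbs test: index of the most extreme point if it is
    an outlier at level alpha, else None."""
    n = len(x)
    idx = int(np.argmax(np.abs(x - x.mean())))
    g = abs(x[idx] - x.mean()) / x.std(ddof=1)
    t2 = stats.t.ppf(1 - alpha / (2 * n), n - 2) ** 2
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t2 / (n - 2 + t2))
    return idx if g > g_crit else None
```

Repeating `grubbs_outlier` until it returns None, removing or relabeling the flagged sample each time, mirrors the repeat-until-clean loop described above.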
(4) Feature engineering and adversarial-validation training of the data, comprising the following steps:
1) introducing a regularization term on the preprocessed data to enhance model sparsity, the regularization term being calculated as:

L_sparse = (1/(N_steps·B)) · Σ_{i=1}^{N_steps} Σ_{b=1}^{B} Σ_{j=1}^{D} ( −M_{b,j}[i] · log(M_{b,j}[i] + ε) )

where N_steps is the total number of time steps; B and D are the dimensions of the Mask matrix, corresponding respectively to the number of rows of the statistical data in (3) and the number of base stations (i.e., columns) with distinct MAC addresses; Mask is a mask matrix composed of 0s and 1s; M_{b,j}[i] is the attention weight assigned at the i-th time step (i = 1, 2, …, N_steps) to the sample of fingerprint data at row b (b = 1, 2, …, B) and base-station column j (j = 1, 2, …, D). The attention weights are obtained by inputting the training set into the sparse probabilistic activation function Sparsemax, a sparse variant of Softmax; ε is Gaussian white noise. The regularization term computes an average entropy over all terms, reflecting the sparsity of M_{b,j}[i], and after training it is set as the instance-wise parameter of the TabNet attentive-transformer layer.
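The entropy regularizer, as reconstructed here following the published TabNet formulation (an assumption, since the patent's original formula image is not reproduced), can be computed directly from the stacked attention masks:

```python
import numpy as np

def sparse_reg(masks, eps=1e-5):
    """Mean-entropy sparsity regulariser L_sparse over attention masks.
    masks: (N_steps, B, D) array, each row of M[i] summing to 1.
    Low values mean each step attends to only a few features."""
    n_steps, b, _ = masks.shape
    return float(-(masks * np.log(masks + eps)).sum() / (n_steps * b))
```

A one-hot mask (fully sparse attention) gives a value near zero, while a uniform mask gives the maximum entropy log D.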
2) The Borderline-SMOTE algorithm is adopted to mitigate data imbalance.
An encoder model for the tabular data is trained with self-supervised learning, using the normalized mean-squared error:

L_rec = Σ_{b=1}^{B} Σ_{j=1}^{D} | (f̂_{b,j} − f_{b,j}) · S_{b,j} / std(f_j) |²

where S_{b,j} ∈ {0,1}^{B×D} is the mask matrix; B, D are the matrix dimensions; b (b = 1, 2, …, B) and j (j = 1, 2, …, D) index the rows and columns of the mask matrix; f_{b,j} is the feature value at row b, column j; and f̂_{b,j} is the encoder's feature output for the masked input (1 − S) · f_{b,j}. The vector f_{b,j} that does not pass through the FC fully connected layer is fed to the decoder model; summing over all N_steps time steps yields the reconstructed feature f_new, which is added to the fingerprint-data training set.
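Under the same assumption (the published TabNet self-supervised objective, with per-column standard-deviation scaling), the normalized reconstruction error can be sketched as:

```python
import numpy as np

def recon_loss(f_hat, f, S):
    """Normalised MSE between decoder output f_hat and features f,
    restricted by the binary mask S (1 = entry was masked out); each
    column is scaled by its std so heterogeneous features weigh equally."""
    std = f.std(axis=0)
    std = np.where(std == 0, 1.0, std)   # guard constant columns
    return float((((f_hat - f) * S / std) ** 2).sum())
```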
3) Performing balance analysis on the data obtained in 2): monitoring the Wi-Fi signal strength and taking base stations whose signal strength is greater than −50 dBm as real-time data sources; collecting fingerprint data at the reference points as a test set; adding a distinguishing label to the test set from step 1), merging it with the fingerprint data, and training a LightGBM model to predict that label; the distributions are considered balanced when the AUC lies within 0.4–0.6.
(5) To ensure the interpretability of TabNet, the global importance of the feature attributes f_{b,j} balanced in (4) must be solved. The normalized global feature importance M_agg is expressed as:

η_b[i] = Σ_{c=1}^{N_d} ReLU(d_{b,c}[i]),   M_agg-b,j = Σ_{i=1}^{N_steps} η_b[i] · M_{b,j}[i] / Σ_{j=1}^{D} Σ_{i=1}^{N_steps} η_b[i] · M_{b,j}[i]

where η_b[i] denotes the contribution of the i-th time step (i = 1, 2, …, N_steps) to the final result; N_d is the width of the decision layer; ReLU(·) is the linear rectification function; d_{b,c}[i] is the c-th decision output at step i, obtained by feeding the feature data f_{b,j} from (4) into the Feature-attributes module of the TabNet model; and M_{b,j}[i] is the attention weight at the i-th time step. The per-step contributions of feature f_{b,j}, weighted by the attention masks and normalized by the total contribution, give the global importance M_agg-b,j of the feature attribute, and after training the set M_agg is used as the Feature-attributes parameter of the TabNet network.
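Assuming the masks M[i] and per-step decision outputs d[i] are available as arrays, the aggregation above can be sketched in NumPy (`feature_importance` is an illustrative name):

```python
import numpy as np

def feature_importance(masks, decisions):
    """TabNet-style global feature importance.
    masks:     (N_steps, B, D) attention masks M_{b,j}[i]
    decisions: (N_steps, B, N_d) per-step decision outputs d_{b,c}[i]
    eta_b[i] = sum_c ReLU(d_{b,c}[i]) weighs step i for sample b; the
    weighted masks are summed over steps and normalised per sample."""
    eta = np.maximum(decisions, 0).sum(axis=2)        # (N_steps, B)
    agg = (eta[:, :, None] * masks).sum(axis=0)       # (B, D)
    return agg / agg.sum(axis=1, keepdims=True)
```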
(6) The regularization term L_sparse obtained in (4) and the global feature importance M_agg-b,j obtained in (5) are input into the TabNet model, and the remaining parameters are selected. n_d, n_a and n_steps are the key parameters determining model capacity: a time step n_steps of 3–10 is generally considered reasonable, and n_d and n_a, respectively the decision-prediction layer width and the attention-embedding mask-matrix width, are usually reasonably set equal (n_d = n_a). The optimizer is set to the Adam algorithm; a learning_rate of 0.01–0.001 is preferred, and 0.01 is chosen here. gamma determines the strength of sparse feature selection: when it is 1, the correlation of the mask matrices between layers is minimal; its value range is 1.0–2.0.
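As a concrete illustration of these ranges, a hypothetical configuration with the open-source pytorch-tabnet package might look as follows (the patent does not name its implementation, so the package, widths and n_steps are assumptions within the stated ranges):

```python
import torch
from pytorch_tabnet.tab_model import TabNetRegressor

model = TabNetRegressor(
    n_d=8, n_a=8,                  # n_d = n_a: decision / attention widths
    n_steps=5,                     # within the suggested 3-10
    gamma=1.3,                     # sparse-feature selection strength, 1.0-2.0
    lambda_sparse=1e-3,            # weight of the entropy regulariser L_sparse
    optimizer_fn=torch.optim.Adam,
    optimizer_params={"lr": 0.01}, # chosen learning rate
)
# model.fit(X_train, Y_train, eval_set=[(X_valid, Y_valid)])
# coords = model.predict(X_online)  # predicted coordinates for online RSSI rows
```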
(7) In the online stage, collecting the AP RSSI values, base-station MAC addresses and other data at the test points and inputting them into the model to obtain the specific test-point coordinates.
The WLAN indoor positioning method based on the deep-learning TebNet neural network model disclosed by the invention has stronger environmental adaptivity and positioning accuracy and reinforces its learning as data accumulate; its beneficial effects are as follows:
(1) adopting a deep neural network model gives the method a reinforcement-learning capability, so positioning accuracy and environmental adaptability improve as the data volume grows;
(2) the model predicts the positioning result, so location fingerprints need not be collected at every point in the positioning area, saving time and labor costs;
(3) the prediction model filters noise and abnormal values and fills missing values, improving positioning accuracy.
Drawings
FIG. 1 is a flow chart of a positioning algorithm model
FIG. 2 is a heat map for statistical data analysis
FIG. 3 is a sample-balance analysis chart of the data set
FIG. 4 is a general flow chart of the prediction of the present invention
Detailed Description
The indoor positioning scene is preset as a corridor and an elevator lobby, with an area of 80 square meters.
The general flow of the positioning prediction of the invention is shown in FIG. 4. In the offline stage, AP signal-strength RSSI data are collected at the reference points and exploratory data analysis (EDA) is performed; the matplotlib tool generates the statistical-data-analysis heat map shown in FIG. 2, from which the missing values (black areas) and abnormal values (white areas) of the statistical data can be read, so data preprocessing is required. Feature-engineering work is then performed on the preprocessed data: specific features are selected and synthesized to generate statistical features, which are handed to the TabNet neural network model, and the data are trained with cross-validation. The specific implementation steps are as follows:
(1) First, an indoor coordinate system is established; reference points are set at fixed intervals to collect RSSI data, base-station MAC addresses, and CSI amplitude and phase information; each reference point and its coordinates form an ordered vector, which is the location-fingerprint data of that reference point.
(2) Using the matplotlib tool to generate the statistical-data-analysis heat map shown in FIG. 2, from which missing values (black areas) and abnormal values (white areas) of the statistical data are obtained; performing data preprocessing with the k-nearest-neighbor algorithm and median filtering; the preprocessed statistical data form the training set of location-fingerprint data.
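The heat-map step can be sketched as follows; the RSSI values, the −95 dBm abnormality threshold and the file name are all illustrative, with missing readings rendered black and abnormal ones white as in FIG. 2:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                       # headless backend
import matplotlib.pyplot as plt

rssi = np.array([[-48., -71., np.nan],      # rows: reference points
                 [-50., np.nan, -88.],      # cols: APs (MAC addresses)
                 [-47., -70., -97.]])

# 0 = missing (black), 1 = abnormal (white), 0.5 = normal (grey)
status = np.where(np.isnan(rssi), 0.0,
                  np.where(rssi < -95.0, 1.0, 0.5))

fig, ax = plt.subplots()
ax.imshow(status, cmap="gray", vmin=0.0, vmax=1.0)
ax.set_xlabel("AP (MAC address)")
ax.set_ylabel("reference point")
fig.savefig("rssi_heatmap.png")
```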
(3) Performing feature-engineering work on the preprocessed data.
1) In order to enhance the model's capability for sparse feature selection, a regularization term is introduced:

L_sparse = (1/(N_steps·B)) · Σ_{i=1}^{N_steps} Σ_{b=1}^{B} Σ_{j=1}^{D} ( −M_{b,j}[i] · log(M_{b,j}[i] + ε) )

where N_steps is the total number of time steps; B and D are the dimensions of the Mask matrix, corresponding to the number of rows of the statistical data and the number of base stations (i.e., columns) with distinct MAC addresses; Mask is a mask matrix composed of 0s and 1s; M_{b,j}[i] is the attention weight assigned at the i-th time step (i = 1, 2, …, N_steps) to the sample at row b (b = 1, 2, …, B) and base-station column j (j = 1, 2, …, D). The attention weights are produced by the sparse probabilistic activation function Sparsemax, a sparse variant of Softmax; ε is Gaussian white noise. The regularization term computes an average entropy over all terms, reflecting the sparsity of M_{b,j}[i], and after training it is set as the instance-wise parameter of the TabNet attentive-transformer layer.
2) Model training adopts the Borderline-SMOTE algorithm to mitigate data imbalance; the sample-balance analysis is shown in FIG. 3. An encoder model for the tabular data is trained with self-supervised learning, using the normalized mean-squared error:

L_rec = Σ_{b=1}^{B} Σ_{j=1}^{D} | (f̂_{b,j} − f_{b,j}) · S_{b,j} / std(f_j) |²

where S_{b,j} ∈ {0,1}^{B×D} is the mask matrix; B, D are the matrix dimensions; b (b = 1, 2, …, B) and j (j = 1, 2, …, D) index the rows and columns of the mask matrix; f_{b,j} is the feature value at row b, column j; and f̂_{b,j} is the encoder's feature output for the masked input (1 − S) · f_{b,j}. The vector f_{b,j} that does not pass through the FC fully connected layer is fed to the decoder model; summing over all N_steps time steps yields the reconstructed feature f_new, which is added to the fingerprint-data training set.
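A minimal, self-contained sketch of the Borderline-SMOTE idea used above (a real pipeline would normally use a library implementation such as imbalanced-learn's `BorderlineSMOTE`; all names and parameter values here are illustrative):

```python
import numpy as np

def borderline_smote(X, y, minority=1, k=5, n_new=10, seed=0):
    """Oversample only the 'danger' minority points: those with at least
    half but not all of their k neighbours in the majority class,
    interpolating each toward a nearby minority neighbour."""
    rng = np.random.default_rng(seed)
    X_min = X[y == minority]
    danger = []
    for x in X_min:
        d = np.linalg.norm(X - x, axis=1)
        nn = y[np.argsort(d)[1:k + 1]]      # skip the point itself
        n_maj = int((nn != minority).sum())
        if k / 2 <= n_maj < k:              # borderline, but not pure noise
            danger.append(x)
    if not danger:
        return X, y
    synth = []
    for _ in range(n_new):
        x = danger[rng.integers(len(danger))]
        d = np.linalg.norm(X_min - x, axis=1)
        z = X_min[np.argsort(d)[1]]         # nearest other minority point
        synth.append(x + rng.random() * (z - x))
    return np.vstack([X, synth]), np.concatenate([y, np.full(n_new, minority)])
```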
3) Performing balance analysis on the data obtained in 2): monitoring the Wi-Fi signal strength and taking base stations whose signal strength is greater than −50 dBm as real-time data sources; collecting fingerprint data at the reference points as a test set; adding a distinguishing label to the test set from step 1), merging it with the fingerprint data, and training a LightGBM model to predict that label; the distributions are considered balanced when the AUC lies within 0.4–0.6.
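The adversarial-validation check can be illustrated without LightGBM by scoring rows with a trivial stand-in (distance to the training centroid) and computing the AUC by ranks; in the method above a LightGBM classifier would supply the scores, but the acceptance rule, AUC close to 0.5, is the same:

```python
import numpy as np

def auc(y_true, scores):
    """Rank-based AUC (Mann-Whitney U statistic), assuming no tied scores."""
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return float((ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0))

def adversarial_auc(X_train, X_test):
    """Label train rows 0 and test rows 1, score each row by its distance
    to the train centroid, and return the AUC; values near 0.5 suggest the
    two sets are drawn from similar distributions."""
    X = np.vstack([X_train, X_test])
    y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))]).astype(int)
    scores = np.linalg.norm(X - X_train.mean(axis=0), axis=1)
    return auc(y, scores)
```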
(4) To ensure the interpretability of TabNet, the global importance of the feature attributes f_{b,j} balanced in (3) must be solved. The normalized global feature importance M_agg is expressed as:

η_b[i] = Σ_{c=1}^{N_d} ReLU(d_{b,c}[i]),   M_agg-b,j = Σ_{i=1}^{N_steps} η_b[i] · M_{b,j}[i] / Σ_{j=1}^{D} Σ_{i=1}^{N_steps} η_b[i] · M_{b,j}[i]

where η_b[i] denotes the contribution of the i-th time step (i = 1, 2, …, N_steps) to the final result; N_d is the width of the decision layer; ReLU(·) is the linear rectification function; d_{b,c}[i] is the c-th decision output at step i, obtained by feeding the feature data f_{b,j} from (3) into the Feature-attributes module of the TabNet model; and M_{b,j}[i] is the attention weight at the i-th time step. The per-step contributions of feature f_{b,j}, weighted by the attention masks and normalized by the total contribution, give the global importance M_agg-b,j of the feature attribute, and after training the set M_agg is used as the Feature-attributes parameter of the TabNet network.
(5) The regularization term L_sparse obtained in (3) and the global feature importance M_agg-b,j obtained in (4) are input into the TabNet model, and the remaining parameters are selected. n_d, n_a and n_steps are the key parameters determining model capacity: a time step n_steps of 3–10 is generally considered reasonable, and n_d and n_a, respectively the decision-prediction layer width and the attention-embedding mask-matrix width, are usually reasonably set equal (n_d = n_a). The optimizer is set to the Adam algorithm; a learning_rate of 0.01–0.001 is preferred, and 0.01 is chosen here. gamma determines the strength of sparse feature selection: when it is 1, the correlation of the mask matrices between layers is minimal; its value range is 1.0–2.0.
(6) In the online stage, collecting the AP RSSI values, base-station MAC addresses and other data at the test points and inputting them into the model to obtain the specific test-point coordinates.
Claims (9)
1. A WLAN indoor positioning method based on a deep learning TebNet neural network model is characterized by comprising the following steps:
(1) establishing an indoor coordinate system;
(2) analyzing the data and generating a statistical-data-analysis heat map;
(3) preprocessing the data;
(4) performing feature-engineering work on the preprocessed data;
(5) performing adversarial-validation training of the data;
(6) fine-tuning the parameters;
(7) inputting into the model to obtain the specific test-point coordinates.
2. The WLAN indoor positioning method for the deep learning TebNet neural network model according to claim 1, wherein the implementation method of step (1) is as follows: establishing an indoor coordinate system according to a reference image of the area to be positioned, setting reference points at fixed intervals, recording the coordinates of each reference point, and collecting RSSI data, base-station MAC addresses, and CSI amplitude and phase information.
3. The WLAN indoor positioning method for the deep learning TebNet neural network model according to claim 1, wherein the implementation method of the step (2) is as follows: performing EDA data exploration, exploring data patterns and distribution with the matplotlib tool, and generating a statistical-data-analysis heat map.
4. The WLAN indoor positioning method for the deep learning TebNet neural network model according to claim 1, wherein the implementation method of the step (3) is as follows: processing the missing values in the black areas of the statistical-data heat map, estimating the missing values, and then testing all abnormal values.
5. The WLAN indoor positioning method for the deep learning TebNet neural network model according to claim 1, wherein the missing value estimation method is as follows: estimating missing values according to Euclidean and Mahalanobis distance functions by using a distance-based filling method, the k-nearest-neighbor method.
6. The WLAN indoor positioning method for the deep learning TebNet neural network model according to claim 5, wherein the abnormal value is detected by:
A. forming an ordered vector by the position fingerprint data acquired in the step (1) and the coordinates of the reference point, wherein the vector is the position fingerprint data of the reference point;
B. detecting abnormal values by using the Grubbs test, labeling them, and repeating until no abnormal value remains; the preprocessed fingerprint data set is set as the training set.
7. The WLAN indoor positioning method for the deep learning TebNet neural network model according to claim 1, wherein the step (4) is implemented by:
introducing a regularization term on the preprocessed data to enhance model sparsity, the regularization term being calculated as:

L_sparse = (1/(N_steps·B)) · Σ_{i=1}^{N_steps} Σ_{b=1}^{B} Σ_{j=1}^{D} ( −M_{b,j}[i] · log(M_{b,j}[i] + ε) )

wherein N_steps is the total number of time steps; B and D are the dimensions of the Mask matrix, corresponding to the number of rows of the statistical data in (3) and the number of base-station columns with distinct MAC addresses; Mask is a mask matrix composed of 0s and 1s; M_{b,j}[i] is the attention weight of the sample at row b (b = 1, 2, …, B) and column j (j = 1, 2, …, D) of the Mask matrix, which correspond respectively to the fingerprint data and the base-station MAC addresses, at time step i (i = 1, 2, …, N_steps); the attention weights are obtained by inputting the training set obtained in (3) into the sparse probabilistic activation function Sparsemax, a sparse variant of Softmax; ε is Gaussian white noise; the regularization term computes an average entropy over all terms, reflecting the sparsity of M_{b,j}[i], and after training it is set as the instance-wise parameter of the TabNet attentive-transformer layer.
8. The WLAN indoor positioning method for the deep learning TebNet neural network model according to claim 1, wherein the step (5) is implemented by:
1) improving data imbalance by adopting a Borderline SMOTE algorithm;
an encoder model for the tabular data is trained with self-supervised learning, using the normalized mean-squared error:

L_rec = Σ_{b=1}^{B} Σ_{j=1}^{D} | (f̂_{b,j} − f_{b,j}) · S_{b,j} / std(f_j) |²

wherein S_{b,j} ∈ {0,1}^{B×D} is the mask matrix; B, D are the matrix dimensions; b (b = 1, 2, …, B) and j (j = 1, 2, …, D) are the row and column indices of the mask matrix; f_{b,j} is the feature value at row b, column j of the dataset; f̂_{b,j} is the encoder's feature output for the masked input (1 − S) · f_{b,j}; the vector f_{b,j} that does not pass through the FC fully connected layer is fed to the decoder model, and summing over all N_steps time steps yields the reconstructed feature f_new, which is added to the fingerprint-data training set;
2) to ensure the interpretability of TabNet, the global importance of the feature attributes f_{b,j} balanced in 1) is solved; the normalized global feature importance is expressed as:

η_b[i] = Σ_{c=1}^{N_d} ReLU(d_{b,c}[i]),   M_agg-b,j = Σ_{i=1}^{N_steps} η_b[i] · M_{b,j}[i] / Σ_{j=1}^{D} Σ_{i=1}^{N_steps} η_b[i] · M_{b,j}[i]

wherein η_b[i] denotes the contribution of the i-th time step (i = 1, 2, …, N_steps) to the final result; N_d is the width of the decision layer; ReLU(·) is the linear rectification function; d_{b,c}[i] is the c-th decision output at step i, obtained by feeding the feature data f_{b,j} into the Feature-attributes module of the TabNet model; M_{b,j}[i] is the attention weight at the i-th time step; the per-step contributions of feature f_{b,j}, weighted by the attention masks and normalized by the total contribution, give the global importance M_agg-b,j of the feature attribute, and after training the set M_agg is used as the Feature-attributes parameter of the TabNet network.
9. The WLAN indoor positioning method for the deep learning TebNet neural network model according to claim 1, wherein the step (6) is implemented by: inputting the regularization term L_sparse obtained in (4) and the global feature importance M_agg-b,j obtained in (5) into the TabNet model, and selecting the remaining parameters: the time step n_steps is set within 3–10; the decision-prediction layer width n_d and the attention-embedding mask-matrix width n_a are set equal; the optimizer is the Adam algorithm with a learning_rate of 0.01–0.001; and gamma, which determines the strength of sparse feature selection, is set within 1.0–2.0.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011531295.7A CN112653991A (en) | 2020-12-23 | 2020-12-23 | WLAN indoor positioning method of TebNet neural network model based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011531295.7A CN112653991A (en) | 2020-12-23 | 2020-12-23 | WLAN indoor positioning method of TebNet neural network model based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112653991A true CN112653991A (en) | 2021-04-13 |
Family
ID=75359278
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011531295.7A Pending CN112653991A (en) | 2020-12-23 | 2020-12-23 | WLAN indoor positioning method of TebNet neural network model based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112653991A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113379818A (en) * | 2021-05-24 | 2021-09-10 | 四川大学 | Phase analysis method based on multi-scale attention mechanism network |
CN114004059A (en) * | 2021-09-24 | 2022-02-01 | 雅砻江流域水电开发有限公司 | Health portrait method for hydroelectric generating set |
CN114067256A (en) * | 2021-11-24 | 2022-02-18 | 西安交通大学 | Human body key point detection method and system based on Wi-Fi signals |
CN114143874A (en) * | 2021-12-06 | 2022-03-04 | 上海交通大学 | Accurate positioning method based on field intensity frequency of wireless base station |
CN114203306A (en) * | 2021-12-03 | 2022-03-18 | 医渡云(北京)技术有限公司 | Medical event prediction model training method, medical event prediction method and device |
CN115049053A (en) * | 2022-06-20 | 2022-09-13 | 航天宏图信息技术股份有限公司 | Loess region landslide susceptibility assessment method based on TabNet network |
CN117406170A (en) * | 2023-12-15 | 2024-01-16 | 中科华芯(东莞)科技有限公司 | Positioning method and system based on ultra-wideband |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101815308A (en) * | 2009-11-20 | 2010-08-25 | 哈尔滨工业大学 | WLAN indoor positioning method for neural network regional training |
CN108540929A (en) * | 2018-03-29 | 2018-09-14 | 马梓翔 | Indoor fingerprint location system based on the sequence of RSSI signal strengths |
CN109151995A (en) * | 2018-09-04 | 2019-01-04 | 电子科技大学 | A kind of deep learning recurrence fusion and positioning method based on signal strength |
CN110381440A (en) * | 2019-06-16 | 2019-10-25 | 西安电子科技大学 | The fingerprint indoor orientation method of joint RSS and CSI based on deep learning |
- 2020-12-23: Application CN202011531295.7A filed in China; published as CN112653991A; status: Pending
Non-Patent Citations (2)
Title |
---|
Sercan Ö. Arık et al.: "TabNet: Attentive Interpretable Tabular Learning", arXiv:1908.07442v5 * |
芒果冰麦: "EDA Data Exploration" (EDA数据探索), CSDN blog, https://blog.csdn.net/qq_43655841/article/details * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113379818A (en) * | 2021-05-24 | 2021-09-10 | 四川大学 | Phase analysis method based on multi-scale attention mechanism network |
CN113379818B (en) * | 2021-05-24 | 2022-06-07 | 四川大学 | Phase analysis method based on multi-scale attention mechanism network |
CN114004059A (en) * | 2021-09-24 | 2022-02-01 | 雅砻江流域水电开发有限公司 | Health portrait method for hydroelectric generating set |
CN114067256A (en) * | 2021-11-24 | 2022-02-18 | 西安交通大学 | Human body key point detection method and system based on Wi-Fi signals |
CN114067256B (en) * | 2021-11-24 | 2023-09-12 | 西安交通大学 | Wi-Fi signal-based human body key point detection method and system |
CN114203306A (en) * | 2021-12-03 | 2022-03-18 | 医渡云(北京)技术有限公司 | Medical event prediction model training method, medical event prediction method and device |
CN114143874A (en) * | 2021-12-06 | 2022-03-04 | 上海交通大学 | Accurate positioning method based on field intensity frequency of wireless base station |
CN114143874B (en) * | 2021-12-06 | 2022-09-23 | 上海交通大学 | Accurate positioning method based on field intensity frequency of wireless base station |
CN115049053A (en) * | 2022-06-20 | 2022-09-13 | 航天宏图信息技术股份有限公司 | Loess region landslide susceptibility assessment method based on TabNet network |
CN115049053B (en) * | 2022-06-20 | 2023-03-24 | 航天宏图信息技术股份有限公司 | Loess region landslide susceptibility assessment method based on TabNet network |
CN117406170A (en) * | 2023-12-15 | 2024-01-16 | 中科华芯(东莞)科技有限公司 | Positioning method and system based on ultra-wideband |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112653991A (en) | WLAN indoor positioning method of TebNet neural network model based on deep learning | |
CN109800863B (en) | Logging phase identification method based on fuzzy theory and neural network | |
CN114092832B (en) | High-resolution remote sensing image classification method based on parallel hybrid convolutional network | |
CN110346517B (en) | Smart city industrial atmosphere pollution visual early warning method and system | |
CN108627798B (en) | WLAN indoor positioning algorithm based on linear discriminant analysis and gradient lifting tree | |
CN100595782C (en) | Classification method fusing spectral information and multi-point simulated spatial information | |
CN109143408B (en) | Dynamic region combined short-time rainfall forecasting method based on MLP | |
CN106203505B (en) | Method for judging moving and staying states of user by utilizing mobile phone positioning data | |
CN110536257B (en) | Indoor positioning method based on depth adaptive network | |
CN112465243A (en) | Air quality forecasting method and system | |
CN107037399A (en) | Wi-Fi indoor positioning method based on deep learning | |
CN112712169A (en) | Model building method and application of full residual depth network based on graph convolution | |
Khassanov et al. | Finer-level sequential wifi-based indoor localization | |
CN112580479A (en) | Geomagnetic indoor positioning system based on cavity convolution neural network | |
CN110716998B (en) | Fine scale population data spatialization method | |
CN105890600A (en) | Subway passenger position inferring method based on mobile phone sensors | |
CN109116300B (en) | Extreme learning positioning method based on insufficient fingerprint information | |
Qin et al. | A wireless sensor network location algorithm based on insufficient fingerprint information | |
CN108668254B (en) | WiFi signal characteristic area positioning method based on improved BP neural network | |
CN117173573A (en) | Urban building type change remote sensing detection method | |
CN109583513B (en) | Method, system and device for detecting similar frame and readable storage medium | |
CN114063150B (en) | ML-KNN algorithm-based 'seismic source-station' speed model selection method | |
CN112348700A (en) | Line capacity prediction method combining SOM clustering and IFOU equation | |
CN112040408A (en) | Multi-target accurate intelligent positioning and tracking method suitable for supervision places | |
Chen et al. | Combining random forest and graph wavenet for spatial-temporal data prediction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20210413 |