CN114897063A - Indoor positioning method based on online pseudo-label semi-supervised learning and personalized federated learning - Google Patents
Indoor positioning method based on online pseudo-label semi-supervised learning and personalized federated learning
- Publication number
- CN114897063A (application number CN202210467468.6A)
- Authority
- CN
- China
- Prior art keywords
- model
- local
- user
- data set
- initial
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2155—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/021—Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/023—Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/30—Services specially adapted for particular environments, situations or purposes
- H04W4/33—Services specially adapted for particular environments, situations or purposes for indoor environments, e.g. buildings
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W64/00—Locating users or terminals or network equipment for network management purposes, e.g. mobility management
- H04W64/006—Locating users or terminals or network equipment for network management purposes, e.g. mobility management with additional information processing, e.g. for direction or speed determination
Abstract
The invention relates to the technical field of indoor positioning and discloses an indoor positioning method based on online pseudo-label semi-supervised learning and personalized federated learning, comprising the following steps: S1, a user constructs a local data set, and a server constructs a cloud data set; S2, the server holds machine learning models and issues one to each user; the machine learning model at the user side is called the local model, and the machine learning model at the server side is called the global model; S3, the user trains the local model on the labeled data in the local data set to obtain an initial local model; S4, the server trains the global model on the labeled data in the cloud data set to obtain an initial global model; S5, a trained local model and an updated global model are obtained through federated learning; and S6, the user performs personalized positioning through a mixture-of-experts model using the trained local model and the updated global model. The invention solves the problem that the prior art ignores the high dynamics of local data and the differences in users' positioning requirements and therefore cannot perform personalized positioning, and is characterized by high efficiency and high accuracy.
Description
Technical Field
The invention relates to the technical field of federated learning, and in particular to an indoor positioning method based on online pseudo-label semi-supervised learning and personalized federated learning.
Background
With the continuous development of information and communication technology, integrated communication and sensing has attracted wide attention as a representative 6G application, and the information-processing flows of wireless communication and wireless sensing increasingly overlap. Building on the wireless communication signals that are ubiquitous in the environment, wireless sensing applications such as wireless fingerprint positioning and wireless human-behavior recognition are developing vigorously. At the same time, driven by artificial-intelligence algorithms represented by deep learning, wireless indoor fingerprint positioning can be effectively modeled as a supervised learning task: a deep learning model is trained on the fingerprint correspondence between wireless-signal features and position coordinates, so that the positioning task can be completed efficiently and accurately.
However, AI-enabled wireless indoor positioning faces a bottleneck to further development. For artificial-intelligence technologies represented by deep learning, training a high-performance model with high recognition accuracy requires dense computing resources and massive labeled data. Consequently, current deep-learning-based wireless positioning systems typically must be deployed in cloud data centers with powerful computing capabilities and rely on service platforms that hire workers to collect and annotate wireless-signal data offline. With the high penetration of mobile terminal devices, the organic combination of edge intelligence and wireless sensing has attracted strong attention in academia and industry. Edge intelligence, represented by federated learning, allows deep learning models to be deployed on terminal devices, pushing both computation and signal-sensing tasks down to the edge and device sides. As a technology for solving the "last kilometer" of deploying artificial intelligence, it can effectively promote the further development of wireless positioning.
To this end, a federated-learning-based indoor positioning method for building floors has been proposed, in which a radio-frequency fingerprint positioning model is built with a distributed deep learning scheme in which an edge server and multiple mobile clients participate jointly. The server first initializes the model and pre-trains it centrally on a small amount of fingerprint data; each client then trains the model further on its local fingerprint data and uploads the trained local model to the server, which aggregates the local models collected from the clients into a global model for radio-frequency fingerprint positioning.
However, this prior art ignores the high dynamics of local data and the differences in users' positioning requirements, and therefore cannot perform personalized positioning. How to devise an indoor positioning scheme that accommodates highly dynamic local data and differing positioning requirements, and supports personalized positioning, is a problem to be solved in this technical field.
Disclosure of Invention
The invention provides an indoor positioning method based on online pseudo-label semi-supervised learning and personalized federated learning, which aims to solve the problem that the prior art, by ignoring the high dynamics of local data and the differences in positioning requirements, cannot perform personalized positioning; the method is characterized by high efficiency and high accuracy.
To achieve the purpose of the invention, the technical scheme is as follows:
An indoor positioning method based on online pseudo-label semi-supervised learning and personalized federated learning comprises the following steps:
S1, a user constructs a local data set, and a server constructs a cloud data set;
S2, the server holds machine learning models and issues one to each user; the machine learning model at the user side is called the local model, and the machine learning model at the server side is called the global model;
S3, the user trains the local model on the labeled data in the local data set to obtain an initial local model;
S4, the server trains the global model on the labeled data in the cloud data set to obtain an initial global model;
S5, a trained local model and an updated global model are obtained through federated learning;
and S6, the user performs personalized positioning through a mixture-of-experts model using the trained local model and the updated global model.
Preferably, the user constructs the local data set as follows: assuming there are K users and K wireless routers arranged in the area to be measured, the wireless routers transmit and receive signals; the received signal strength obtained from the wireless routers and the position label obtained by inertial navigation are taken as input, and for each sample a received-signal-strength vector x_i = [r_i1, r_i2, …, r_iK] is constructed while the position-coordinate vector y_i = [x_i, y_i]^T is recorded, whose components x_i and y_i are respectively the abscissa and ordinate on the indoor floor plan; the signal-strength vectors and position-coordinate vectors are assembled into the local data set, and the received signal strength is referred to as RSS.
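The data-set construction above can be sketched as follows; the RSS range, floor-plan extent, and random values are illustrative stand-ins for real router measurements and inertial-navigation position labels:

```python
import numpy as np

def build_local_dataset(num_samples, num_routers, seed=0):
    """Toy local fingerprint data set: each sample pairs an RSS vector
    x_i = [r_i1, ..., r_iK] with a position label y_i = [x, y].
    The uniform ranges below are assumptions, not measured values."""
    rng = np.random.default_rng(seed)
    rss = rng.uniform(-90.0, -30.0, size=(num_samples, num_routers))  # dBm readings
    positions = rng.uniform(0.0, 20.0, size=(num_samples, 2))         # floor-plan coords (m)
    return rss, positions

rss, pos = build_local_dataset(100, 8)
```

In a real deployment the RSS matrix would come from scanning the K routers and the positions from the inertial-navigation trace, but the pairing of one RSS vector with one coordinate vector per sample is exactly the structure described above.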
Further, the machine learning model is a multilayer perceptron model with fixed hyperparameters, referred to as the MLP model.
Further, the trained local model and the updated global model are obtained through federated learning, with the following specific steps:
S401, the local data set is treated as unlabeled, online pseudo-label semi-supervised learning is performed with the initial local model and the initial global model to obtain pseudo-labels, and the local data set is labeled with the pseudo-labels;
S402, the initial local model is further trained on the pseudo-labeled local data set through knowledge distillation to obtain the trained local model;
and S403, the server performs federated aggregation of the trained local models to obtain the average of the local model weights, and updates the global model according to this average.
Further, the local data set is treated as unlabeled, pseudo-labels are obtained through online pseudo-label semi-supervised learning with the initial local model and the initial global model, and the local data set is labeled with the pseudo-labels, with the following specific steps:
K1. the RSS data in the local data set are labeled by the initial global model and the initial local model, yielding the pseudo-label estimated by the initial global model and the pseudo-label estimated by the initial local model:

ŷ_G = M(x_{k,t}; ω_{G,t}),  ŷ_L = M(x_{k,t}; ω_k),

where ŷ_G is the pseudo-label estimated by the initial global model and ŷ_L is the pseudo-label estimated by the initial local model; M is the MLP model, x_{k,t} is the feature of user k's RSS data in round t, ω_{G,t} is the weight parameter of the global model for user k in round t, and ω_k is the local model weight of the k-th user;
K2–K3. the similarity between the two pseudo-labels is computed with the sim function, which is the reciprocal of the Euclidean distance;
K4. if sim(ŷ_G, ŷ_L) is greater than the set threshold, the local data set is labeled with the global model's pseudo-label; otherwise the local data are labeled with the local model's pseudo-label.
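A minimal sketch of the pseudo-label selection rule of steps K1–K4. The `euclid_sim` helper implements the stated sim function (reciprocal of the Euclidean distance); treating sim as the agreement between the global and local models' estimates for the same sample is an assumption, since the exact arguments to sim are not reproduced in the text:

```python
import numpy as np

def euclid_sim(a, b, eps=1e-8):
    # The sim function from the text: reciprocal of the Euclidean distance
    # (eps guards against division by zero when the two estimates coincide).
    return 1.0 / (np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)) + eps)

def select_pseudo_label(y_global, y_local, threshold):
    # K4: keep the global model's pseudo-label when similarity exceeds the
    # threshold, otherwise fall back to the local model's pseudo-label.
    if euclid_sim(y_global, y_local) > threshold:
        return y_global
    return y_local
```

A high similarity (small distance between the two position estimates) indicates the global model agrees with the user's local model, so its pseudo-label is trusted; large disagreement falls back to the personalized local estimate.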
Further, the initial local model is further trained on the pseudo-label-labeled local data set through knowledge distillation to obtain the trained local model, with the following specific steps:
A1. the initial global model is taken as the teacher network and the initial local model as the student network, and the loss function used to train the local model is modified:

L_FKD = (1 − β) · loss(y, M_k(x)) + β · loss(M_G(x), M_k(x)),

where L_FKD is the federated knowledge-distillation loss function used to train the local model, β is the knowledge-distillation factor, y is the position label, M_k(x) is the position prediction output by the local model, and M_G(x) is the position prediction output by the global model;
A2. the objective function used to train the local model is further modified to obtain the modified objective function;
A3. the local model is trained on the labeled local data set using the obtained modified objective function.
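The distillation loss of step A1 can be sketched as below. The convex-combination form follows the variable list in the text (factor β blending a supervised term and a teacher–student term); using MAE as the base loss is an assumption, borrowed from the MAE loss named later in step B2:

```python
import numpy as np

def mae(a, b):
    # Mean absolute error between two position vectors.
    return float(np.mean(np.abs(np.asarray(a, float) - np.asarray(b, float))))

def fkd_loss(y_pseudo, pred_local, pred_global, beta):
    """Federated knowledge-distillation loss of step A1: a convex mix of the
    supervised term (pseudo-label vs. student/local prediction) and the
    distillation term (teacher/global vs. student/local prediction)."""
    supervised = mae(y_pseudo, pred_local)
    distill = mae(pred_global, pred_local)
    return (1.0 - beta) * supervised + beta * distill
```

At β = 0 the student trains purely on the pseudo-labels; at β = 1 it purely imitates the global teacher.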
Further, the objective function is further modified, specifically: the federated knowledge-distillation loss is substituted into the training objective to obtain the modified objective function.
furthermore, the server performs federated learning on the trained local model to obtain a trained local model weight, and updates the global model according to the trained local model weight, and the specific steps are as follows:
B1. obtaining a weight parameter omega capable of minimizing the global model loss through a loss formula of the global model, wherein the loss formula of the global model is as follows:
wherein F () is a global model loss function, N is a total number of users, F k () Is a local loss function;
B2. according to the optimal weight omega, the server outputs local loss through the MAE loss function, and obtains the trained local model weight capable of minimizing the local loss through a local loss formula, wherein the local loss formula is as follows:
wherein D is k Is the size of the kth user local data set, i is the ith sample in the user k local data set,a location label for the ith sample of user k,sample feature, ω, for the ith sample of user k k A local model weight for the kth user;
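The per-user MAE loss F_k of step B2 can be sketched directly; `model` here is any callable M(x, w) standing in for the MLP, which is an assumed simplification:

```python
import numpy as np

def local_mae_loss(model, weights, features, labels):
    """Local loss F_k of step B2: MAE between each position label and the
    model's prediction, averaged over user k's D_k samples. `model` is any
    callable M(x, w); this signature is an assumed stand-in for the MLP."""
    preds = np.array([model(x, weights) for x in features], dtype=float)
    return float(np.mean(np.abs(preds - np.asarray(labels, dtype=float))))
```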
B3. the users' local model weights are aggregated by the federated averaging algorithm, and the global model weights are updated from the local model weights; the federated averaging algorithm is:

ω_{G,t+1} = Σ_{k=1..N} (n_{k,t} / n_t) · ω_{k,t},

where n_{k,t} is the size of user k's local data set in round t and n_t is the sum of the sizes of all users' local data sets in round t;
B4. the user receives the updated global model weights and updates its local model weights by stochastic gradient descent starting from the updated global weights:

ω_k ← ω_{G,t+1} − η · ∇F_k(ω_{G,t+1}).
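Steps B3–B4 can be sketched as standard federated averaging followed by a local gradient step; the learning rate η and the explicit gradient are placeholders for the user's actual SGD update:

```python
import numpy as np

def fedavg(local_weights, local_sizes):
    """B3: federated averaging -- the new global weights are the
    data-set-size-weighted mean of the users' local model weights."""
    return np.average(np.asarray(local_weights, float), axis=0,
                      weights=np.asarray(local_sizes, float))

def local_sgd_step(global_weights, grad, lr):
    """B4: each user restarts from the updated global weights and takes a
    stochastic-gradient step on its local loss (grad stands for ∇F_k)."""
    return np.asarray(global_weights, float) - lr * np.asarray(grad, float)
```

Weighting by data-set size means users with more fingerprint samples contribute proportionally more to the updated global model.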
furthermore, the user performs personalized positioning through the hybrid expert model according to the trained local model and the updated global model, specifically:
s601. inputTraining in a neural network with a sigmoid activation function as an output layer to obtain a gate control network: the formula for the gated network is as follows:
wherein, the first and the second end of the pipe are connected with each other,g () is a local gating network function, which is a probability weight of 0 to 1 for the gating network output;
s602, a user applies for positioning service;
s603, the user downloads the global model which is trained from the server;
s604, inputting the received RSS vector to the local model which is trained and the global model which is trained by the user, and respectively outputting position prediction;
s605, predicting probability weight through a gate control network;
s606, weighting the position prediction output by the local model after training and the position prediction output by the global model after training according to the probability weight to obtain a final position prediction value, wherein a final position prediction formula is as follows:
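The mixture-of-experts inference of steps S601–S606 can be sketched as follows. A single linear gating layer (`gate_w`, `gate_b`) and assigning the weight g(x) to the local expert are assumptions; the text only specifies a sigmoid-output gating network producing a probability weight:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def moe_predict(x, local_model, global_model, gate_w, gate_b):
    """S601-S606 sketch: a gating network with a sigmoid output maps the RSS
    vector to a probability weight g(x) in (0, 1); the final position is the
    g(x)-weighted blend of the two experts' predictions."""
    x = np.asarray(x, float)
    g = sigmoid(float(np.dot(gate_w, x) + gate_b))      # S605: probability weight
    local_pred = np.asarray(local_model(x), float)      # S604: local expert
    global_pred = np.asarray(global_model(x), float)    # S604: global expert
    return g * local_pred + (1.0 - g) * global_pred     # S606: weighted blend
```

With an untrained (all-zero) gate the two experts are blended equally; training the gate lets each user lean on whichever expert fits its local signal environment, which is the personalization mechanism.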
A computer system comprising a plurality of computer devices, each computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processors of the plurality of computer devices, when executing the computer programs, collectively implement the steps of the above indoor positioning method.
The invention has the following beneficial effects:
the method comprises the steps that a user constructs a local data set, an initial local model is trained through a machine learning model distributed by a server, the server simultaneously trains an initial global model, the user and the server respectively train the local model and update the global model through a federal learning algorithm, and finally, the user obtains final position prediction through a mixed expert model to perform positioning.
Drawings
Fig. 1 is a schematic flow chart of the indoor positioning method based on online pseudo-label semi-supervised learning and personalized federated learning.
Detailed Description
The invention is described in detail below with reference to the drawings and the specific embodiments.
Example 1
As shown in fig. 1, an indoor positioning method based on online pseudo-label semi-supervised learning and personalized federated learning includes the following steps:
S1, a user constructs a local data set, and a server constructs a cloud data set;
S2, the server holds machine learning models and issues one to each user; the machine learning model at the user side is called the local model, and the machine learning model at the server side is called the global model;
S3, the user trains the local model on the labeled data in the local data set to obtain an initial local model;
S4, the server trains the global model on the labeled data in the cloud data set to obtain an initial global model;
S5, a trained local model and an updated global model are obtained through federated learning;
and S6, the user performs personalized positioning through a mixture-of-experts model using the trained local model and the updated global model.
Example 2
As shown in fig. 1, an indoor positioning method based on online pseudo-label semi-supervised learning and personalized federated learning includes the following steps:
S1, a user constructs a local data set, and a server constructs a cloud data set;
S2, the server holds machine learning models and issues one to each user; the machine learning model at the user side is called the local model, and the machine learning model at the server side is called the global model;
S3, the user trains the local model on the labeled data in the local data set to obtain an initial local model;
S4, the server trains the global model on the labeled data in the cloud data set to obtain an initial global model;
S5, a trained local model and an updated global model are obtained through federated learning;
and S6, the user performs personalized positioning through a mixture-of-experts model using the trained local model and the updated global model.
In a specific embodiment, the local data set is constructed as follows: assuming there are K users and K wireless routers arranged in the area to be measured, the wireless routers transmit and receive signals; the received signal strength obtained from the wireless routers and the position label obtained by inertial navigation are taken as input, and for each sample a received-signal-strength vector x_i = [r_i1, r_i2, …, r_iK] is constructed while the position-coordinate vector y_i = [x_i, y_i]^T is recorded, whose components x_i and y_i are respectively the abscissa and ordinate on the indoor floor plan; the signal-strength vectors and position-coordinate vectors are assembled into the local data set, and the received signal strength is referred to as RSS.
In one embodiment, the machine learning model is a multilayer perceptron model with fixed hyperparameters, referred to as the MLP model.
In a specific embodiment, the trained local model and the updated global model are obtained through federated learning, with the following specific steps:
S401, the local data set is treated as unlabeled, online pseudo-label semi-supervised learning is performed with the initial local model and the initial global model to obtain pseudo-labels, and the local data set is labeled with the pseudo-labels;
S402, the initial local model is further trained on the pseudo-labeled local data set through knowledge distillation to obtain the trained local model;
and S403, the server performs federated aggregation of the trained local models to obtain the average of the local model weights, and updates the global model according to this average.
In a specific embodiment, the local data set is treated as unlabeled, pseudo-labels are obtained through online pseudo-label semi-supervised learning with the initial local model and the initial global model, and the local data set is labeled with the pseudo-labels, with the following specific steps:
K1. the RSS data in the local data set are labeled by the initial global model and the initial local model, yielding the pseudo-label estimated by the initial global model and the pseudo-label estimated by the initial local model:

ŷ_G = M(x_{k,t}; ω_{G,t}),  ŷ_L = M(x_{k,t}; ω_k),

where ŷ_G is the pseudo-label estimated by the initial global model and ŷ_L is the pseudo-label estimated by the initial local model; M is the MLP model, x_{k,t} is the feature of user k's RSS data in round t, ω_{G,t} is the weight parameter of the global model for user k in round t, and ω_k is the local model weight of the k-th user;
K2–K3. the similarity between the two pseudo-labels is computed with the sim function, which is the reciprocal of the Euclidean distance;
K4. if sim(ŷ_G, ŷ_L) is greater than the set threshold, the local data set is labeled with the global model's pseudo-label; otherwise the local data are labeled with the local model's pseudo-label.
In a specific embodiment, the initial local model is further trained on the pseudo-label-labeled local data set through knowledge distillation to obtain the trained local model, with the following specific steps:
A1. the initial global model is taken as the teacher network and the initial local model as the student network, and the loss function used to train the local model is modified:

L_FKD = (1 − β) · loss(y, M_k(x)) + β · loss(M_G(x), M_k(x)),

where L_FKD is the federated knowledge-distillation loss function used to train the local model, β is the knowledge-distillation factor, y is the position label, M_k(x) is the position prediction output by the local model, and M_G(x) is the position prediction output by the global model;
A2. the objective function used to train the local model is further modified to obtain the modified objective function;
A3. the local model is trained on the labeled local data set using the obtained modified objective function.
In a specific embodiment, the objective function is further modified, specifically: the federated knowledge-distillation loss is substituted into the training objective to obtain the modified objective function.
in a specific embodiment, the server performs federal learning on the trained local model to obtain a trained local model weight, and updates the global model according to the trained local model weight, which specifically includes the following steps:
B1. obtaining a weight parameter omega capable of minimizing the global model loss through a loss formula of the global model, wherein the loss formula of the global model is as follows:
wherein F () is a global model loss function, N is a total number of users, F k () Is a local loss function;
B2. according to the optimal weight omega, the server outputs local loss through the MAE loss function, and obtains the trained local model weight capable of minimizing the local loss through a local loss formula, wherein the local loss formula is as follows:
wherein D is k Is the size of the kth user local data set, i is the ith sample in the user k local data set,a location label for the ith sample of user k,sample feature, ω, for the ith sample of user k k A local model weight for the kth user;
B3. aggregating the weights of the local models of the users through a federal averaging algorithm, and updating the global model weights through the weights of the local models, wherein the federal averaging algorithm is as follows:
wherein the content of the first and second substances,for user k, the local dataset size of the t' th round, n t The sum of the sizes of the local data sets of all the users in the t round;
B4. the user receives the updated global model weight, and updates the local model weight according to the random gradient decrease of the updated global model weight:
in a specific embodiment, the server obtains an optimal weight parameter ω by using a loss formula of the global model, specifically: obtaining a weight parameter omega capable of minimizing the global model loss through a loss formula of the global model, wherein the loss formula of the global model is as follows:
where F () is the global model loss function and N is the total number of users.
Example 3
As shown in fig. 1, an indoor positioning method based on online pseudo-label semi-supervised learning and personalized federated learning includes the following steps:
S1, a user constructs a local data set, and a server constructs a cloud data set;
S2, the server holds machine learning models and issues one to each user; the machine learning model at the user side is called the local model, and the machine learning model at the server side is called the global model;
S3, the user trains the local model on the labeled data in the local data set to obtain an initial local model;
S4, the server trains the global model on the labeled data in the cloud data set to obtain an initial global model;
S5, a trained local model and an updated global model are obtained through federated learning;
and S6, the user performs personalized positioning through a mixture-of-experts model using the trained local model and the updated global model.
In a specific embodiment, the local data set is constructed as follows: assuming there are K users and K wireless routers arranged in the area to be measured, the wireless routers transmit and receive signals; the received signal strength obtained from the wireless routers and the position label obtained by inertial navigation are taken as input, and for each sample a received-signal-strength vector x_i = [r_i1, r_i2, …, r_iK] is constructed while the position-coordinate vector y_i = [x_i, y_i]^T is recorded, whose components x_i and y_i are respectively the abscissa and ordinate on the indoor floor plan; the signal-strength vectors and position-coordinate vectors are assembled into the local data set, and the received signal strength is referred to as RSS.
In one embodiment, the machine learning model is a multilayer perceptron model with fixed hyperparameters, referred to as the MLP model.
In a specific embodiment, the trained local model and the updated global model are obtained through federated learning, with the following specific steps:
S401, the local data set is treated as unlabeled, online pseudo-label semi-supervised learning is performed with the initial local model and the initial global model to obtain pseudo-labels, and the local data set is labeled with the pseudo-labels;
S402, the initial local model is further trained on the pseudo-labeled local data set through knowledge distillation to obtain the trained local model;
and S403, the server performs federated aggregation of the trained local models to obtain the average of the local model weights, and updates the global model according to this average.
In a specific embodiment, the local data set is treated as unlabeled, pseudo-labels are obtained through online pseudo-label semi-supervised learning with the initial local model and the initial global model, and the local data set is labeled with the pseudo-labels, with the following specific steps:
K1. the RSS data in the local data set are labeled by the initial global model and the initial local model, yielding the pseudo-label estimated by the initial global model and the pseudo-label estimated by the initial local model:

ŷ_G = M(x_{k,t}; ω_{G,t}),  ŷ_L = M(x_{k,t}; ω_k),

where ŷ_G is the pseudo-label estimated by the initial global model and ŷ_L is the pseudo-label estimated by the initial local model; M is the MLP model, x_{k,t} is the feature of user k's RSS data in round t, ω_{G,t} is the weight parameter of the global model for user k in round t, and ω_k is the local model weight of the k-th user;
K2–K3. the similarity between the two pseudo-labels is computed with the sim function, which is the reciprocal of the Euclidean distance;
K4. if sim(ŷ_G, ŷ_L) is greater than the set threshold, the local data set is labeled with the global model's pseudo-label; otherwise the local data are labeled with the local model's pseudo-label.
In a specific embodiment, the initial local model is further trained on the pseudo-label-labeled local data set through knowledge distillation to obtain the trained local model, with the following specific steps:
A1. the initial global model is taken as the teacher network and the initial local model as the student network, and the loss function used to train the local model is modified:

L_FKD = (1 − β) · loss(y, M_k(x)) + β · loss(M_G(x), M_k(x)),

where L_FKD is the federated knowledge-distillation loss function used to train the local model, β is the knowledge-distillation factor, y is the position label, M_k(x) is the position prediction output by the local model, and M_G(x) is the position prediction output by the global model;
A2. the objective function used to train the local model is further modified to obtain the modified objective function;
A3. the local model is trained on the labeled local data set using the obtained modified objective function.
In a specific embodiment, the objective function is further modified, specifically: the federated knowledge-distillation loss is substituted into the training objective to obtain the modified objective function.
the hybrid expert model and knowledge distillation techniques are both individualized federal learning techniques.
In a specific embodiment, the server performs federal learning on the trained local model to obtain a trained local model weight, and updates the global model according to the trained local model weight, which specifically includes the following steps:
B1. obtaining a weight parameter omega capable of minimizing the global model loss through a loss formula of the global model, wherein the loss formula of the global model is as follows:
wherein F(ω) is the global model loss function, N is the total number of users, and F_k(ω_k) is the local loss function of user k;
B2. according to the optimal weight parameter ω, the server computes the local loss through the MAE loss function, and obtains the trained local model weight that minimizes the local loss through the local loss formula, wherein the local loss formula is as follows:
wherein D_k is the size of the k-th user's local data set, i indexes the i-th sample in user k's local data set, y_k^i is the position label of the i-th sample of user k, x_k^i is the sample feature of the i-th sample of user k, and ω_k is the local model weight of the k-th user;
B3. aggregating the local model weights of all users through the federated averaging algorithm, and updating the global model weight through the aggregated local model weights, wherein the federated averaging algorithm is as follows:
wherein n_k^t is the local data set size of user k in round t, and n_t is the sum of the local data set sizes of all users in round t;
B4. the user receives the updated global model weight, and updates the local model weight by stochastic gradient descent starting from the updated global model weight.
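Steps B2–B3 can be sketched as follows. The patent's formula images are not reproduced here, so this is a hedged reconstruction of the MAE local loss and the standard federated averaging rule described in the text; all weights and data set sizes are hypothetical:

```python
import numpy as np

def local_mae_loss(labels, preds):
    """Step B2: mean absolute error over user k's D_k local samples,
    minimised with respect to the local weight omega_k."""
    return float(np.mean(np.abs(np.asarray(labels) - np.asarray(preds))))

def fed_avg(local_weights, dataset_sizes):
    """Step B3: federated averaging -- each user's local weights are
    aggregated in proportion to the size n_k of its local data set."""
    sizes = np.asarray(dataset_sizes, dtype=float)
    fractions = sizes / sizes.sum()
    return sum(f * np.asarray(w) for f, w in zip(fractions, local_weights))

# Hypothetical round: three users with 2-D weight vectors
weights = [[1.0, 1.0], [2.0, 2.0], [4.0, 4.0]]
sizes = [10, 10, 20]
print(fed_avg(weights, sizes))                       # weighted by 0.25, 0.25, 0.5
print(local_mae_loss([[0.0, 0.0]], [[1.0, 2.0]]))    # -> 1.5
```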
in a specific embodiment, the user performs personalized positioning through the hybrid expert model according to the trained local model and the updated global model, specifically:
S601. training a neural network with a sigmoid activation function as the output layer to obtain the gating network, wherein the formula of the gating network is as follows:
wherein g(·) is the local gating network function, and the gating network outputs a probability weight between 0 and 1;
s602, a user applies for positioning service;
S603. the user downloads the trained global model from the server;
S604. the user inputs the received RSS vector into the trained local model and the trained global model, which respectively output position predictions;
S605. predicting the probability weight through the gating network;
S606. weighting the position prediction output by the trained local model and the position prediction output by the trained global model according to the probability weight to obtain the final position prediction value, wherein the final position prediction formula is as follows:
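Steps S601–S606 describe a mixture-of-experts combination. A minimal sketch follows, assuming a single-layer sigmoid gate (the actual gating-network formula appears only as an image in the source); the experts, gate parameters, and RSS values are hypothetical stand-ins:

```python
import numpy as np

def sigmoid(z):
    """Sigmoid output layer of the gating network (step S601)."""
    return 1.0 / (1.0 + np.exp(-z))

def moe_predict(rss, gate_w, gate_b, local_model, global_model):
    """Steps S604-S606: the gating network outputs a probability weight
    g in (0, 1); the final position mixes the two experts accordingly."""
    g = sigmoid(float(np.dot(gate_w, rss) + gate_b))
    return g * np.asarray(local_model(rss)) + (1.0 - g) * np.asarray(global_model(rss))

# Hypothetical experts returning fixed 2-D positions; zero-initialised gate
rss = np.array([-40.0, -60.0])
pred = moe_predict(rss, gate_w=np.zeros(2), gate_b=0.0,
                   local_model=lambda x: [1.0, 2.0],
                   global_model=lambda x: [3.0, 4.0])
print(pred)  # g = 0.5, so the output is the midpoint [2.0, 3.0]
```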
In this embodiment, the user constructs a local data set and trains an initial local model using the machine learning model distributed by the server; the server trains an initial global model; the user and the server respectively train the local model and update the global model through the federated learning algorithm; finally, the user's hybrid expert model produces the final position prediction for positioning. The invention solves the problem that the prior art ignores the high dynamics of local data and the differences in positioning requirements and therefore cannot perform personalized positioning, and has the characteristics of high efficiency and high precision.
Example 4
A computer system comprising a plurality of computer devices, each computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processors of the plurality of computer devices, when executing the computer program, collectively implement the steps of the indoor positioning method based on-line pseudo label semi-supervised learning and personalized federal learning of embodiment 1, embodiment 2, or embodiment 3.
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.
Claims (10)
1. An indoor positioning method based on-line pseudo label semi-supervised learning and personalized federal learning is characterized in that: the method comprises the following steps:
s1, a user constructs a local data set, and a server constructs a cloud data set;
S2. the server is provided with a machine learning model; the server issues the machine learning model to each user, the machine learning model located at the user side is called a local model, and the machine learning model located at the server side is called a global model;
s3, training a local model by a user through data with labels in a local data set to obtain an initial local model;
s4, training a global model by the server through data with labels in the cloud data set to obtain an initial global model;
s5, obtaining a trained local model and an updated global model through federal learning;
and S6, the user carries out personalized positioning through the mixed expert model according to the trained local model and the updated global model.
2. The indoor positioning method based on-line pseudo-label semi-supervised learning and personalized federal learning of claim 1, characterized in that: the user constructs a local data set, specifically: the user constructs received signal strength vectors x_i = [r_i1, r_i2, …, r_iK] by receiving the signal strength transmitted by K wireless routers, records the corresponding position coordinate vectors y_i, and integrates the signal strength vectors and position coordinate vectors into the local data set.
3. The indoor positioning method based on-line pseudo tag semi-supervised learning and personalized federal learning of claim 1, wherein: the machine learning model is a multilayer perceptron model with fixed hyper-parameters, and the multilayer perceptron model with the fixed hyper-parameters is called an MLP model.
4. The indoor positioning method based on-line pseudo-label semi-supervised learning and personalized federal learning of claim 2, characterized in that: obtaining a trained local model and an updated global model through federal learning comprises the following specific steps:
s401, regarding the local data set as label-free, performing on-line pseudo label semi-supervised learning on the initial local model and the initial global model to obtain a pseudo label, and labeling the local data set by the pseudo label;
S402. further training the initial local model on the pseudo-label-annotated local data set through the knowledge distillation technology to obtain a trained local model;
and S403, the server performs federated learning on the trained local model to obtain an average value of the local model weight, and updates the global model according to the average value of the local model weight.
5. The indoor positioning method based on-line pseudo tag semi-supervised learning and personalized federal learning of claim 4, wherein: regarding the local data set as label-free, performing on-line pseudo label semi-supervised learning on the initial local model and the initial global model to obtain a pseudo label, and labeling the local data set by the pseudo label, wherein the specific steps are as follows:
K1. labeling RSS data in the local data set through the initial global model and the initial local model to obtain a pseudo label estimated by the initial global model and a pseudo label estimated by the initial local model:
wherein ŷ_G^t is the pseudo label estimated by the initial global model and ŷ_k^t is the pseudo label estimated by the initial local model; M is the MLP model, x_k^t is the RSS data feature of user k in round t, ω_G^t is the weight parameter of the global model of user k in round t, and ω_k is the local model weight of the k-th user;
wherein the sim function is the reciprocal of the Euclidean distance.
6. The indoor positioning method based on-line pseudo label semi-supervised learning and personalized federal learning of claim 4, wherein: the initial local model is further trained on the pseudo-label-annotated local data set through the knowledge distillation technology to obtain a trained local model, and the specific steps are as follows:
A1. taking the initial global model as the teacher network and the initial local model as the student network, and modifying the loss function used to train the local model;
wherein L_FKD is the federated knowledge distillation loss function used to train the local model, β is the knowledge distillation factor, y is the position label, M_k(x) is the position prediction output by the local model, and M_G(x) is the position prediction output by the global model;
A2. further modifying the objective function used to train the local model to obtain the modified objective function:
A3. training the local model on the annotated local data set using the obtained modified objective function.
8. The indoor positioning method based on-line pseudo label semi-supervised learning and personalized federal learning of claim 6, wherein: the server performs federal learning on the trained local model to obtain a trained local model weight, and updates the global model through the trained local model weight, and the specific steps are as follows:
B1. obtaining the weight parameter ω that minimizes the global model loss through the global model loss formula, wherein the global model loss formula is as follows:
wherein F(ω) is the global model loss function, N is the total number of users, and F_k(ω_k) is the local loss function of user k;
B2. according to the optimal weight parameter ω, the server computes the local loss through the MAE loss function, and obtains the trained local model weight that minimizes the local loss through the local loss formula, wherein the local loss formula is as follows:
wherein D_k is the size of the k-th user's local data set, i indexes the i-th sample in user k's local data set, y_k^i is the position label of the i-th sample of user k, x_k^i is the sample feature of the i-th sample of user k, and ω_k is the local model weight of the k-th user;
B3. aggregating the local model weights of all users through the federated averaging algorithm, and updating the global model weight through the aggregated local model weights, wherein the federated averaging algorithm is as follows:
wherein n_k^t is the local data set size of user k in round t, and n_t is the sum of the local data set sizes of all users in round t;
B4. the user receives the updated global model weight, and updates the local model weight by stochastic gradient descent starting from the updated global model weight.
9. The indoor positioning method based on-line pseudo label semi-supervised learning and personalized federal learning of claim 5, wherein: the user performs personalized positioning through the hybrid expert model according to the trained local model and the updated global model, and the specific steps are as follows:
S601. training a neural network with a sigmoid activation function as the output layer to obtain the gating network, wherein the formula of the gating network is as follows:
wherein g(·) is the local gating network function, and the gating network outputs a probability weight between 0 and 1;
s602, a user applies for positioning service;
S603. the user downloads the trained global model from the server;
S604. the user inputs the received RSS vector into the trained local model and the trained global model, which respectively output position predictions;
S605. predicting the probability weight through the gating network;
S606. weighting the position prediction output by the trained local model and the position prediction output by the trained global model according to the probability weight to obtain the final position prediction value, wherein the final position prediction formula is as follows:
10. A computer system comprising a plurality of computer devices, each computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processors of the plurality of computer devices, when executing the computer program, collectively implement the steps of the indoor positioning method based on-line pseudo label semi-supervised learning and personalized federal learning of any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210467468.6A CN114897063A (en) | 2022-04-29 | 2022-04-29 | Indoor positioning method based on-line pseudo label semi-supervised learning and personalized federal learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114897063A true CN114897063A (en) | 2022-08-12 |
Family
ID=82719232
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210467468.6A Pending CN114897063A (en) | 2022-04-29 | 2022-04-29 | Indoor positioning method based on-line pseudo label semi-supervised learning and personalized federal learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114897063A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115310130A (en) * | 2022-08-15 | 2022-11-08 | 南京航空航天大学 | Multi-site medical data analysis method and system based on federal learning |
CN115310130B (en) * | 2022-08-15 | 2023-11-17 | 南京航空航天大学 | Multi-site medical data analysis method and system based on federal learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Gharehchopogh et al. | Advances in sparrow search algorithm: a comprehensive survey | |
US10451712B1 (en) | Radar data collection and labeling for machine learning | |
US20180268292A1 (en) | Learning efficient object detection models with knowledge distillation | |
Campbell et al. | The explosion of artificial intelligence in antennas and propagation: How deep learning is advancing our state of the art | |
CN113807399B (en) | Neural network training method, neural network detection method and neural network training device | |
CN110377587B (en) | Migration data determination method, device, equipment and medium based on machine learning | |
Liu et al. | Geomagnetism-based indoor navigation by offloading strategy in NB-IoT | |
CN114091554A (en) | Training set processing method and device | |
Yadav et al. | A systematic review of localization in WSN: Machine learning and optimization‐based approaches | |
CN113326826A (en) | Network model training method and device, electronic equipment and storage medium | |
CN114241226A (en) | Three-dimensional point cloud semantic segmentation method based on multi-neighborhood characteristics of hybrid model | |
Mallik et al. | Paving the way with machine learning for seamless indoor–outdoor positioning: A survey | |
CN114897063A (en) | Indoor positioning method based on-line pseudo label semi-supervised learning and personalized federal learning | |
Vahidnia et al. | A hierarchical signal-space partitioning technique for indoor positioning with WLAN to support location-awareness in mobile map services | |
Kota et al. | IOT‐HML: A hybrid machine learning technique for IoT enabled industrial monitoring and control system | |
Bai et al. | UAV-supported intelligent truth discovery to achieve low-cost communications in mobile crowd sensing | |
US20240095539A1 (en) | Distributed machine learning with new labels using heterogeneous label distribution | |
CN107124761B (en) | Cellular network wireless positioning method fusing PSO and SS-ELM | |
Janabi et al. | A new localization mechanism in IoT using grasshopper optimization algorithm and DVHOP algorithm | |
Gokulakrishnan et al. | Maliciously roaming person's detection around hospital surface using intelligent cloud-edge based federated learning | |
Ijaz et al. | A UAV assisted edge framework for real-time disaster management | |
US20230055079A1 (en) | Method of load forecasting via attentive knowledge transfer, and an apparatus for the same | |
Barolli et al. | Web, Artificial Intelligence and Network Applications: Proceedings of the Workshops of the 33rd International Conference on Advanced Information Networking and Applications (WAINA-2019) | |
KR102474974B1 (en) | Method, device and system for analyzing brand based on artificial intelligence | |
Hensel et al. | Object Detection and Mapping with Unmanned Aerial Vehicles Using Convolutional Neural Networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||