CN114077901A - User location prediction framework based on clustering for graph federated learning - Google Patents

User location prediction framework based on clustering for graph federated learning

Info

Publication number
CN114077901A
Authority
CN
China
Prior art keywords
user
server
users
graph
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111397483.XA
Other languages
Chinese (zh)
Inventor
张啸
王麒麟
叶梓铭
于东晓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University
Priority to CN202111397483.XA
Publication of CN114077901A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning
    • G06N 20/20 - Ensemble learning
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/23 - Clustering techniques
    • G06F 18/232 - Non-hierarchical techniques
    • G06F 18/2323 - Non-hierarchical techniques based on graph theory, e.g. minimum spanning trees [MST] or graph cuts
    • G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 - Protecting data
    • G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 - Protecting access to data via a platform to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6245 - Protecting personal data, e.g. for financial or medical purposes
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G06N 3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Bioethics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Discrete Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a clustering-based graph federated learning framework for user location prediction, comprising the following steps: S1, each user trains a sequence prediction model locally; S2, the user uploads the model parameters and the hidden state of the original sequence data produced by the encoder to a server; S3, the server learns a similarity-graph structure from the hidden states; S4, the server obtains an embedded representation of each user through a graph convolutional neural network; S5, the server divides the users into several clusters with a clustering method, and the users within each cluster execute the federated averaging algorithm; S6, the server downloads the embedded representations and the averaged model parameters to the corresponding users, each user splices its hidden state with its embedded representation and outputs a prediction result, and the server model parameters are updated. Federated learning protects data privacy; the graph convolutional network alleviates the shortage of training data caused by label scarcity; and the graph clustering algorithm lets more similar users run federated averaging together, mitigating inter-user heterogeneity.

Description

User location prediction framework based on clustering for graph federated learning
Technical Field
The invention belongs to the technical field of user location prediction for smart devices, and particularly relates to a user location prediction framework based on clustering graph federated learning.
Background
User location prediction aims to predict the movement trajectory or future location of a user in a real-world scene, enabling intelligent systems to help users improve their quality of life; it is now involved in many fields such as intelligent services, smart cities, and healthcare. In recent years, with the popularization of smart wearable devices and advances in location-based intelligent services, user location prediction has become a research hotspot in both academia and industry. Previous studies have achieved good results on particular data sets. However, user location prediction has its own characteristics and challenges: (1) Privacy protection. Location data carries a large amount of private, often highly sensitive, user information. Existing methods usually train in a centralized batch-processing fashion, which can leak user privacy and requires very large storage to hold all users' historical trajectory data; improving model prediction while protecting user data privacy is therefore one of the problems to be solved. (2) Label scarcity. Labeled activity data is always limited, and obtaining enough movement-trajectory data for model training is expensive for a user. The limited amount of data is a severe problem if each user trains alone. (3) User heterogeneity. Since users' features and interests are diverse, different users exhibit different location-movement patterns, which means user data is heterogeneous, i.e., not independently and identically distributed (non-IID); a generalized model obtained by federated learning therefore cannot achieve the best performance on a specific client. Current research on this problem still has many deficiencies.
Disclosure of Invention
This patent proposes a clustering-based graph federated learning framework, GFedHMP, to address three problems in location prediction: privacy protection, label scarcity, and user heterogeneity. The technical solution comprises the following steps.
A user location prediction framework based on clustering graph federated learning comprises the following steps:
S1, each user trains locally using a sequence prediction model;
S2, the user uploads the model parameters and the hidden state of the data produced by the encoder to a server;
S3, the server learns a similarity-graph structure from the hidden states;
S4, the server obtains an embedded representation of each user through a graph convolutional neural network;
S5, the server divides the users into several clusters with a clustering method, and the users within each cluster execute the federated averaging algorithm on the model parameters;
S6, the server downloads the embedded representations and the averaged model parameters to each user; each user splices its hidden state with its embedded representation as the input of a decoder, outputs a prediction result, and the server model parameters are updated.
Further preferably, in step S1, each user trains a sequence prediction model with an encoder-decoder structure, specifically as follows:
S11, initialize the parameters θ_i of the sequence prediction model and the embedded representation e_i of user u_i; set the iteration counter r_a = 1 and specify the number of local training iterations T_a.
S12, judge whether the current round has reached the iteration number T_a: if r_a ≤ T_a, continue; otherwise stop, yielding the user model parameters θ_i.
S13, feed the original historical trajectory data x_i into the encoder to obtain the hidden state h_i of the data.
S14, splice the hidden state h_i and the initial embedded representation e_i as the input of the decoder, obtaining the prediction result ŷ_i.
S15, compute the loss between the prediction ŷ_i and the true value y_i with a cross-entropy loss function.
S16, compute the gradient from the loss and update the model; η_a is the learning rate of the user's local training.
S17, increase the iteration counter r_a by 1 and return to S12.
Further preferably, in step S2, the raw sequence data x_i passes through the encoder f_enc, the encoder part of the user's local model, to obtain the hidden state h_i = f_enc(x_i; θ_i^enc). The user uploads the parameters θ_i of the local model together with the hidden state h_i of the data to the server, where θ_i^enc are the parameters of the encoder part of the user's local model and θ_i^dec are the parameters of the decoder part.
Further preferably, in step S3, the server uses the hidden states uploaded by the users to learn the similarity-graph structure through a graph learning layer, as follows:
(1) M = tanh(αHθ_GL)
(2) A = ReLU(tanh(α(MMᵀ)))
(3) idx = argtopk(A[i,:]) (i = 1, 2, 3, ..., N); A[i, -idx] = 0
where M is the transition matrix obtained by transforming H; i indexes the users and N is the total number of users; H ∈ R^(N×D_hid) is the hidden-state matrix formed from the hidden states of the data uploaded by the users, D_hid being the feature dimension of the hidden states; θ_GL denotes the model parameters of the server-side graph-structure learning layer; α is a hyper-parameter controlling the saturation rate of the activation function; and argtopk(·) returns the indices idx of the k largest values of a vector. These steps yield a similarity-graph structure representing the user relationships, expressed by the adjacency matrix A. Step (3) is a strategy for sparsifying the adjacency matrix: for each user, the k most similar users are selected as its neighbors, keeping the edge weights of connected users and setting the edge weights of unconnected users to 0.
Further preferably, in step S4, the server obtains the embedded representation of each user through the graph convolutional neural network, the output of each layer being
H^(l+1) = σ(D̃^(-1/2) Ã D̃^(-1/2) H^(l) W^(l))
where Ã = A + I_N, I_N is the identity matrix of size N, and D̃ is the degree matrix of Ã, i.e. D̃_ii = Σ_j Ã_ij. H^(0) = H serves as the initial-layer input of the graph convolutional network, namely the hidden-state matrix formed from the hidden states uploaded to the server by all users; W^(l) denotes the weights of the l-th layer. After multiple layers of convolution the network outputs H_out, whose rows h_i^out are the users' new embedded representations.
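A minimal NumPy sketch of this propagation rule follows; the text does not pin down the activation σ, so ReLU is assumed here for the example:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = relu(Dt^(-1/2) At Dt^(-1/2) H W)."""
    N = A.shape[0]
    A_tilde = A + np.eye(N)                      # At = A + I_N (add self-loops)
    d = A_tilde.sum(axis=1)                      # degrees of At
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))       # Dt^(-1/2)
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt    # symmetric normalisation
    return np.maximum(A_hat @ H @ W, 0.0)        # ReLU activation (assumed)

def user_embeddings(A, H, weights):
    """Stack layers; the final output H_out holds the new user embeddings."""
    out = H                                      # H^(0) = hidden-state matrix
    for W in weights:
        out = gcn_layer(A, out, W)
    return out
```

Because Ã mixes each row with its neighbours' rows, each user's embedding aggregates information from the similar users found in S3.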
Further preferably, the server executes the Louvain algorithm with the user adjacency matrix A output in step S3 as the input of the graph clustering module, as follows:
the users are divided into Q clusters C_1, ..., C_Q, and the users in each cluster execute the federated averaging algorithm to obtain
θ̄_q = Σ_{u_i ∈ C_q} (n_i / n_q) θ_i, q ∈ {1, ..., Q},
where n_i denotes the amount of data of user u_i and n_q = Σ_{u_i ∈ C_q} n_i denotes the total amount of data of the users belonging to cluster C_q. Then all users are traversed: if u_i ∈ C_q, the user model parameter value θ_i of each user u_i belonging to cluster C_q is replaced by θ̄_q and prepared to be returned to the corresponding user.
More preferably, in step S6, the server downloads the new embedded representation h_i^out obtained in step S4 and the averaged user model parameters θ̄_q obtained in step S5 to the corresponding users. Each user splices its hidden state h_i with the new embedded representation h_i^out to obtain [h_i; h_i^out] as the input of the decoder and outputs the prediction result. By computing the gradient of the loss obtained with the cross-entropy loss function, the user obtains the gradient of the embedded representation, ∇h_i^out, and returns it to the server. The server continues to propagate the gradients of the embedded representations backward to obtain the gradient ∇θ_GL of θ_GL and the gradient ∇θ_GCN of θ_GCN; the server accumulates the ∇θ_GL and ∇θ_GCN contributions and updates the parameters θ_GL and θ_GCN.
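The user-side part of S6, splicing and decoding, then returning the gradient of the embedding, can be sketched as follows; the single linear softmax decoder is an illustrative assumption, since the patent does not fix the decoder's internals:

```python
import numpy as np

def softmax(z):
    z = z - z.max()           # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def splice_decode(h, e_out, W_dec, y_true):
    """Splice [h_i; h_i_out], decode with an (assumed) linear softmax decoder,
    and return the cross-entropy loss plus the gradient w.r.t. the embedding,
    which the user sends back to the server in S6."""
    z = np.concatenate([h, e_out])   # splice hidden state and embedding
    p = softmax(W_dec @ z)           # predicted location distribution
    loss = -np.log(p[y_true])        # cross-entropy with a one-hot target
    dp = p.copy()
    dp[y_true] -= 1.0                # d loss / d logits
    dz = W_dec.T @ dp                # backprop through the linear decoder
    grad_e = dz[h.shape[0]:]         # slice matching the embedding part
    return loss, grad_e
```

Only `grad_e` travels back to the server, which continues backpropagation through the GCN and the graph learning layer; the raw data never leaves the user.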
Advantageous effects
The method adopts GFedHMP, a clustering-based graph federated learning framework. Federated learning protects user privacy; through the constructed similarity-graph structure, the graph convolutional network lets a user learn knowledge from other users that benefits model training and prediction, alleviating to some extent the shortage of training data caused by label scarcity; and the graph clustering algorithm lets more similar users run the federated averaging algorithm together, mitigating inter-user heterogeneity.
Drawings
FIG. 1 is a flow chart of the present application;
FIG. 2 is a flow chart of a user's local training;
FIG. 3 is a flow chart of the server model training jointly with the users;
FIG. 4 is an information flow chart of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described here merely illustrate the invention and are not intended to limit it. The user location prediction method based on clustering graph federated learning provided by this embodiment is shown schematically in FIGS. 1-4 and mainly comprises the following steps.
In step S1, the user trains locally using the sequence prediction model, as shown in FIG. 2. This step specifically comprises:
S11, first initialize the model parameters θ_i and the embedded representation e_i of the user; set the iteration counter r_a = 1 and specify the number of local training iterations T_a.
S12, judge whether the current round has reached the specified number T_a. If r_a ≤ T_a, continue; otherwise stop, yielding the user model parameters θ_i.
S13, feed the original historical trajectory data x_i into the encoder to obtain the hidden state of the data, i.e. h_i = f_enc(x_i; θ_i^enc).
S14, splice the hidden state h_i and the embedded representation e_i as the input of the decoder and obtain the prediction result, i.e. ŷ_i = f_dec([h_i; e_i]; θ_i^dec).
S15, compute the loss between the prediction ŷ_i and the true value y_i with a cross-entropy loss function, i.e. L_i = CrossEntropy(ŷ_i, y_i).
S16, use L_i to compute the gradient and update the model, i.e. θ_i ← θ_i - η_a ∇_{θ_i} L_i, where η_a is the learning rate of the user's local training.
S17, increase the iteration counter r_a by 1 and return to S12.
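The loop S11-S17 can be sketched with a toy one-layer tanh encoder and linear softmax decoder; all layer shapes, the single-sample setting, and the fixed embedding are illustrative assumptions rather than the patent's actual sequence model:

```python
import numpy as np

def local_train(x, y_true, e, T_a=30, eta_a=0.1, n_locations=4, seed=0):
    """Toy sketch of S11-S17: T_a rounds of local SGD on one sample.

    x      : (D_in,) stand-in features for the raw trajectory sequence
    y_true : index of the true next location
    e      : (D_emb,) the user's embedded representation (held fixed here)
    """
    rng = np.random.default_rng(seed)
    D_in, D_emb, D_hid = x.shape[0], e.shape[0], 8
    W_enc = rng.normal(scale=0.1, size=(D_hid, D_in))                 # encoder params
    W_dec = rng.normal(scale=0.1, size=(n_locations, D_hid + D_emb))  # decoder params
    losses = []
    for r_a in range(1, T_a + 1):          # S12: loop while r_a <= T_a
        h = np.tanh(W_enc @ x)             # S13: hidden state of the data
        z = np.concatenate([h, e])         # S14: splice [h_i; e_i]
        logits = W_dec @ z
        p = np.exp(logits - logits.max())
        p /= p.sum()
        losses.append(-np.log(p[y_true]))  # S15: cross-entropy loss
        dp = p.copy()
        dp[y_true] -= 1.0                  # S16: gradients by backprop
        dW_dec = np.outer(dp, z)
        dh = (W_dec.T @ dp)[:D_hid] * (1.0 - h ** 2)   # tanh backprop
        dW_enc = np.outer(dh, x)
        W_dec -= eta_a * dW_dec            # SGD step with learning rate eta_a
        W_enc -= eta_a * dW_enc
    return W_enc, W_dec, h, losses         # S17 is the loop increment
```

After the loop, the parameters (the stand-ins for θ_i) and the last hidden state h are what the user would upload in S2.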
In step S2, the raw sequence data x_i passes through the user's local encoder f_enc to obtain the hidden state h_i. The user uploads the model parameters θ_i and the hidden state h_i of the data to the server.
The server model is trained jointly with the users to produce a generalized model and a new embedded representation for each user. This covers steps S3-S6; the specific training process is shown in FIG. 3:
(1) First initialize the model parameters θ_GL of the server-side graph-structure learning layer (graph learning layer) and the model parameters θ_GCN of the server-side graph convolutional network layer; set the iteration counter r_s = 1 and specify the number of iterations T_s for which the server model trains jointly with the users.
(2) Judge whether the current round has reached the specified number T_s. If r_s ≤ T_s, continue; otherwise stop, yielding the final user model parameters θ_i and the user embedding matrix H_out.
(3) Each user performs model training locally, i.e. completes step S1. After training, the user uploads the model parameters θ_i and the hidden state h_i of the data to the server.
(4) The server uses the users' hidden states h_i to learn the similarity-graph structure, obtaining the adjacency matrix A representing the user relationships. (S3)
(5) Using the hidden state of each user as input, the graph convolutional neural network produces the embedding matrix H_out of the users. Through this step each user can learn knowledge from the other users. (S4)
(6) The server executes the graph clustering module with the adjacency matrix A obtained in step (4) as input, dividing the users into Q clusters C_1, ..., C_Q. The users in each cluster execute the federated averaging algorithm to obtain θ̄_q = Σ_{u_i ∈ C_q} (n_i / n_q) θ_i, q ∈ {1, ..., Q}, where n_i denotes the amount of data of user u_i and n_q denotes the total amount of data of the users belonging to cluster C_q. Then all users are traversed: if u_i ∈ C_q, the user model parameter value θ_i of user u_i is replaced by θ̄_q and prepared to be returned to the corresponding user. This avoids averaging the parameters over all users directly and mitigates the user-heterogeneity problem to some extent. (Note that (5) and (6) may be performed in parallel.) (S5)
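Step (6) boils down to a data-weighted average inside each cluster. A sketch, with the Louvain communities supplied externally (e.g. by a community-detection library run on A):

```python
import numpy as np

def cluster_fedavg(params, n_data, clusters):
    """Per-cluster federated averaging (step (6) / S5).

    params   : dict user_id -> parameter vector theta_i (np.ndarray)
    n_data   : dict user_id -> amount of local data n_i
    clusters : iterable of collections of user ids (e.g. Louvain communities)
    Returns a dict mapping each user id to its cluster's averaged parameters.
    """
    out = {}
    for C_q in clusters:
        n_q = sum(n_data[u] for u in C_q)                        # total cluster data
        theta_bar = sum((n_data[u] / n_q) * params[u] for u in C_q)
        for u in C_q:
            out[u] = theta_bar       # replace each theta_i with theta_bar_q
    return out
```

Users in different clusters never average with each other, which is exactly how the framework keeps dissimilar movement patterns from diluting one another's models.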
(7) The server downloads the user embeddings H_out to the corresponding users. Each user splices its hidden state h_i with its embedded representation h_i^out as the input of the decoder and outputs the prediction result; from the loss obtained with the cross-entropy loss function, the user computes the gradient of the embedded representation, ∇h_i^out, and returns it to the server.
(8) The server uses the gradients of the embedded representations ∇h_i^out to continue backward propagation, obtaining the gradient ∇θ_GL of θ_GL and the gradient ∇θ_GCN of θ_GCN.
(9) The gradients of the respective model parameters are accumulated over the users, i.e. ∇θ_GL = Σ_i ∇θ_GL^(i) and ∇θ_GCN = Σ_i ∇θ_GCN^(i).
(10) The server model parameters are updated, i.e. θ_GL ← θ_GL - η_s ∇θ_GL and θ_GCN ← θ_GCN - η_s ∇θ_GCN, where η_s is the learning rate of the server model training. (S6)
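Steps (9)-(10) are a plain accumulate-then-descend update, which can be sketched as:

```python
import numpy as np

def server_update(theta_GL, theta_GCN, grads_GL, grads_GCN, eta_s=0.01):
    """Steps (9)-(10): sum the per-user gradients obtained in (8) and take
    one gradient-descent step on the server-side parameters."""
    g_GL = sum(grads_GL)                 # (9) accumulate gradients over users
    g_GCN = sum(grads_GCN)
    new_GL = theta_GL - eta_s * g_GL     # (10) theta <- theta - eta_s * grad
    new_GCN = theta_GCN - eta_s * g_GCN
    return new_GL, new_GCN
```

One such update per joint round r_s refines both the graph learning layer and the GCN without the server ever touching raw trajectories.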
Steps S3-S6 mainly illustrate how the server side performs model training jointly with the users without acquiring the users' raw data.
Finally, using the trained generalized model and the embedded representation, the user can input the historical data to be predicted into the model and output the final location prediction result.
The above description is only a preferred embodiment of the present invention and is not intended to limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (7)

1. A user location prediction framework based on clustering graph federated learning, characterized by comprising the following steps:
S1, each user trains locally using a sequence prediction model;
S2, the user uploads the model parameters and the hidden state of the data produced by the encoder to a server;
S3, the server learns a similarity-graph structure from the hidden states;
S4, the server obtains an embedded representation of each user through a graph convolutional neural network;
S5, the server divides the users into several clusters with a clustering method, and the users within each cluster execute the federated averaging algorithm on the model parameters;
S6, the server downloads the embedded representations and the averaged model parameters to each user; each user splices its hidden state with its embedded representation as the input of a decoder, outputs a prediction result, and the server model parameters are updated.
2. The user location prediction framework based on clustering graph federated learning of claim 1, wherein in step S1 each user trains a sequence prediction model with an encoder-decoder structure, specifically comprising the following steps:
S11, initialize the parameters θ_i of the sequence prediction model and the embedded representation e_i of the user; set the iteration counter r_a = 1 and specify the number of local training iterations T_a;
S12, judge whether the current round has reached the iteration number T_a: if r_a ≤ T_a, continue; otherwise stop, yielding the user model parameters θ_i;
S13, feed the original historical trajectory data x_i into the encoder to obtain the hidden state h_i of the data;
S14, splice the hidden state h_i and the initial embedded representation e_i as the input of the decoder, obtaining the prediction result ŷ_i;
S15, compute the loss between the prediction ŷ_i and the true value y_i with a cross-entropy loss function;
S16, compute the gradient from the loss and update the model, η_a being the learning rate of the user's local training;
S17, increase the iteration counter r_a by 1 and return to S12.
3. The user location prediction framework based on clustering graph federated learning of claim 1, wherein in step S2 the raw sequence data x_i passes through the encoder to obtain the hidden state h_i = f_enc(x_i; θ_i^enc), f_enc being the encoder part of the user's local model; the parameters θ_i of the user's local model and the hidden state h_i of the data are uploaded to the server, where θ_i^enc are the parameters of the encoder part of the user's local model and θ_i^dec are the parameters of the decoder part.
4. The user location prediction framework based on clustering graph federated learning of claim 1, wherein in step S3 the server uses the hidden states uploaded by the users to learn the similarity-graph structure through a graph learning layer, as follows:
(1) M = tanh(αHθ_GL)
(2) A = ReLU(tanh(α(MMᵀ)))
(3) idx = argtopk(A[i,:]) (i = 1, 2, 3, ..., N); A[i, -idx] = 0
where M is the transition matrix obtained by transforming H; i indexes the users and N is the total number of users; H ∈ R^(N×D_hid) is the hidden-state matrix formed from the hidden states of the data uploaded by the users, D_hid being the feature dimension of the hidden states; θ_GL denotes the model parameters of the server-side graph-structure learning layer; α is a hyper-parameter controlling the saturation rate of the activation function; and argtopk(·) returns the indices idx of the k largest values of a vector; these steps yield a similarity-graph structure representing the user relationships, expressed by the adjacency matrix A; step (3) is a strategy for sparsifying the adjacency matrix: for each user, the k most similar users are selected as its neighbors, keeping the edge weights of connected users and setting the edge weights of unconnected users to 0.
5. The user location prediction framework based on clustering graph federated learning of claim 1, wherein in step S4 the server obtains the embedded representation of each user through the graph convolutional neural network, the output of each layer being H^(l+1) = σ(D̃^(-1/2) Ã D̃^(-1/2) H^(l) W^(l)), where Ã = A + I_N, I_N is the identity matrix of size N, and D̃ is the degree matrix of Ã, i.e. D̃_ii = Σ_j Ã_ij; H^(0) = H serves as the initial-layer input of the graph convolutional network, namely the hidden-state matrix formed from the hidden states uploaded to the server by the users; W^(l) denotes the weights of the l-th layer; after multiple layers of convolution the network outputs H_out, whose rows h_i^out are the users' new embedded representations.
6. The user location prediction framework based on clustering graph federated learning of claim 4, wherein the server executes the Louvain algorithm with the user adjacency matrix A output in step S3 as the input of the graph clustering module, as follows: the users are divided into Q clusters C_1, ..., C_Q, and the users in each cluster execute the federated averaging algorithm to obtain θ̄_q = Σ_{u_i ∈ C_q} (n_i / n_q) θ_i, q ∈ {1, ..., Q}, where n_i denotes the amount of data of user u_i and n_q denotes the total amount of data of the users belonging to cluster C_q; then all users are traversed: if u_i ∈ C_q, the user model parameter value θ_i of each user u_i belonging to cluster C_q is replaced by θ̄_q and prepared to be returned to the corresponding user.
7. The user location prediction framework based on clustering graph federated learning of claim 4, wherein in step S6 the server downloads the new embedded representation h_i^out obtained in step S4 and the averaged user model parameters θ̄_q obtained in step S5 to the corresponding users; each user splices its hidden state h_i with the new embedded representation h_i^out to obtain [h_i; h_i^out] as the input of the decoder and outputs the prediction result; by computing the gradient of the loss obtained with the cross-entropy loss function, the user obtains the gradient of the embedded representation ∇h_i^out and returns it to the server; the server continues to propagate the gradients of the embedded representations backward to obtain the gradient ∇θ_GL of θ_GL and the gradient ∇θ_GCN of θ_GCN; the server accumulates the ∇θ_GL and ∇θ_GCN contributions and updates the parameters θ_GL and θ_GCN.
CN202111397483.XA 2021-11-23 2021-11-23 User location prediction framework based on clustering for graph federated learning Pending CN114077901A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111397483.XA CN (en) 2021-11-23 2021-11-23 User location prediction framework based on clustering for graph federated learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111397483.XA CN (en) 2021-11-23 2021-11-23 User location prediction framework based on clustering for graph federated learning

Publications (1)

Publication Number Publication Date
CN114077901A true CN114077901A (en) 2022-02-22

Family

ID=80284127

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111397483.XA Pending CN114077901A (en) 2021-11-23 2021-11-23 User position prediction framework based on clustering and used for graph federated learning

Country Status (1)

Country Link
CN (1) CN114077901A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190130305A1 (en) * 2017-10-27 2019-05-02 Intuit Inc. Methods, systems, and computer program product for implementing an intelligent system with dynamic configurability
CN111340247A (en) * 2020-02-12 2020-06-26 深圳前海微众银行股份有限公司 Longitudinal federated learning system optimization method, device and readable storage medium
WO2021189906A1 (en) * 2020-10-20 2021-09-30 平安科技(深圳)有限公司 Target detection method and apparatus based on federated learning, and device and storage medium
CN112364943A (en) * 2020-12-10 2021-02-12 广西师范大学 Federated prediction method based on federated learning
CN117371555A (en) * 2023-10-25 2024-01-09 华东师范大学 Federated learning model training method based on domain generalization technology and an unsupervised clustering algorithm

Non-Patent Citations (3)

Title
张啸; 何小虎: "Design of a Remote Monitoring System for Central Air Conditioning", 可编程控制器与工厂自动化 (Programmable Controller & Factory Automation), no. 08, 15 August 2011 *
樊敏; 王晓锋; 孟小峰: "Research on an Adaptive ECG Classification Algorithm Based on Wearable Devices", 计算机科学 (Computer Science), no. 12, 16 August 2019 *
潘如晟; 韩东明; 潘嘉铖; 周舒悦; 魏雅婷; 梅鸿辉; 陈为: "Visualization of Federated Learning: Challenges and Framework", 计算机辅助设计与图形学学报 (Journal of Computer-Aided Design & Computer Graphics), no. 04, 14 January 2021 *

Cited By (5)

Publication number Priority date Publication date Assignee Title
CN114564752A (en) * 2022-04-28 2022-05-31 蓝象智联(杭州)科技有限公司 Blacklist propagation method based on graph federation
CN115114980A (en) * 2022-06-28 2022-09-27 支付宝(杭州)信息技术有限公司 User clustering method and device for joint training of user clustering model
CN116204599A (en) * 2023-05-06 2023-06-02 成都三合力通科技有限公司 User information analysis system and method based on federated learning
CN116204599B (en) * 2023-05-06 2023-10-20 成都三合力通科技有限公司 User information analysis system and method based on federated learning
CN116502709A (en) * 2023-06-26 2023-07-28 浙江大学滨江研究院 Heterogeneous federated learning method and device

Similar Documents

Publication Publication Date Title
CN114077901A (en) User position prediction framework based on clustering and used for graph federated learning
Zhang et al. Self-distillation: Towards efficient and compact neural networks
Chen et al. Distributed deep learning model for intelligent video surveillance systems with edge computing
CN113298191B (en) User behavior identification method based on personalized semi-supervised online federated learning
CN111091247A (en) Power load prediction method and device based on deep neural network model fusion
CN105701482A (en) Face recognition algorithm configuration based on unbalance tag information fusion
CN112927266B (en) Weak supervision time domain action positioning method and system based on uncertainty guide training
CN111723930A (en) System applying crowd-sourcing supervised learning method
CN115311605B (en) Semi-supervised video classification method and system based on neighbor consistency and contrast learning
CN117034100A (en) Self-adaptive graph classification method, system, equipment and medium based on hierarchical pooling architecture
CN114880538A (en) Attribute graph community detection method based on self-supervision
CN114359656A (en) Melanoma image identification method based on self-supervision contrast learning and storage device
CN114463340A (en) Edge information guided agile remote sensing image semantic segmentation method
Qiao et al. A framework for multi-prototype based federated learning: Towards the edge intelligence
CN112668633B (en) Adaptive graph migration learning method based on fine granularity field
CN113850012A (en) Data processing model generation method, device, medium and electronic equipment
CN117036760A (en) Multi-view clustering model implementation method based on graph comparison learning
CN116170746B (en) Ultra-wideband indoor positioning method based on depth attention mechanism and geometric information
CN116227578A (en) Unsupervised domain adaptation method for passive domain data
CN116151409A (en) Urban daily water demand prediction method based on neural network
CN109871835B (en) Face recognition method based on mutual exclusion regularization technology
CN117194966A (en) Training method and related device for object classification model
Sheng et al. Weakly supervised coarse-to-fine learning for human action segmentation in HCI videos
CN114881308A (en) Internet vehicle speed prediction method based on meta-learning
CN115131605A (en) Structure perception graph comparison learning method based on self-adaptive sub-graph

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination