CN111428885B - User indexing method in federated learning and federated learning device - Google Patents
- Publication number: CN111428885B (application CN202010244824.9A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G — Physics
- G06 — Computing; Calculating or Counting
- G06N — Computing arrangements based on specific computational models
- G06N20/00 — Machine learning
Abstract
The invention discloses a user indexing method in federated learning and a federated learning device. The method comprises the following steps: obtaining historical feedback data generated after a plurality of users received federated learning invitations, and obtaining user portrait data of each of the users; determining the number of times each of the users participated in federated learning modeling within the last time window; calculating an index value for each of the users according to the respective feedback data, user portrait data and number of times of participation in federated learning modeling within the last time window; and, according to the respective index values, inviting the users who meet a preset condition to participate in federated learning. In this way, the users who meet the preset condition can be selected optimally for federated learning, the fitness between the selected users and the federated learning is improved, repeated attempts to contact users to participate are no longer needed, and the interaction efficiency between the federated learning participants and the federated learning device is improved.
Description
Technical Field
The invention relates to the technical field of financial technology (Fintech) and the technical field of artificial intelligence, and in particular to a user indexing method in federated learning and a federated learning device.
Background
As a novel machine learning paradigm, federated learning protects user privacy data to the maximum extent through distributed training and encryption techniques, so as to improve users' trust in artificial intelligence technology. Under the federated learning mechanism, each participant contributes an encrypted data model to a federation, the participants jointly train a federated model, and the model is opened for every participant to use. During federated learning training, improving the interaction efficiency between the participants and the federated learning device is of great significance for improving the model training efficiency.
In current federated learning, the participants are selected by the federated learning device using a random method. For example, the federated learning device uses a random algorithm to index a corresponding number of users, from among the users satisfying the constraint conditions (for example, the terminal device used must be in a charging state and connected to a non-metered network link such as WiFi), to participate in the federated learning. For instance, if a round of federated learning requires 8 participants, the federated learning device can use the random algorithm to randomly index 8 users satisfying the constraints as the participants.
However, when the federated learning device indexes the participating users by a random method, no optimization is considered; in other words, the randomly indexed users have not been optimally screened by the federated learning device. The indexed users are therefore more likely to refuse to participate (that is, the fitness between the indexed users and the federated learning is low), and the federated learning device easily has to make repeated attempts to contact users to participate, which reduces the interaction efficiency between the federated learning participants and the federated learning device.
Disclosure of Invention
The invention provides a user indexing method in federated learning and a federated learning device, which are used to solve the problem in the prior art that the interaction efficiency between federated learning participants and the federated learning device is low.
In order to achieve the above object, in a first aspect, the present invention provides a method for indexing a user in federated learning, including:
obtaining feedback data of a plurality of users after receiving a federal learning invitation historically, and obtaining user portrait data of the users respectively;
determining a number of times that each of the plurality of users participated in federated learning modeling within a last time window;
calculating respective index values of the plurality of users according to respective feedback data of the plurality of users, the user portrait data and the number of times that the plurality of users participate in federal learning modeling in a last time window; the index value is used for representing the fitness value of each of the plurality of users participating in a new round of federal learning;
and inviting users meeting preset conditions in the plurality of users to participate in federal learning according to the respective index values of the plurality of users.
In one possible design, calculating the index value of each of the plurality of users according to the feedback data of each of the plurality of users, the user portrait data, and the number of times each of the plurality of users participated in federated learning modeling within the previous time window includes:
determining a user portrait confidence of each of the plurality of users according to the user portrait data of each of the plurality of users;
and respectively modeling according to the respective feedback data of the users, the user portrait confidence and the times of the users participating in the federal learning modeling in the last time window, and calculating to obtain respective index values of the users.
In one possible design, the modeling is performed according to the feedback data of each of the plurality of users, the confidence of the portrait of the user, and the number of times that each of the plurality of users participates in the federal learning modeling in the previous time window, and the index value of each of the plurality of users is obtained through calculation, including:
modeling according to the feedback data, and predicting a first probability that each of the plurality of users participates in a new round of federal learning invitation at the current time period; the value corresponding to any time point in a time window of a new round of federal learning of the first probability is positively correlated with the time-sharing responsiveness of the users at the time point; the time-sharing responsiveness is used for representing the speed of receiving the federal learning invitation fed back by each of the plurality of users;
modeling according to the frequency of the users participating in the federal learning modeling in the last time window respectively, and predicting the experience loss of the users respectively; wherein the experience loss is used for representing the satisfaction degree of the plurality of users for accepting participation in a new round of federal learning invitation behavior in the current time period;
modeling according to the user portrait confidence, and predicting the frequency of interaction between the federal learning server and the users in a new round of federal learning;
and calculating the index values of the users according to the first probability, the experience loss and the frequency.
In one possible design, calculating respective index values for the plurality of users based on the first probability, the loss of experience, and the frequency includes:
calculating corresponding average values among the first probability, the experience loss and the frequency; taking the average value as the index value of each of the plurality of users; or,
and according to a preset strategy, taking the maximum numerical value of the first probability, the experience loss and the frequency as the respective index value of the plurality of users.
In one possible design, taking a maximum value of the first probability, the experience loss, and the frequency as an index value of each of the plurality of users according to a preset policy includes:
comparing the average value with a preset threshold value, and determining whether the average value is greater than or equal to the preset threshold value;
and if the average value is determined to be greater than or equal to the preset threshold value, taking the maximum value of the first probability, the experience loss and the frequency as the respective index value of the plurality of users.
In one possible design, inviting users meeting a preset condition from the plurality of users to participate in federal learning according to respective index values of the plurality of users includes:
if it is determined that users whose frequency of being invited to federated learning is lower than a preset threshold are to be preferentially mobilized to participate in the new round of federated learning, screening out, from the plurality of users, N users whose index values are smaller than a first preset index value, and inviting the N users to participate in the new round of federated learning;
if it is determined that users whose frequency of being invited to federated learning is higher than or equal to the preset threshold are to be preferentially mobilized to participate in the new round of federated learning, screening out, from the plurality of users, N users whose index values are larger than a second preset index value, and inviting the N users to participate in the new round of federated learning.
In one possible design, after inviting the N users to participate in a new round of federal learning, the method further comprises:
and receiving feedback data of the N users, updating the probability of the N users participating in a new round of federal learning invitation, the experience loss of the N users and the frequency of interaction between the federal learning server and the N users, and calculating the fitness value of the N users participating in the next round of federal learning respectively.
In a second aspect, the present invention provides a federal learning device, including:
an acquisition unit, configured to acquire feedback data generated after a plurality of users historically received federated learning invitations, and to acquire user portrait data of each of the users;
the processing unit is used for determining the number of times that the plurality of users respectively participate in the federal learning modeling in the last time window; calculating respective index values of the plurality of users according to respective feedback data of the plurality of users, the user portrait data and the number of times that the plurality of users participate in federal learning modeling in a last time window; the index value is used for representing the fitness value of each of the plurality of users participating in a new round of federal learning;
and the inviting unit is used for inviting the users meeting the preset conditions in the plurality of users to participate in the federal study according to the respective index values of the plurality of users.
In one possible design, the processing unit is specifically configured to:
determining a user portrait confidence of each of the plurality of users according to the user portrait data of each of the plurality of users;
and respectively modeling according to the respective feedback data of the users, the user portrait confidence and the times of the users participating in the federal learning modeling in the last time window, and calculating to obtain respective index values of the users.
In one possible design, the processing unit is specifically configured to:
modeling according to the feedback data, and predicting a first probability that each of the plurality of users participates in a new round of federal learning invitation at the current time period; the value corresponding to any time point in a time window of a new round of federal learning of the first probability is positively correlated with the time-sharing responsiveness of the users at the time point; the time-sharing responsiveness is used for representing the speed of receiving the federal learning invitation fed back by each of the plurality of users;
modeling according to the frequency of the users participating in the federal learning modeling in the last time window respectively, and predicting the experience loss of the users respectively; wherein the experience loss is used for representing the satisfaction degree of the plurality of users for accepting participation in a new round of federal learning invitation behavior in the current time period;
modeling according to the user portrait confidence, and predicting the frequency of interaction between the federal learning server and the users in a new round of federal learning;
and calculating the index values of the users according to the first probability, the experience loss and the frequency.
In one possible design, the processing unit is specifically configured to:
calculating corresponding average values among the first probability, the experience loss and the frequency; taking the average value as the index value of each of the plurality of users; or,
and according to a preset strategy, taking the maximum numerical value of the first probability, the experience loss and the frequency as the respective index value of the plurality of users.
In one possible design, the processing unit is specifically configured to:
comparing the average value with a preset threshold value, and determining whether the average value is greater than or equal to the preset threshold value;
and if the average value is determined to be greater than or equal to the preset threshold value, taking the maximum value of the first probability, the experience loss and the frequency as the respective index value of the plurality of users.
In one possible design, the invitation unit is specifically configured to:
if it is determined that users whose frequency of being invited to federated learning is lower than a preset threshold are to be preferentially mobilized to participate in the new round of federated learning, screening out, from the plurality of users, N users whose index values are smaller than a first preset index value, and inviting the N users to participate in the new round of federated learning;
if it is determined that users whose frequency of being invited to federated learning is higher than or equal to the preset threshold are to be preferentially mobilized to participate in the new round of federated learning, screening out, from the plurality of users, N users whose index values are larger than a second preset index value, and inviting the N users to participate in the new round of federated learning.
In one possible design, the processing unit is further configured to:
and receiving feedback data of the N users, updating the probability of the N users participating in a new round of federal learning invitation, the experience loss of the N users and the frequency of interaction between the federal learning server and the N users, and calculating the fitness value of the N users participating in the next round of federal learning respectively.
In a third aspect, the present invention provides a federated learning device, including: at least one processor and a memory; wherein the memory stores one or more computer programs that, when executed by the at least one processor, enable the federated learning device to perform the method of the first aspect or any one of the possible designs of the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium storing computer instructions that, when executed on a computer, enable the computer to perform the method of the first aspect or any one of the possible designs of the first aspect.
The invention has the following beneficial effects:
compared with the prior art, the federated learning device calculates the index value of each of the plurality of users according to the feedback data generated after the users historically received federated learning invitations, the user portrait data of each of the users, and the number of times each of the users participated in federated learning modeling within the last time window. The index value of each user can therefore be related to the probability that the user accepts a federated learning invitation in different time periods, the probability that the user participates in a new round of federated learning, and the user's satisfaction with participating in federated learning modeling within the last time window. This improves the fitness between the federated learning and the users who meet the preset condition and are selected by the federated learning device according to the index values, reduces the possibility that the selected users refuse to participate, removes the need for repeated attempts to contact users to participate, and thereby effectively improves the interaction efficiency between the federated learning participants and the federated learning device.
Drawings
Fig. 1 is a schematic flowchart of an indexing method for a user in federated learning according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a federated learning device according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a federated learning device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The shapes and sizes of the various elements in the drawings are not to scale and are merely intended to illustrate the invention.
In the embodiments of the present invention, "first" and "second" are used to distinguish different objects, not to describe a specific order. Furthermore, the term "comprises" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those steps or elements listed, but may alternatively include other steps or elements not listed or inherent to such process, method, article, or apparatus.
In the embodiment of the present invention, "and/or" is only an association relation describing associated objects, and indicates that three kinds of relations may exist; for example, A and/or B may indicate that A exists alone, that A and B exist simultaneously, or that B exists alone. In addition, the character "/" in the embodiment of the present invention generally indicates that the preceding and following related objects are in an "or" relationship.
In the embodiment of the present invention, "a plurality" may mean at least two, for example, two, three, or more, and the embodiment of the present invention is not limited.
As described above, the existing federated learning device indexes the users participating in federated learning by a random method, without optimization. The fitness between the indexed users and the federated learning is therefore likely to be low, the federated learning device has to make repeated attempts to contact users to participate, and the interaction efficiency between the federated learning participants and the federated learning device is reduced. To solve this problem, the embodiment of the invention provides a user indexing method in federated learning, so as to improve the interaction efficiency between the federated learning participants and the federated learning device.
The following describes a specific process of indexing users participating in federal learning by a federal learning apparatus in an embodiment of the present invention.
Fig. 1 is a schematic flowchart illustrating an indexing method for a user in federated learning according to an embodiment of the present invention. Wherein the method may be applied to a federal learning device. As shown in fig. 1, the method flow includes:
s101, obtaining feedback data after a plurality of users accept federal learning invitation, and obtaining user portrait data of the users.
Generally, owing to factors such as different working or rest hours, different users use their terminals during different time periods. The probability that a user satisfies the basic conditions for participating in federated learning (for example, the terminal used must be in a charging state and connected to a non-metered network link such as WiFi) therefore also differs across time periods; in other words, the probability distribution over time periods of the users accepting a federated learning invitation differs from user to user.
In the embodiment of the invention, the federal learning device can learn the probability distribution of a plurality of users accepting the federal learning invitation in different time periods by acquiring the feedback data of the plurality of users accepting the federal learning invitation historically.
For example, take user a among the plurality of users. The federated learning device invited user a to participate in the previous round of federated learning; after user a accepted that invitation, user a may feed data back to the federated learning device to inform it of the time at which the invitation was accepted. For example, the feedback data of user a may be expressed as: user a accepted the invitation for the previous round of federated learning at 14:05. Illustratively, if there are 20 rounds of historical federated learning, user a participated in rounds 11 and 16 between 8:00 and 9:00, in rounds 12 to 15 between 11:00 and 12:00, in rounds 1 to 10 and round 20 (i.e. the previous round) between 14:00 and 15:00, and in rounds 17 to 19 between 16:00 and 17:00. Then the probability that the user accepts a federated learning invitation is 10% between 8:00 and 9:00, 20% between 11:00 and 12:00, 55% between 14:00 and 15:00, 15% between 16:00 and 17:00, and 0 for the remaining time.
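To make this worked example concrete, the following sketch (in Python, which the patent itself does not prescribe) estimates per-period acceptance probabilities from historical acceptance records; the record format and the hourly bucketing are illustrative assumptions.

```python
from collections import Counter
from datetime import datetime

def acceptance_distribution(feedback_records, total_rounds):
    """Estimate, per hourly period, the probability that a user accepts a
    federated learning invitation, based on historical acceptance timestamps."""
    accepted_per_hour = Counter(ts.hour for _, ts in feedback_records)
    # Periods with no recorded acceptance implicitly get probability 0.
    return {hour: count / total_rounds for hour, count in accepted_per_hour.items()}

# Worked example from the description: 20 historical rounds for user a.
records = (
    [(r, datetime(2020, 1, r, 8, 30)) for r in (11, 16)]                     # 8:00-9:00
    + [(r, datetime(2020, 1, r, 11, 30)) for r in range(12, 16)]             # 11:00-12:00
    + [(r, datetime(2020, 1, r, 14, 5)) for r in list(range(1, 11)) + [20]]  # 14:00-15:00
    + [(r, datetime(2020, 1, r, 16, 30)) for r in range(17, 20)]             # 16:00-17:00
)
print(acceptance_distribution(records, total_rounds=20))
# -> {8: 0.1, 11: 0.2, 14: 0.55, 16: 0.15}
```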
Certainly, the federal learning device may further obtain feedback data after the plurality of users reject the federal learning invitation historically, so as to know the probability distribution of the plurality of users rejecting the federal learning invitation at different time periods, and a specific implementation manner of the federal learning device may refer to a manner in which the federal learning device determines the probability distribution of the plurality of users accepting the federal learning invitation at different time periods, which is not described herein again.
Alternatively, the federal learning device may also acquire user portrait data for each of a plurality of users. For example, the federal learning device may acquire user portrait data stored in a server and/or acquire user portrait data stored in a terminal. The user profile data may include basic attributes (such as age, gender, region, etc.), social attributes (such as occupation, income, etc.), behavior attributes (such as shopping preference, viewing preference, etc.), psychological attributes (such as paying attention to cost performance, favoring nature, etc.), and the like of each of the plurality of users. The terminal may be any device that can participate in federal learning, such as a tablet, a mobile phone, a notebook computer, etc., and the embodiment of the present invention is not particularly limited.
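As an illustration of the kind of user portrait data referred to here, the sketch below groups the example attributes into a simple container; the field names and sample values are hypothetical, not a schema defined by the patent.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class UserPortrait:
    """Illustrative container for user portrait data; the attribute groups
    mirror the examples given in the description above."""
    basic: Dict[str, str] = field(default_factory=dict)          # age, gender, region, ...
    social: Dict[str, str] = field(default_factory=dict)         # occupation, income, ...
    behavior: Dict[str, str] = field(default_factory=dict)       # shopping/viewing preference, ...
    psychological: Dict[str, str] = field(default_factory=dict)  # attention to cost performance, ...

portrait = UserPortrait(
    basic={"age": "30-40", "gender": "F"},
    behavior={"viewing_preference": "finance videos"},
)
print(portrait.behavior)
```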
In the embodiment of the invention, by acquiring the user portrait data of each of the plurality of users, the federated learning device can analyze and determine the future requirements of each user, such as financial requirements, product requirements and entertainment requirements.
And S102, determining the frequency of participating in the federal learning modeling in the last time window of each user.
In general, a terminal may be used for other purposes besides participating in federal learning, such as playing videos, browsing web pages, etc. When a user uses a terminal to participate in the federal learning modeling, the running speed, the network loading speed and the like of the terminal are temporarily influenced by the transmission of the federal learning model parameters and are reduced, so that when the terminal is used to participate in the federal learning modeling and other applications (such as video applications and the like) are also run, the experience of the user using other applications of the terminal is reduced to a certain extent. Thus, for a certain time period within a certain time window, a user may not participate in federal learning modeling during that time period in order to enhance the experience of other applications using the terminal. The time window may be expressed as a time period required for one or more rounds of federal learning.
In the embodiment of the invention, by determining the number of times each of the plurality of users participated in federated learning modeling within the last time window, the federated learning device can learn each user's satisfaction with participating in federated learning modeling within that window. For example, take user b among the plurality of users. If federated learning modeling was run 10 times within the last time window and user b used the terminal to participate only once, the satisfaction of user b with participating in federated learning modeling within the last time window is 10%, i.e. relatively low.
It should be noted that, the execution sequence of S101 and S102 is not specifically limited in the embodiment of the present invention, for example, the federal learning device may execute S101 first and then S102, or execute S102 first and then S101, or execute S101 and S102 simultaneously.
S103, calculating respective index values of the users according to respective feedback data of the users, user portrait data and the number of times of the users participating in federal learning modeling in a last time window; the index value is used to characterize a fitness value for each of the plurality of users to participate in a new round of federal learning.
Alternatively, after obtaining the user portrait data of each of the plurality of users, the federal learning apparatus may determine the user portrait confidence of each of the plurality of users, that is, determine the probability that each of the plurality of users participates in a new round of federal learning. In other words, the federal learning device can determine future requirements of the users through user profile data of the users, and then determine probabilities of the users participating in a new round of federal learning according to the future requirements of the users.
Optionally, after the user portrait confidence of each of the plurality of users has been determined, modeling may be performed according to the feedback data of each user, the user portrait confidence, and the number of times each user participated in federated learning modeling within the last time window, and the index value of each user is obtained by calculation. In the embodiment of the invention, the federated learning device can learn, from each user's feedback data, the probability distribution over time periods of that user accepting a federated learning invitation; can determine, from each user's portrait confidence, the probability of that user participating in a new round of federated learning; and can learn, from the number of times each user participated in federated learning modeling within the last time window, that user's satisfaction with participating in the modeling. The calculated index value of each user can therefore be related to all three of these quantities, which helps improve the fitness between the federated learning and the users subsequently selected from the plurality of users by the federated learning device according to the index values. The phenomenon that the federated learning device has to make repeated attempts to contact users to participate can thus be avoided, and the interaction efficiency between the federated learning participants and the federated learning device can be effectively improved.
In a specific implementation process, the federated learning device performs modeling according to the feedback data of each of the plurality of users, which can be used to predict a first probability that each user accepts a new round of federated learning invitation in the current time period. The value of the first probability at any time point within the time window of the new round of federated learning is positively correlated with the user's time-sharing responsiveness at that time point; that is, the higher a user's time-sharing responsiveness at a time point, the higher the first probability that the user accepts the new round of federated learning invitation at that time point. The time-sharing responsiveness characterizes how quickly the user feeds back acceptance of a federated learning invitation. For example, take user a among the plurality of users: if the time-sharing responsiveness of user a at time point a (within time period a) of the new round's time window is 4, and the time-sharing responsiveness of user a at time point b (within time period b) is 2, the federated learning device may determine that the first probability that user a accepts the new round of federated learning invitation at time point a is greater than the first probability at time point b.
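A minimal sketch of one way to realise this positive correlation is given below; normalising the time-sharing responsiveness over the time window is only one illustrative choice, since the patent does not fix a concrete mapping.

```python
def first_probability(responsiveness_by_point):
    """Map time-sharing responsiveness at each time point of the new round's
    window to a first probability, preserving the positive correlation:
    higher responsiveness -> higher probability."""
    total = sum(responsiveness_by_point.values())
    if total == 0:
        return {point: 0.0 for point in responsiveness_by_point}
    return {point: value / total for point, value in responsiveness_by_point.items()}

# User a: responsiveness 4 at time point a and 2 at time point b, so the
# predicted first probability at a exceeds the one at b.
print(first_probability({"a": 4, "b": 2}))   # -> {'a': 0.666..., 'b': 0.333...}
```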
In the embodiment of the invention, by modeling according to the feedback data of each user and predicting the first probability that each user accepts a new round of federated learning invitation in the current time period, the federated learning device can learn the probability of each user participating in the new round of federated learning in different time periods. The device can thus avoid inviting a user to participate during a time period in which that user's probability of participating is low, which reduces the possibility that invited users refuse to participate, improves the fitness between the selected users and the federated learning, and effectively improves the interaction efficiency between the federated learning participants and the federated learning device.
In a specific implementation process, the federated learning device performs modeling according to the number of times each of the plurality of users participated in federated learning modeling within the last time window, which can be used to predict the experience loss of each user; the experience loss characterizes a user's satisfaction with accepting participation in a new round of federated learning invitation in the current time period. For example, take user a among the plurality of users: if federated learning modeling was run 20 times within the last time window and user a participated twice, in time period e and time period f respectively, the federated learning device may determine that the total experience loss of user a within the last time window is (20-2)/20, i.e. 90%, so the satisfaction is 10%; and the experience loss of user a in each of time period e and time period f within the last time window is (20-1)/20, i.e. 95%, so the satisfaction is 5%. By modeling according to the number of times user a participated in federated learning modeling within the last time window, the federated learning device can then predict the experience loss of user a in time period e, time period f and other time periods across a number of future time windows.
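The arithmetic in this example can be captured by a small helper; the sketch below simply restates the (rounds - participations) / rounds calculation used above.

```python
def experience_loss(participation_count, rounds_in_window):
    """Experience loss over a time window: the fraction of federated learning
    modeling rounds the user did not take part in; satisfaction is the
    complement."""
    satisfaction = participation_count / rounds_in_window
    loss = (rounds_in_window - participation_count) / rounds_in_window
    return loss, satisfaction

print(experience_loss(2, 20))   # user a over the whole window -> (0.9, 0.1)
print(experience_loss(1, 20))   # a single period with one participation -> (0.95, 0.05)
```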
In the embodiment of the invention, by modeling according to the number of times each user participated in federated learning modeling within the last time window and predicting each user's experience loss, the federated learning device can learn each user's satisfaction with participating in federated learning modeling in different time periods. The device can thus avoid inviting a user to participate in federated learning modeling during a time period in which that user's satisfaction is low, which reduces the possibility that invited users refuse to participate, improves the fitness between the selected users and the federated learning, and effectively improves the interaction efficiency between the federated learning participants and the federated learning device.
In a specific implementation process, the federated learning device performs modeling according to the user portrait confidence, which can be used to predict the frequency with which the federated learning server interacts with each of the plurality of users in the new round of federated learning. For example, take user b among the plurality of users. If the amount of user portrait data of user b is small or the portrait data diverges strongly (for example, the amount of behavior attribute data is far larger than the amount of psychological attribute data), the user portrait confidence of user b may be low, so the accuracy with which the federated learning device predicts whether user b will use the terminal to participate in a new round of federated learning in the future would not be high. Therefore, when the federated learning device models according to the portrait confidence of user b and determines that the probability of user b participating in the new round of federated learning is low, the device can determine that the federated learning server should interact with user b at a higher frequency, so as to improve the accuracy of its subsequent predictions of the probability that user b participates in a new round of federated learning.
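One possible way to turn a low user portrait confidence into a higher interaction frequency is sketched below; the linear rule and the cap of ten interactions per round are assumptions made purely for illustration.

```python
def interaction_frequency(portrait_confidence, max_interactions=10):
    """The lower the user portrait confidence, the more often the federated
    learning server is scheduled to interact with the user in the new round."""
    confidence = min(max(portrait_confidence, 0.0), 1.0)   # clamp to [0, 1]
    return round((1.0 - confidence) * max_interactions)

print(interaction_frequency(0.9))   # well-characterised user -> 1 interaction
print(interaction_frequency(0.2))   # sparse/divergent portrait (e.g. user b) -> 8 interactions
```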
In the embodiment of the invention, by modeling according to the user portrait confidence of each user and predicting the frequency with which the federated learning server interacts with each user in the new round of federated learning, the federated learning device can raise the portrait confidence of users whose portrait data is sparse or strongly divergent. This improves the accuracy with which the device predicts the probability of such users participating in future rounds of federated learning, and avoids an uneven distribution of federated learning opportunities among users caused by user portrait data factors.
Therefore, when the federated learning device calculates the index value of each user from the predicted first probability that the user accepts the new round of federated learning invitation in the current time period, the user's experience loss, and the frequency with which the federated learning server needs to interact with the user in the new round of federated learning, the relevance between the index values and the users can be improved. The fitness between the federated learning and the users selected from the plurality of users according to the index values can thus be improved, the phenomenon that the device has to make repeated attempts to contact users to participate can be avoided, the possibility that invited users refuse to participate is reduced, the interaction efficiency between the federated learning participants and the federated learning device can be effectively improved, and the communication burden of the federated learning device can be reduced.
Optionally, in a specific implementation process, the federated learning apparatus may calculate the respective index values of the multiple users in multiple ways according to a first probability that the multiple users respectively participate in a new round of federated learning invitation at a current time period, respective experience losses of the multiple users, and a frequency with which the federated learning server needs to interact with the multiple users in a new round of federated learning. Such as:
in the mode 1, the federal learning device can calculate the corresponding average value among the first probability of each of the plurality of users participating in a new round of federal learning invitation at the current time, the experience loss of each of the plurality of users and the frequency of interaction between the federal learning server and each of the plurality of users in the new round of federal learning, and then the calculated average value is used as the index value of each of the plurality of users.
In mode 1, taking this average as each user's index value balances the relationship between the index value and the three quantities: the first probability of accepting the new round of federated learning invitation in the current time period, the experience loss, and the frequency with which the federated learning server needs to interact with the user in the new round of federated learning. It avoids taking the lowest of the three quantities as the index value, which would weaken the relevance between the index values and the users. The fitness between the federated learning and the users selected from the plurality of users according to the index values can therefore be improved, the phenomenon that the federated learning device has to make repeated attempts to contact users to participate is avoided, and the interaction efficiency between the federated learning participants and the federated learning device can be effectively improved.
In mode 2, the federated learning device may, according to a preset policy, take the maximum of the first probability, the experience loss, and the frequency with which the federated learning server needs to interact with the user in the new round of federated learning as each user's index value. For example, the federated learning device may compare the calculated average with a preset threshold and determine whether the average is greater than or equal to the preset threshold; if so, the maximum of the first probability, the experience loss and the frequency is taken as each user's index value; otherwise, the average is taken as each user's index value.
In mode 2, when the calculated average of each user's first probability of accepting the new round of federated learning invitation in the current time period, experience loss, and frequency with which the federated learning server needs to interact with the user in the new round of federated learning is greater than or equal to the preset threshold, the maximum of the three is taken as the user's index value; when the average is smaller than the preset threshold, the average itself is taken as the index value. This improves the relevance between the index values and the users, so the fitness between the federated learning and the users selected according to the index values can be improved, the phenomenon that the federated learning device has to make repeated attempts to contact users to participate is avoided, and the interaction efficiency between the federated learning participants and the federated learning device can be effectively improved.
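A compact sketch of the two calculation modes described above follows; the preset threshold value and the assumption that the three quantities are already scaled to a common range are illustrative, since the patent does not fix either.

```python
def index_value(first_probability, experience_loss, frequency, threshold=None):
    """Mode 1 (threshold is None): the index value is the plain average of the
    three predicted quantities. Mode 2: if the average reaches the preset
    threshold, the maximum of the three is used instead; otherwise the
    average is kept."""
    values = (first_probability, experience_loss, frequency)
    average = sum(values) / len(values)
    if threshold is None or average < threshold:
        return average
    return max(values)

print(index_value(0.55, 0.90, 0.30))                  # mode 1 -> ~0.583
print(index_value(0.55, 0.90, 0.30, threshold=0.5))   # mode 2 -> 0.9
```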
And S104, inviting users meeting preset conditions in the plurality of users to participate in federal learning according to the respective index values of the plurality of users.
Optionally, after the federated learning device calculates the respective index values of the multiple users, the users meeting the preset conditions among the multiple users may be invited to participate in federated learning according to different preset conditions. Such as:
example 1, if the preset condition is that users whose frequency of invitation by federal learning is lower than the preset threshold value are preferentially mobilized to participate in a new round of federal learning, the federal learning device may screen out N users whose index values are smaller than a first preset index value from among the plurality of users, and invite the N users to participate in the new round of federal learning. For example, the federal learning device may sort the index values of the multiple users in a descending order, and invite the N users to participate in a new round of federal learning based on the sorting, or may sort the index values of the multiple users in a descending order, and invite the N users to participate in a new round of federal learning based on the sorting, where the index values of the N users are all smaller than a first preset index value.
In example 1, the federated learning device improves the accuracy with which the screened users are identified as users who participate in federated learning at a low frequency. The fitness between the federated learning and the users selected from the plurality of users according to the index values can therefore be improved, repeated attempts to contact users to participate are no longer needed, and the interaction efficiency between the federated learning participants and the federated learning device can be effectively improved.
Example 2: if the preset condition is that users whose frequency of being invited to federated learning is higher than or equal to the preset threshold are preferentially mobilized to participate in the new round of federated learning, the federated learning device may screen out, from the plurality of users, N users whose index values are greater than a second preset index value, and invite these N users to participate in the new round of federated learning. For example, the federated learning device may sort the index values of the plurality of users in descending order and invite the first N users in that order, or sort the index values in ascending order and invite the last N users in that order, where the index values of the N users are all greater than the second preset index value.
In example 2, the federated learning device improves the accuracy with which the screened users are identified as users who participate in federated learning at a high frequency. The fitness between the federated learning and the users selected from the plurality of users according to the index values can therefore be improved, repeated attempts to contact users to participate are no longer needed, and the interaction efficiency between the federated learning participants and the federated learning device can be effectively improved.
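Both screening branches can be sketched together as follows; the placeholder preset index values and the dictionary of index values are hypothetical inputs used only to illustrate the selection step.

```python
def invite_users(index_values, n, first_preset=0.4, second_preset=0.6,
                 prefer_rarely_invited=True):
    """Screen and rank the N users to invite for the new round.

    prefer_rarely_invited=True follows example 1: keep users whose index value
    is below the first preset index value. False follows example 2: keep users
    whose index value exceeds the second preset index value."""
    if prefer_rarely_invited:
        candidates = {u: v for u, v in index_values.items() if v < first_preset}
        ranked = sorted(candidates, key=candidates.get)                # ascending
    else:
        candidates = {u: v for u, v in index_values.items() if v > second_preset}
        ranked = sorted(candidates, key=candidates.get, reverse=True)  # descending
    return ranked[:n]

scores = {"u1": 0.15, "u2": 0.85, "u3": 0.35, "u4": 0.70, "u5": 0.55}
print(invite_users(scores, n=2))                                # -> ['u1', 'u3']
print(invite_users(scores, n=2, prefer_rarely_invited=False))   # -> ['u2', 'u4']
```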
Optionally, after inviting the N users among the plurality of users to participate in the new round of federated learning, the federated learning device may receive feedback data of the N users and, according to that feedback data, update the probabilities of the N users accepting a new round of federated learning invitation, the experience losses of the N users, and the frequencies with which the federated learning server interacts with the N users. These updated quantities are used to calculate the fitness values of the N users for participating in the next round of federated learning, which improves the fitness of the N users for the next round and effectively improves the interaction efficiency between the next round's participants among the N users and the federated learning device.
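A minimal sketch of folding one round of feedback back into the per-user quantities is given below; the dictionary-based state layout and the incremental update rules are assumptions, as the patent only requires that the probability, the experience loss and the interaction frequency be refreshed from the new feedback.

```python
def update_user_state(state, accepted, accepted_hour, rounds_in_window):
    """Refresh a user's state from one round of feedback so that the next
    round's fitness (index) value is computed from up-to-date quantities."""
    state["invitations"] += 1
    if accepted:
        hours = state["acceptances_by_hour"]
        hours[accepted_hour] = hours.get(accepted_hour, 0) + 1
        state["participations_in_window"] += 1
    # Quantities that feed the next index value calculation; the interaction
    # frequency would be refreshed from the updated portrait confidence in the
    # same way (omitted here).
    state["acceptance_probability"] = {
        hour: count / state["invitations"]
        for hour, count in state["acceptances_by_hour"].items()
    }
    state["experience_loss"] = (
        rounds_in_window - state["participations_in_window"]) / rounds_in_window
    return state

state = {"invitations": 19, "acceptances_by_hour": {14: 10}, "participations_in_window": 1}
print(update_user_state(state, accepted=True, accepted_hour=14, rounds_in_window=20))
```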
It should be noted that the first preset index value and the second preset index value may be the same or different, and the embodiment of the present invention is not limited specifically.
It should be noted that the value of N may be set by a system administrator of the federal learning device, or may be determined by a preset upper index value limit or a preset lower index value limit, which is not specifically limited in the embodiment of the present invention.
As can be seen from the above description, in the technical solution provided by the embodiment of the invention, the index value of each of the plurality of users is calculated by the federated learning device according to the feedback data generated after each user historically received federated learning invitations, the user portrait data of each user, and the number of times each user participated in federated learning modeling within the last time window. The index value of each user can therefore be related to the probability that the user accepts a federated learning invitation in different time periods, the probability that the user participates in a new round of federated learning, and the user's satisfaction with participating in federated learning modeling within the last time window. This improves the fitness between the federated learning and the users who meet the preset condition and are selected from the plurality of users by the federated learning device according to the index values, removes the need for repeated attempts to contact users to participate, and effectively improves the interaction efficiency between the federated learning participants and the federated learning device.
Based on the same inventive concept, the invention also provides a federated learning device. Fig. 2 is a schematic structural diagram of a federated learning device according to an embodiment of the present invention.
As shown in fig. 2, the federal learning device 200 includes:
an obtaining unit 201, configured to obtain feedback data after a plurality of users have accepted federal learning invitation, and obtain user portrait data of each of the plurality of users;
a processing unit 202, configured to determine a number of times that each of the plurality of users participates in federated learning modeling within a last time window; calculating respective index values of the plurality of users according to respective feedback data of the plurality of users, the user portrait data and the number of times that the plurality of users participate in federal learning modeling in a last time window; the index value is used for representing the fitness value of each of the plurality of users participating in a new round of federal learning;
the inviting unit 203 is configured to invite a user, which meets a preset condition, of the multiple users to participate in federal learning according to respective index values of the multiple users.
In one possible design, the processing unit 202 is specifically configured to:
determining a user portrait confidence of each of the plurality of users according to the user portrait data of each of the plurality of users;
and respectively modeling according to the respective feedback data of the users, the user portrait confidence and the times of the users participating in the federal learning modeling in the last time window, and calculating to obtain respective index values of the users.
In one possible design, the processing unit 202 is specifically configured to:
model the feedback data to predict a first probability that each of the plurality of users accepts a new round of federated learning invitation in the current time period; wherein the value of the first probability at any time point within the time window of the new round of federated learning is positively correlated with the time-sharing responsiveness of the user at that time point, and the time-sharing responsiveness represents how quickly each of the plurality of users responds to a federated learning invitation;
model the number of times each of the plurality of users has participated in federated learning modeling within the last time window to predict the experience loss of each of the plurality of users; wherein the experience loss represents the degree of satisfaction of each of the plurality of users with accepting a new round of federated learning invitation in the current time period;
model the user portrait confidence to predict the frequency of interaction between the federated learning server and each of the plurality of users in the new round of federated learning;
and calculate the respective index values of the plurality of users according to the first probability, the experience loss, and the frequency.
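The embodiment leaves the concrete models open, so the following sketch only illustrates the shape of the three predicted quantities; the exponential decay for responsiveness, the linear experience loss, and the confidence-scaled interaction frequency are placeholder assumptions of this sketch, not formulas taken from the disclosure.

```python
import math
from typing import List, Tuple


def first_probability(feedback: List[Tuple[int, float]], hour_of_day: int) -> float:
    """First probability that the user accepts a new invitation in the current period.

    `feedback` holds (hour, response delay in hours) pairs; shorter historical
    delays at this hour mean higher time-sharing responsiveness, and the
    probability is made positively correlated with that responsiveness.
    """
    same_hour = [d for h, d in feedback if h == hour_of_day] or [24.0]
    mean_delay = sum(same_hour) / len(same_hour)
    return math.exp(-mean_delay / 12.0)            # assumed decay constant


def experience_loss(recent_rounds: int, tolerance: int = 5) -> float:
    """Experience loss: satisfaction drops as recent participation accumulates."""
    return min(1.0, recent_rounds / tolerance)


def interaction_frequency(portrait_confidence: float, base_rounds: float = 10.0) -> float:
    """Expected server-user interactions in the new round; a less confident
    portrait is assumed to require more interaction."""
    return base_rounds * (2.0 - portrait_confidence)
```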
In one possible design, the processing unit 202 is specifically configured to:
calculate, for each user, the average of the first probability, the experience loss, and the frequency, and take the average as that user's index value; or,
according to a preset strategy, take the maximum of the first probability, the experience loss, and the frequency as each user's index value.
In one possible design, the processing unit 202 is specifically configured to:
compare the average with a preset threshold to determine whether the average is greater than or equal to the preset threshold;
and, if the average is greater than or equal to the preset threshold, take the maximum of the first probability, the experience loss, and the frequency as each user's index value.
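A hedged sketch of this aggregation rule follows. Normalising the interaction frequency into [0, 1] before averaging, and the concrete threshold value, are added assumptions; the embodiment only names the three inputs and the mean/maximum rule.

```python
def index_value(prob: float, loss: float, freq: float,
                max_freq: float = 20.0, threshold: float = 0.5) -> float:
    """Mean of the three quantities, switching to the maximum once the mean
    reaches the preset threshold."""
    components = (prob, loss, min(1.0, freq / max_freq))   # bring frequency onto [0, 1]
    mean = sum(components) / len(components)
    return max(components) if mean >= threshold else mean
```

For example, under these assumed values a user with prob = 0.9, loss = 0.2 and freq = 4 gets a mean of about 0.43, which stays below the threshold, so the mean itself is used as the index value.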
In one possible design, the inviting unit 203 is specifically configured to:
if it is determined that users who have been invited to federated learning less often than a preset threshold are to be preferentially mobilized to participate in the new round of federated learning, screen out N users whose index values are smaller than a first preset index value from the plurality of users and invite the N users to participate in the new round of federated learning;
if it is determined that users who have been invited to federated learning at a frequency greater than or equal to the preset threshold are to be preferentially mobilized to participate in the new round of federated learning, screen out N users whose index values are larger than a second preset index value from the plurality of users and invite the N users to participate in the new round of federated learning.
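The screening rule of the inviting unit can be sketched as below; the `prefer_rarely_invited` flag and the two cut-off values are illustrative parameters of this sketch rather than values fixed by the embodiment.

```python
from typing import Dict, List


def select_invitees(index_values: Dict[str, float], n: int,
                    prefer_rarely_invited: bool,
                    first_cutoff: float = 0.3, second_cutoff: float = 0.7) -> List[str]:
    if prefer_rarely_invited:
        # Mobilise users invited less often than the preset threshold: take up to
        # N users whose index values fall below the first preset index value.
        pool = sorted((v, uid) for uid, v in index_values.items() if v < first_cutoff)
    else:
        # Otherwise take up to N users whose index values exceed the second
        # preset index value, largest first.
        pool = sorted(((v, uid) for uid, v in index_values.items() if v > second_cutoff),
                      reverse=True)
    return [uid for _, uid in pool[:n]]
```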
In one possible design, the processing unit 202 is further configured to:
receive feedback data of the N users; update the probability that each of the N users accepts a new round of federated learning invitation, the experience loss of the N users, and the frequency of interaction between the federated learning server and the N users; and calculate the fitness value of each of the N users for participating in the next round of federated learning.
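A minimal sketch of this update step is given below, with the prediction and aggregation functions injected as callables (they could be the placeholder helpers sketched above); the field names in the feedback record are assumptions of this sketch.

```python
from typing import Callable, Dict, List, Tuple

Feedback = Dict[str, float]   # assumed fields: "response_delay_h", "portrait_confidence"


def update_after_round(history: Dict[str, List[Tuple[int, float]]],
                       rounds: Dict[str, int],
                       new_feedback: Dict[str, Feedback],
                       hour_of_day: int,
                       prob_fn: Callable, loss_fn: Callable,
                       freq_fn: Callable, index_fn: Callable) -> Dict[str, float]:
    """Fold the N users' new feedback into their stored history and recompute
    the fitness value used for the next round of federated learning."""
    fitness = {}
    for uid, fb in new_feedback.items():
        # Extend the stored response history and participation count.
        history.setdefault(uid, []).append((hour_of_day, fb["response_delay_h"]))
        rounds[uid] = rounds.get(uid, 0) + 1
        # Recompute the three quantities and the next-round fitness value.
        prob = prob_fn(history[uid], hour_of_day)
        loss = loss_fn(rounds[uid])
        freq = freq_fn(fb.get("portrait_confidence", 0.5))
        fitness[uid] = index_fn(prob, loss, freq)
    return fitness
```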
The federated learning device 200 of this embodiment and the user indexing method in federated learning shown in Fig. 1 are based on the same inventive concept. From the foregoing detailed description of the user indexing method, a person skilled in the art can clearly understand how the federated learning device 200 of this embodiment is implemented, so the details are not repeated here for brevity.
Based on the same inventive concept, the present invention further provides another federated learning device. Fig. 3 is a schematic structural diagram of a federated learning device according to an embodiment of the present invention.
As shown in Fig. 3, the federated learning device 300 includes a memory 301 and at least one processor 302. The memory 301 stores one or more computer programs; when the one or more computer programs stored in the memory 301 are executed by the at least one processor 302, the federated learning device 300 is enabled to implement all or part of the steps of the embodiment shown in Fig. 1.
Optionally, the memory 301 may include a high-speed random access memory, and may further include a nonvolatile memory, such as a magnetic disk storage device, a flash memory device, or other nonvolatile solid state storage devices, and the like, which is not limited in the embodiments of the present invention.
Optionally, the processor 302 may be a general-purpose processor (CPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or one or more integrated circuits for controlling program execution.
In some embodiments, the memory 301 and the processor 302 may be implemented on the same chip, or in other embodiments, they may be implemented on separate chips, and the embodiments of the present invention are not limited thereto.
Based on the same inventive concept, the present invention also provides a computer-readable storage medium storing computer instructions, which, when executed by a computer, enable the computer to perform the above-mentioned steps of the user indexing method in federal learning.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (10)
1. A user indexing method in federated learning, comprising:
obtaining feedback data of a plurality of users after the plurality of users have historically received a federated learning invitation, and obtaining user portrait data of each of the plurality of users;
determining a number of times that each of the plurality of users participated in federated learning modeling within a last time window;
calculating respective index values of the plurality of users according to the respective feedback data of the plurality of users, the user portrait data, and the number of times each of the plurality of users participated in federated learning modeling within the last time window; wherein the index value is used for representing the fitness value of each of the plurality of users for participating in a new round of federated learning;
and inviting users among the plurality of users who meet a preset condition to participate in federated learning according to the respective index values of the plurality of users.
2. The method of claim 1, wherein calculating the respective index values of the plurality of users according to the respective feedback data of the plurality of users, the user portrait data, and the number of times each of the plurality of users participated in federated learning modeling within the last time window comprises:
determining a user portrait confidence for each of the plurality of users based on the user portrait data of each of the plurality of users;
and calculating the respective index values of the plurality of users by modeling separately on the respective feedback data, the user portrait confidence, and the number of times each of the plurality of users participated in federated learning modeling within the last time window.
3. The method of claim 2, wherein calculating the respective index values of the plurality of users according to the respective feedback data, the user portrait confidence, and the number of times each of the plurality of users participated in federated learning modeling within the last time window comprises:
modeling according to the feedback data to predict a first probability that each of the plurality of users accepts a new round of federated learning invitation in the current time period; wherein the value of the first probability at any time point within the time window of the new round of federated learning is positively correlated with the time-sharing responsiveness of the user at that time point, and the time-sharing responsiveness represents how quickly each of the plurality of users responds to a federated learning invitation;
modeling according to the number of times each of the plurality of users participated in federated learning modeling within the last time window to predict the experience loss of each of the plurality of users; wherein the experience loss represents the degree of satisfaction of each of the plurality of users with accepting a new round of federated learning invitation in the current time period;
modeling according to the user portrait confidence to predict the frequency of interaction between the federated learning server and each of the plurality of users in the new round of federated learning;
and calculating the respective index values of the plurality of users according to the first probability, the experience loss, and the frequency.
4. The method of claim 3, wherein calculating the respective index values of the plurality of users according to the first probability, the experience loss, and the frequency comprises:
calculating, for each user, the average of the first probability, the experience loss, and the frequency, and taking the average as that user's index value; or,
according to a preset strategy, taking the maximum of the first probability, the experience loss, and the frequency as each user's index value.
5. The method of claim 4, wherein taking the maximum of the first probability, the experience loss, and the frequency as each user's index value according to the preset strategy comprises:
comparing the average with a preset threshold to determine whether the average is greater than or equal to the preset threshold;
and, if the average is greater than or equal to the preset threshold, taking the maximum of the first probability, the experience loss, and the frequency as each user's index value.
6. The method according to any one of claims 1-5, wherein inviting users of the plurality of users who meet a preset condition to participate in federated learning according to their respective index values comprises:
if it is determined that users who have been invited to federated learning less often than a preset threshold are to be preferentially mobilized to participate in the new round of federated learning, screening out N users whose index values are smaller than a first preset index value from the plurality of users, and inviting the N users to participate in the new round of federated learning;
if it is determined that users who have been invited to federated learning at a frequency greater than or equal to the preset threshold are to be preferentially mobilized to participate in the new round of federated learning, screening out N users whose index values are larger than a second preset index value from the plurality of users, and inviting the N users to participate in the new round of federated learning.
7. The method of claim 6, wherein after inviting the N users to participate in a new round of federated learning, the method further comprises:
receiving feedback data of the N users, updating the probability that each of the N users accepts a new round of federated learning invitation, the experience loss of the N users, and the frequency of interaction between the federated learning server and the N users, and calculating the fitness value of each of the N users for participating in the next round of federated learning.
8. A federated learning device, comprising:
an obtaining unit, configured to obtain feedback data of a plurality of users after the plurality of users have historically received a federated learning invitation, and to obtain user portrait data of each of the plurality of users;
a processing unit, configured to determine the number of times each of the plurality of users participated in federated learning modeling within the last time window, and to calculate respective index values of the plurality of users according to the respective feedback data of the plurality of users, the user portrait data, and the number of times each of the plurality of users participated in federated learning modeling within the last time window; wherein the index value is used for representing the fitness value of each of the plurality of users for participating in a new round of federated learning;
and an inviting unit, configured to invite users among the plurality of users who meet a preset condition to participate in federated learning according to the respective index values of the plurality of users.
9. A federated learning device, comprising at least one processor and a memory;
the memory stores one or more computer programs;
wherein the one or more computer programs stored in the memory, when executed by the at least one processor, cause the federated learning device to perform the method of any one of claims 1-7.
10. A computer-readable storage medium having stored thereon computer instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1-7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010244824.9A CN111428885B (en) | 2020-03-31 | 2020-03-31 | User indexing method in federated learning and federated learning device |
PCT/CN2021/084610 WO2021197388A1 (en) | 2020-03-31 | 2021-03-31 | User indexing method in federated learning and federated learning device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010244824.9A CN111428885B (en) | 2020-03-31 | 2020-03-31 | User indexing method in federated learning and federated learning device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111428885A CN111428885A (en) | 2020-07-17 |
CN111428885B true CN111428885B (en) | 2021-06-04 |
Family
ID=71550052
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010244824.9A Active CN111428885B (en) | 2020-03-31 | 2020-03-31 | User indexing method in federated learning and federated learning device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111428885B (en) |
WO (1) | WO2021197388A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111428885B (en) * | 2020-03-31 | 2021-06-04 | 深圳前海微众银行股份有限公司 | User indexing method in federated learning and federated learning device |
CN112508205B (en) * | 2020-12-04 | 2024-07-16 | 中国科学院深圳先进技术研究院 | Federal learning scheduling method, device and system |
CN116567702A (en) * | 2022-01-26 | 2023-08-08 | 展讯通信(上海)有限公司 | User equipment selection method, device, chip and module equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109300050A (en) * | 2018-08-31 | 2019-02-01 | 平安科技(深圳)有限公司 | Insurance method for pushing, device and storage medium based on user's portrait |
CN110245510A (en) * | 2019-06-19 | 2019-09-17 | 北京百度网讯科技有限公司 | Method and apparatus for predictive information |
CN110297848A (en) * | 2019-07-09 | 2019-10-01 | 深圳前海微众银行股份有限公司 | Recommended models training method, terminal and storage medium based on federation's study |
CN110598870A (en) * | 2019-09-02 | 2019-12-20 | 深圳前海微众银行股份有限公司 | Method and device for federated learning |
CN110908893A (en) * | 2019-10-08 | 2020-03-24 | 深圳逻辑汇科技有限公司 | Sandbox mechanism for federal learning |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11823067B2 (en) * | 2017-06-27 | 2023-11-21 | Hcl Technologies Limited | System and method for tuning and deploying an analytical model over a target eco-system |
US11475350B2 (en) * | 2018-01-22 | 2022-10-18 | Google Llc | Training user-level differentially private machine-learned models |
CN110443063B (en) * | 2019-06-26 | 2023-03-28 | 电子科技大学 | Adaptive privacy-protecting federal deep learning method |
CN110610242B (en) * | 2019-09-02 | 2023-11-14 | 深圳前海微众银行股份有限公司 | Method and device for setting weights of participants in federal learning |
CN110572253B (en) * | 2019-09-16 | 2023-03-24 | 济南大学 | Method and system for enhancing privacy of federated learning training data |
CN111428885B (en) * | 2020-03-31 | 2021-06-04 | 深圳前海微众银行股份有限公司 | User indexing method in federated learning and federated learning device |
2020
- 2020-03-31 CN CN202010244824.9A patent/CN111428885B/en active Active
2021
- 2021-03-31 WO PCT/CN2021/084610 patent/WO2021197388A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN111428885A (en) | 2020-07-17 |
WO2021197388A1 (en) | 2021-10-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111428885B (en) | User indexing method in federated learning and federated learning device | |
CN110457589B (en) | Vehicle recommendation method, device, equipment and storage medium | |
CN106022505A (en) | Method and device of predicting user off-grid | |
CN108090208A (en) | Fused data processing method and processing device | |
Wang et al. | A decomposition-based approach to flexible flow shop scheduling under machine breakdown | |
Pilla | Optimal task assignment for heterogeneous federated learning devices | |
CN110807207A (en) | Data processing method and device, electronic equipment and storage medium | |
CN110417607A (en) | A kind of method for predicting, device and equipment | |
CN111176840A (en) | Distributed task allocation optimization method and device, storage medium and electronic device | |
CN107230090B (en) | Method and device for classifying net recommendation value NPS | |
CN114662705B (en) | Federal learning method, apparatus, electronic device, and computer-readable storage medium | |
CN106897282B (en) | User group classification method and device | |
CN108595526A (en) | Resource recommendation method and device | |
CN112700003A (en) | Network structure search method, device, equipment, storage medium and program product | |
US20160342899A1 (en) | Collaborative filtering in directed graph | |
CN105005501B (en) | A kind of second order optimizing and scheduling task method towards cloud data center | |
CN111260419A (en) | Method and device for acquiring user attribute, computer equipment and storage medium | |
CN110188123A (en) | User matching method and equipment | |
CN107659982B (en) | Wireless network access point classification method and device | |
CN116820709B (en) | Task chain operation method, device, terminal and computer storage medium | |
CN114385359B (en) | Cloud edge task time sequence cooperation method for Internet of things | |
CN111353015A (en) | Crowdsourcing question recommendation method, device, equipment and storage medium | |
CN114327925A (en) | Power data real-time calculation scheduling optimization method and system | |
CN115311001A (en) | Method and system for predicting user change tendency based on multiple voting algorithm | |
CN106793151B (en) | Distributed random handshake method and its system in a kind of wireless built network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||