CN113487043A - Federated learning modeling optimization method, apparatus, medium, and computer program product - Google Patents

Federated learning modeling optimization method, apparatus, medium, and computer program product

Info

Publication number
CN113487043A
CN113487043A
Authority
CN
China
Prior art keywords
encryption
model
model parameters
federated
equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110840832.4A
Other languages
Chinese (zh)
Inventor
万晟
鞠策
范力欣
杨强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WeBank Co Ltd filed Critical WeBank Co Ltd
Priority to CN202110840832.4A
Publication of CN113487043A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/602Providing cryptographic facilities or services

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a federated learning modeling optimization method, a device, a medium, and a computer program product. The federated learning modeling optimization method includes: receiving encryption model parameters and social information sent by each second device, wherein the encryption model parameters are generated by the second devices through iterative training of their local models; according to each piece of social information, performing selective encryption aggregation on the encryption model parameters based on social association degree to obtain the encryption aggregation model parameters corresponding to each piece of social information; and feeding back each encryption aggregation model parameter to the corresponding second device, so that the second device can iteratively optimize its local model according to the received encryption aggregation model parameters to obtain a federated learning model. The method and the device solve the technical problem of low federated learning modeling efficiency.

Description

Federated learning modeling optimization method, apparatus, medium, and computer program product
Technical Field
The present application relates to the field of machine learning technologies, and in particular, to a method, an apparatus, a medium, and a computer program product for optimizing federated learning modeling.
Background
With the development of computer technology, federated learning is applied more and more widely. In current federated learning, every participant sends its updated model parameters to a server after completing a local model update, and the server then completes the model aggregation process. At present, the server generally collects the model parameters uploaded by all participants and randomly selects part of the uploaded model parameters to be weighted and averaged as the updated global model parameters. However, the randomly selected part of the uploaded model parameters makes the updated global model difficult to converge iteratively, which in turn makes the model aggregation process computationally expensive, so the efficiency of federated learning modeling becomes low.
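As a hedged illustration only (not code from this application), the prior-art aggregation described above — randomly sampling part of the uploaded parameter vectors and averaging them — could be sketched as follows; all names are hypothetical, and equal weights stand in for the weighted average:

```python
import random

def random_weighted_average(uploads, k, seed=None):
    """Prior-art baseline sketch: randomly pick k of the uploaded
    parameter vectors and average them element-wise."""
    rng = random.Random(seed)
    chosen = rng.sample(uploads, k)
    length = len(chosen[0])
    return [sum(vec[i] for vec in chosen) / k for i in range(length)]
```

Because the sample ignores how related the participants are, weakly associated parameters can enter the average — which is the convergence problem the application targets.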
Disclosure of Invention
The main purpose of the present application is to provide a method, an apparatus, a medium, and a computer program product for optimizing federated learning modeling, which aim to solve the technical problem of low federated learning modeling efficiency in the prior art.
In order to achieve the above object, the present application provides a federated learning modeling optimization method, where the federated learning modeling optimization method is applied to a first device, and the federated learning modeling optimization method includes:
receiving encryption model parameters and social information sent by each second device, wherein the encryption model parameters are generated by the second device through iterative training of a local model;
according to the social information, selective encryption aggregation based on social association degree is carried out on the encryption model parameters respectively to obtain encryption aggregation model parameters corresponding to the social information;
and feeding back each encryption aggregation model parameter to the corresponding second device, so that the second device can iteratively optimize the local model according to the received encryption aggregation model parameters to obtain a federated learning model.
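A minimal server-side sketch of these three steps, under the assumption that the ciphertexts support element-wise aggregation (plain floats stand in for homomorphic ciphertexts; all names are illustrative):

```python
def first_device_round(submissions, threshold):
    """One round on the first device: for each second device, keep its
    own parameters plus the parameters of peers whose social
    association degree exceeds the threshold, then average them.

    submissions: {device_id: (enc_params, social_info)}, where
    social_info maps peer ids to association degrees in [0, 1] and
    enc_params is a list of floats standing in for ciphertexts."""
    results = {}
    for dev, (params, social) in submissions.items():
        selected = [params] + [
            submissions[peer][0]
            for peer, degree in social.items()
            if degree > threshold and peer in submissions
        ]
        n = len(selected)
        # element-wise average as the encryption aggregation rule
        results[dev] = [sum(vals) / n for vals in zip(*selected)]
    return results
```

Each entry of `results` would then be fed back to the matching second device.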
In order to achieve the above object, the present application provides a federated learning modeling optimization method, where the federated learning modeling optimization method is applied to a second device, and the federated learning modeling optimization method includes:
performing iterative training on a local model according to a training sample set to obtain an encryption model parameter corresponding to the local model;
sending the encryption model parameters and the social information corresponding to the second device to a first device, so that the first device performs selective encryption aggregation on the encryption model parameters sent by each second device based on the social information sent by each second device, to obtain the encryption aggregation model parameters corresponding to each second device;
and receiving the encryption aggregation model parameters fed back by the first device, and iteratively optimizing the local model according to the encryption aggregation model parameters to obtain a federated learning model.
The application also provides a federated recommendation optimization method, wherein the federated recommendation optimization method is applied to a second device, and the federated recommendation optimization method comprises the following steps:
obtaining user data to be recommended, inputting the user data to be recommended into a federal recommendation model, and obtaining user characteristic representation variables, wherein the federal recommendation model is obtained by second equipment through iterative optimization of a local model based on encryption aggregation model parameters sent by first equipment, and the encryption aggregation model parameters are obtained by the first equipment through selective aggregation of encryption model parameters of the second equipment based on social information corresponding to the second equipment;
calculating a user similarity result corresponding to the user characteristic representation variable, and generating a similar user candidate set based on the similarity result;
and executing a preset article recommendation process based on the similar user candidate set.
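The similarity and candidate-set steps could look like the following sketch; cosine similarity is one plausible choice, since the application does not fix the similarity measure, and all names are illustrative:

```python
import math

def similar_user_candidates(target_vec, user_vecs, top_k=2):
    """Compute a similarity result between the target user's feature
    representation and every other user's, and keep the top_k most
    similar users as the candidate set."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    scored = sorted(
        ((uid, cosine(target_vec, vec)) for uid, vec in user_vecs.items()),
        key=lambda item: item[1], reverse=True,
    )
    return [uid for uid, _ in scored[:top_k]]
```

Items favored by the candidate users could then drive the preset article recommendation process.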
The application also provides a federated learning modeling optimization apparatus, wherein the federated learning modeling optimization apparatus is a virtual apparatus applied to a first device, and the federated learning modeling optimization apparatus includes:
the receiving module is used for receiving encryption model parameters and social information sent by each second device, wherein the encryption model parameters are generated by the second devices through iterative training of local models;
the aggregation module is used for selectively encrypting and aggregating the encryption model parameters based on the social association degree according to the social information to obtain the encryption and aggregation model parameters corresponding to the social information;
and the feedback module is used for respectively feeding back the encryption aggregation model parameters to the corresponding second devices, so that the second devices can iteratively optimize their local models according to the received encryption aggregation model parameters to obtain a federated learning model.
The application also provides a federated learning modeling optimization apparatus, wherein the federated learning modeling optimization apparatus is a virtual apparatus applied to a second device, and the federated learning modeling optimization apparatus includes:
the training module is used for carrying out iterative training on a local model according to a training sample set so as to obtain an encryption model parameter corresponding to the local model;
a sending module, configured to send the encryption model parameters and social information corresponding to the second devices to a first device, so that the first device performs selective encryption aggregation based on social association degrees on the encryption model parameters sent by the second devices respectively based on the social information sent by the second devices, and obtains encryption aggregation model parameters corresponding to the second devices;
and the optimization module is used for receiving the encryption aggregation model parameters fed back by the first device, and iteratively optimizing the local model according to the encryption aggregation model parameters to obtain a federated learning model.
The application also provides a federated recommendation optimization apparatus, wherein the federated recommendation optimization apparatus is a virtual apparatus applied to the second device, and the federated recommendation optimization apparatus includes:
the system comprises an acquisition module, a recommendation module and a recommendation module, wherein the acquisition module is used for acquiring user data to be recommended and inputting the user data to be recommended into a federal recommendation model to acquire user characteristic representation variables, the federal recommendation model is acquired by second equipment through iterative optimization of a local model based on encryption aggregation model parameters sent by first equipment, and the encryption aggregation model parameters are acquired by the first equipment through selective aggregation of the encryption model parameters of the second equipment based on social information corresponding to the second equipment;
the generating module is used for calculating a user similarity result corresponding to the user characteristic representing variable and generating a similar user candidate set based on the similarity result;
and the recommending module is used for executing a preset article recommending process based on the similar user candidate set.
The application also provides a federated learning modeling optimization device, wherein the federated learning modeling optimization device is a physical device and includes: a memory, a processor, and a program of the federated learning modeling optimization method stored on the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the federated learning modeling optimization method described above.
The application also provides a federated recommendation optimization device, wherein the federated recommendation optimization device is a physical device and includes: a memory, a processor, and a program of the federated recommendation optimization method stored in the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the federated recommendation optimization method described above.
The present application also provides a medium, which is a readable storage medium, on which a program for implementing the federated learning modeling optimization method is stored, and the program, when executed by a processor, implements the steps of the federated learning modeling optimization method described above.
The present application also provides a medium, which is a readable storage medium, on which a program for implementing the federated recommendation optimization method is stored, and the program, when executed by a processor, implements the steps of the federated recommendation optimization method described above.
The present application also provides a computer program product including a computer program which, when executed by a processor, implements the steps of the federated learning modeling optimization method described above.
The present application also provides a computer program product including a computer program which, when executed by a processor, implements the steps of the federated recommendation optimization method described above.
Compared with the prior-art technique of collecting the model parameters uploaded by all participants at a server and randomly selecting part of the uploaded model parameters to be weighted and averaged as the updated global model parameters for federated learning modeling, the present application proceeds as follows. It should be noted that when the randomly selected model parameters are aggregated in the prior art, the degrees of association among the model parameters uploaded by the federated learning participants differ, and if model parameters with low association degrees are selected for aggregation, the updated global model is difficult to converge iteratively. The present application first receives the encryption model parameters and social information sent by each second device, wherein the encryption model parameters are generated by the second devices through iterative training of their local models. Then, according to each piece of social information, selective encryption aggregation based on social association degree is performed on the encryption model parameters to obtain the encryption aggregation model parameters corresponding to each piece of social information. Encryption aggregation of the encryption model parameters of the second devices with a high social association degree can thus be realized on the basis of the social information; that is, encryption model parameters with a low social association degree are quickly screened out and removed, so that the aggregated encryption aggregation model parameters are not negatively affected by them. Each encryption aggregation model parameter is then fed back to the corresponding second device so that the second device can iteratively optimize its local model according to the received encryption aggregation model parameters. The iterative convergence efficiency of the second devices optimizing their local models, and hence of the global model, is thereby improved; the technical defect that a global model which is difficult to converge iteratively makes the model aggregation process computationally expensive and the federated learning modeling inefficient is overcome, and the efficiency of federated learning modeling is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art will be briefly introduced below; it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic flow chart diagram of a first embodiment of a federated learning modeling optimization method of the present application;
FIG. 2 is a schematic flow chart diagram of a second embodiment of the federated learning modeling optimization method of the present application;
FIG. 3 is a schematic flow chart diagram illustrating a third embodiment of the federated learning modeling optimization method of the present application;
FIG. 4 is a schematic device structure diagram of a hardware operating environment related to a federated learning modeling optimization method in an embodiment of the present application;
FIG. 5 is a schematic device structure diagram of a hardware operating environment related to a federated recommendation optimization method in an embodiment of the present application;
the objectives, features, and advantages of the present application will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In a first embodiment of the federated learning modeling optimization method of the present application, referring to fig. 1, the federated learning modeling optimization method is applied to a first device, and the federated learning modeling optimization method includes:
step S10, receiving encryption model parameters and social information sent by each second device, wherein the encryption model parameters are generated by the second device through iterative training of a local model;
In this embodiment, it should be noted that the encryption model parameters are the homomorphically encrypted model parameters of a local model, and the social information is information recording the social association degree between the second devices. In an implementation, the social information may be represented by a social information vector: for example, if the social association degree between device A and device B is 90% and the social association degree between device A and device C is 80%, the social information of device A may be denoted as A = (0.9, 0.8). The social association degree between second devices may be determined from social behavior data between the second devices, such as chat frequency, chat record keywords, and friend-circle praise records.
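One hypothetical way to derive such a vector from behavior data is sketched below; the keys, weights, and cap-at-10 normalization are assumptions for illustration, not specified by the application:

```python
def association_degree(behavior, weights=None):
    """Score the social association degree between two devices from
    behavior counts; each count is capped at 10 and the weighted
    scores are summed to a value in [0, 1]."""
    weights = weights or {"chats": 0.5, "keyword_hits": 0.3, "likes": 0.2}
    score = sum(w * min(behavior.get(k, 0) / 10.0, 1.0)
                for k, w in weights.items())
    return round(score, 2)

def social_info(peers):
    """Build the social information vector of one device: a mapping
    from peer device id to social association degree."""
    return {peer: association_degree(b) for peer, b in peers.items()}
```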
Encryption model parameters and social information sent by each second device are received, wherein the encryption model parameters are generated by the second devices through iterative training of their local models. Specifically, each second device iteratively trains its local model to a preset iteration number based on its own training sample set to obtain the local model parameters of its local model, and homomorphically encrypts those model parameters to obtain its encryption model parameters. Each second device then sends its encryption model parameters and social information to the first device, and the first device receives the encryption model parameters and social information sent by each second device.
Step S20, according to each piece of social information, selectively encrypting and aggregating the encryption model parameters based on social association degree to obtain the encryption aggregation model parameters corresponding to each piece of social information;
In this embodiment, it should be noted that the encryption aggregation manner includes weighted average, summation, and the like, and that the social information at least includes a social association degree.
According to each piece of social information, selective encryption aggregation based on social association degree is performed on the encryption model parameters to obtain the encryption aggregation model parameters corresponding to each piece of social information. Specifically, the following steps are performed for each piece of social information:
Each social association degree in the social information that is greater than a preset association degree threshold is selected as a target social association degree. The encryption model parameters of the second device that sent the social information, together with the encryption model parameters sent by the other second devices corresponding to the target social association degrees, are extracted as the social association model parameters, wherein each of those other second devices has a target social association degree with the second device. The social association model parameters are then encrypted and aggregated to obtain the encryption aggregation model parameter corresponding to the social information, and thereby the encryption aggregation model parameters corresponding to each piece of social information are obtained.
Wherein the step of performing, according to each piece of social information, selective encryption aggregation based on social association degree on each encryption model parameter to obtain the encryption aggregation model parameter corresponding to each piece of social information includes:
step S21, selecting social association model parameters corresponding to the second devices from the encryption model parameters based on the social association degrees corresponding to the social information;
In this embodiment, it should be noted that a social association model parameter is an encryption model parameter whose social association degree is greater than the preset association degree threshold; the social association model parameters include the second device's own encryption model parameters and the encryption model parameters of every other second device having a target social association degree with it. For example, assume three second devices: a target device, a device a, and a device b, where the social association degree between the target device and device a is 90% and that between the target device and device b is 50%. With a preset association degree threshold of 60%, the encryption model parameters of the target device and of device a are both taken as social association model parameters.
The social association model parameters corresponding to each second device are selected from the encryption model parameters based on the social association degrees in each piece of social information. Specifically, the following steps are performed for the social information sent by each second device:
According to the social association degrees in the social information, the other second devices whose social association degree with the second device is greater than the preset association degree threshold are selected as social association devices, and the encryption model parameters corresponding to the second device and to the social association devices are taken as the social association model parameters, so as to obtain the social association model parameters corresponding to the second device.
Wherein the step of selecting, based on the social association degree corresponding to each piece of social information, each social association model parameter corresponding to each second device from the encryption model parameters includes:
step S211, comparing the social relevance in each piece of social information with a preset relevance threshold respectively to obtain relevance comparison results corresponding to each piece of social information;
in this embodiment, the social relevance in each piece of social information is compared with a preset relevance threshold to obtain a relevance comparison result corresponding to each piece of social information, and specifically, the following steps are performed on the social information sent by each piece of second equipment:
comparing the social association degrees in the social information with a preset association degree threshold respectively, if the social association degrees are greater than the preset association degree threshold, generating a first relevance contrast result, if the social relevance is not greater than a preset relevance threshold, generating a second relevance contrast result, and then using each first relevance contrast result and each second relevance contrast result corresponding to the social information as the relevance contrast result, wherein the first relevance contrast result and the second relevance contrast result can be labels or identifications corresponding to the social relevance, the method is used for marking the result of the comparison of the social relevance and a preset relevance threshold, for example, setting 1 for marking that the social relevance is greater than the preset relevance threshold, and setting 0 for marking that the social relevance is not greater than the preset relevance threshold.
Step S212, according to each association degree comparison result, selecting each encryption model parameter whose social association degree satisfies a preset association degree threshold condition as the social association model parameters corresponding to each second device.
In this embodiment, it should be noted that the preset association degree threshold condition may be that the social association degree is greater than the preset association degree threshold.
According to each association degree comparison result, each encryption model parameter whose social association degree satisfies the preset association degree threshold condition is selected as a social association model parameter corresponding to each second device. Specifically, the following steps are performed for the association degree comparison result corresponding to each second device:
According to the association degree comparison result, the encryption model parameters of the other second devices whose social association degree is not greater than the preset association degree threshold are eliminated, and the encryption model parameters of the second device itself, together with those of the other second devices whose social association degree is greater than the preset association degree threshold, are selected as the social association model parameters, thereby obtaining the social association model parameters corresponding to each second device. For example, let M(i, j) denote the association strength between target device i and device j: if M(i, j) is greater than the preset association degree threshold, the encryption model parameters of device j are taken as social association model parameters; if M(i, j) is not greater than the preset association degree threshold, the encryption model parameters of device j are eliminated.
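Written against the M(i, j) notation above, the filtering in step S212 could be sketched as follows (plain lists stand in for encrypted parameters; all names are illustrative):

```python
def select_social_association_params(i, enc_params, M, threshold):
    """Keep device i's own encryption model parameters plus those of
    every device j with M[i][j] greater than the threshold (comparison
    result 1); eliminate the rest (comparison result 0).

    M is a dict of dicts: M[i][j] is the association strength
    between device i and device j."""
    selected = {i: enc_params[i]}
    for j, strength in M[i].items():
        if strength > threshold:
            selected[j] = enc_params[j]
    return selected
```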
Step S22, respectively performing encryption aggregation on each social association model parameter corresponding to each second device, to obtain an encryption aggregation model parameter corresponding to each social information.
In this embodiment, encryption aggregation is performed on the social association model parameters corresponding to each second device to obtain the encryption aggregation model parameters corresponding to each piece of social information. Specifically, the following steps are performed for each second device:
According to a preset encryption aggregation rule, the social association model parameters corresponding to the second device are encrypted and aggregated to obtain the encryption aggregation model parameter corresponding to the social information sent by the second device, wherein the preset encryption aggregation rule includes weighted average, summation, and the like.
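A sketch of such a preset encryption aggregation rule follows. With an additively homomorphic scheme such as Paillier, ciphertexts can be summed (and averaged via a plaintext scaling) without decryption; plain floats stand in for ciphertexts here, and the names are illustrative:

```python
def encrypted_aggregate(param_list, rule="average"):
    """Aggregate the selected social association model parameters
    element-wise under the chosen rule ("average" or "sum")."""
    if rule == "sum":
        return [sum(vals) for vals in zip(*param_list)]
    if rule == "average":
        n = len(param_list)
        return [sum(vals) / n for vals in zip(*param_list)]
    raise ValueError(f"unknown aggregation rule: {rule}")
```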
And step S30, feeding back each encryption aggregation model parameter to the corresponding second device, so that the second device can iteratively optimize the local model according to the received encryption aggregation model parameters to obtain a federated learning model.
In this embodiment, each encryption aggregation model parameter is fed back to the corresponding second device so that the second device can iteratively optimize its local model according to the received encryption aggregation model parameters to obtain a federated learning model. Specifically, each encryption aggregation model parameter is sent to the corresponding second device, and the second device replaces the model parameters of its local model with the received aggregated model parameters to optimize the local model, then judges whether the optimized local model satisfies a preset training end condition, where the preset training end condition includes conditions such as loss-function convergence and reaching a maximum iteration threshold. If the condition is satisfied, the optimized local model is taken as the federated learning model; if not, execution returns to the step in which each second device iteratively trains its local model to the preset iteration number based on its own training sample set to obtain the local model parameters corresponding to its local model.
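The client-side loop of step S30 — replace the local parameters with the aggregated ones, then test the training end condition — could be sketched as follows; a small parameter delta stands in for loss convergence, and all names are assumptions:

```python
def iterate_until_converged(local_params, max_rounds, eps, aggregate_fn):
    """Repeatedly replace the local model parameters with the
    aggregated parameters returned by aggregate_fn; stop when the
    largest parameter change drops below eps (convergence stand-in)
    or the maximum iteration threshold is reached."""
    for r in range(max_rounds):
        new_params = aggregate_fn(local_params)
        delta = max(abs(a - b) for a, b in zip(new_params, local_params))
        local_params = new_params
        if delta < eps:
            return local_params, r + 1  # preset training end condition met
    return local_params, max_rounds     # maximum iteration threshold reached
```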
The embodiment of the application provides a federated learning modeling optimization method. Compared with the prior-art technique of collecting the model parameters uploaded by all participants at a server and randomly selecting part of the uploaded model parameters to be weighted and averaged as the updated global model parameters, it should be noted that when the randomly selected model parameters are aggregated in the prior art, the degrees of association among the model parameters uploaded by the federated learning participants differ, and if model parameters with low association degrees are selected for aggregation, the updated global model is difficult to converge iteratively. The method of this embodiment first receives the encryption model parameters and social information sent by each second device, wherein the encryption model parameters are generated by the second devices through iterative training of their local models. Then, according to each piece of social information, selective encryption aggregation based on social association degree is performed on the encryption model parameters to obtain the encryption aggregation model parameters corresponding to each piece of social information, so that encryption aggregation of the encryption model parameters of the second devices with a high social association degree is realized on the basis of the social information; that is, encryption model parameters with a low social association degree are quickly screened out and removed, and the aggregated encryption aggregation model parameters are not negatively affected by them. Each encryption aggregation model parameter is then fed back to the corresponding second device so that the second device can iteratively optimize its local model according to the received encryption aggregation model parameters. The iterative convergence efficiency of the second devices, and hence of the global model, is thereby improved; the technical defect that a global model which is difficult to converge iteratively makes the model aggregation process computationally expensive and the federated learning modeling inefficient is overcome, and the efficiency of federated learning modeling is improved.
Further, referring to fig. 2, in another embodiment of the present application, the federal learning modeling optimization method is applied to a second device, and the federal learning modeling optimization method further includes:
step A10, performing iterative training on a local model according to a training sample set to obtain an encryption model parameter corresponding to the local model;
in this embodiment, it should be noted that the encryption model parameters are the homomorphically encrypted model parameters of the local model obtained after the iterative training.
Iterative training is performed on the local model according to the training sample set to obtain the encryption model parameters corresponding to the local model. Specifically, the local model is iteratively trained according to the training sample set until its number of iterations reaches a preset iteration count; the model parameters of the local model are then obtained and homomorphically encrypted to obtain the encryption model parameters.
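The train-then-encrypt flow described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the toy `EncryptedValue` class stands in for a real additively homomorphic scheme such as Paillier, and all function and parameter names are assumptions.

```python
class EncryptedValue:
    """Toy additively homomorphic wrapper (placeholder for e.g. Paillier)."""
    def __init__(self, value):
        self._v = value  # a real scheme would store ciphertext, not plaintext
    def __add__(self, other):
        return EncryptedValue(self._v + other._v)     # ciphertext + ciphertext
    def __mul__(self, scalar):
        return EncryptedValue(self._v * scalar)       # ciphertext * plain scalar
    def decrypt(self):
        return self._v

def train_local_model(w, samples, preset_iterations, lr=0.1):
    """One-parameter linear model trained by gradient descent (illustrative)."""
    for _ in range(preset_iterations):
        grad = sum((w * x - y) * x for x, y in samples) / len(samples)
        w -= lr * grad
    return w

def encrypt_model_parameters(w):
    """Step A10's final step: homomorphically encrypt the trained parameter."""
    return EncryptedValue(w)

samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]       # y = 2x
w = train_local_model(0.0, samples, preset_iterations=200)
enc_w = encrypt_model_parameters(w)
```

The encrypted parameter `enc_w`, not the plaintext `w`, is what the second device uploads to the first device.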
Step A20, sending the encryption model parameters and the social information corresponding to the second device to a first device, so that the first device performs selective encryption aggregation based on social association degrees on the encryption model parameters sent by each second device according to the social information sent by each second device, to obtain the encryption aggregation model parameters corresponding to each second device;
in this embodiment, the encryption model parameters and the social information corresponding to the second device are sent to a first device, so that the first device performs selective encryption aggregation based on social association degrees on the encryption model parameters sent by the second devices according to the social information sent by the second devices, to obtain the encryption aggregation model parameters corresponding to each second device. Specifically, the encryption model parameters and the social information corresponding to the second device are sent to the first device; based on the social association degrees corresponding to the social information, the first device selects, for each second device, the encryption model parameters of those second devices whose social association degree is greater than a preset association degree threshold, and then performs encryption aggregation on the selected encryption model parameters to obtain the encryption aggregation model parameter corresponding to each second device. For the specific manner in which the first device generates the encryption aggregation model parameter corresponding to each second device, reference may be made to the contents of step S20, and details are not described herein again.
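The server-side selection step above can be roughly sketched as follows, under the assumption that an additively homomorphic scheme lets the first device average ciphertexts without decrypting them; plain floats stand in for ciphertexts here, and all names, the fallback behavior, and the equal-weight average are illustrative assumptions.

```python
def selective_aggregate(enc_params, association, target, threshold):
    """For one target device, keep only parameters of devices whose social
    association degree with `target` exceeds the threshold, then average.
    enc_params:  {device_id: encrypted parameter (stand-in float)}
    association: {device_id: social association degree with `target`}"""
    selected = [p for dev, p in enc_params.items()
                if association.get(dev, 0.0) > threshold]
    if not selected:
        return enc_params[target]  # fall back to the device's own parameter
    # equal-weight average; homomorphic schemes support ciphertext addition
    # and multiplication by a plaintext scalar, so this runs on ciphertexts
    return sum(selected) * (1.0 / len(selected))

enc_params = {"A": 1.0, "B": 3.0, "C": 100.0}
association = {"A": 0.9, "B": 0.8, "C": 0.1}   # C is weakly associated with A
agg_for_A = selective_aggregate(enc_params, association, "A", threshold=0.5)
```

Because device C falls below the threshold, its outlying parameter never enters the aggregate for device A, which is exactly the screening effect the text describes.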
Step A30, receiving the encryption aggregation model parameters fed back by the first device, and iteratively optimizing the local model according to the encryption aggregation model parameters to obtain a federal learning model.
In this embodiment, the encryption aggregation model parameters fed back by the first device are received, and the local model is iteratively optimized according to the encryption aggregation model parameters to obtain the federal learning model. Specifically, the encryption aggregation model parameters fed back by the first device are received and decrypted to obtain the aggregation model parameters; the model parameters of the local model are then replaced with the aggregation model parameters to optimize the local model, and whether the optimized local model meets a preset training end condition is determined, where the preset training end condition includes conditions such as convergence of the loss function or reaching a maximum iteration threshold. If the condition is met, the optimized local model is taken as the federal learning model; if not, the process returns to the following step: performing iterative training on the local model according to the training sample set to obtain the encryption model parameters corresponding to the local model.
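The receive-update-check loop above could look like the following sketch, where the training-end condition is approximated by parameter convergence or a maximum round count; `receive_round` abstracts one full train, upload, aggregate, and feedback cycle, and all names and the convergence proxy are assumptions.

```python
def optimize_until_converged(local_w, receive_round, max_rounds=50, tol=1e-4):
    """Replace the local parameter with the decrypted aggregation parameter
    each round, stopping on convergence or on the maximum-round threshold."""
    for round_idx in range(max_rounds):
        new_w = receive_round(local_w)       # decrypted aggregation parameter
        if abs(new_w - local_w) < tol:       # proxy for loss convergence
            return new_w, round_idx + 1
        local_w = new_w                      # replace-and-update step
    return local_w, max_rounds               # maximum iteration threshold hit

# toy round: each aggregate pulls the parameter halfway toward 2.0
w, rounds = optimize_until_converged(0.0, lambda w: w + 0.5 * (2.0 - w))
```

In a real deployment `receive_round` would retrain locally, upload the encrypted parameters, and return the decrypted feedback from the first device.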
Wherein the federal learning model comprises a federal recommendation model, and after the step of iteratively optimizing the local model according to the encryption aggregation model parameters to obtain a federal learning model, the federal learning modeling optimization method further includes:
step B10, acquiring user data, inputting the user data into the federal learning model, and acquiring user characteristic representation variables;
in this example, it should be noted that the user data is data for recording items of a target user, for example, the number of purchases of a certain commodity by the target user and the access amount of the target user to a certain web page, and the user feature representation corresponding to the user feature representation variable may be represented by a string of real numbers, and may uniquely represent a certain event, for example, if the user clicks a certain web page 5 times, the user feature representation variable may be set to be a real number 5.
The method comprises the steps of obtaining user data, inputting the user data into a federated learning model, obtaining a user characteristic representation variable, specifically, extracting the user data from a preset local database, inputting the user data into the federated learning model, and obtaining the user characteristic representation variable.
Step B20, calculating a user similarity result corresponding to the user characteristic representation variable, and generating a similar user candidate set based on the similarity result;
in this embodiment, it should be noted that the user similarity is the similarity between users, the federal learning model includes a scoring model, and the method for calculating the user similarity result includes metrics such as the Euclidean distance and the cosine similarity.
A user similarity result corresponding to the user characteristic representation variable is calculated, and a similar user candidate set is generated based on the similarity result. Specifically, based on the user characteristic representation variable, each user to be selected corresponding to the target user is scored through the scoring model to obtain the score of each user to be selected, where a higher score indicates a higher degree of similarity between the target user and the user to be selected. The users to be selected are then sorted based on their scores to obtain a list of users to be selected, that is, the user similarity result, and the similar user candidate set is selected from the list; for example, the 3 users to be selected whose scores rank in the top three are selected from the list to form the similar user candidate set.
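A minimal sketch of this scoring-and-ranking step, using cosine similarity (one of the metrics named above) in place of the trained scoring model; the function names and the toy feature vectors are illustrative assumptions.

```python
import math

def build_similar_user_candidates(target_vec, candidate_vecs, top_k=3):
    """Score every candidate user against the target user's feature
    representation, sort by score, and keep the top-k as the candidate set."""
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0
    scored = sorted(((cosine(target_vec, vec), uid)
                     for uid, vec in candidate_vecs.items()), reverse=True)
    return [uid for _, uid in scored[:top_k]]

candidates = {"u1": [1.0, 0.0], "u2": [0.9, 0.1],
              "u3": [0.0, 1.0], "u4": [0.7, 0.7]}
similar = build_similar_user_candidates([1.0, 0.0], candidates, top_k=3)
```

The sorted score list plays the role of the "list of users to be selected" in the text, and the top-k slice is the similar user candidate set.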
Step B30, executing a preset item recommendation process based on the similar user candidate set.
In this embodiment, it should be noted that the preset item recommendation process is executed to perform personalized recommendation for the target user corresponding to the preset item recommendation process, that is, to recommend items that the target user is likely to be interested in to the target user.
A preset item recommendation process is executed based on the similar user candidate set. Specifically, a target item set corresponding to the similar user candidate set is obtained, where the target item set includes one or more items associated with each similar user, for example, movies watched, commodities purchased, or web pages clicked by the similar users. The items in the target item set are then screened to obtain an item set to be recommended, an item to be recommended is selected from the item set to be recommended, and the item to be recommended is recommended to the target user. For example, assume that the similar user candidate set includes a similar user A, a similar user B, and a similar user C, where the similar user A has watched a movie a, a movie b, and a movie c, the similar user B has watched the movie a and the movie b, and the similar user C has watched the movie a. The number of occurrences of the movie a is 3, that is, the click rate of the movie a is 3; similarly, the click rate of the movie b is 2 and the click rate of the movie c is 1. If the preset click rate threshold is 1, the movie a and the movie b are selected to form the item set to be recommended, and the items in the item set to be recommended are recommended to the target user.
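The movie example above can be sketched directly: count how many similar users are associated with each item, keep the items whose count exceeds the preset click rate threshold, and exclude items the target user already has. All names, and the exclusion of already-seen items, are illustrative assumptions consistent with the text.

```python
from collections import Counter

def recommend_items(similar_user_items, already_seen, click_threshold=1):
    """similar_user_items: {similar_user_id: [items associated with them]}
    Returns the item set to be recommended, sorted for determinism."""
    # count each item at most once per similar user
    counts = Counter(item for items in similar_user_items.values()
                     for item in set(items))
    return sorted(item for item, c in counts.items()
                  if c > click_threshold and item not in already_seen)

viewed = {"A": ["movie_a", "movie_b", "movie_c"],
          "B": ["movie_a", "movie_b"],
          "C": ["movie_a"]}
to_recommend = recommend_items(viewed, already_seen={"movie_a"})
```

With the threshold of 1 from the text, movies a and b pass the screening (click rates 3 and 2); excluding what the target user has already watched leaves movie b to recommend.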
The embodiment of the application provides a federated learning modeling optimization method. Compared with the prior-art approach in which a server collects the model parameters uploaded by all participants and randomly selects part of the uploaded model parameters to be weighted and averaged as the updated global model parameters for federated learning modeling, it should be noted that, because the degrees of association among the model parameters uploaded by the federated learning participants differ, aggregating randomly selected model parameters with a low degree of association makes the updated global model difficult to converge iteratively. In this embodiment, a local model is iteratively trained according to a training sample set to obtain the encryption model parameters corresponding to the local model, and the encryption model parameters and the social information corresponding to the second device are then sent to the first device, so that the first device performs selective encryption aggregation based on social association degrees on the encryption model parameters sent by the second devices according to the social information sent by the second devices, to obtain the encryption aggregation model parameters corresponding to each second device. In this way, the encryption model parameters corresponding to the second devices with a high degree of social association are selected for encryption aggregation based on the social information; that is, the encryption model parameters with a low degree of social association are quickly screened out and removed, so that the aggregated encryption aggregation model parameters are not negatively affected by them. Further, the encryption aggregation model parameters fed back by the first device are received, and the local model is iteratively optimized according to the encryption aggregation model parameters to obtain the federated learning model. The iterative convergence efficiency of the second device in optimizing the local model, and thus the iterative convergence efficiency of the global model, is therefore improved, which overcomes the technical defect in the prior art that randomly selecting part of the uploaded model parameters makes the updated global model difficult to converge iteratively, increases the computation load of the model aggregation process, and lowers the federated learning modeling efficiency; the federated learning modeling efficiency is thus improved.
Further, referring to fig. 3, in another embodiment of the present application, the federal recommendation optimization method is applied to a second device, and the federal recommendation optimization method includes:
step D10, obtaining user data to be recommended, inputting the user data to be recommended into a federal recommendation model, and obtaining user characteristic representation variables, wherein the federal recommendation model is obtained by second equipment through iterative optimization of a local model based on encryption aggregation model parameters sent by first equipment, and the encryption aggregation model parameters are obtained by the first equipment through selective aggregation of the encryption model parameters of the second equipment based on social information corresponding to the second equipment;
in this embodiment, it should be noted that the user data to be recommended is data recording the item-related behavior of the user to be recommended, for example, the number of purchases of a certain commodity by the user to be recommended or the number of visits by the user to be recommended to a certain web page. The federal recommendation optimization method is applied to the second device, and the second device updates the federal recommendation model periodically; that is, iterative training is performed based on the federal learning modeling optimization method using the existing users and the newly added users in the second device.
User data to be recommended is obtained and input into a federal recommendation model to obtain the user characteristic representation variable, where the federal recommendation model is obtained by the second device through iterative optimization of a local model based on the encryption aggregation model parameters sent by the first device, and the encryption aggregation model parameters are obtained by the first device through selective aggregation of the encryption model parameters of the second devices based on the social information corresponding to the second devices. Specifically, the user data to be recommended is extracted from a preset local database and input into the federal recommendation model to obtain the user characteristic representation variable. For the process of constructing the federal recommendation model, reference may be made to the contents of steps S10 to S30, and details are not described herein again.
Step D20, calculating a user similarity result corresponding to the user characteristic representation variable, and generating a similar user candidate set based on the similarity result;
in this embodiment, a user similarity result corresponding to the user characteristic representation variable is calculated, and a similar user candidate set is generated based on the similarity result. Specifically, based on the user characteristic representation variable, each user to be selected corresponding to the user to be recommended is scored through the scoring model to obtain the score of each user to be selected; the users to be selected are then sorted based on their scores to obtain a list of users to be selected, that is, the user similarity result, and the similar user candidate set is selected from the list of users to be selected.
Step D30, executing a preset item recommendation process based on the similar user candidate set.
In this embodiment, a preset item recommendation process is executed based on the similar user candidate set. Specifically, a target item set corresponding to the similar user candidate set is obtained, the items in the target item set are screened to obtain an item set to be recommended, an item to be recommended is selected from the item set to be recommended, and the item to be recommended is recommended to the user to be recommended. For example, if the preset item recommendation process is executed to recommend a movie to the user to be recommended, the movies watched by the similar users are the items to be recommended, and the movies watched most frequently by the similar users but not yet watched by the user to be recommended may be selected from all the movies and recommended to the user to be recommended. For the specific implementation of steps D10 to D30, reference may be made to the contents of steps B10 to B30 in the second embodiment of this application, and details are not described herein again.
Wherein, before the step of obtaining the user data to be recommended, inputting the user data to be recommended into the federal recommendation model, and obtaining the user characteristic representation variables (the federal recommendation model being obtained by the second device through iterative optimization of a local model based on the encryption aggregation model parameters sent by the first device, and the encryption aggregation model parameters being obtained by the first device through selective aggregation of the encryption model parameters of the second device based on the social information corresponding to the second device), the federal recommendation optimization method further includes:
Step E10, performing iterative training on the local model according to the training sample set to obtain an encryption model parameter corresponding to the local model;
in this embodiment, iterative training is performed on the local model according to the training sample set to obtain the encryption model parameters corresponding to the local model. Specifically, the local model is iteratively trained according to the training sample set until its number of iterations reaches a preset iteration count; the model parameters of the local model are then obtained and homomorphically encrypted to obtain the encryption model parameters.
Step E20, sending the encryption model parameters and the social information corresponding to the second device to a first device, so that the first device performs selective encryption aggregation based on social association degrees on the encryption model parameters sent by each second device according to the social information sent by each second device, to obtain the encryption aggregation model parameters corresponding to each second device;
in this embodiment, the encryption model parameters and the social information corresponding to the second device are sent to a first device, so that the first device performs selective encryption aggregation based on social association degrees on the encryption model parameters sent by the second devices according to the social information sent by the second devices, to obtain the encryption aggregation model parameters corresponding to each second device. Specifically, each encryption model parameter and each piece of social information are sent to the first device, so that the first device performs encryption aggregation on the encryption model parameters whose social association degrees meet a preset association degree threshold, to obtain the encryption aggregation model parameters.
Step E30, receiving the encryption aggregation model parameters fed back by the first device, and iteratively optimizing the local model according to the encryption aggregation model parameters to obtain a federal recommendation model.
In this embodiment, the encryption aggregation model parameters fed back by the first device are received, and the local model is iteratively optimized according to the encryption aggregation model parameters to obtain the federal recommendation model. Specifically, the encryption aggregation model parameters fed back by the first device are received, and the model parameters of the local model are replaced with the aggregation model parameters to optimize the local model until the optimized local model meets a preset training end condition, so as to obtain the federal recommendation model. For the specific implementation of steps E10 to E30, reference may be made to the contents of steps A10 to A30, and details are not described herein again.
The embodiment of the application provides an item recommendation method. User data to be recommended is obtained and input into a federal recommendation model to obtain the user characteristic representation variable, where the federal recommendation model is obtained by the second device through iterative optimization of a local model based on the encryption aggregation model parameters sent by the first device, and the encryption aggregation model parameters are obtained by the first device through selective aggregation of the encryption model parameters of the second devices based on the social information corresponding to the second devices. A user similarity result corresponding to the user characteristic representation variable is then calculated, a similar user candidate set is generated based on the similarity result, and a preset item recommendation process is executed based on the similar user candidate set, so that the item recommendation process is executed through the federal recommendation model and personalized recommendation for the user is completed. Because the federal recommendation model is obtained by the second device through iterative optimization of the local model based on the encryption aggregation model parameters sent by the first device, and the encryption aggregation model parameters are obtained by the first device through selective aggregation of the encryption model parameters of the second devices based on the social information corresponding to the second devices, that is, because the encryption model parameters corresponding to the second devices with a high degree of social association are selected for federal learning modeling based on the social information, the accuracy of the federal recommendation model constructed based on federal learning is not negatively affected by the encryption model parameters with a low degree of social association. The accuracy of the federal recommendation model is thus improved, the accuracy of item recommendation is higher, and the recommendation effect of item recommendation is improved.
Referring to fig. 4, fig. 4 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
As shown in fig. 4, the federal learning modeling optimization device may include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002. The communication bus 1002 is used for realizing connection communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a memory device separate from the processor 1001 described above.
Optionally, the federal learning modeling optimization device may further include a rectangular user interface, a network interface, a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and the like. The rectangular user interface may comprise a Display screen (Display), an input sub-module such as a Keyboard (Keyboard), and the optional rectangular user interface may also comprise a standard wired interface, a wireless interface. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface).
Those skilled in the art will appreciate that the federal learning modeling optimization device structure illustrated in fig. 4 does not constitute a limitation of the federal learning modeling optimization device, and the device may include more or fewer components than those illustrated, a combination of some components, or a different arrangement of components.
As shown in fig. 4, the memory 1005, which is a type of computer storage medium, may include an operating system, a network communication module, and a federal learning modeling optimization program. The operating system is a program for managing and controlling hardware and software resources of the Federal learning modeling optimization equipment and supports the operation of the Federal learning modeling optimization program and other software and/or programs. The network communication module is used for realizing communication among components in the memory 1005 and communication with other hardware and software in the federal learning modeling optimization system.
In the federated learning modeling optimization apparatus shown in fig. 4, the processor 1001 is configured to execute a federated learning modeling optimization program stored in the memory 1005 to implement the steps of any of the federated learning modeling optimization methods described above.
The specific implementation of the federal learning modeling optimization device of the application is basically the same as that of each embodiment of the federal learning modeling optimization method, and details are not repeated herein.
Referring to fig. 5, fig. 5 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
As shown in fig. 5, the federal recommendation optimization device may include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002. The communication bus 1002 is used for realizing connection communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a memory device separate from the processor 1001 described above.
Optionally, the federal recommendation optimization device may further include a rectangular user interface, a network interface, a camera, RF (Radio Frequency) circuits, a sensor, an audio circuit, a WiFi module, and the like. The rectangular user interface may comprise a display screen (Display) and an input sub-module such as a keyboard (Keyboard), and the optional rectangular user interface may also comprise a standard wired interface and a wireless interface. The network interface may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface).
Those skilled in the art will appreciate that the federal recommendation optimization device structure shown in fig. 5 does not constitute a limitation of the federal recommendation optimization device, and the device may include more or fewer components than those illustrated, a combination of some components, or a different arrangement of components.
As shown in fig. 5, the memory 1005, which is a type of computer storage medium, may include an operating system, a network communication module, and a federal recommendation optimization program. The operating system is a program for managing and controlling the hardware and software resources of the federal recommendation optimization device, and supports the operation of the federal recommendation optimization program and other software and/or programs. The network communication module is used for realizing communication among components in the memory 1005 and communication with other hardware and software in the federal recommendation optimization system.
In the federal recommendation optimization device shown in fig. 5, the processor 1001 is configured to execute the federal recommendation optimization program stored in the memory 1005 to implement the steps of any one of the above-mentioned federal recommendation optimization methods.
The specific implementation of the federal recommendation optimization device of the application is basically the same as that of each embodiment of the federal recommendation optimization method, and details are not repeated herein.
The application further provides a federal learning modeling optimization apparatus, where the federal learning modeling optimization apparatus is applied to a first device, and the federal learning modeling optimization apparatus includes:
the receiving module is used for receiving encryption model parameters and social information sent by each second device, wherein the encryption model parameters are generated by the second devices through iterative training of local models;
the aggregation module is used for selectively encrypting and aggregating the encryption model parameters based on the social association degree according to the social information to obtain the encryption and aggregation model parameters corresponding to the social information;
and the feedback module is used for respectively feeding back the parameters of the encryption aggregation model to the corresponding second equipment so that the second equipment can iteratively optimize the local model according to the received parameters of the encryption aggregation model to obtain a federal learning model.
Optionally, the aggregation module is further configured to:
selecting social association model parameters respectively corresponding to the second equipment from the encryption model parameters based on the social association degrees corresponding to the social information;
and respectively carrying out encryption aggregation on the social association model parameters respectively corresponding to the second equipment to obtain the encryption aggregation model parameters corresponding to the social information.
Optionally, the federal learning modeling optimization device is further configured to:
respectively comparing the social association degrees in the social information with a preset association degree threshold value to obtain association degree comparison results corresponding to the social information;
and selecting each encryption model parameter with the social association degree meeting a preset association degree threshold value condition as each social association model parameter corresponding to each second device according to the comparison result of each association degree.
The specific implementation of the federal learning modeling optimization device of the application is basically the same as that of each embodiment of the federal learning modeling optimization method, and details are not repeated herein.
The application further provides a federal learning modeling optimization apparatus, where the federal learning modeling optimization apparatus is applied to a second device, and the federal learning modeling optimization apparatus includes:
the training module is used for carrying out iterative training on a local model according to a training sample set so as to obtain an encryption model parameter corresponding to the local model;
a sending module, configured to send the encryption model parameters and the social information corresponding to the second device to a first device, so that the first device performs selective encryption aggregation based on social association degrees on the encryption model parameters sent by each second device according to the social information sent by each second device, to obtain the encryption aggregation model parameters corresponding to each second device;
and the optimization module is used for receiving the encryption aggregation model parameters fed back by the first equipment, and iteratively optimizing the local model according to the encryption aggregation model parameters to obtain a federal learning model.
Optionally, the federated learning modeling optimization apparatus is further configured to:
acquire user data and input the user data into the federated learning model to obtain user feature representation variables;
compute user similarity results corresponding to the user feature representation variables, and generate a similar-user candidate set based on the similarity results;
and execute a preset item recommendation process based on the similar-user candidate set.
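The optional recommendation flow above (user feature representation variables, then user similarity, then a similar-user candidate set, then item recommendation) can be sketched as follows. Cosine similarity is an assumed similarity measure, and the hard-coded embeddings stand in for the federated model's output; none of these names come from the patent.

```python
# Minimal user-based recommendation sketch over precomputed feature vectors.
import math

def cosine(u, v):
    """Cosine similarity between two feature representation vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def similar_users(target, embeddings, k=2):
    """Rank the other users by similarity to the target; keep the top k
    as the similar-user candidate set."""
    scores = [(u, cosine(embeddings[target], e))
              for u, e in embeddings.items() if u != target]
    scores.sort(key=lambda s: s[1], reverse=True)
    return [u for u, _ in scores[:k]]

def recommend(target, embeddings, liked_items, k=2):
    """Recommend items liked by the candidate set but not yet by the target."""
    candidates = similar_users(target, embeddings, k)
    seen = set(liked_items[target])
    recs = []
    for u in candidates:
        recs.extend(i for i in liked_items[u] if i not in seen and i not in recs)
    return recs

embeddings = {"u1": [1.0, 0.0], "u2": [0.9, 0.1], "u3": [0.0, 1.0]}
liked = {"u1": ["a"], "u2": ["a", "b"], "u3": ["c"]}
recs = recommend("u1", embeddings, liked, k=1)
```

Here `u2` is the nearest neighbour of `u1`, so the unseen item `"b"` is recommended.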
The specific implementation of this federated learning modeling optimization apparatus of the present application is substantially the same as that of the embodiments of the federated learning modeling optimization method described above, and is not repeated here.
The present application further provides a federated recommendation optimization apparatus, the apparatus being applied to a second device and comprising:
an acquisition module, configured to acquire user data to be recommended and input the user data to be recommended into a federated recommendation model to obtain user feature representation variables, wherein the federated recommendation model is obtained by the second device through iterative optimization of a local model based on encryption aggregation model parameters sent by a first device, and the encryption aggregation model parameters are obtained by the first device through selective aggregation of the encryption model parameters of each second device based on the social information corresponding to each second device;
a generating module, configured to compute user similarity results corresponding to the user feature representation variables and to generate a similar-user candidate set based on the similarity results;
and a recommending module, configured to execute a preset item recommendation process based on the similar-user candidate set.
Optionally, the federated recommendation optimization apparatus is further configured to:
perform iterative training on the local model according to a training sample set to obtain the encryption model parameters corresponding to the local model;
send the encryption model parameters and the social information corresponding to the second device to the first device, so that the first device performs, based on the social information sent by each second device, selective encryption aggregation based on social association degrees on the encryption model parameters sent by the respective second devices, to obtain the encryption aggregation model parameters corresponding to each second device;
and receive the encryption aggregation model parameters fed back by the first device, and iteratively optimize the local model according to the encryption aggregation model parameters to obtain the federated recommendation model.
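The "encryption aggregation" relied on throughout assumes the first device can combine parameters without seeing them in the clear. The toy masking scheme below merely illustrates that property with zero-sum blinding masks that cancel in the aggregate; the patent does not name a concrete scheme, and a real deployment might instead use, for example, additively homomorphic encryption. Every name here is an assumption.

```python
# Toy stand-in for secure aggregation: each client blinds its parameters
# with a mask; the masks are chosen to sum to zero across clients, so the
# sum of "ciphertexts" equals the sum of plaintexts. NOT real cryptography.

class ToyAdditiveCipher:
    def __init__(self, masks):
        self.masks = masks  # per-client masks agreed to sum to zero

    def encrypt(self, client, values):
        """Blind each parameter with the client's mask."""
        m = self.masks[client]
        return [v + m for v in values]

    def aggregate_and_decrypt(self, ciphertexts):
        """Sum the blinded vectors; zero-sum masks cancel, leaving the
        true element-wise sum, which is then averaged."""
        n = len(ciphertexts)
        totals = [sum(col) for col in zip(*ciphertexts)]
        return [t / n for t in totals]

masks = {"A": 5.0, "B": -5.0}          # zero-sum masks
cipher = ToyAdditiveCipher(masks)
c_a = cipher.encrypt("A", [1.0, 2.0])  # A's parameters, blinded
c_b = cipher.encrypt("B", [3.0, 6.0])  # B's parameters, blinded
avg = cipher.aggregate_and_decrypt([c_a, c_b])
```

The individual ciphertexts (`[6.0, 7.0]` and `[-2.0, 1.0]`) reveal neither client's parameters, yet their average recovers the true mean.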
The specific implementation of the federated recommendation optimization apparatus of the present application is substantially the same as that of the embodiments of the federated recommendation optimization method described above, and is not repeated here.
The present application further provides a medium, the medium being a readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of the federated learning modeling optimization method according to any one of the above embodiments.
The specific implementation of this readable storage medium is substantially the same as that of the embodiments of the federated learning modeling optimization method described above, and is not repeated here.
The present application further provides a medium, the medium being a readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of the federated recommendation optimization method according to any one of the above embodiments.
The specific implementation of this readable storage medium is substantially the same as that of the embodiments of the federated recommendation optimization method described above, and is not repeated here.
The present application further provides a computer program product comprising one or more computer programs, the one or more computer programs being executable by one or more processors to implement the steps of the federated learning modeling optimization method according to any one of the above embodiments.
The specific implementation of this computer program product is substantially the same as that of the embodiments of the federated learning modeling optimization method described above, and is not repeated here.
The embodiments of the present application further provide a computer program product comprising one or more computer programs, the one or more computer programs being executable by one or more processors to implement the steps of the federated recommendation optimization method according to any one of the above embodiments.
The specific implementation of this computer program product is substantially the same as that of the embodiments of the federated recommendation optimization method described above, and is not repeated here.
The above description is only of preferred embodiments of the present application and is not intended to limit the scope of the present application. Any equivalent structural or process modification made using the contents of the specification and the drawings, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of protection of the present application.

Claims (12)

1. A federated learning modeling optimization method, applied to a first device, the method comprising the following steps:
receiving encryption model parameters and social information sent by each second device, wherein the encryption model parameters are generated by the second devices through iterative training of their local models;
performing, according to each piece of social information, selective encryption aggregation based on social association degrees on the respective encryption model parameters to obtain encryption aggregation model parameters corresponding to each piece of social information;
and feeding back the encryption aggregation model parameters to the corresponding second devices, respectively, so that each second device iteratively optimizes its local model according to the received encryption aggregation model parameters to obtain a federated learning model.
2. The federated learning modeling optimization method according to claim 1, wherein the step of performing, according to each piece of social information, selective encryption aggregation based on social association degrees on the respective encryption model parameters to obtain the encryption aggregation model parameters corresponding to each piece of social information comprises:
selecting, from the encryption model parameters, social association model parameters respectively corresponding to each second device, based on the social association degrees corresponding to the respective pieces of social information;
and performing encryption aggregation on the social association model parameters respectively corresponding to each second device to obtain the encryption aggregation model parameters corresponding to each piece of social information.
3. The federated learning modeling optimization method according to claim 2, wherein the step of selecting, from the encryption model parameters, the social association model parameters respectively corresponding to each second device based on the social association degrees corresponding to the respective pieces of social information comprises:
comparing the social association degree in each piece of social information with a preset association degree threshold, respectively, to obtain an association degree comparison result corresponding to each piece of social information;
and selecting, according to each association degree comparison result, each encryption model parameter whose social association degree satisfies the preset association degree threshold condition as the social association model parameter corresponding to each second device.
4. A federated learning modeling optimization method, applied to a second device, the method comprising the following steps:
performing iterative training on a local model according to a training sample set to obtain encryption model parameters corresponding to the local model;
sending the encryption model parameters and social information corresponding to the second device to a first device, so that the first device performs, based on the social information sent by each second device, selective encryption aggregation based on social association degrees on the encryption model parameters sent by the respective second devices, to obtain encryption aggregation model parameters corresponding to each second device;
and receiving the encryption aggregation model parameters fed back by the first device, and iteratively optimizing the local model according to the encryption aggregation model parameters to obtain a federated learning model.
5. The federated learning modeling optimization method according to claim 4, wherein the federated learning model comprises a federated recommendation model, and
after the step of iteratively optimizing the local model according to the encryption aggregation model parameters to obtain the federated learning model, the federated learning modeling optimization method comprises:
acquiring user data and inputting the user data into the federated learning model to obtain user feature representation variables;
computing user similarity results corresponding to the user feature representation variables, and generating a similar-user candidate set based on the similarity results;
and executing a preset item recommendation process based on the similar-user candidate set.
6. A federated recommendation optimization method, applied to a second device, the method comprising the following steps:
acquiring user data to be recommended and inputting the user data to be recommended into a federated recommendation model to obtain user feature representation variables, wherein the federated recommendation model is obtained by the second device through iterative optimization of a local model based on encryption aggregation model parameters sent by a first device, and the encryption aggregation model parameters are obtained by the first device through selective aggregation of the encryption model parameters of each second device based on the social information corresponding to each second device;
computing user similarity results corresponding to the user feature representation variables, and generating a similar-user candidate set based on the similarity results;
and executing a preset item recommendation process based on the similar-user candidate set.
7. The federated recommendation optimization method according to claim 6, wherein, before the step of acquiring the user data to be recommended and inputting the user data to be recommended into the federated recommendation model to obtain the user feature representation variables, the federated recommendation optimization method further comprises:
performing iterative training on the local model according to a training sample set to obtain the encryption model parameters corresponding to the local model;
sending the encryption model parameters and the social information corresponding to the second device to the first device, so that the first device performs, based on the social information sent by each second device, selective encryption aggregation based on social association degrees on the encryption model parameters sent by the respective second devices, to obtain the encryption aggregation model parameters corresponding to each second device;
and receiving the encryption aggregation model parameters fed back by the first device, and iteratively optimizing the local model according to the encryption aggregation model parameters to obtain the federated recommendation model.
8. A federated learning modeling optimization device, comprising: a memory, a processor, and a program stored on the memory for implementing the federated learning modeling optimization method, wherein
the memory is configured to store the program for implementing the federated learning modeling optimization method;
and the processor is configured to execute the program for implementing the federated learning modeling optimization method, to implement the steps of the federated learning modeling optimization method according to any one of claims 1 to 3 or 4 to 5.
9. A federated recommendation optimization device, comprising: a memory, a processor, and a program stored on the memory for implementing the federated recommendation optimization method, wherein
the memory is configured to store the program for implementing the federated recommendation optimization method;
and the processor is configured to execute the program for implementing the federated recommendation optimization method, to implement the steps of the federated recommendation optimization method according to any one of claims 6 to 7.
10. A medium, the medium being a readable storage medium, wherein the readable storage medium stores a program for implementing the federated learning modeling optimization method, and the program, when executed by a processor, implements the steps of the federated learning modeling optimization method according to any one of claims 1 to 3 or 4 to 5.
11. A medium, the medium being a readable storage medium, wherein the readable storage medium stores a program for implementing the federated recommendation optimization method, and the program, when executed by a processor, implements the steps of the federated recommendation optimization method according to any one of claims 6 to 7.
12. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the steps of the federated learning modeling optimization method according to any one of claims 1 to 3 or 4 to 5, or the steps of the federated recommendation optimization method according to any one of claims 6 to 7.
CN202110840832.4A 2021-07-22 2021-07-22 Federal learning modeling optimization method, apparatus, medium, and computer program product Pending CN113487043A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110840832.4A CN113487043A (en) 2021-07-22 2021-07-22 Federal learning modeling optimization method, apparatus, medium, and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110840832.4A CN113487043A (en) 2021-07-22 2021-07-22 Federal learning modeling optimization method, apparatus, medium, and computer program product

Publications (1)

Publication Number Publication Date
CN113487043A true CN113487043A (en) 2021-10-08

Family

ID=77942472

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110840832.4A Pending CN113487043A (en) 2021-07-22 2021-07-22 Federal learning modeling optimization method, apparatus, medium, and computer program product

Country Status (1)

Country Link
CN (1) CN113487043A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109871702A (en) * 2019-02-18 2019-06-11 深圳前海微众银行股份有限公司 Federal model training method, system, equipment and computer readable storage medium
CN111553742A (en) * 2020-05-08 2020-08-18 深圳前海微众银行股份有限公司 Federal product recommendation method, device, equipment and computer storage medium
CN111598254A (en) * 2020-05-22 2020-08-28 深圳前海微众银行股份有限公司 Federal learning modeling method, device and readable storage medium
CN112329940A (en) * 2020-11-02 2021-02-05 北京邮电大学 Personalized model training method and system combining federal learning and user portrait
US20210073639A1 (en) * 2018-12-04 2021-03-11 Google Llc Federated Learning with Adaptive Optimization
CN113095512A (en) * 2021-04-23 2021-07-09 深圳前海微众银行股份有限公司 Federal learning modeling optimization method, apparatus, medium, and computer program product


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DONGDONG YE ET AL.: "Federated Learning in Vehicular Edge Computing: A Selective Model Aggregation Approach", SPECIAL SECTION ON COMMUNICATION AND FOG/EDGE COMPUTING TOWARDS INTELLIGENT CONNECTED VEHICLES (ICVS), 21 January 2020 (2020-01-21), pages 23920 - 23935, XP011771171, DOI: 10.1109/ACCESS.2020.2968399 *
CHENG JUNHONG: "Differential Privacy Protection Method Based on Federated Learning", China Master's Theses Database, Information Science and Technology Series, 31 May 2021 (2021-05-31), pages 138 - 153 *
DONG YE; HOU WEI; CHEN XIAOJUN; ZENG SHUAI: "Efficient and Secure Federated Learning Based on Secret Sharing and Gradient Selection", Journal of Computer Research and Development, no. 10, 9 October 2020 (2020-10-09) *

Similar Documents

Publication Publication Date Title
US11711447B2 (en) Method and apparatus for real-time personalization
CN111079022A (en) Personalized recommendation method, device, equipment and medium based on federal learning
CN106339507B (en) Streaming Media information push method and device
CN111339412A (en) Longitudinal federal recommendation recall method, device, equipment and readable storage medium
CN112116008B (en) Processing method of target detection model based on intelligent decision and related equipment thereof
CN112100489B (en) Object recommendation method, device and computer storage medium
CN105159910A (en) Information recommendation method and device
CN110851699A (en) Deep reinforcement learning-based information flow recommendation method, device, equipment and medium
CN111324812B (en) Federal recommendation method, device, equipment and medium based on transfer learning
CN111291273A (en) Recommendation system optimization method, device, equipment and readable storage medium
CN111553742A (en) Federal product recommendation method, device, equipment and computer storage medium
KR102381330B1 (en) Recommend content providers to improve targeting and other settings
JP7160866B2 (en) Information provision system, information provision method, and program
CN113378067A (en) Message recommendation method, device, medium, and program product based on user mining
CN113065067A (en) Article recommendation method and device, computer equipment and storage medium
CN111553743A (en) Federal product recommendation method, device, equipment and computer storage medium
CN106168975B (en) The acquisition methods and device of target user's concentration
CN112308648A (en) Information processing method and device
CN113487043A (en) Federal learning modeling optimization method, apparatus, medium, and computer program product
CN110874639A (en) Method and device for acquiring operation information
CN112269942B (en) Method, device and system for recommending object and electronic equipment
Yu et al. Attributes coupling based item enhanced matrix factorization technique for recommender systems
CN108305097B (en) Data processing method, equipment and client
US9536199B1 (en) Recommendations based on device usage
JP2011227720A (en) Recommendation system, recommendation method and recommendation program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination