CN112329010A - Adaptive data processing method, apparatus, device and storage medium based on federated learning - Google Patents

Adaptive data processing method, apparatus, device and storage medium based on federated learning

Info

Publication number
CN112329010A
Authority
CN
China
Prior art keywords
model
preset
trained
private
disturbance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011115886.6A
Other languages
Chinese (zh)
Inventor
范力欣
吴锦和
鞠策
金逸伦
张天豫
周雨豪
杨强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WeBank Co Ltd filed Critical WeBank Co Ltd
Priority to CN202011115886.6A priority Critical patent/CN112329010A/en
Publication of CN112329010A publication Critical patent/CN112329010A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/50 Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F 21/55 Detecting local intrusion or implementing counter-measures
    • G06F 21/554 Detecting local intrusion or implementing counter-measures involving event detection and direct action
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses an adaptive data processing method, device, and storage medium based on federated learning. The method includes: when a data processing instruction is detected, acquiring data to be processed and inputting it into a preset dynamic federated network model; and obtaining a prediction result of the data to be processed after the preset dynamic federated network model performs prediction processing on it. The preset dynamic federated network model is a target model obtained by executing a preset dynamic federated flow on a first model to be trained, based on preset training data with preset labels and a first private disturbance factor. The method and device improve the defense capability of the deep neural network model against various attacks, and improve the processing efficiency and accuracy for the data to be processed.

Description

Adaptive data processing method, apparatus, device and storage medium based on federated learning
Technical Field
The application relates to the artificial intelligence field of financial technology (Fintech), and in particular to an adaptive data processing method, apparatus, device, and storage medium based on federated learning.
Background
With the continuous development of financial technology, especially internet finance, more and more technologies are being applied in the financial field. The financial industry, however, also places higher demands on these technologies; for example, it imposes higher requirements on adaptive data processing based on federated learning.
The deep learning model is very widely applied and is often used for data processing in the field of artificial intelligence. However, existing deep learning models are frequently subjected to various interference and learning attacks, such as theft of model contributors' training data, theft of model contributors' private information, data poisoning against model contributors, and adversarial-sample forgery at test time. Deep learning models in the prior art are nevertheless trained and learned under an attack-free (idealized) assumption, which degrades the performance of the actually trained deep learning model and reduces data processing efficiency and accuracy.
Disclosure of Invention
The application mainly aims to provide an adaptive data processing method, device, and storage medium based on federated learning, so as to solve the technical problem in the prior art that data processing efficiency and accuracy are reduced when data is processed by a deep learning model trained and learned under an attack-free assumption.
To achieve the above object, the present application provides a federated learning-based adaptive data processing method applied to a first participant, where the federated learning-based adaptive data processing method includes:
when a data processing instruction is detected, acquiring data to be processed, and inputting the data to be processed into a preset dynamic federated network model;
obtaining a prediction result of the data to be processed after the preset dynamic federated network model performs prediction processing on the data to be processed;
the preset dynamic federated network model is a target model obtained by executing a preset dynamic federated flow on a first model to be trained on the basis of preset training data with preset labels and a first private disturbance factor.
Optionally, the preset dynamic federated network model comprises one or more dynamic federated sub-network models;
the step of obtaining the prediction result of the to-be-processed data after the preset dynamic federated network model performs prediction processing on the to-be-processed data includes:
determining private disturbance associated information in the data processing instruction;
if the private disturbance associated information comprises a plurality of disturbance factors, determining a disturbance weight corresponding to each disturbance factor;
acquiring, based on the disturbance factors and their corresponding disturbance weights, each prediction sub-result obtained after the dynamic federated sub-network model corresponding to each disturbance factor in the preset dynamic federated network model performs prediction processing on the data to be processed; and
obtaining the prediction result of the data to be processed based on each prediction sub-result.
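The steps above amount to a weighted ensemble over the sub-network models selected by the disturbance factors. A minimal sketch in Python, assuming each sub-model is represented by a callable returning a class-probability vector (the names `sub_models`, `weights`, and `predict_ensemble` are illustrative, not from the patent):

```python
def predict_ensemble(sub_models, weights, x):
    """Combine prediction sub-results using per-disturbance-factor weights.

    sub_models: list of callables, each mapping input x to a list of class
                probabilities (one dynamic federated sub-network model per
                disturbance factor).
    weights:    disturbance weights, one per sub-model; normalized here.
    """
    total = sum(weights)
    norm = [w / total for w in weights]
    n_classes = len(sub_models[0](x))
    combined = [0.0] * n_classes
    for model, w in zip(sub_models, norm):
        sub_result = model(x)               # prediction sub-result
        for i, p in enumerate(sub_result):
            combined[i] += w * p
    return combined

# Two toy "sub-network models" disagreeing on a 2-class problem.
m1 = lambda x: [0.9, 0.1]
m2 = lambda x: [0.3, 0.7]
result = predict_ensemble([m1, m2], [3.0, 1.0], x=None)
```

With weights 3:1 the first sub-result dominates; the combined vector remains a valid probability distribution because the weights are normalized.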
Optionally, before the step of obtaining the prediction result of the to-be-processed data after the preset dynamic federated network model performs prediction processing on the to-be-processed data, the method includes:
acquiring preset training data with preset labels;
executing a preset dynamic federated flow on the first model to be trained based on the preset training data with the preset labels and the first private disturbance factor to obtain a target model; and
and setting the target model as the preset dynamic federated network model.
Optionally, the step of executing a preset dynamic federated flow on a first model to be trained based on the preset training data with the preset labels and the first private disturbance factor to obtain a target model includes:
determining a second model to be trained based on the first private disturbance factor and the first model to be trained;
performing iterative training on the second model to be trained based on the preset training data with the preset labels to train and update model variables of the second model to be trained;
determining whether the iteratively trained second model to be trained reaches a preset replacement-update condition, and if so, performing replacement updating on the trained and updated model variables by executing a preset dynamic federated flow to obtain a replacement-updated second model to be trained; and
continuing to iteratively train and replacement-update the replacement-updated second model to be trained until the second model to be trained meets a preset training-completion condition, so as to obtain a target model.
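The alternation of local iterative training and replacement updates can be sketched as follows (a toy illustration with gradient descent on a scalar model variable; `sync_every`, `aggregate`, and the scalar setting are illustrative assumptions, not the patent's implementation):

```python
def train_dynamic_federated(w0, grad, aggregate, rounds, sync_every, lr=0.1):
    """Iteratively train a model variable w; every `sync_every` steps the
    replacement-update condition is reached and w is replaced by the value
    returned by `aggregate`, standing in for the third party's aggregation
    of all participants' model variables."""
    w = w0
    for step in range(1, rounds + 1):
        w -= lr * grad(w)                  # local iterative training step
        if step % sync_every == 0:         # preset replacement-update condition
            w = aggregate(w)               # replacement update
    return w

# Toy objective (w - 3)^2; aggregation averages with a peer fixed at 3.0.
grad = lambda w: 2 * (w - 3)
aggregate = lambda w: (w + 3.0) / 2
w_final = train_dynamic_federated(0.0, grad, aggregate, rounds=50, sync_every=5)
```

Both the local steps and the periodic aggregation pull the variable toward the optimum, so the loop converges despite the repeated replacement.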
Optionally, if the iteratively trained second model to be trained reaches a preset replacement-update condition, the step of performing replacement updating on the trained and updated model variables by executing a preset dynamic federated flow to obtain a replacement-updated second model to be trained includes:
if the iteratively trained second model to be trained reaches the preset replacement-update condition, encrypting the trained and updated model variables and sending them to a third party associated with the first participant, so that the third party aggregates them with the model variables sent by a plurality of other second participants to obtain an aggregated model variable, and feeds the aggregated model variable back to the first participant and each second participant;
wherein the other second participants respectively determine their own model variables based on the corresponding second private disturbance factors sent by the third party and the first model to be trained; and
receiving the aggregated model variable fed back by the third party, and replacing the trained and updated model variables with the aggregated model variable to obtain the replacement-updated second model to be trained.
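The third party's aggregation step is, in essence, an element-wise average over the participants' model variables. The sketch below assumes plain federated averaging; the encryption is represented only by placeholder functions, since the patent does not specify the scheme:

```python
def encrypt(v):
    # Placeholder for participant-side encryption; a real deployment would
    # use e.g. homomorphic encryption or secret sharing (an assumption here).
    return v

def decrypt(v):
    return v

def aggregate_model_variables(encrypted_variables):
    """Third-party aggregation: element-wise average of the model variables
    sent by the first participant and the other second participants."""
    variables = [decrypt(v) for v in encrypted_variables]
    n = len(variables)
    length = len(variables[0])
    return [sum(v[i] for v in variables) / n for i in range(length)]

# First participant plus two second participants, each holding a locally
# perturbed model variable (a flat parameter vector here).
sent = [encrypt(v) for v in ([1.0, 2.0], [3.0, 2.0], [2.0, 5.0])]
aggregated = aggregate_model_variables(sent)   # fed back to every participant
```

Because each participant perturbs its own variables with a private disturbance factor before sending, the aggregate reflects all participants without exposing any single participant's private factor.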
Optionally, the first private perturbation factor comprises a first private network layer structure factor;
the step of determining a second model to be trained based on the first private perturbation factor and the first model to be trained includes:
determining weight information of the first private network layer structure factor;
determining a network layer structure to be added based on the weight information;
and determining a second model to be trained based on the network layer structure to be added and the first model to be trained.
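Viewing a network as an ordered list of layers, the first private network layer structure factor's weight information can be read as specifying where an extra private layer is spliced into the shared first model. A minimal sketch with layers as callables (`insert_private_layer` and this layer representation are illustrative assumptions):

```python
def insert_private_layer(base_layers, position, private_layer):
    """Determine the second model to be trained by adding a private network
    layer (known only to this participant) into the public first model at
    the position given by the factor's weight information."""
    layers = list(base_layers)          # copy: do not mutate the shared model
    layers.insert(position, private_layer)
    def model(x):
        for layer in layers:
            x = layer(x)
        return x
    return model

# Public first model: two simple layers; private layer: a scaling layer.
base = [lambda x: x + 1, lambda x: 2 * x]
second_model = insert_private_layer(base, position=1, private_layer=lambda x: x * 10)
y = second_model(3)    # ((3 + 1) * 10) * 2
```

The public layer list is left untouched, so other participants (and any attacker who only knows the public model) never see the private layer.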
Optionally, the first private perturbation factor comprises a second private network layer structure factor;
the step of determining a second model to be trained based on the first private perturbation factor and the first model to be trained includes:
receiving a second private network layer structure factor sent by a server;
determining a network layer structure to be removed based on the second private network layer structure factor;
and determining a second model to be trained based on the network layer structure to be removed and the first model to be trained.
Optionally, the first private perturbation factor comprises a private loss function factor;
the step of executing a preset dynamic federated flow on a first model to be trained based on the preset training data with the preset labels and the first private disturbance factor to obtain a target model includes:
performing local iterative training on the first model to be trained based on the preset training data with the preset labels to obtain an intermediate training result;
determining a disturbance loss function based on the private loss function factor and a first preset disturbance amplitude;
randomly or adaptively performing private disturbance adjustment, based on the disturbance loss function, on a preset loss function in the local iterative training process to obtain a target loss function; and
and performing local iterative training on the first model to be trained based on the intermediate training result, the preset label and the target loss function to obtain a target model.
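Randomly perturbing the preset loss function can be sketched as wrapping it with a bounded disturbance whose size is capped by the first preset disturbance amplitude (the helper names and the specific multiplicative form are illustrative assumptions, not from the patent):

```python
import random

def make_target_loss(preset_loss, loss_factor, amplitude, rng=random.Random(0)):
    """Build a target loss: the preset loss privately disturbed by the
    private loss function factor, with the disturbance magnitude bounded
    by the first preset disturbance amplitude."""
    def target_loss(pred, label):
        # Disturbance drawn from [-amplitude, amplitude], scaled privately.
        disturbance = loss_factor * rng.uniform(-amplitude, amplitude)
        return preset_loss(pred, label) * (1.0 + disturbance)
    return target_loss

mse = lambda pred, label: (pred - label) ** 2
target = make_target_loss(mse, loss_factor=1.0, amplitude=0.1)
base = mse(2.0, 3.0)          # 1.0
perturbed = target(2.0, 3.0)  # stays within 10% of the preset loss value
```

Bounding the disturbance keeps the target loss close to the preset loss, so training still optimizes the intended objective while the exact loss surface remains private.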
Optionally, the first private perturbation factor comprises a private model parameter factor;
the step of executing a preset dynamic federated flow on a first model to be trained based on the preset training data with the preset labels and the first private disturbance factor to obtain a target model includes:
performing local iterative training on the initial model parameters of the first model to be trained based on the preset training data with the preset labels to obtain intermediate model parameters;
determining a disturbance model parameter based on the private model parameter factor and a second preset disturbance amplitude;
and carrying out disturbance adjustment on the intermediate model parameters in the local iterative training process randomly or in a self-adaptive manner based on the disturbance model parameters to obtain a target model.
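Disturbance adjustment of the intermediate model parameters can likewise be sketched as adding bounded private noise after a local training pass (pure-Python illustration; the additive form and names are assumptions):

```python
import random

def perturb_parameters(intermediate_params, param_factor, amplitude,
                       rng=random.Random(42)):
    """Randomly disturbance-adjust intermediate model parameters: each
    parameter receives private noise scaled by the private model parameter
    factor and bounded by the second preset disturbance amplitude."""
    return [p + param_factor * rng.uniform(-amplitude, amplitude)
            for p in intermediate_params]

params = [0.5, -1.2, 3.3]               # intermediate model parameters
perturbed = perturb_parameters(params, param_factor=1.0, amplitude=0.05)
```

Each parameter moves by at most `param_factor * amplitude`, so the perturbed model stays close to the trained one while its exact parameters remain private.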
Optionally, the first private perturbation factor comprises a gradient factor;
the step of executing a preset dynamic federated flow on a first model to be trained based on the preset training data with the preset labels and the first private disturbance factor to obtain a target model includes:
performing local iterative training on the first model to be trained based on the preset training data with the preset label to obtain an intermediate gradient;
determining a disturbance gradient based on the gradient factor and a third preset disturbance amplitude;
carrying out disturbance adjustment on the intermediate gradient in the iterative training process randomly or adaptively based on the disturbance gradient to obtain a target intermediate gradient;
and performing dynamic iterative training on a preset basic model based on the target intermediate gradient and preset training data with preset labels to obtain a target model.
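Gradient disturbance fits the same pattern: each intermediate gradient computed on local data is adjusted by bounded private noise before the update step, reminiscent of (though not identical to) noisy-gradient schemes such as DP-SGD. The sketch below is an assumption about the form, on a scalar toy model:

```python
import random

def train_with_perturbed_gradients(w0, grad, gradient_factor, amplitude,
                                   steps, lr=0.1, rng=random.Random(7)):
    """Dynamic iterative training on a scalar model: each intermediate
    gradient is disturbance-adjusted into a target intermediate gradient,
    with the noise bounded by the third preset disturbance amplitude."""
    w = w0
    for _ in range(steps):
        g = grad(w)                                    # intermediate gradient
        g += gradient_factor * rng.uniform(-amplitude, amplitude)
        w -= lr * g                                    # step on target intermediate gradient
    return w

grad = lambda w: 2 * (w - 3)        # toy objective (w - 3)^2
w_final = train_with_perturbed_gradients(0.0, grad, gradient_factor=1.0,
                                         amplitude=0.01, steps=200)
```

With a small amplitude the noise only jitters the trajectory; the model still converges to a neighborhood of the optimum, while the gradients actually exchanged during the federated flow no longer reveal the exact local gradients.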
The application also provides a federated learning-based adaptive data processing apparatus, applied to a first participant, the federated learning-based adaptive data processing apparatus including:
the first acquisition module is used for acquiring data to be processed when a data processing instruction is detected, and inputting the data to be processed into a preset dynamic federated network model;
the second obtaining module is used for obtaining a prediction result of the data to be processed after the preset dynamic federated network model carries out prediction processing on the data to be processed;
the preset dynamic federated network model is a target model obtained by executing a preset dynamic federated flow on a first model to be trained on the basis of preset training data with preset labels and a first private disturbance factor.
Optionally, the preset dynamic federated network model comprises one or more dynamic federated sub-network models;
the second acquisition module includes:
the first determining unit is used for determining private disturbance associated information in the data processing instruction;
a second determining unit, configured to determine a disturbance weight corresponding to each disturbance factor if the private disturbance associated information includes multiple disturbance factors;
the first obtaining unit is used for acquiring, based on the disturbance factors and their corresponding disturbance weights, each prediction sub-result obtained after the dynamic federated sub-network model corresponding to each disturbance factor in the preset dynamic federated network model performs prediction processing on the data to be processed; and
the second acquisition unit is used for obtaining the prediction result of the data to be processed based on each prediction sub-result.
Optionally, the federated learning-based adaptive data processing apparatus further includes:
the third acquisition module is used for acquiring preset training data with preset labels;
the fourth obtaining module is used for executing a preset dynamic federated flow on the first model to be trained based on the preset training data with the preset label and the first private disturbance factor to obtain a target model; and
and the setting module is used for setting the target model as the preset dynamic federated network model.
Optionally, the fourth obtaining module includes:
a third determining unit, configured to determine a second model to be trained based on the first private disturbance factor and the first model to be trained;
the first training unit is used for performing iterative training on the second model to be trained based on the preset training data with the preset labels so as to train and update model variables of the second model to be trained;
the judging unit is used for determining whether the iteratively trained second model to be trained reaches a preset replacement-update condition, and if so, performing replacement updating on the trained and updated model variables by executing a preset dynamic federated flow to obtain the replacement-updated second model to be trained; and
and the second training unit is used for continuously carrying out iterative training and replacement updating on the second model to be trained which is replaced and updated until the second model to be trained meets a preset training completion condition to obtain a target model.
Optionally, the determining unit includes:
the aggregation subunit is configured to encrypt the model variable updated by the training and send the model variable to a third party associated with the first participant if the second model to be trained of the iterative training meets a preset replacement update condition, so that the third party performs aggregation processing on the model variables sent by a plurality of other second participants to obtain an aggregated model variable, and feeds the aggregated model variable back to the first participant and each second participant;
wherein the other second participants respectively determine their own model variables based on the corresponding second private disturbance factors sent by the third party and the first model to be trained; and
the first receiving subunit is configured to receive the aggregated model variable fed back by the third party, and to replace the trained and updated model variables with the aggregated model variable to obtain the replacement-updated second model to be trained.
Optionally, the first private perturbation factor comprises a first private network layer structure factor;
the third determination unit includes:
a first determining subunit, configured to determine weight information of the first private network layer structure factor;
the second determining subunit is used for determining the network layer structure to be added based on the weight information;
and the third determining subunit is used for determining a second model to be trained based on the network layer structure to be added and the first model to be trained.
Optionally, the first private perturbation factor comprises a second private network layer structure factor;
the third determination unit further includes:
the second receiving subunit is used for receiving a second private network layer structure factor sent by the server;
a fourth determining subunit, configured to determine, based on the second private network layer structure factor, a network layer structure to be removed;
and the fifth determining subunit is used for determining a second model to be trained based on the network layer structure to be removed and the first model to be trained.
Optionally, the first private perturbation factor comprises a private loss function factor,
the fourth obtaining module further comprises:
the third training unit is used for carrying out local iterative training on the first model to be trained based on the preset training data with the preset labels to obtain an intermediate training result;
a fourth determining unit, configured to determine a disturbance loss function based on the private loss function factor and a first preset disturbance amplitude;
the first adjusting unit is used for randomly or adaptively carrying out private disturbance adjustment on a preset loss function in the local iterative training process based on the disturbance loss function to obtain a target loss function;
and the third obtaining unit is used for carrying out local iterative training on the first model to be trained based on the intermediate training result, the preset label and the target loss function so as to obtain a target model.
Optionally, the first private perturbation factor comprises a private model parameter factor;
the fourth obtaining module further comprises:
the fourth training unit is used for carrying out local iterative training on the initial model parameters of the first model to be trained based on the preset training data with the preset labels to obtain intermediate model parameters;
a fifth determining unit, configured to determine a perturbation model parameter based on the private model parameter factor and a second preset perturbation amplitude;
and the second adjusting unit is used for carrying out disturbance adjustment on the intermediate model parameters in the local iterative training process randomly or in a self-adaptive manner based on the disturbance model parameters so as to obtain a target model.
Optionally, the first private perturbation factor comprises a gradient factor;
the fourth obtaining module further comprises:
the fifth training unit is used for carrying out local iterative training on the first model to be trained based on the preset training data with the preset labels to obtain an intermediate gradient;
a sixth determining unit, configured to determine a disturbance gradient based on the gradient factor and a third preset disturbance amplitude;
a third adjusting unit, configured to randomly or adaptively perform perturbation adjustment on an intermediate gradient in an iterative training process based on the perturbation gradient to obtain a target intermediate gradient;
and the sixth training unit is used for carrying out dynamic iterative training on a preset basic model based on the target intermediate gradient and preset training data with preset labels to obtain a target model.
The application also provides a federated learning-based adaptive data processing device. The federated learning-based adaptive data processing device is an entity node device and includes: a memory, a processor, and a program of the federated learning-based adaptive data processing method stored in the memory and executable on the processor, where the program, when executed by the processor, implements the steps of the federated learning-based adaptive data processing method described above.
The present application also provides a storage medium on which a program implementing the above federated learning-based adaptive data processing method is stored; when executed by a processor, the program implements the steps of the federated learning-based adaptive data processing method described above.
Compared with the prior art, in which data processing efficiency and accuracy are reduced because data is processed by a deep learning model trained and learned under an attack-free assumption, the present application acquires data to be processed when a data processing instruction is detected and inputs it into a preset dynamic federated network model, then obtains a prediction result of the data to be processed after the preset dynamic federated network model performs prediction processing on it. The preset dynamic federated network model is a target model obtained by executing a preset dynamic federated flow on a first model to be trained based on preset training data with preset labels and a first private disturbance factor. That is, the preset dynamic federated network model is obtained by federated learning after the first participant's first private disturbance factor is added, and the first participant alone holds that factor. An attacker who only knows the public neural network model in the federated flow therefore cannot mount an effective attack; even when the model is attacked, the influence of the attack on the deep neural network model is diluted by the addition of the first participant's first private disturbance factor. This improves the defense capability of the deep neural network model against various attacks, and processing the data to be processed with a preset dynamic federated network model that can defend against various attacks improves the processing efficiency and accuracy for the data to be processed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
To more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below; those skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a first embodiment of an adaptive data processing method based on federated learning according to the present application;
FIG. 2 is a schematic flow chart illustrating a detailed step of step S20 in the adaptive data processing method based on federated learning according to the present application;
fig. 3 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
The objectives, features, and advantages of the present application will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In a first embodiment of the adaptive data processing method based on federated learning, referring to FIG. 1, the method is applied to a first participant, and the adaptive data processing method based on federated learning includes:
step S10, when a data processing instruction is detected, acquiring data to be processed, and inputting the data to be processed into a preset dynamic federated network model;
step S20, obtaining a prediction result of the data to be processed after the preset dynamic federated network model carries out prediction processing on the data to be processed;
the preset dynamic federated network model is a target model obtained by executing a preset dynamic federated flow on a first model to be trained on the basis of preset training data with preset labels and a first private disturbance factor.
The method comprises the following specific steps:
step S10, when a data processing instruction is detected, acquiring data to be processed, and inputting the data to be processed into a preset dynamic federated network model;
in this embodiment, it should be noted that the adaptive data processing method based on federal learning may be applied to an adaptive data processing system based on federal learning, and in particular, to a first participant in the adaptive data processing system based on federal learning, the first participant being in communication connection with other second participants in the adaptive data processing system based on federal learning, or being in communication connection with other second participants in the adaptive data processing system based on federal learning through a server (in the adaptive data processing system based on federal learning), and the adaptive data processing system based on federal learning being subordinate to the adaptive data processing device based on federal learning. For the first participant, a preset dynamic federated network model is built in the first participant, so that after a data processing instruction is detected and data to be processed is obtained, the data to be processed can be input into the preset dynamic federated network model.
In this embodiment, it should be noted that the preset dynamic federated network model used may be a trained model, where the trained preset dynamic federated network model may be obtained in any of the following modes:
the first method is as follows: the preset dynamic federated network model is a target model obtained by adding a single first private disturbance factor to carry out disturbance training based on a first model to be trained in the training process;
the second method comprises the following steps: the preset dynamic federated network model is a target model obtained by adding a plurality of first private disturbance factors to carry out disturbance training simultaneously based on a first model to be trained in the training process, for example, the first private disturbance factor can be a first private network layer structure factor or a private loss function factor and the like;
the third method comprises the following steps: the preset dynamic federated network model is an overall model formed by a plurality of sub-network models, wherein each sub-network model is a target model obtained by adding a single first private perturbation factor to perturb and train in the training process based on a first model to be trained.
In this embodiment, the perturbation factor is a private perturbation factor of the first participant, that is, the private perturbation factor of the first participant is private information of the first participant.
In the present embodiment, the data to be processed may be chip data to be processed or financial data to be processed, or the like.
When a data processing instruction is detected, the data to be processed is acquired and input into the preset dynamic federated network model. When the preset dynamic federated network model is an overall model composed of a plurality of sub-network models (preset dynamic federated sub-network models), the data to be processed may be input into one sub-network model or into each sub-network model. During prediction, the sub-network models may be obtained by random fusion of the global/private information of the first participant (model holder), by random switching of that global/private information, by adaptive fusion of that global/private information, or by adaptive switching of that global/private information, which makes the model a dynamically changing model.
Step S20, obtaining a prediction result of the data to be processed after the preset dynamic federated network model carries out prediction processing on the data to be processed;
the preset dynamic federated network model is a target model obtained by executing a preset dynamic federated flow on a first model to be trained on the basis of preset training data with preset labels and a first private disturbance factor.
In this embodiment, it is emphasized that the first private perturbation factor is a private perturbation factor added by the first participant in the local model training process.
In this embodiment, obtaining the prediction result of the data to be processed after the preset dynamic federated network model performs prediction processing on the data to be processed includes:
if the preset dynamic federated network model comprises a plurality of sub-network models (dynamic federated sub-network models), prediction processing is performed on the data to be processed based on one of the sub-network models to obtain a predictor result as the prediction result; alternatively, the plurality of predictor results obtained after the plurality of sub-network models perform prediction processing on the data to be processed are subjected to mean processing to obtain the prediction result;
referring to fig. 2, the step of obtaining the prediction result of the to-be-processed data after the preset dynamic federated network model performs prediction processing on the to-be-processed data includes steps S21-S24:
step S21, determining private disturbance associated information in the data processing instruction;
in this embodiment, the private disturbance associated information is extracted from the data processing instruction. The private disturbance associated information includes the number of disturbance factors (private disturbance factors), that is, it indicates whether a plurality of disturbance factors or a single disturbance factor is involved.
Step S22, if the private disturbance associated information includes a plurality of disturbance factors, determining a disturbance weight corresponding to each disturbance factor;
if the private disturbance associated information includes a plurality of disturbance factors, a disturbance weight corresponding to each disturbance factor is determined; for example, the disturbance weights of 3 disturbance factors may be 50%, 25%, and 25%, where the 3 disturbance factors may be a first private network layer structure factor, a model parameter factor, and a gradient factor. It should be noted that each disturbance factor is associated with a first private disturbance factor added in the model training process: if the disturbance factor is the first private network layer structure factor, the first private disturbance factor is the first private network layer structure factor, and if the disturbance factor is the model parameter factor, the first private disturbance factor is the model parameter factor.
Step S23, acquiring each predictor result obtained after the preset dynamic federated sub-network model corresponding to each disturbance factor performs prediction processing on the data to be processed in the preset dynamic federated network model based on the disturbance factors and the corresponding disturbance weights;
specifically, the corresponding preset dynamic federated sub-network model (dynamic federated sub-network model) is determined based on each disturbance factor; for example, the corresponding preset dynamic federated sub-network model is determined to be a preset dynamic model-parameter-disturbance federated sub-network model based on a model parameter disturbance factor. Each predictor result (each initial predictor result multiplied by the corresponding disturbance weight) is then obtained after the preset dynamic federated sub-network model corresponding to each disturbance factor performs prediction processing on the data to be processed based on the disturbance factors and the corresponding disturbance weights.
Step S24, obtaining the prediction result of the data to be processed based on each predictor result.
The predictor results are added or fused to obtain the prediction result.
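The weighted fusion of sub-model predictor results described in steps S21-S24 can be sketched as follows; the function name, the example sub-model outputs, and the 50%/25%/25% weights are illustrative assumptions, not part of the disclosed implementation:

```python
# Illustrative sketch: fuse predictions from several perturbed sub-models
# using the disturbance weights described above. Sub-model outputs and
# weights here are hypothetical placeholder values.

def combine_predictions(sub_results, weights):
    """Weighted fusion of per-sub-model prediction vectors."""
    assert len(sub_results) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    fused = [0.0] * len(sub_results[0])
    for result, w in zip(sub_results, weights):
        for i, v in enumerate(result):
            fused[i] += w * v  # each initial predictor result scaled by its weight
    return fused

# Three hypothetical sub-models (e.g. layer-structure, parameter, and
# gradient perturbation sub-networks), each emitting a class-score vector.
sub_results = [[0.8, 0.2], [0.6, 0.4], [0.7, 0.3]]
weights = [0.50, 0.25, 0.25]
prediction = combine_predictions(sub_results, weights)
```

Mean processing, as in the single-weight case above, is the special case in which all disturbance weights are equal.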
In this embodiment, it should be noted that, if the preset dynamic federated network model does not include multiple sub-network models, the data to be processed is subjected to prediction processing based on the preset dynamic federated network model, so as to obtain a prediction result.
In this embodiment, it should be noted that the preset dynamic federated network model is obtained by performing dynamic iterative training on the first model to be trained based on preset training data with a preset label and a first private disturbance factor. That is, the preset dynamic federated network model is a trained model capable of accurately processing the data to be processed, so the prediction result can be accurately obtained after prediction processing is performed on the data to be processed based on the preset dynamic federated network model.
Wherein, before the step of obtaining the prediction result of the to-be-processed data after the preset dynamic federated network model performs prediction processing on the to-be-processed data, the method includes steps S01-S03:
step S01, acquiring preset training data with preset labels;
step S02, executing a preset dynamic federal process on the first model to be trained based on the preset training data with the preset label and the first private disturbance factor to obtain a target model;
in this embodiment, preset training data with a preset label and a first private perturbation factor are obtained, where the first private perturbation factor includes factors such as a model parameter, an objective function, a gradient, and a first private network layer structure.
Based on the preset training data with the preset labels and the first private disturbance factor, executing a preset dynamic federal flow on a first model to be trained, and obtaining a target model comprises the following steps:
the first method is as follows: performing dynamic iterative training on the first model to be trained based on the preset training data with the preset label and a first private disturbance factor to obtain a target model;
based on the preset training data with the preset label and a first private disturbance factor, performing dynamic iterative training on a first model to be trained to obtain a target model, wherein the method comprises the following steps:
the first sub-mode: performing dynamic iterative training on the first model to be trained based on the preset training data with the preset label, and randomly adding a first private disturbance factor for disturbance during the iterative training process, to obtain a target model;
the second sub-mode: performing dynamic iterative training on the first model to be trained based on the preset training data with the preset labels, and adding a first private disturbance factor for disturbance in each training iteration, to obtain a target model.
The second method comprises the following steps: and performing dynamic iterative training on the first model to be trained based on the preset training data with the preset labels and a plurality of first private disturbance factors to obtain a target model.
And step S03, setting the target model as the preset dynamic federated network model.
In this embodiment, after a target model is obtained through iteration, the target model is set as the preset dynamic federated network model.
The step of executing a preset dynamic federal process on the first model to be trained based on the preset training data with the preset label and the first private disturbance factor to obtain a target model comprises the following steps A1-A4:
step A1, determining a second model to be trained based on the first private disturbance factor and the first model to be trained;
and determining a second model to be trained based on the first private disturbance factor and the first model to be trained, namely, modifying (adding or removing) the network layer structure of the first model to be trained based on the first private disturbance factor to obtain the second model to be trained.
Step A2, performing iterative training on the second model to be trained based on the preset training data with preset labels to train and update model variables of the second model to be trained;
step A3, judging whether the second model to be trained of iterative training reaches a preset replacement updating condition, if so, performing replacement updating on the model variable updated by training by executing a preset dynamic federal flow to obtain the second model to be trained updated by replacement;
specifically, in this embodiment, the first party and the other second parties are in federated communication connection, and the preset dynamic federated procedure requires the first party and the other second parties to participate together.
In this embodiment, first, the first participant performs iterative training on the second model to be trained based on the preset training data with the preset label to train and update the model variables of the second model to be trained, and in this embodiment, the iterative training method includes, but is not limited to, a gradient descent method.
It should be noted that the preset replacement update condition includes reaching a first iteration number threshold, reaching a first training round number threshold, and the like. In this embodiment, if the trained second model to be trained reaches the preset replacement update condition, the model variables updated by training are replaced and updated by executing the preset dynamic federated process, so as to obtain the replacement-updated second model to be trained. Specifically, replacing and updating the model variables updated by training includes: obtaining other model variables corresponding to the other second participants, and then obtaining an aggregated model variable based on the other model variables of the other second participants and the model variable of the first participant; after the aggregated model variable is obtained, replacing and updating the model variable of the first participant based on the aggregated model variable. Specifically, if the second model to be trained reaches the preset replacement update condition, the model variable being trained and updated in the second model to be trained is directly replaced with the aggregated model variable; if the second model to be trained does not reach the preset replacement update condition, the second model to be trained continues to be iteratively trained until it reaches the preset replacement update condition.
Step A4, continuously performing iterative training and replacement updating on the second model to be trained after replacement updating until the second model to be trained meets a preset training completion condition, and obtaining a target model.
In this embodiment, based on the model variable after replacement update, the iterative training of the second model to be trained and the judgment on whether the second model to be trained reaches the preset replacement update condition are performed again until the second model to be trained reaches a preset training completion condition, where the preset training completion condition includes reaching a second iteration number threshold, reaching a second training round number threshold, and the like.
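Steps A1-A4 above can be illustrated with a minimal sketch. The element-wise mean as the aggregation rule and the iteration-count threshold as the replacement update condition are assumptions for illustration, since the embodiment does not fix either:

```python
# Sketch of the replacement-update step: when the local model reaches the
# preset replacement update condition (here assumed to be an iteration-count
# threshold), the locally trained model variables are replaced by an
# aggregate over all participants' variables (here assumed to be the mean).

def aggregate_variables(all_variables):
    """Element-wise mean of each participant's model variables."""
    n = len(all_variables)
    return [sum(vals) / n for vals in zip(*all_variables)]

def maybe_replace(local_vars, other_vars, iteration, threshold):
    """Replace local variables with the aggregate once the condition is met."""
    if iteration >= threshold:          # preset replacement update condition
        return aggregate_variables([local_vars] + other_vars)
    return local_vars                   # otherwise keep training locally
```

In the actual flow the exchange would go through the third party with encryption, as described later for steps H1-H2; this sketch only shows the replacement logic itself.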
Wherein the first private perturbation factor comprises an input data factor;
the first private perturbation factor comprises a private loss function factor;
the step of executing a preset dynamic federal process on the first model to be trained based on the preset training data with the preset label and the first private disturbance factor to obtain the target model comprises the following steps B1-B4:
step B1, performing local iterative training on the first model to be trained based on the preset training data with the preset labels to obtain an intermediate training result;
step B2, determining a disturbance loss function based on the private loss function factor and a first preset disturbance amplitude;
step B3, based on the disturbance loss function, randomly or adaptively carrying out private disturbance adjustment on a preset loss function in the local iterative training process to obtain a target loss function;
and B4, performing local iterative training on the first model to be trained based on the intermediate training result, the preset label and the target loss function to obtain a target model.
In this embodiment, the first private disturbance factor is a private loss function factor. Based on the preset training data with the preset label, iterative training (one iteration or multiple iterations) is performed on the first model to be trained (in the local iterative training process of the first participant) to obtain an intermediate training result. After the first private disturbance factor is determined to be the private loss function factor, a disturbance loss function is determined based on the private loss function factor and the first preset disturbance amplitude, and based on the disturbance loss function, private disturbance adjustment is performed randomly or adaptively on the preset loss function in the local iterative training process to obtain a target loss function; that is, a perturbation term Δl_t is determined, and after the perturbation Δl_t is added, the preset loss function and the perturbation Δl_t are integrated to obtain the target loss function. Local iterative training (one iteration or multiple iterations) is then performed on the first model to be trained based on the intermediate training result (intermediate prediction label), the preset label, and the target loss function to obtain the target model.
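A minimal sketch of the loss-function perturbation in steps B1-B4, assuming (purely as illustration) squared error as the preset loss and a uniformly sampled perturbation term Δl_t bounded by the first preset disturbance amplitude:

```python
# Sketch of the loss-function perturbation: a perturbation term Δl_t, bounded
# by the first preset disturbance amplitude, is fused with the preset loss to
# form the target loss. Squared error and uniform sampling are assumptions.
import random

def preset_loss(pred, label):
    """Illustrative preset loss: squared error on a single prediction."""
    return (pred - label) ** 2

def target_loss(pred, label, amplitude, rng):
    """Fuse the preset loss with a bounded random perturbation Δl_t."""
    delta_l = amplitude * rng.uniform(-1.0, 1.0)  # perturbation term Δl_t
    return preset_loss(pred, label) + delta_l
```

An adaptive variant would choose Δl_t from the training state rather than sampling it randomly; the embodiment allows both.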
Compared with the prior art, in which data processing is performed with a deep learning model obtained by training and learning without considering attacks, which reduces data processing efficiency and accuracy, in the present application, when a data processing instruction is detected, the data to be processed is acquired and input into a preset dynamic federated network model, and a prediction result of the data to be processed is obtained after the preset dynamic federated network model performs prediction processing on the data to be processed; the preset dynamic federated network model is a target model obtained by executing a preset dynamic federated process on a first model to be trained based on preset training data with preset labels and a first private disturbance factor. In the present application, after the data to be processed is obtained, it is processed based on the preset dynamic federated network model, which is obtained by federated learning after the first private disturbance factor of the first participant is added; that is, the first participant holds the first private disturbance factor, so an attacker who only knows the public neural network model in the federated process cannot effectively mount an attack. Even when the model is attacked, the influence of the attack on the deep neural network model is generalized because the first private disturbance factor of the first participant has been added, thereby improving the defense capability of the deep neural network model against various attacks. Since the data to be processed is processed based on the preset dynamic federated network model with defense capability against various attacks, the processing efficiency and accuracy of the data to be processed can be improved.
Further, based on the first embodiment of the present application, another embodiment of the present application is provided, in which the first private perturbation factor includes a private model parameter factor;
the step of executing a preset dynamic federal process on the first model to be trained based on the preset training data with the preset label and the first private disturbance factor to obtain the target model comprises the following steps of C1-C4:
step C1, performing local iterative training on the initial model parameters of the first model to be trained based on the preset training data with the preset labels to obtain intermediate model parameters;
step C2, determining a disturbance model parameter based on the private model parameter factor and a second preset disturbance amplitude;
and step C3, carrying out disturbance adjustment on the intermediate model parameters in the local iterative training process randomly or in a self-adaptive manner based on the disturbance model parameters to obtain a target model.
In this embodiment, the first private disturbance factor is a private model parameter factor. After the first private disturbance factor is determined to be the private model parameter factor, based on the preset training data with the preset label, in the process of performing iterative training (one iteration or multiple iterations) on the initial model parameters of the first model to be trained (in the local iterative training process of the first participant), intermediate model parameters to be adjusted (to be feedback-adjusted) are obtained. Based on the private model parameter factor and the second preset disturbance amplitude, disturbance model parameters are determined; that is, during model training, a perturbation Δw_s is determined from the private model parameter factor and the second preset disturbance amplitude. After the perturbation Δw_s is added, the intermediate model parameters to be adjusted and the perturbation Δw_s are integrated; that is, disturbance adjustment is performed randomly or adaptively on the intermediate model parameters in the iterative training process based on the disturbance model parameters, and iterative training then continues on the first model to be trained with the disturbance-adjusted intermediate model parameters, based on the preset training data with the preset labels, so as to obtain the target model.
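The parameter perturbation in steps C1-C3 can be sketched as follows, assuming (as an illustration only) that the perturbation Δw_s is sampled uniformly within the second preset disturbance amplitude:

```python
# Sketch of the model-parameter perturbation: during local training, a
# perturbation Δw_s bounded by the second preset disturbance amplitude is
# fused into the intermediate model parameters before training continues.
import random

def perturb_parameters(weights, amplitude, rng):
    """Fuse each intermediate parameter with a bounded perturbation Δw_s."""
    return [w + amplitude * rng.uniform(-1.0, 1.0) for w in weights]
```

Because only the first participant holds the amplitude and the random state, an attacker observing the public model cannot reproduce the perturbed parameters.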
The first private perturbation factor comprises a gradient factor;
the step of executing a preset dynamic federal process on the first model to be trained based on the preset training data with the preset label and the first private disturbance factor to obtain the target model comprises the steps of D1-D4:
step D1, performing local iterative training on the first model to be trained based on the preset training data with the preset label to obtain an intermediate gradient;
step D2, determining a disturbance gradient based on the gradient factor and a third preset disturbance amplitude;
d3, carrying out disturbance adjustment on the intermediate gradient in the iterative training process randomly or adaptively based on the disturbance gradient to obtain a target intermediate gradient;
and D4, performing dynamic iterative training on a preset basic model based on the target intermediate gradient and preset training data with preset labels to obtain a target model.
In this embodiment, the first private disturbance factor is a gradient factor. After dynamic iterative training is performed on the first model to be trained based on the preset training data with the preset label, an intermediate gradient is obtained; a perturbation gradient, such as Δg_t, is determined based on the gradient factor and the third preset disturbance amplitude; based on the perturbation gradient Δg_t, disturbance adjustment is performed randomly or adaptively on the intermediate gradient in the iterative training process (in the local iterative training process of the first participant) to obtain a target intermediate gradient; and dynamic iterative training is performed on the first model to be trained based on the target intermediate gradient and the preset training data with preset labels to obtain the target model.
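The gradient perturbation in steps D1-D4 can be sketched on a toy objective f(w) = w²; the objective, the learning rate, and the uniform sampling of Δg_t are illustrative assumptions:

```python
# Sketch of the gradient perturbation: the intermediate gradient computed in
# local training is adjusted by Δg_t (bounded by the third preset disturbance
# amplitude) to form the target intermediate gradient used for the update.
import random

def perturbed_descent(w, lr, amplitude, rng, steps):
    """Gradient descent on the toy objective f(w) = w**2 with perturbed gradients."""
    for _ in range(steps):
        grad = 2.0 * w                                # intermediate gradient
        delta_g = amplitude * rng.uniform(-1.0, 1.0)  # perturbation Δg_t
        target_grad = grad + delta_g                  # target intermediate gradient
        w = w - lr * target_grad
    return w
```

With a bounded Δg_t the descent still converges to a neighbourhood of the optimum, which is consistent with the embodiment's claim that the perturbation generalizes attack influence without destroying training.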
In the embodiment, the initial model parameters of the first model to be trained are subjected to local iterative training based on the preset training data with the preset labels to obtain intermediate model parameters; determining a disturbance model parameter based on the private model parameter factor and a second preset disturbance amplitude; and carrying out disturbance adjustment on the intermediate model parameters in the local iterative training process randomly or in a self-adaptive manner based on the disturbance model parameters to obtain a target model. In this embodiment, from the perspective of changing parameters of the private model of the first participant and the like, disturbance in the model training process is performed, and the target model is accurately obtained.
Further, based on the first embodiment of the present application, another embodiment of the present application is provided, in which the first private perturbation factor includes a first private network layer structure factor;
the step of determining a second model to be trained based on the first private perturbation factor and the first model to be trained includes:
step E1, determining the weight information of the first private network layer structure factor;
in this embodiment, the first private perturbation factor includes a first private network layer structure factor (a factor for adding a network layer structure), and the manner of determining the weight information of the first private network layer structure factor may be: and acquiring preset weight information of the first private network layer structure factor.
Step E2, determining a network layer structure to be added based on the weight information;
a network layer structure to be added is determined based on the weight information: if the weight information is 0, the network layer structure to be added (a private network layer) is added on the basis of the global network structure; if the weight information is not 0, the network layer structure to be added (the private network layer) is added, according to the weight, on the basis of the global network (the global network structure of the first model to be trained).
Step E3, determining a second model to be trained based on the network layer structure to be added and the first model to be trained.
And after the network layer structure to be added is obtained, determining a second model to be trained according to a preset logic of adding the network layer structure based on the network layer structure to be added and the first model to be trained.
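Steps E1-E3 can be sketched by representing a model as an ordered list of layer names (an illustrative simplification); the placement rule used for a non-zero weight is an assumption, since the embodiment only states that the private layer is added "according to the weight":

```python
# Sketch of deriving the second model to be trained by inserting a private
# layer into the global structure. The list-of-layer-names representation and
# the weight-to-position rule are illustrative assumptions.

def build_second_model(global_layers, private_layer, weight):
    """Insert the private layer: a zero weight appends it after the global
    structure; a non-zero weight places it at a weight-derived position."""
    layers = list(global_layers)
    if weight == 0:
        layers.append(private_layer)
    else:
        pos = max(1, min(len(layers) - 1, round(weight * len(layers))))
        layers.insert(pos, private_layer)
    return layers
```

A real implementation would insert an actual layer object into the network graph; the position rule here only illustrates that the weight information governs where the private layer lands.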
The first private perturbation factor comprises a second private network layer structure factor;
the step of determining a second model to be trained based on the first private perturbation factor and the first model to be trained includes:
step F1, receiving a second private network layer structure factor sent by the server;
specifically, against data poisoning and implantation attacks, the server may also use private information for defense, for example, by randomly hiding some network layer structures of the first model to be trained and sending the remaining layers to each participant for training, so that the poisoning/implantation of a data attacker can only affect the exposed layers of the first model to be trained and cannot affect the model as a whole.
In this embodiment, the first private disturbance factor includes a second private network layer structure factor (a factor for removing a network layer structure), and the first participant receives the second private network layer structure factor or the second model to be trained, which is sent by the server.
Step F2, determining a network layer structure to be removed based on the second private network layer structure factor;
step F3, determining a second model to be trained based on the network layer structure to be removed and the first model to be trained.
And if the first participant receives a second private network layer structure factor sent by the server, determining a network layer structure to be removed or determining a network layer structure to be hidden based on the second private network layer structure factor. In this embodiment, a second model to be trained is determined based on the network layer structure to be removed and the first model to be trained.
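The server-side hiding of network layers in steps F1-F3 can be sketched as follows; representing the model as a list of layer names and choosing the hidden layers uniformly at random are illustrative assumptions:

```python
# Sketch of the server-side defence: randomly hide some layers and send only
# the remaining structure to the participants, so a poisoning attack can only
# touch the exposed layers. The list-of-layer-names model is illustrative.
import random

def hide_layers(layers, num_hidden, rng):
    """Randomly hide num_hidden layers; return the layers sent to participants."""
    hidden = set(rng.sample(range(len(layers)), num_hidden))
    return [layer for i, layer in enumerate(layers) if i not in hidden]
```

The hidden indices play the role of the second private network layer structure factor: only the server knows which layers were withheld.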
In this embodiment, the first private network layer structure factor is determined by determining weight information of the first private network layer structure factor; determining a network layer structure to be added based on the weight information; and determining a second model to be trained based on the network layer structure to be added and the first model to be trained. In the embodiment, the second model to be trained is accurately determined, and a foundation is laid for processing the data to be processed based on the preset dynamic federated network model with the defense capability against various attacks.
Further, based on the first embodiment of the present application, another embodiment of the present application is provided, in which if the iteratively trained second model to be trained reaches a preset replacement update condition, the step of performing a preset dynamic federal procedure to perform replacement update on the model variables that are trained and updated to obtain a replacement updated second model to be trained includes the following steps H1-H2:
step H1, if the second model to be trained of iterative training reaches a preset replacement updating condition, encrypting and sending the model variables to be trained and updated to a third party associated with the first participant, so that the third party can aggregate the model variables sent by a plurality of other second participants to obtain aggregated model variables, and feeding the aggregated model variables back to the first participant and the second participants;
the other second participants respectively determine respective model variables based on the corresponding second private disturbance factor sent by the third party and the first model to be trained;
and step H2, receiving the aggregation model variable fed back by the third party, replacing and updating the model variable updated by training to the aggregation model variable, and obtaining the preset prediction model to be trained updated by replacement.
In this embodiment, if the iteratively trained second model to be trained reaches the preset replacement update condition, the model variables updated by training are encrypted and sent to the third party associated with the first participant, so that the third party aggregates the model variables sent by the plurality of other second participants to obtain the aggregated model variable. The aggregated model variable fed back by the third party is received, and the model variables updated by training are replaced and updated with the aggregated model variable to obtain the replacement-updated preset prediction model to be trained, so that information leakage is avoided and security in the model training process is ensured.
Referring to fig. 3, fig. 3 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
As shown in fig. 3, the adaptive data processing apparatus based on federal learning may include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002. The communication bus 1002 is used for realizing connection communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a memory device separate from the processor 1001 described above.
Optionally, the adaptive data processing device based on federal learning may further include a user interface, a network interface, a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and the like. The user interface may comprise a Display screen (Display) and an input sub-module such as a Keyboard (Keyboard); optionally, the user interface may also comprise a standard wired interface and a wireless interface. The network interface may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface).
It will be understood by those skilled in the art that the federated learning-based adaptive data processing apparatus architecture depicted in FIG. 3 does not constitute a limitation on federated learning-based adaptive data processing apparatuses, and may include more or fewer components than those illustrated, or some components in combination, or a different arrangement of components.
As shown in fig. 3, the memory 1005, which is a storage medium, may include therein an operating system, a network communication module, and an adaptive data processing program based on federal learning. The operating system is a program that manages and controls the hardware and software resources of the Federal learning-based adaptive data processing device, and supports the operation of the Federal learning-based adaptive data processing device as well as other software and/or programs. The network communications module is used to enable communication between components within the memory 1005, as well as with other hardware and software in the adaptive data processing system based on federal learning.
In the adaptive data processing apparatus based on federal learning shown in fig. 3, the processor 1001 is configured to execute an adaptive data processing program based on federal learning stored in the memory 1005, and implement any one of the steps of the adaptive data processing method based on federal learning described above.
The specific implementation of the adaptive data processing device based on federal learning in the present application is basically the same as the embodiments of the adaptive data processing method based on federal learning, and is not described herein again.
The application also provides a self-adaptation data processing device based on federal learning, is applied to first participant, self-adaptation data processing device based on federal learning includes:
the first acquisition module is used for acquiring data to be processed when a data processing instruction is detected, and inputting the data to be processed into a preset dynamic federated network model;
the second obtaining module is used for obtaining a prediction result of the data to be processed after the preset dynamic federated network model carries out prediction processing on the data to be processed;
the preset dynamic federated network model is a target model obtained by executing a preset dynamic federated flow on a first model to be trained on the basis of preset training data with preset labels and a first private disturbance factor.
Optionally, the preset dynamic federated network model comprises one or more dynamic federated sub-network models;
the second acquisition module includes:
the first determining unit is used for determining private disturbance associated information in the data processing instruction;
a second determining unit, configured to determine a disturbance weight corresponding to each disturbance factor if the private disturbance associated information includes multiple disturbance factors;
the first obtaining unit is used for obtaining each prediction sub-result obtained after the preset dynamic federated sub-network model corresponding to each disturbance factor in the preset dynamic federated network model performs prediction processing on the data to be processed based on the disturbance factors and the corresponding disturbance weights;
and the second obtaining unit is used for obtaining the prediction result of the data to be processed based on each prediction sub-result.
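The weighted combination performed by these units can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes each dynamic federated sub-network model returns a class-probability vector and that the disturbance weights are combined by a normalized weighted sum (the patent does not fix the combination rule); `combine_sub_predictions` is a hypothetical name.

```python
import numpy as np

def combine_sub_predictions(sub_results, weights):
    """Weighted combination of per-sub-model prediction sub-results.

    sub_results: one prediction vector per dynamic federated sub-network
    model (one sub-model per disturbance factor).
    weights: the disturbance weight assigned to each disturbance factor.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalise the disturbance weights
    stacked = np.stack([np.asarray(r, dtype=float) for r in sub_results])
    return weights @ stacked  # weighted sum over sub-models
```

For example, two sub-models with equal weights simply average their outputs.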
Optionally, the adaptive data processing apparatus based on federal learning further includes:
the third acquisition module is used for acquiring preset training data with preset labels;
the fourth obtaining module is used for executing a preset dynamic federal process on the first model to be trained based on the preset training data with the preset label and the first private disturbance factor to obtain a target model;
and the setting module is used for setting the target model as the preset dynamic federated network model.
Optionally, the fourth obtaining module includes:
a third determining unit, configured to determine a second model to be trained based on the first private disturbance factor and the first model to be trained;
the first training unit is used for performing iterative training on the second model to be trained based on the preset training data with the preset labels so as to train and update model variables of the second model to be trained;
the judging unit is used for judging whether the second model to be trained of the iterative training reaches a preset replacement updating condition or not, and if the second model to be trained of the iterative training reaches the preset replacement updating condition, replacing and updating the model variable updated by training by executing a preset dynamic federal flow to obtain the second model to be trained which is replaced and updated;
and the second training unit is used for continuously carrying out iterative training and replacement updating on the second model to be trained which is replaced and updated until the second model to be trained meets a preset training completion condition to obtain a target model.
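The iterate-train / replace-update cycle carried out by these training units can be sketched as a loop. All callables below are hypothetical stand-ins for the patent's units: `train_step` performs one round of local iterative training on the model variables, `should_replace` checks the preset replacement-update condition, `federated_replace` swaps the locally trained variables for the aggregated ones obtained from the preset dynamic federal flow, and `done` checks the preset training-completion condition.

```python
def dynamic_federated_training(model_vars, train_step, should_replace,
                               federated_replace, done, max_iters=1000):
    """Sketch of the iterative-training / replacement-update loop."""
    for _ in range(max_iters):
        model_vars = train_step(model_vars)             # local iterative training
        if should_replace(model_vars):                  # preset replacement condition
            model_vars = federated_replace(model_vars)  # replace with aggregate
        if done(model_vars):                            # preset completion condition
            return model_vars
    return model_vars
```

The loop terminates either when the completion condition is met or after a bounded number of rounds.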
Optionally, the judging unit includes:
the aggregation subunit is configured to encrypt the model variable updated by the training and send the model variable to a third party associated with the first participant if the second model to be trained of the iterative training meets a preset replacement update condition, so that the third party performs aggregation processing on the model variables sent by a plurality of other second participants to obtain an aggregated model variable, and feeds the aggregated model variable back to the first participant and each second participant;
the other second participants respectively determine respective model variables based on the corresponding second private disturbance factor sent by the third party and the first model to be trained;
and the first receiving subunit is configured to receive the aggregated model variable fed back by the third party, and to replace and update the model variable updated by training with the aggregated model variable, so as to obtain the second model to be trained which is replaced and updated.
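The third party's aggregation step can be sketched as follows. The patent leaves the encryption scheme unspecified, so `encrypt`/`decrypt` are hypothetical stand-ins (an additively homomorphic scheme would let the third party aggregate without decrypting at all), and the element-wise mean is one common aggregation choice, not the only one the flow admits.

```python
import numpy as np

def third_party_aggregate(encrypted_vars, decrypt, encrypt):
    """Aggregate the encrypted model variables sent by all participants
    and return the aggregated model variable to feed back to each of them."""
    plain = [np.asarray(decrypt(v), dtype=float) for v in encrypted_vars]
    aggregated = np.mean(plain, axis=0)   # FedAvg-style element-wise mean
    return encrypt(aggregated)            # fed back to every participant
```

With an identity cipher, aggregating `[1, 2]` and `[3, 4]` yields `[2, 3]`.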
Optionally, the first private perturbation factor comprises a first private network layer structure factor;
the third determination unit includes:
a first determining subunit, configured to determine weight information of the first private network layer structure factor;
the second determining subunit is used for determining the network layer structure to be added based on the weight information;
and the third determining subunit is used for determining a second model to be trained based on the network layer structure to be added and the first model to be trained.
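Determining the second model to be trained by adding a layer can be sketched with the model represented as a list of layer specifications. The insertion rule and the keys of `weight_info` are assumptions for illustration; the patent only says the layer to be added is derived from the weight information of the first private network layer structure factor.

```python
def add_private_layer(base_layers, weight_info):
    """Insert a private layer, derived from the structure factor's weight
    information, into a copy of the first model to be trained."""
    position = weight_info.get("position", len(base_layers))  # assumed key
    new_layer = {"type": "dense", "units": weight_info["units"]}  # assumed spec
    second_model = list(base_layers)          # leave the first model intact
    second_model.insert(position, new_layer)
    return second_model
```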
Optionally, the first private perturbation factor comprises a second private network layer structure factor;
the third determination unit further includes:
the second receiving subunit is used for receiving a second private network layer structure factor sent by the server;
a fourth determining subunit, configured to determine, based on the second private network layer structure factor, a network layer structure to be removed;
and the fifth determining subunit is used for determining a second model to be trained based on the network layer structure to be removed and the first model to be trained.
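The complementary removal case can be sketched the same way. Here the server-sent second private network layer structure factor is assumed to be a collection of layer indices to drop; the patent does not fix its encoding.

```python
def remove_private_layers(base_layers, structure_factor):
    """Build the second model to be trained by removing the layers named
    by the server-sent structure factor (assumed: a set of indices)."""
    to_remove = set(structure_factor)
    return [layer for i, layer in enumerate(base_layers) if i not in to_remove]
```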
Optionally, the first private perturbation factor comprises a private loss function factor,
the fourth obtaining module further comprises:
the third training unit is used for carrying out local iterative training on the first model to be trained based on the preset training data with the preset labels to obtain an intermediate training result;
a fourth determining unit, configured to determine a disturbance loss function based on the private loss function factor and a first preset disturbance amplitude;
the first adjusting unit is used for randomly or adaptively carrying out private disturbance adjustment on a preset loss function in the local iterative training process based on the disturbance loss function to obtain a target loss function;
and the third obtaining unit is used for carrying out local iterative training on the first model to be trained based on the intermediate training result, the preset label and the target loss function so as to obtain a target model.
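One way to read the loss-perturbation step is as wrapping the preset loss with a bounded random term. The additive form below is an assumption for illustration; the patent only fixes a private loss function factor and a first preset disturbance amplitude, and allows the adjustment to be random or adaptive.

```python
import numpy as np

def perturbed_loss(preset_loss, loss_factor, amplitude, rng=None):
    """Wrap the preset loss into a target loss with a bounded, privately
    scaled perturbation (additive form assumed)."""
    rng = rng or np.random.default_rng()

    def target_loss(y_true, y_pred):
        base = preset_loss(y_true, y_pred)
        noise = rng.uniform(-amplitude, amplitude)  # bounded random disturbance
        return base + loss_factor * noise
    return target_loss
```

The perturbation is bounded: the target loss never departs from the preset loss by more than `loss_factor * amplitude`.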
Optionally, the first private perturbation factor comprises a private model parameter factor;
the fourth obtaining module further comprises:
the fourth training unit is used for carrying out local iterative training on the initial model parameters of the first model to be trained based on the preset training data with the preset labels to obtain intermediate model parameters;
a fifth determining unit, configured to determine a perturbation model parameter based on the private model parameter factor and a second preset perturbation amplitude;
and the second adjusting unit is used for carrying out disturbance adjustment on the intermediate model parameters in the local iterative training process randomly or in a self-adaptive manner based on the disturbance model parameters so as to obtain a target model.
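The parameter-perturbation variant admits an analogous sketch. Again the additive, uniformly bounded noise is an assumption; the patent fixes only the private model parameter factor and a second preset disturbance amplitude.

```python
import numpy as np

def perturb_parameters(params, factor, amplitude, rng=None):
    """Perturb intermediate model parameters with bounded noise scaled by
    the private model parameter factor (additive form assumed)."""
    rng = rng or np.random.default_rng(0)
    params = np.asarray(params, dtype=float)
    noise = rng.uniform(-amplitude, amplitude, size=params.shape)
    return params + factor * noise
```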
Optionally, the first private perturbation factor comprises a gradient factor;
the fourth obtaining module further comprises:
the fifth training unit is used for carrying out local iterative training on the first model to be trained based on the preset training data with the preset labels to obtain an intermediate gradient;
a sixth determining unit, configured to determine a disturbance gradient based on the gradient factor and a third preset disturbance amplitude;
a third adjusting unit, configured to randomly or adaptively perform perturbation adjustment on an intermediate gradient in an iterative training process based on the perturbation gradient to obtain a target intermediate gradient;
and the sixth training unit is used for carrying out dynamic iterative training on the first model to be trained based on the target intermediate gradient and the preset training data with the preset labels to obtain a target model.
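A single update step of the gradient-perturbation variant can be sketched as below: the intermediate gradient is adjusted by bounded noise scaled by the gradient factor, and the resulting target intermediate gradient drives an ordinary descent update. The additive noise form and the plain gradient-descent update are illustrative assumptions.

```python
import numpy as np

def perturbed_gradient_step(params, grad, lr, grad_factor, amplitude, rng=None):
    """One dynamic training step: perturb the intermediate gradient, then
    apply a gradient-descent update with the target intermediate gradient."""
    rng = rng or np.random.default_rng(0)
    grad = np.asarray(grad, dtype=float)
    noise = rng.uniform(-amplitude, amplitude, size=grad.shape)
    target_grad = grad + grad_factor * noise        # target intermediate gradient
    return np.asarray(params, dtype=float) - lr * target_grad
```

With `grad_factor = 0` the step reduces to plain gradient descent, which makes the perturbation's effect easy to isolate.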
The specific implementation of the adaptive data processing apparatus based on federal learning in the present application is basically the same as each embodiment of the adaptive data processing method based on federal learning, and is not described herein again.
The present application provides a storage medium, the storage medium storing one or more programs, which are executable by one or more processors to implement the steps of any one of the above-mentioned adaptive data processing methods based on federal learning.
The specific implementation of the storage medium of the present application is substantially the same as each embodiment of the above-described adaptive data processing method based on federal learning, and is not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (12)

1. An adaptive data processing method based on federal learning, characterized in that the method is applied to a first participant and comprises the following steps:
when a data processing instruction is detected, acquiring data to be processed, and inputting the data to be processed into a preset dynamic federated network model;
obtaining a prediction result of the data to be processed after the preset dynamic federated network model performs prediction processing on the data to be processed;
the preset dynamic federated network model is a target model obtained by executing a preset dynamic federated flow on a first model to be trained on the basis of preset training data with preset labels and a first private disturbance factor.
2. The adaptive federated learning-based data processing method of claim 1, wherein the pre-set dynamic federated network model includes one or more dynamic federated sub-network models;
the step of obtaining the prediction result of the to-be-processed data after the preset dynamic federated network model performs prediction processing on the to-be-processed data includes:
determining private disturbance associated information in the data processing instruction;
if the private disturbance associated information comprises a plurality of disturbance factors, determining a disturbance weight corresponding to each disturbance factor;
acquiring each prediction sub-result obtained after the preset dynamic federated sub-network model corresponding to each disturbance factor in the preset dynamic federated network model performs prediction processing on the data to be processed based on the disturbance factors and the corresponding disturbance weights;
and obtaining the prediction result of the data to be processed based on each prediction sub-result.
3. The adaptive data processing method based on federal learning of claim 1, wherein after the step of obtaining the prediction result of the to-be-processed data after the preset dynamic federal network model performs prediction processing on the to-be-processed data, the method comprises:
acquiring preset training data with preset labels;
executing a preset dynamic federal flow on the first model to be trained based on the preset training data with the preset label and the first private disturbance factor to obtain a target model;
and setting the target model as the preset dynamic federated network model.
4. The adaptive data processing method based on federated learning of claim 3,
the step of executing a preset dynamic federal flow to a first model to be trained based on the preset training data with the preset labels and the first private disturbance factor to obtain a target model comprises the following steps:
determining a second model to be trained based on the first private disturbance factor and the first model to be trained;
performing iterative training on the second model to be trained based on the preset training data with the preset labels to train and update model variables of the second model to be trained;
judging whether the second model to be trained of iterative training reaches a preset replacement updating condition, if so, performing replacement updating on the model variable updated by training by executing a preset dynamic federal flow to obtain a second model to be trained which is updated by replacement;
and continuously carrying out iterative training and replacement updating on the second model to be trained which is subjected to replacement updating until the second model to be trained meets a preset training completion condition, so as to obtain a target model.
5. The adaptive data processing method based on federated learning of claim 4,
if the second model to be trained of the iterative training reaches a preset replacement updating condition, performing replacement updating on the model variable updated by training by executing a preset dynamic federal flow to obtain a second model to be trained which is updated by replacement, wherein the step comprises the following steps of:
if the second model to be trained of the iterative training reaches a preset replacement updating condition, encrypting the model variable updated by the training and sending the model variable to a third party associated with the first participant so that the third party can aggregate the model variables sent by a plurality of other second participants to obtain an aggregated model variable, and feeding the aggregated model variable back to the first participant and each second participant;
the other second participants respectively determine respective model variables based on the corresponding second private disturbance factor sent by the third party and the first model to be trained;
and receiving the aggregation model variable fed back by the third party, replacing and updating the model variable updated by training into the aggregation model variable, and obtaining the second model to be trained updated by replacement.
6. The adaptive federated learning-based data processing method of claim 4, wherein the first private perturbation factor comprises a first private network layer structure factor;
the step of determining a second model to be trained based on the first private perturbation factor and the first model to be trained includes:
determining weight information of the first private network layer structure factor;
determining a network layer structure to be added based on the weight information;
and determining a second model to be trained based on the network layer structure to be added and the first model to be trained.
7. The adaptive federated learning-based data processing method of claim 4, wherein the first private perturbation factor comprises a second private network layer structure factor;
the step of determining a second model to be trained based on the first private perturbation factor and the first model to be trained includes:
receiving a second private network layer structure factor sent by a server;
determining a network layer structure to be removed based on the second private network layer structure factor;
and determining a second model to be trained based on the network layer structure to be removed and the first model to be trained.
8. The adaptive federated learning-based data processing method of claim 3, wherein the first private perturbation factor comprises a private loss function factor,
the step of executing a preset dynamic federal flow to a first model to be trained based on the preset training data with the preset labels and the first private disturbance factor to obtain a target model comprises the following steps:
performing local iterative training on the first model to be trained based on the preset training data with the preset labels to obtain an intermediate training result;
determining a disturbance loss function based on the private loss function factor and a first preset disturbance amplitude;
carrying out private disturbance adjustment on a preset loss function in a local iterative training process randomly or in a self-adaptive manner based on the disturbance loss function to obtain a target loss function;
and performing local iterative training on the first model to be trained based on the intermediate training result, the preset label and the target loss function to obtain a target model.
9. The adaptive federated learning-based data processing method of claim 3, wherein the first private perturbation factor comprises a private model parameter factor;
the step of executing a preset dynamic federal flow to a first model to be trained based on the preset training data with the preset labels and the first private disturbance factor to obtain a target model comprises the following steps:
performing local iterative training on the initial model parameters of the first model to be trained based on the preset training data with the preset labels to obtain intermediate model parameters;
determining a disturbance model parameter based on the private model parameter factor and a second preset disturbance amplitude;
and carrying out disturbance adjustment on the intermediate model parameters in the local iterative training process randomly or in a self-adaptive manner based on the disturbance model parameters to obtain a target model.
10. The adaptive federated learning-based data processing method of claim 3, wherein the first private perturbation factor comprises a gradient factor;
the step of executing a preset dynamic federal flow to a first model to be trained based on the preset training data with the preset labels and the first private disturbance factor to obtain a target model comprises the following steps:
performing local iterative training on the first model to be trained based on the preset training data with the preset label to obtain an intermediate gradient;
determining a disturbance gradient based on the gradient factor and a third preset disturbance amplitude;
carrying out disturbance adjustment on the intermediate gradient in the iterative training process randomly or adaptively based on the disturbance gradient to obtain a target intermediate gradient;
and performing dynamic iterative training on the first model to be trained on the basis of the target intermediate gradient and preset training data with preset labels to obtain a target model.
11. An adaptive data processing apparatus based on federal learning, characterized in that the adaptive data processing apparatus based on federal learning includes: a memory, a processor, and a program stored on the memory for implementing the federated learning-based adaptive data processing method,
the memory is used for storing a program for realizing the adaptive data processing method based on the federal learning;
the processor is configured to execute a program for implementing the adaptive data processing method based on federal learning so as to implement the steps of the adaptive data processing method based on federal learning according to any one of claims 1 to 10.
12. A storage medium having stored thereon a program for implementing a federal learning based adaptive data processing method, the program being executed by a processor to implement the steps of the federal learning based adaptive data processing method as claimed in any one of claims 1 to 10.
CN202011115886.6A 2020-10-16 2020-10-16 Adaptive data processing method, device, equipment and storage medium based on federal learning Pending CN112329010A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011115886.6A CN112329010A (en) 2020-10-16 2020-10-16 Adaptive data processing method, device, equipment and storage medium based on federal learning

Publications (1)

Publication Number Publication Date
CN112329010A true CN112329010A (en) 2021-02-05

Family

ID=74313260

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011115886.6A Pending CN112329010A (en) 2020-10-16 2020-10-16 Adaptive data processing method, device, equipment and storage medium based on federal learning

Country Status (1)

Country Link
CN (1) CN112329010A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591999A (en) * 2021-08-03 2021-11-02 北京邮电大学 End edge cloud federal learning model training system and method
CN113591999B (en) * 2021-08-03 2023-08-01 北京邮电大学 End-edge cloud federal learning model training system and method
CN113988260A (en) * 2021-10-27 2022-01-28 杭州海康威视数字技术股份有限公司 Data processing method, device, equipment and system
CN113988260B (en) * 2021-10-27 2022-11-25 杭州海康威视数字技术股份有限公司 Data processing method, device, equipment and system
CN114091690A (en) * 2021-11-25 2022-02-25 支付宝(杭州)信息技术有限公司 Method for training federated learning model, method for calling federated learning model and federated learning system
CN117932686A (en) * 2024-03-22 2024-04-26 成都信息工程大学 Federal learning privacy protection method, system and medium in meta universe based on excitation mechanism
CN117932686B (en) * 2024-03-22 2024-05-31 成都信息工程大学 Federal learning privacy protection method, system and medium in meta universe based on excitation mechanism

Similar Documents

Publication Publication Date Title
CN112329010A (en) Adaptive data processing method, device, equipment and storage medium based on federal learning
CN112181666B (en) Equipment assessment and federal learning importance aggregation method based on edge intelligence
CN110263936B (en) Horizontal federal learning method, device, equipment and computer storage medium
Wu et al. A hierarchical security framework for defending against sophisticated attacks on wireless sensor networks in smart cities
Wan et al. Reinforcement learning based mobile offloading for cloud-based malware detection
CN112668913A (en) Network construction method, device, equipment and storage medium based on federal learning
CN109658120B (en) Service data processing method and device
CN114611128B (en) Longitudinal federal learning method, device, system, equipment and storage medium
CN110796399A (en) Resource allocation method and device based on block chain
CN113869533A (en) Federal learning modeling optimization method, apparatus, readable storage medium, and program product
Habler et al. Adversarial machine learning threat analysis and remediation in open radio access network (o-ran)
Dahanayaka et al. Robust open-set classification for encrypted traffic fingerprinting
Kim et al. Deep learning based resource assignment for wireless networks
CN113792892A (en) Federal learning modeling optimization method, apparatus, readable storage medium, and program product
CN114168295A (en) Hybrid architecture system and task scheduling method based on historical task effect
CN117675823A (en) Task processing method and device of computing power network, electronic equipment and storage medium
CN111786937B (en) Method, apparatus, electronic device and readable medium for identifying malicious request
US20210266340A1 (en) Systems and methods for automated quantitative risk and threat calculation and remediation
Emu et al. Towards 6g networks: Ensemble deep learning empowered vnf deployment for iot services
CN105357100A (en) Method and device for acquiring priorities of instant messaging group members
CN116843016A (en) Federal learning method, system and medium based on reinforcement learning under mobile edge computing network
CN114503632A (en) Adaptive mutual trust model for dynamic and diverse multi-domain networks
CN107911315B (en) Message classification method and network equipment
Lari et al. Continual local updates for federated learning with enhanced robustness to link noise
CN113536288A (en) Data authentication method, device, authentication equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination