CN117648999B - Federal learning regression model loss function evaluation method and device and electronic equipment - Google Patents


Info

Publication number: CN117648999B
Authority: CN (China)
Prior art keywords: data, predicted value, secret sharing, determining
Legal status: Active
Application number: CN202410122725.1A
Other versions: CN117648999A (Chinese, zh)
Inventors: 马平, 兰春嘉
Current assignee: Shanghai Lingshuzhonghe Information Technology Co ltd
Original assignee: Shanghai Lingshuzhonghe Information Technology Co ltd
Application filed by Shanghai Lingshuzhonghe Information Technology Co ltd
Priority to: CN202410122725.1A
Publication of CN117648999A, application granted, publication of CN117648999B


Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a federal learning regression model loss function evaluation method and device and electronic equipment, and relates to the technical field of model training. The method comprises the following steps: acquiring first original data; processing the first original data by using a first sub-model to obtain a first predicted value, determining first intermediate data by using the first predicted value, and determining first predicted value secret sharing data according to the first intermediate data; determining second predicted value secret sharing data according to second intermediate data acquired from a second participant, and determining third predicted value secret sharing data according to third intermediate data acquired from a third participant; determining first blinded data according to the first, second and third predicted value secret sharing data; determining a federal predicted value of the target user according to the first blinded data and the second and third blinded data acquired from the other participants; and judging whether to terminate training of the preset model according to the label value and the federal predicted value. Data privacy security and model training efficiency are thereby improved.

Description

Federal learning regression model loss function evaluation method and device and electronic equipment
Technical Field
The invention relates to the technical field of model training, in particular to a federal learning regression model loss function evaluation method and device and electronic equipment.
Background
As a distributed machine learning technology, the core idea of federal learning is to implement balance between data privacy protection and data sharing calculation by performing distributed model training among a plurality of data sources having local data, and constructing a global model based on virtual fusion data only by exchanging model parameters or intermediate results on the premise of not exchanging local individual or sample data. Federal learning can be classified into horizontal federal learning, vertical federal learning, and transfer federal learning.
For model training scenarios in vertical federal learning, how to properly end the model training phase is an important challenge. In the related art, for a federal learning scenario with multiple participants, training is typically ended when the norms of the gradient vectors of each participant's two most recent adjacent local iteration models are both smaller than a preset threshold.
However, this approach requires many iterations, and model training is slow.
Disclosure of Invention
The invention provides a federal learning regression model loss function evaluation method and device and electronic equipment, in order to solve the problems in the related art that model training based on federal learning with multiple participants requires many iterations and trains slowly.
According to an aspect of the present invention, there is provided a federal learning regression model loss function evaluation method applied to a first participant, the method comprising:
Acquiring a test data set; the test data set comprises first original data of a target user;
processing the first original data by using a first sub-model of a preset regression model to obtain a first predicted value, determining first intermediate data by using the first predicted value, and determining secret sharing data of the first predicted value according to the first intermediate data;
Determining second predicted value secret sharing data according to second intermediate data acquired from a second participant, and determining third predicted value secret sharing data according to third intermediate data acquired from a third participant; the second intermediate data is determined according to a second predicted value, and the second predicted value is obtained by processing second original data by using a second sub-model of a preset regression model in a second participant; the third intermediate data is determined according to a third predicted value, and the third predicted value is obtained by processing third original data by using a third sub-model of a preset regression model in a third participant;
determining first blinded data according to the first predicted value secret sharing data, the second predicted value secret sharing data and the third predicted value secret sharing data;
determining a federal predicted value of the target user according to the first blinded data, the second blinded data acquired from the second participant and the third blinded data acquired from the third participant;
And evaluating the federal loss value according to the label value and the federal predicted value of the target user, and stopping training of the preset regression model when the federal loss value satisfies the convergence condition, so as to obtain the target regression model.
According to another aspect of the present invention, there is provided a regression model training apparatus based on federal learning, applied to a first participant, the apparatus comprising:
the acquisition unit is used for acquiring first original data of the target user;
The secret sharing data acquisition unit is used for processing the first original data by using a first sub-model of a preset regression model to obtain a first predicted value, determining first intermediate data by using the first predicted value, and determining secret sharing data of the first predicted value according to the first intermediate data;
The secret sharing data acquisition unit is further used for determining second predicted value secret sharing data according to second intermediate data acquired from a second participant and determining third predicted value secret sharing data according to third intermediate data acquired from a third participant; the second intermediate data is determined according to a second predicted value, and the second predicted value is obtained by processing second original data by using a second sub-model of a preset regression model in a second participant; the third intermediate data is determined according to a third predicted value, and the third predicted value is obtained by processing third original data by using a third sub-model of a preset regression model in a third participant;
The first blinded data acquisition unit is used for determining first blinded data for the federal predicted value according to the first predicted value secret sharing data, the second predicted value secret sharing data and the third predicted value secret sharing data;
The federal predicted value acquisition unit is used for determining the federal predicted value of the target user according to the first blinded data, the second blinded data acquired from the second participant and the third blinded data acquired from the third participant;
The target regression model acquisition unit is used for evaluating the federal loss value according to the label value and the federal predicted value of the target user, and stopping training of the preset regression model when the federal loss value satisfies the convergence condition, so as to obtain the target regression model.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the federal learning regression model loss function evaluation method of any embodiment of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to implement the federal learning regression model loss function evaluation method of any embodiment of the present invention when executed.
According to the technical scheme provided by the embodiment of the invention, the preset regression model based on multiple participants is trained according to the label value and the federal predicted value of the target user. Compared with the related-art training mode, in which the norms of the gradient vectors of each participant's two adjacent iteration models must all fall below a preset threshold, this training mode requires fewer iterations, reaches a good model more quickly, and improves model training efficiency. In this scheme, the sub-model in each participant processes that participant's original data of the target user to obtain a predicted value; secret sharing data of each predicted value is obtained by means of secret sharing; blinded data is determined according to the secret sharing data of each predicted value; the federal predicted value of the target user is determined according to the blinded data; and whether to terminate training of the preset regression model is judged according to the tag value and the federal predicted value of the target user. The training mode provided by the scheme is therefore low in complexity, which further guarantees model training efficiency.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a federal learning regression model loss function evaluation method according to a first embodiment of the present invention;
FIG. 2 is a flow chart of a federal learning regression model loss function evaluation method according to a second embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a loss function evaluation device of a federal learning regression model according to a third embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an electronic device implementing the federal learning regression model loss function evaluation method according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "target," "original," "first," "second," "third," and the like in the description and claims of the present invention and in the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of a federal learning regression model loss function evaluation method according to an embodiment of the present invention, where the embodiment is applicable to a multi-participant regression model training scenario based on federal learning, and the practical application scenario may include loan amount prediction, house price prediction, investment risk analysis, crop fertilization amount analysis, student learning ability evaluation, patient organ function prediction, and the like. The method may be performed by an electronic device. As shown in fig. 1, the method includes:
step 101, acquiring a test data set; the test dataset includes first raw data for the target user.
The test dataset may comprise data of at least one target user. The first party may obtain first original data of the target user, the second party may obtain second original data of the target user, and the third party may obtain third original data of the target user. The first original data has a tag value.
Taking a loan amount prediction scenario as an example, assuming that a first participant is a loan bank, a second participant is a tax administration agency, and a third participant is an e-commerce platform, the first original data of the target user may include the following information: accumulated properties, credit score, loan amount, etc.; the second raw data may include the following information: salary, accumulated accumulation, job stability coefficient, age, etc.; the third raw data may include the following information: consumption level, etc.
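To make this vertical-partition setting concrete, the toy records below show how the three parties might each hold different feature columns for the same target user. All field names and values are illustrative, not taken from the patent.

```python
# Each party holds different features of the same user; only user_id is shared.
bank_data = {"user_id": 1, "accumulated_property": 500_000,
             "credit_score": 720, "loan_amount": 100_000}        # first party
tax_data = {"user_id": 1, "salary": 8_000, "accumulation_fund": 1_200,
            "job_stability": 0.9, "age": 35}                     # second party
ecommerce_data = {"user_id": 1, "consumption_level": 3}          # third party

# Vertical federal learning aligns records on the shared key; the feature
# columns themselves never leave their owners.
shared_keys = set(bank_data) & set(tax_data) & set(ecommerce_data)
assert shared_keys == {"user_id"}
```

The point of the sketch is that the parties share only a join key, never the feature values themselves.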
Step 102, processing the first original data by using a first sub-model of a preset regression model to obtain a first predicted value, determining first intermediate data by using the first predicted value, and determining secret sharing data of the first predicted value according to the first intermediate data.
The preset regression model may include three sub-models, namely a first sub-model of the first party, a second sub-model of the second party, and a third sub-model of the third party.
Specifically, the first participant may process the first original data of the target user by using the first sub-model to obtain a first predicted value. The first intermediate data may be constructed using the first predicted value in a preset manner. The first intermediate data may be sent to the second party and the third party.
The first party, the second party and the third party may construct secret sharing data of the first predicted value from the first intermediate data, respectively.
Step 103, determining second predicted value secret sharing data according to second intermediate data acquired from a second participant, and determining third predicted value secret sharing data according to third intermediate data acquired from a third participant; the second intermediate data is determined according to a second predicted value, and the second predicted value is obtained by processing second original data by using a second sub-model of a preset regression model in a second participant; and the third intermediate data is determined according to a third predicted value, and the third predicted value is obtained by processing third original data by using a third sub-model of a preset regression model in a third participant.
Specifically, the second participant may process the second original data of the target user by using a second sub-model of the preset regression model to obtain a second predicted value. The second intermediate data may be constructed using the second predicted value in a preset manner. And may send the second intermediate data to the first and third parties.
The first party, the second party and the third party may construct secret sharing data of the second predicted value from the second intermediate data, respectively.
Similarly, the third participant can process the third original data of the target user by using a third sub-model of the preset regression model to obtain a third predicted value. The third intermediate data may be constructed using the third predicted value in a preset manner. And may send the third intermediate data to the first participant and the second participant.
The first party, the second party and the third party may respectively construct secret sharing data of a third predicted value from the third intermediate data.
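The general idea of splitting a predicted value into three shares can be sketched as follows. This is a minimal, generic three-party additive secret sharing sketch, not the patent's specific mask-based construction (which is detailed in the second embodiment); the modulus is an assumption.

```python
import random

MODULUS = 2**61 - 1  # assumed working modulus; the source does not fix one

def share3(value, modulus=MODULUS):
    """Split `value` into three additive shares that sum back to it."""
    s1 = random.randrange(modulus)
    s2 = random.randrange(modulus)
    s3 = (value - s1 - s2) % modulus
    return s1, s2, s3

def reconstruct(shares, modulus=MODULUS):
    """Recover the secret by summing all three shares."""
    return sum(shares) % modulus

# Any single share is uniformly random and reveals nothing about the value.
shares = share3(42)
assert reconstruct(shares) == 42
```

In the patent's setting, each participant would hold one such share of every predicted value, so no single party learns another party's prediction.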
Step 104, determining the first blinded data according to the first predicted value secret sharing data, the second predicted value secret sharing data and the third predicted value secret sharing data.
Specifically, the first party may determine the first blinded data for the federal predictor according to the first predictor secret sharing data, the second predictor secret sharing data, and the third predictor secret sharing data that are constructed by the first party.
The second party can determine second blinded data for the federal predictor based on the first predictor secret sharing data, the second predictor secret sharing data, and the third predictor secret sharing data that it constructed. The second party may send the second blinded data to the first party.
The third party can determine third blinded data for the federal predictor based on the first predictor secret sharing data, the second predictor secret sharing data, and the third predictor secret sharing data that it constructed. The third party may send third blinded data to the first party.
Step 105, determining the federal prediction value of the target user according to the first blinded data, the second blinded data acquired from the second participant and the third blinded data acquired from the third participant.
Specifically, the first participant may receive the second blinded data transmitted by the second participant and the third blinded data transmitted by the third participant, and may determine the federal predicted value of the target user according to the first blinded data, the second blinded data and the third blinded data in a preset manner.
Step 106, evaluating the federal loss value according to the label value and the federal predicted value of the target user, and stopping training of the preset regression model when the federal loss value satisfies the convergence condition, so as to obtain the target regression model.
Specifically, the federal loss value of the preset regression model can be calculated according to the label value and the federal predicted value of the target user; the parameters of the preset regression model are then adjusted to reduce the value of the loss function, and the model parameters are iteratively updated until the convergence condition is met, so as to obtain the target regression model. The convergence condition may be that the federal loss value is smaller than a preset threshold, or that the number of iterations reaches a preset value. After the target regression model is obtained, business evaluation, such as evaluation of loan amount, is performed based on the target regression model.
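The stopping rule of step 106 can be sketched as follows. The mean squared error is an assumption (the patent does not fix a particular loss function for its regression model), and the threshold and iteration cap are illustrative parameters.

```python
def federal_mse(labels, predictions):
    """Mean squared error between label values and federal predicted values."""
    return sum((y - p) ** 2 for y, p in zip(labels, predictions)) / len(labels)

def should_stop(loss, iteration, loss_threshold=1e-4, max_iterations=1000):
    """Stop when the loss falls below the threshold or iterations hit the cap."""
    return loss < loss_threshold or iteration >= max_iterations

# Perfect predictions give zero loss, which satisfies the convergence condition.
loss = federal_mse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
assert loss == 0.0
assert should_stop(loss, iteration=10)
assert not should_stop(0.3, iteration=10)
```

Either exit condition alone suffices, matching the disjunctive convergence condition described above.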
According to the technical scheme provided by the embodiment of the invention, the preset regression model based on multiple participants is trained according to the label value and the federal predicted value of the target user. Compared with the related-art training mode, in which the norms of the gradient vectors of each participant's two adjacent iteration models must all fall below a preset threshold, this training mode requires fewer iterations, reaches a good model more quickly, and improves model training efficiency. In this scheme, the sub-model in each participant processes that participant's original data of the target user to obtain a predicted value; secret sharing data of each predicted value is obtained by means of secret sharing; blinded data is determined according to the secret sharing data of each predicted value; the federal predicted value of the target user is determined according to the blinded data; and whether to terminate training of the preset regression model is judged according to the tag value and the federal predicted value of the target user. The training mode provided by the scheme is therefore low in complexity, which further guarantees model training efficiency.
Example two
Fig. 2 is a flowchart of a federal learning regression model loss function evaluation method according to a second embodiment of the present invention, in which steps 102, 103, 105 and 106 in the first embodiment are refined. As shown in fig. 2, the method includes:
step 201, acquiring a test data set; the test dataset includes first raw data for the target user.
The principle and implementation of step 201 are similar to those of step 101, and will not be described again.
Step 202, processing the first original data by using a first sub-model of a preset regression model to obtain a first predicted value, generating a first random number, and determining a mask confusion value for the first predicted value according to the first predicted value, the first random number, a second random number acquired from a second participant and a third random number acquired from a third participant.
Specifically, a first random number is generated in a first party. A second random number is generated in the second party. A third random number is generated in the third party. The first party may send the first random number to the second party. The second party may send the second random number to the third party. The third party may send the third random number to the first party. In addition, the second party may send the second random number to the first party.
The first party may construct a first mask from the acquired first, second and third random numbers. The first mask is the sum of the three random numbers:

r = r1 + r2 + r3

where r represents the first mask, and r1, r2 and r3 represent the first, second and third random numbers.

The first party may determine a mask confusion value for the first predicted value based on the first predicted value, the first random number, the second random number and the third random number, and may send the mask confusion value to the second party and the third party. Specifically, the mask confusion value is the first predicted value blinded by the first mask:

b = ŷ1 − r

where b represents the mask confusion value determined for the first predicted value, r represents the first mask, and ŷ1 represents the first predicted value of the user.
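The mask and confusion-value computation of step 202 can be sketched as follows, under assumptions made for illustration: the first mask is the sum of the three random numbers, the confusion value is the predicted value minus the mask, and arithmetic is done modulo a large prime (the patent does not fix a modulus).

```python
import random

M = 2**61 - 1  # assumed prime working modulus

# Each party generates one random number; after the exchange described in
# step 202 the first party ends up holding all three.
r1, r2, r3 = (random.randrange(M) for _ in range(3))

r = (r1 + r2 + r3) % M  # first mask: sum of the three random numbers (assumed)
y1 = 73                 # toy first predicted value
b = (y1 - r) % M        # mask confusion value, broadcast to the other parties

# b alone is uniformly random; only knowledge of the mask recovers y1.
assert (b + r) % M == y1
```

Broadcasting b therefore leaks nothing about the first predicted value to a party that does not know the full mask.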
Step 203, determining the first mask secret sharing data according to the first random number and the third random number.
Specifically, the first party may construct the first shard of the first mask secret sharing from the first random number and the third random number; the second party may construct the second shard from the first random number and the second random number; and the third party may construct the third shard from the second random number and the third random number. The specific formulas are as follows:

[r]1 = (r1 + r3) / 2
[r]2 = (r1 + r2) / 2
[r]3 = (r2 + r3) / 2

so that [r]1 + [r]2 + [r]3 = r, where [r]1, [r]2 and [r]3 represent the first, second and third shards of the first mask secret sharing; r1, r2 and r3 represent the first, second and third random numbers; and r represents the first mask.
In step 204, the first predictor secret sharing data is determined using the mask confusion value and the first mask secret sharing data.
Specifically, the first party may construct a first shard of the first predictor secret sharing based on the mask confusion value determined for the first predictor and the first shard of the first mask secret sharing.
The second party may construct a second shard of the first predictor secret sharing based on the mask confusion value determined for the first predictor and the second shard of the first mask secret sharing.
The third party may construct a third shard of the first predicted value secret sharing based on the mask confusion value determined for the first predicted value and the third shard of the first mask secret sharing. The specific formulas are as follows:

⟨ŷ1⟩1 = b/3 + [r]1
⟨ŷ1⟩2 = b/3 + [r]2
⟨ŷ1⟩3 = b/3 + [r]3

so that ⟨ŷ1⟩1 + ⟨ŷ1⟩2 + ⟨ŷ1⟩3 = b + r = ŷ1, where ŷ1 represents the first predicted value of the user; ⟨ŷ1⟩1, ⟨ŷ1⟩2 and ⟨ŷ1⟩3 represent the first, second and third shards of the first predicted value secret sharing; [r]1, [r]2 and [r]3 represent the first, second and third shards of the first mask secret sharing; and b represents the mask confusion value determined for the first predicted value.
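Steps 203 and 204 can be sketched end to end as follows. The concrete shard formulas here (mask shards as half-sums of random-number pairs, predicted value shards as one third of the confusion value plus the local mask shard, over an assumed prime modulus) are one consistent reading chosen so that the three shards sum back to the predicted value; they are an assumption, not a verbatim reproduction of the patent's formulas.

```python
import random

M = 2**61 - 1         # assumed prime modulus, so 2 and 3 are invertible
INV2 = pow(2, -1, M)  # modular inverse of 2
INV3 = pow(3, -1, M)  # modular inverse of 3

r1, r2, r3 = (random.randrange(M) for _ in range(3))
r = (r1 + r2 + r3) % M  # first mask
y1 = 73                 # toy first predicted value
b = (y1 - r) % M        # mask confusion value, known to all three parties

# Step 203: each party combines the two random numbers it holds into a mask shard.
m1 = (r1 + r3) * INV2 % M  # first party
m2 = (r1 + r2) * INV2 % M  # second party
m3 = (r2 + r3) * INV2 % M  # third party
assert (m1 + m2 + m3) % M == r  # the shards reconstruct the mask

# Step 204: each party builds its shard of the first predicted value locally.
s1, s2, s3 = ((b * INV3 + m) % M for m in (m1, m2, m3))
assert (s1 + s2 + s3) % M == y1  # the shards reconstruct the predicted value
```

Each party computes its shard from values it already holds, so no extra communication is needed beyond the broadcast of the confusion value.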
In this way, the predicted value secret sharing data can be generated conveniently.
Step 205, determining second predicted value secret sharing data according to the second mask secret sharing data and the mask confusion value corresponding to the second predicted value obtained from the second participant; the mask confusion value corresponding to the second predicted value is determined according to the second predicted value, and the second predicted value is obtained by processing second original data by using a second sub-model of the preset regression model in the second participant.
Similar in principle to the determination of the first predicted value secret sharing data: specifically, the first party may regenerate the first random number, the second party may regenerate the second random number, and the third party may regenerate the third random number. The first party may send the first random number to the second party. The second party may send the second random number to the third party. The third party may send the third random number to the first party. In addition, the third party may send the third random number to the second party.
The second party may construct a second mask from the regenerated first, second, and third random numbers.
The second party may determine a mask confusion value for the second predicted value based on the second predicted value, the first random number, the second random number, and the third random number. And may send the mask confusion value to the first party and the third party.
The first party may construct a first shard of the second masked secret sharing from the first random number and the third random number. The second party may construct a second piece of the second masked secret sharing from the first random number and the second random number. The third party may construct a third slice of the second masked secret sharing from the second random number and the third random number.
The first party may construct a first shard of the second predictor secret sharing based on the mask confusion value determined for the second predictor and the first shard of the second mask secret sharing.
The second party may construct a second shard of the second predictor secret sharing based on the mask confusion value determined for the second predictor and the second shard of the second mask secret sharing.
The third party may construct a third shard of the second predictor secret sharing based on the mask confusion value determined for the second predictor and the third shard of the second mask secret sharing.
Step 206, determining third predicted value secret sharing data according to the third mask secret sharing data and the mask confusion value corresponding to the third predicted value obtained from the third participant; the mask confusion value corresponding to the third predicted value is determined according to the third predicted value, and the third predicted value is obtained by processing third original data by using a third sub-model of the preset regression model in the third participant, similar in principle to the determination of the first predicted value secret sharing data.
Similar in principle to the determination of the first predicted value secret sharing data: specifically, the first party may regenerate the first random number, the second party may regenerate the second random number, and the third party may regenerate the third random number. The first party may send the first random number to the second party. The second party may send the second random number to the third party. The third party may send the third random number to the first party. In addition, the first party may send the first random number to the third party.
The third party may construct a third mask from the regenerated first, second, and third random numbers.
The third party may determine a mask confusion value for the third predicted value based on the third predicted value, the first random number, the second random number, and the third random number. And may send the mask confusion value to the first party and the second party.
The first participant may construct a first slice of the third masked secret sharing from the first random number and the third random number. The second party may construct a second piece of the third masked secret sharing from the first random number and the second random number. The third party may construct a third shard of a third masked secret sharing from the second random number and the third random number.
The first participant may construct a first shard of the third predictor secret sharing based on the mask confusion value determined for the third predictor and the first shard of the third mask secret sharing.
The second participant may construct a second shard of the third predictor secret sharing based on the mask confusion value determined for the third predictor and the second shard of the third mask secret sharing.
The third party may construct a third shard of the third predictor secret sharing based on the mask confusion value determined for the third predictor and the third shard of the third mask secret sharing.
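The masking round of steps 205–206 can be sketched as follows. This is a minimal illustration under two assumptions that the original leaves to formulas given as images: that the mask is the sum of the three regenerated random numbers, and that the mask confusion value is the predicted value plus the mask. The names `masking_round`, `FIELD`, and the party labels are illustrative only.

```python
import secrets

FIELD = 2**61 - 1  # assumed modulus; the patent does not fix one

def masking_round(y3: int) -> dict:
    """Third party masks its predicted value y3 (step 206 pattern)."""
    # Each participant regenerates a fresh random number.
    r1, r2, r3 = (secrets.randbelow(FIELD) for _ in range(3))
    # Exchange per the text: P1 sends r1 to P2 and P3, P2 sends r2 to P3,
    # P3 sends r3 to P1.
    mask = (r1 + r2 + r3) % FIELD               # assumed: mask = r1 + r2 + r3
    confusion = (y3 + mask) % FIELD             # sent to P1 and P2 (assumed form)
    shards = {                                  # third mask secret sharing
        "P1": (r1, r3),                         # first shard: from r1 and r3
        "P2": (r1, r2),                         # second shard: from r1 and r2
        "P3": (r2, r3),                         # third shard: from r2 and r3
    }
    return {"confusion": confusion, "shards": shards, "randoms": (r1, r2, r3)}

out = masking_round(y3=42)
r1, r2, r3 = out["randoms"]
# Only a holder of all three random numbers could unmask the confusion value:
assert (out["confusion"] - r1 - r2 - r3) % FIELD == 42
```

Note that no single mask shard contains all three random numbers, so no shard holder can unmask the confusion value on its own.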
Step 207, determining the first blinded data according to the first predicted value secret sharing data, the second predicted value secret sharing data, and the third predicted value secret sharing data.
Specifically, the first participant may determine the first blinded data for the federal predicted value according to the first shard of the first predicted value secret sharing, the first shard of the second predicted value secret sharing, and the first shard of the third predicted value secret sharing, where the first blinded data may include a first blinding factor and a third blinded value. The quantities involved are: $y_1^u$, $y_2^u$ and $y_3^u$, user $u$'s first, second and third predicted values; $[y_1]_1$, $[y_2]_1$ and $[y_3]_1$, the first shards of the first, second and third predicted value secret sharings; $a_1$, the first blinding factor; $b_3$, the third blinded value; and $r_k^{(j)}$, the $k$-th random number generated when constructing the $j$-th mask, for $k, j \in \{1, 2, 3\}$.
Similarly, the second participant may determine the second blinded data for the federal predicted value according to the second shard of the first predicted value secret sharing, the second shard of the second predicted value secret sharing, and the second shard of the third predicted value secret sharing, where the second blinded data may include a second blinding factor and a first blinded value. The quantities involved are: $y_1^u$, $y_2^u$ and $y_3^u$, user $u$'s first, second and third predicted values; $[y_1]_2$, $[y_2]_2$ and $[y_3]_2$, the second shards of the first, second and third predicted value secret sharings; $a_2$, the second blinding factor; $b_1$, the first blinded value; and $r_k^{(j)}$, the $k$-th random number generated when constructing the $j$-th mask, for $k, j \in \{1, 2, 3\}$.
Similarly, the third participant may determine the third blinded data for the federal predicted value according to the third shard of the first predicted value secret sharing, the third shard of the second predicted value secret sharing, and the third shard of the third predicted value secret sharing, where the third blinded data may include a third blinding factor and a second blinded value. The quantities involved are: $y_1^u$, $y_2^u$ and $y_3^u$, user $u$'s first, second and third predicted values; $[y_1]_3$, $[y_2]_3$ and $[y_3]_3$, the third shards of the first, second and third predicted value secret sharings; $a_3$, the third blinding factor; $b_2$, the second blinded value; and $r_k^{(j)}$, the $k$-th random number generated when constructing the $j$-th mask, for $k, j \in \{1, 2, 3\}$.
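The per-participant formulas of step 207 are given as images in the original, so the exact shard algebra is not recoverable here. As a simplified, hypothetical stand-in that reproduces only the stated data flow (each participant publishes its own blinding factor together with a neighbor's blinded value, following the ring P1 → P2 → P3 → P1 used elsewhere in the text), assume each party blinds its own predicted value with a fresh random number and forwards that random number around the ring. In the real scheme each blinding factor would instead be derived from that party's shards of all three secret sharings.

```python
import secrets

FIELD = 2**61 - 1  # assumed modulus

def blinded_data(y1: int, y2: int, y3: int):
    """Toy version of step 207: each party's (blinding factor, blinded value)."""
    s1, s2, s3 = (secrets.randbelow(FIELD) for _ in range(3))
    # Ring forwarding: P1 -> P2 (s1), P2 -> P3 (s2), P3 -> P1 (s3).
    first_blinded  = ((y1 + s1) % FIELD, s3)  # P1: first blinding factor, third blinded value
    second_blinded = ((y2 + s2) % FIELD, s1)  # P2: second blinding factor, first blinded value
    third_blinded  = ((y3 + s3) % FIELD, s2)  # P3: third blinding factor, second blinded value
    return first_blinded, second_blinded, third_blinded

(a1, b3), (a2, b1), (a3, b2) = blinded_data(3, 5, 7)
# The invariant exploited by the deblinding in step 208:
assert (a1 + a2 + a3 - b1 - b2 - b3) % FIELD == 3 + 5 + 7
```

The blinding randoms cancel pairwise across the ring, which is exactly why the sum of blinding factors minus the sum of blinded values recovers the sum of predicted values in step 208.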
Step 208, the first blinded data comprises a first blinding factor and a third blinded value; the second blinded data comprises a second blinding factor and a first blinded value; the third blinded data comprises a third blinding factor and a second blinded value; adding the first, second and third blinding factors to obtain a first sum value; adding the first, second and third blinded values to obtain a second sum value; and determining the federal predicted value of the target user according to the difference between the first sum value and the second sum value.
Specifically, the first blinded data includes the first blinding factor and the third blinded value; the second blinded data includes the second blinding factor and the first blinded value; the third blinded data includes the third blinding factor and the second blinded value. The first participant may acquire the second blinding factor and the first blinded value sent by the second participant, and the third blinding factor and the second blinded value sent by the third participant, and then perform deblinding on the first, second and third blinded values corresponding to the federal predicted value based on the first, second and third blinding factors to obtain the federal predicted value. Specifically, the first, second and third blinding factors may be added to obtain a first sum value; the first, second and third blinded values may be added to obtain a second sum value; and the federal predicted value of the target user is determined according to the difference between the first sum value and the second sum value. The specific formula is as follows:

$$\hat{y}_u = (a_1 + a_2 + a_3) - (b_1 + b_2 + b_3) = y_1^u + y_2^u + y_3^u$$

where $\hat{y}_u$ denotes user $u$'s federal predicted value; $y_1^u$, $y_2^u$ and $y_3^u$ denote user $u$'s first, second and third predicted values; $a_1$, $a_2$ and $a_3$ denote the first, second and third blinding factors; and $b_1$, $b_2$ and $b_3$ denote the first, second and third blinded values.
In this way, the federal predicted value of the target user can be conveniently determined without revealing the data of any participant.
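The deblinding of step 208 reduces to two sums and one difference. A minimal sketch follows; the toy input values (predicted values 3, 5, 7 blinded with randoms 11, 13, 17) are chosen only to satisfy the invariant that the blinding factors' sum exceeds the blinded values' sum by exactly the sum of the three predicted values.

```python
def federal_predicted_value(first, second, third):
    """first/second/third are the (blinding factor, blinded value) pairs
    held by participants 1-3 after step 207."""
    a1, b3 = first    # P1 holds the first blinding factor and third blinded value
    a2, b1 = second   # P2 holds the second blinding factor and first blinded value
    a3, b2 = third    # P3 holds the third blinding factor and second blinded value
    first_sum = a1 + a2 + a3       # first sum value: sum of blinding factors
    second_sum = b1 + b2 + b3      # second sum value: sum of blinded values
    return first_sum - second_sum  # federal predicted value

# Toy blinded data for predicted values 3, 5, 7 and blinding randoms 11, 13, 17:
assert federal_predicted_value((3 + 11, 17), (5 + 13, 11), (7 + 17, 13)) == 15
```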
Step 209, determining a federation loss value corresponding to the target user according to the tag value and the federation predicted value of the target user.
Specifically, the difference between the federal predicted value and the tag value may be determined as the federal loss value corresponding to the target user. The specific formula is as follows:

$$L_u = \hat{y}_u - l_u$$

where $L_u$ denotes user $u$'s federal loss value; $\hat{y}_u$ denotes user $u$'s federal predicted value; and $l_u$ denotes user $u$'s tag value.
Step 210, determining the federation loss value of the current iteration according to the federation loss value corresponding to the target user; if the federation loss value of the current iteration is smaller than the preset threshold, determining that the federation loss value reaches the convergence condition, and stopping iteration of the preset regression model to obtain the target regression model.
In particular, the users included in the test data set may be denoted $u_1, u_2, \ldots, u_n$, and the federal loss value of the current iteration can then be expressed as follows:

$$\mathrm{Loss} = \frac{1}{n}\left(L_{u_1} + L_{u_2} + \cdots + L_{u_n}\right)$$

where $\mathrm{Loss}$ denotes the federal loss value of the current iteration; $L_{u_i}$ denotes the federal loss value corresponding to user $u_i$; and $n$ denotes the total number of users, a positive integer greater than 1.
Specifically, the parameters of the preset regression model may be updated according to the federal loss value of the current iteration, and when the federal loss value of the current iteration is determined to be smaller than the preset threshold, iteration is stopped to obtain the target regression model. This scheme protects each participant's gradients, loss values and predicted values for the current iteration of the model while completing the model training stage more quickly.
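Steps 209–210 can be sketched as below. The per-user loss is the difference between the federal predicted value and the label; aggregating by the mean over the $n$ test users, and the concrete threshold value, are assumptions, since the aggregation formula is given as an image in the original.

```python
def federal_loss(predictions, labels):
    """Per-user losses (step 209) aggregated into the iteration loss (step 210)."""
    # Per-user federal loss: federal predicted value minus tag value.
    per_user = [p - l for p, l in zip(predictions, labels)]
    return sum(per_user) / len(per_user)  # assumed mean aggregation over n users

# Toy test set of three users:
loss = federal_loss([2.0, 4.0, 6.0], [1.0, 3.0, 5.0])
assert loss == 1.0

THRESHOLD = 0.01  # preset threshold (assumed value)
# Stop iterating the preset regression model once the convergence condition holds:
converged = abs(loss) < THRESHOLD
```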
Example III
Fig. 3 is a schematic structural diagram of a regression model training device based on federal learning according to a third embodiment of the present invention. As shown in fig. 3, the apparatus 300 includes:
An obtaining unit 310, configured to obtain first original data of a target user;
The secret sharing data obtaining unit 320 is configured to process the first original data by using a first sub-model of a preset regression model to obtain a first predicted value, determine first intermediate data by using the first predicted value, and determine secret sharing data of the first predicted value according to the first intermediate data;
The secret sharing data obtaining unit 320 is further configured to determine second predicted value secret sharing data according to second intermediate data obtained from the second participant, and determine third predicted value secret sharing data according to third intermediate data obtained from the third participant; the second intermediate data is determined according to a second predicted value, and the second predicted value is obtained by processing second original data by using a second sub-model of a preset regression model in a second participant; the third intermediate data is determined according to a third predicted value, and the third predicted value is obtained by processing third original data by using a third sub-model of a preset regression model in a third participant;
the first blinded data obtaining unit 330 is configured to determine first blinded data of the federal predicted value according to the first predicted value secret sharing data, the second predicted value secret sharing data, and the third predicted value secret sharing data;
A federal prediction value obtaining unit 340, configured to determine a federal prediction value of the target user according to the first blinded data, the second blinded data obtained from the second participant, and the third blinded data obtained from the third participant;
the target regression model obtaining unit 350 is configured to evaluate the federal loss value according to the tag value and the federal predicted value of the target user, and terminate training of the preset regression model when the federal loss value reaches the convergence condition, so as to obtain the target regression model.
The secret sharing data obtaining unit 320 is specifically configured to: generating a first random number, and determining a mask confusion value for the first predicted value according to the first predicted value, the first random number, a second random number acquired from a second party and a third random number acquired from a third party;
determining first mask secret sharing data according to the first random number and the third random number;
And determining the first predicted value secret sharing data by adopting the mask confusion value and the first mask secret sharing data.
The secret sharing data obtaining unit 320 is specifically configured to: and determining the second predicted value secret sharing data according to the second mask secret sharing data and the mask confusion value corresponding to the second predicted value, which is acquired from the second participant.
The secret sharing data obtaining unit 320 is specifically configured to: and determining the third predicted value secret sharing data according to the third mask secret sharing data and the mask confusion value corresponding to the third predicted value, which is acquired from the third participant.
The federal prediction value obtaining unit 340 is specifically configured to: the first blinding data comprises a first blinding factor and a third blinding value; the second blinding data comprises a second blinding factor and a first blinding value; the third blinding data includes a third blinding factor and a second blinding value; adding the first blind factor, the second blind factor and the third blind factor to obtain a first sum value;
adding the first blind value, the second blind value and the third blind value to obtain a second sum value;
and determining the federal predicted value of the target user according to the difference between the first sum and the second sum.
The target regression model obtaining unit 350 is specifically configured to: determining a federation loss value corresponding to the target user according to the tag value and the federation predicted value of the target user;
determining the federation loss value of the current iteration according to the federation loss value corresponding to the target user;
if the federation loss value of the current iteration is smaller than the preset threshold, determining that the federation loss value reaches the convergence condition, and stopping iteration of the preset regression model to obtain the target regression model.
The federal learning-based regression model training device provided by the embodiment of the invention can execute the federal learning regression model loss function evaluation method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the federal learning regression model loss function evaluation method.
Example IV
Fig. 4 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 4, the electronic device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a read-only memory (ROM) 12 and a random access memory (RAM) 13. The memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the ROM 12 or the computer program loaded from the storage unit 18 into the RAM 13. The RAM 13 may also store various programs and data required for the operation of the electronic device 10. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as the federal learning regression model loss function evaluation method.
In some embodiments, any of the federal learning regression model loss function evaluation methods described above may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of any of the federal learning regression model loss function evaluation methods described above may be performed. Alternatively, in other embodiments, processor 11 may be configured to perform any of the federal learning regression model loss function evaluation methods described above in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special purpose or general purpose programmable processor, and that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server (also called a cloud computing server or cloud host), a host product in the cloud computing service system that overcomes the drawbacks of high management difficulty and weak service scalability found in traditional physical hosts and VPS services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (8)

1. A federal learning regression model loss function evaluation method, for use with a first party, the method comprising:
Acquiring a test data set; wherein the test dataset comprises first raw data of a target user;
processing the first original data by using a first sub-model of a preset regression model to obtain a first predicted value, determining first intermediate data by using the first predicted value, and determining secret sharing data of the first predicted value according to the first intermediate data;
determining second predicted value secret sharing data according to second intermediate data acquired from a second participant, and determining third predicted value secret sharing data according to third intermediate data acquired from a third participant; the second intermediate data is determined according to a second predicted value, and the second predicted value is obtained by processing second original data by using a second sub-model of a preset regression model in a second participant; the third intermediate data is determined according to a third predicted value, and the third predicted value is obtained by processing third original data by using a third sub-model of a preset regression model in a third participant;
determining first blinding data according to the first predicted value secret sharing data, the second predicted value secret sharing data and the third predicted value secret sharing data;
determining a federal predicted value of the target user according to the first blinded data, the second blinded data acquired from the second participant and the third blinded data acquired from the third participant;
evaluating a federation loss value according to a label value and a federation predicted value of a target user, and stopping training of a preset regression model when the federation loss value reaches a convergence condition to obtain a target regression model;
The determining the first intermediate data by using the first predicted value, and determining the secret sharing data of the first predicted value according to the first intermediate data includes:
Generating a first random number, and determining a mask confusion value for the first predicted value according to the first predicted value, the first random number, a second random number acquired from a second party and a third random number acquired from a third party;
determining first mask secret sharing data according to the first random number and the third random number;
And determining the first predicted value secret sharing data by adopting the mask confusion value and the first mask secret sharing data.
2. The method of claim 1, wherein the determining the second predictor secret sharing data from the second intermediate data obtained from the second party comprises:
And determining the second predicted value secret sharing data according to the second mask secret sharing data and the mask confusion value corresponding to the second predicted value, which is acquired from the second participant.
3. The method of claim 1, wherein the determining third predictor secret sharing data from third intermediate data obtained from a third party comprises:
And determining the third predicted value secret sharing data according to the third mask secret sharing data and the mask confusion value corresponding to the third predicted value, which is acquired from the third participant.
4. The method of claim 1, wherein the first blinding data comprises a first blinding factor and a third blinding value; the second blinding data comprises a second blinding factor and a first blinding value; the third blinding data includes a third blinding factor and a second blinding value; the determining the federal predicted value of the target user according to the first blinding data, the second blinding data acquired from the second participant and the third blinding data acquired from the third participant comprises the following steps:
Adding the first blind factor, the second blind factor and the third blind factor to obtain a first sum value;
adding the first blind value, the second blind value and the third blind value to obtain a second sum value;
and determining the federal predicted value of the target user according to the difference between the first sum and the second sum.
5. The method of claim 1, wherein the estimating the federal loss value according to the tag value and the federal predictive value of the target user, and terminating training of the preset regression model when the federal loss value reaches the convergence condition, and obtaining the target regression model comprises:
Determining a federation loss value corresponding to the target user according to the tag value and the federation predicted value of the target user;
determining the federation loss value of the current iteration according to the federation loss value corresponding to the target user;
If the federation loss value of the current iteration is smaller than a preset threshold, determining that the federation loss value reaches a convergence condition, and stopping iteration of a preset regression model to obtain a target regression model.
6. A federal learning regression model loss function evaluation apparatus for use with a first party, the apparatus comprising:
an acquisition unit configured to acquire a test data set; wherein the test dataset comprises first raw data of a target user;
The secret sharing data acquisition unit is used for processing the first original data by using a first sub-model of a preset regression model to obtain a first predicted value, determining first intermediate data by using the first predicted value, and determining secret sharing data of the first predicted value according to the first intermediate data; the method is also used for determining second predicted value secret sharing data according to second intermediate data acquired from a second participant and determining third predicted value secret sharing data according to third intermediate data acquired from a third participant; the second intermediate data is determined according to a second predicted value, and the second predicted value is obtained by processing second original data by using a second sub-model of a preset regression model in a second participant; the third intermediate data is determined according to a third predicted value, and the third predicted value is obtained by processing third original data by using a third sub-model of a preset regression model in a third participant;
the first blinding data acquisition unit is used for determining first blinding data according to the first predicted value secret sharing data, the second predicted value secret sharing data and the third predicted value secret sharing data;
The federal predicted value acquisition unit is used for determining the federal predicted value of the target user according to the first blind data, the second blind data acquired from the second participant and the third blind data acquired from the third participant;
The target regression model acquisition unit is used for evaluating the federation loss value according to the label value and the federation predicted value of the target user, and stopping training of a preset regression model when the federation loss value reaches a convergence condition to obtain a target regression model;
The secret sharing data acquisition unit is specifically configured to: generate a first random number, and determine a mask confusion value for the first predicted value from the first predicted value, the first random number, a second random number acquired from the second participant, and a third random number acquired from the third participant;
determine first mask secret sharing data from the first random number and the third random number;
and determine the first predicted value secret sharing data using the mask confusion value and the first mask secret sharing data.
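The masking scheme described in the claim above (each participant exchanges random numbers, publishes a blinded "mask confusion value", and the federal predicted value is recovered from the blinded values) can be illustrated with a minimal additive-masking sketch. All function names here are hypothetical, and the circular-mask construction below is a generic instance of this idea, not necessarily the exact protocol of the claims:

```python
import random

MOD = 2 ** 32  # arithmetic is done in a fixed modular ring

def make_masks(n_parties):
    # each participant i draws a random mask r[i] and sends it to the next
    # participant in the ring, mirroring the claim's exchange of
    # first/second/third random numbers
    return [random.randrange(MOD) for _ in range(n_parties)]

def blind(predictions, masks):
    # participant i publishes y[i] + r[i] - r[i-1] (a "mask confusion value");
    # every mask appears once with + and once with -, so all masks cancel
    # in the total while each published value looks random on its own
    n = len(predictions)
    return [(predictions[i] + masks[i] - masks[(i - 1) % n]) % MOD
            for i in range(n)]

def federal_prediction(blinded):
    # summing the blinded values recovers sum(y); no single blinded value
    # reveals its participant's local prediction
    return sum(blinded) % MOD
```

For example, with local predictions 3, 5 and 7, summing the three blinded values yields 15 regardless of which masks were drawn, which is the property the blinded-data exchange in the claim relies on.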
7. An electronic device, the electronic device comprising:
at least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the federal learning regression model loss function evaluation method of any one of claims 1-5.
8. A computer readable storage medium storing computer instructions which, when executed, cause a processor to implement the federal learning regression model loss function evaluation method of any one of claims 1-5.
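The loss evaluation and stopping rule in the claims (evaluate a federal loss value from the label values and the federal predicted values, and stop training when the loss reaches a convergence condition) could look like the following sketch. Mean squared error and an absolute-difference tolerance are assumptions here, since the claims do not fix a particular loss function or convergence test:

```python
def mse_loss(federal_preds, labels):
    # mean squared error between federal predicted values and label values;
    # MSE is an assumed choice, the claims leave the loss function open
    return sum((p - y) ** 2 for p, y in zip(federal_preds, labels)) / len(labels)

def reached_convergence(prev_loss, cur_loss, tol=1e-6):
    # one possible convergence condition: the loss has stopped changing
    # by more than a small tolerance between evaluations
    return abs(prev_loss - cur_loss) < tol
```

In a training loop, the coordinator would evaluate `mse_loss` after each round and stop updating the preset regression model once `reached_convergence` holds, yielding the target regression model.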
CN202410122725.1A 2024-01-30 2024-01-30 Federal learning regression model loss function evaluation method and device and electronic equipment Active CN117648999B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410122725.1A CN117648999B (en) 2024-01-30 2024-01-30 Federal learning regression model loss function evaluation method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN117648999A CN117648999A (en) 2024-03-05
CN117648999B true CN117648999B (en) 2024-04-23

Family

ID=90048140

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410122725.1A Active CN117648999B (en) 2024-01-30 2024-01-30 Federal learning regression model loss function evaluation method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN117648999B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111241567A (en) * 2020-01-16 2020-06-05 深圳前海微众银行股份有限公司 Longitudinal federal learning method, system and storage medium based on secret sharing
CN111985573A (en) * 2020-08-28 2020-11-24 深圳前海微众银行股份有限公司 Factorization machine classification model construction method and device and readable storage medium
CN112906912A (en) * 2021-04-01 2021-06-04 深圳市洞见智慧科技有限公司 Method and system for training regression model without trusted third party in longitudinal federal learning
CN113516256A (en) * 2021-09-14 2021-10-19 深圳市洞见智慧科技有限公司 Third-party-free federal learning method and system based on secret sharing and homomorphic encryption
WO2021239006A1 (en) * 2020-05-27 2021-12-02 支付宝(杭州)信息技术有限公司 Secret sharing-based training method and apparatus, electronic device, and storage medium
CN113992393A (en) * 2021-10-26 2022-01-28 中国电信股份有限公司 Method, apparatus, system, and medium for model update for longitudinal federated learning
CN114330759A (en) * 2022-03-08 2022-04-12 富算科技(上海)有限公司 Training method and system for longitudinal federated learning model
CN114548418A (en) * 2021-12-30 2022-05-27 天翼电子商务有限公司 Secret sharing-based transverse federal IV algorithm
CN114648130A (en) * 2022-02-07 2022-06-21 北京航空航天大学 Longitudinal federal learning method and device, electronic equipment and storage medium
CN115392531A (en) * 2022-06-29 2022-11-25 云南电网有限责任公司信息中心 Enterprise electric charge payment risk prediction method and system based on longitudinal federal logistic regression
CN116432040A (en) * 2023-06-15 2023-07-14 上海零数众合信息科技有限公司 Model training method, device and medium based on federal learning and electronic equipment
WO2023134077A1 (en) * 2022-01-17 2023-07-20 平安科技(深圳)有限公司 Homomorphic encryption method and system based on federated factorization machine, device and storage medium
CN116541878A (en) * 2023-04-27 2023-08-04 电子科技大学 Privacy protection method based on safe two-party calculation S-shaped function
CN117195060A (en) * 2023-11-06 2023-12-08 上海零数众合信息科技有限公司 Telecom fraud recognition method and model training method based on multiparty security calculation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Efficient and Secure Federated Learning Based on Secret Sharing and Gradient Selection; Dong Ye; Hou Wei; Chen Xiaojun; Zeng Shuai; Journal of Computer Research and Development; 2020-10-09 (10); full text *
Research on Privacy Protection for Machine Learning with Multiple Data Sources; Zhang Mingkai; Fan Yuhao; Xia Shibing; Cyberspace Security; 2020-04-25 (04); full text *

Similar Documents

Publication Publication Date Title
CN112580733B (en) Classification model training method, device, equipment and storage medium
CN113343803A (en) Model training method, device, equipment and storage medium
CN113657269A (en) Training method and device for face recognition model and computer program product
CN113378855A (en) Method for processing multitask, related device and computer program product
CN113627536A (en) Model training method, video classification method, device, equipment and storage medium
CN115358411A (en) Data processing method, device, equipment and medium
CN113627361B (en) Training method and device for face recognition model and computer program product
CN117195060B (en) Telecom fraud recognition method and model training method based on multiparty security calculation
CN117609921A (en) Method and device for constructing anomaly detection model, electronic equipment and storage medium
CN113033408A (en) Data queue dynamic updating method and device, electronic equipment and storage medium
CN117648999B (en) Federal learning regression model loss function evaluation method and device and electronic equipment
CN115328621B (en) Transaction processing method, device, equipment and storage medium based on block chain
CN114860411B (en) Multi-task learning method, device, electronic equipment and storage medium
CN116340518A (en) Text association matrix establishment method and device, electronic equipment and storage medium
CN113361575B (en) Model training method and device and electronic equipment
CN115641481A (en) Method and device for training image processing model and image processing
CN115578583B (en) Image processing method, device, electronic equipment and storage medium
CN116402615B (en) Account type identification method and device, electronic equipment and storage medium
CN113591095B (en) Identification information processing method and device and electronic equipment
CN118350719A (en) Evaluation method and device of organization architecture, electronic equipment and storage medium
CN117331924A (en) Data model matching degree checking method, device, equipment and storage medium
CN117611239A (en) Training method of transaction flow prediction model, and transaction flow prediction method and device
CN117608944A (en) Method and device for calculating weight migration volume ratio, electronic equipment and storage medium
CN117591714A (en) Service data matching method and device, electronic equipment and storage medium
CN115017145A (en) Data expansion method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant