CN113554181B - Federal learning training method based on batch increment mode - Google Patents

Federal learning training method based on batch increment mode

Info

Publication number
CN113554181B
CN113554181B (application CN202110768514.1A)
Authority
CN
China
Prior art keywords
model
training
local
learning
federal learning
Prior art date
Legal status
Active
Application number
CN202110768514.1A
Other languages
Chinese (zh)
Other versions
CN113554181A (en)
Inventor
Hu Kai (胡凯)
Lu Meixia (陆美霞)
Wu Jiasheng (吴佳胜)
Li Yaogen (李姚根)
Current Assignee
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN202110768514.1A priority Critical patent/CN113554181B/en
Publication of CN113554181A publication Critical patent/CN113554181A/en
Application granted granted Critical
Publication of CN113554181B publication Critical patent/CN113554181B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a federal learning training method based on a batch increment mode, belonging to the field of federal learning training. The operation steps are as follows: a federal learning framework with incremental learning is built; this framework retains the historical forgetting problem of incremental learning; aiming at this forgetting problem, a targeted loss function optimization model is selected by constructing a local loss update; a federal learning local incremental self-attention mechanism model is then built, which strengthens the memory of the framework for training data and improves the accuracy of classification tasks in the federal learning training model. The method adds the average value of all the current local model losses to the loss of the local model, which helps to reduce the effect of the rapid forgetting caused by incremental learning; it also applies a self-attention mechanism in a classical convolutional neural network, so that key information is retained, features are extracted and selected better, and incremental learning memory is strengthened.

Description

Federal learning training method based on batch increment mode
Technical Field
The invention belongs to the field of federal learning training, and particularly relates to a federal learning training method based on a batch increment mode.
Background
Technology in the field of artificial intelligence is continuously developing, and robots are increasingly applied to solve complex problems in modern society. Accurate classification is considered a precondition for robots to carry out autonomous tasks, so various machine learning algorithms have been proposed. However, most of these algorithms are batch learning methods: once the batch of samples has been trained, the learning process terminates and cannot be continued later. At the same time, if training is interrupted, relearning the whole data set requires a great deal of time and space, so batch learning algorithms cannot meet the requirement. To solve these problems, incremental learning algorithms for machine learning have emerged. However, when acquiring new knowledge, an incremental learning algorithm interferes with old knowledge; to alleviate this problem, a self-attention mechanism is added to the network model to help strengthen the memory of the local model for old knowledge.
Moreover, the conventional training method of machine learning is to collect data and transmit it to a central point, where a high-performance cluster directly uses the old data of incremental learning to help process new data. This can expose information related to the data; the privacy of user data is not considered and certain potential security hazards exist, so the data needs to be encrypted. A framework is therefore needed that solves the data leakage problem while also giving the average value of all the current local model losses to each local model, reducing the effect of the rapid forgetting caused by incremental learning. An incremental self-attention mechanism is then added to the local model to help capture the correlation of local features, extract and select features effectively, improve the performance of the local model, strengthen incremental learning memory, and improve the accuracy of classification tasks.
Disclosure of Invention
The invention aims to: the invention aims to provide a federal learning training method based on a batch increment mode; the invention applies incremental learning within the framework, so that users can progressively update knowledge while previous knowledge is corrected and strengthened, the updated knowledge adapts to newly arrived data, the whole data set does not need to be relearned, and the privacy and security of user data are ensured.
The technical scheme is as follows: the invention relates to a federal learning training method based on a batch increment mode, which comprises the following specific operation steps:
(1) Aiming at the time and space losses caused by retraining after a training interruption in the batch learning algorithm used in a traditional federal learning framework, a federal learning framework with incremental learning is built;
the federal learning framework with incremental learning still retains the historical forgetting problem inherent in incremental learning;
(2) Aiming at the historical forgetting problem existing in the federal learning framework, a targeted loss function optimization model is selected by constructing a local loss update;
(3) A federal learning local incremental self-attention mechanism model is built to strengthen the memory of the federal learning framework with incremental learning for training data, and finally the accuracy of classification tasks in the federal learning training model is improved;
in the step (1), the specific operation steps for building the federal learning framework with incremental learning are as follows:
(1.1) Before training, the client checks whether model parameters saved from a previous training exist; if they do, these parameters are loaded into the global model;
(1.2) the global model sends the aggregated federation model to each client;
(1.3) during local training, each client trains its client model in batch increments, saves the model parameters after every fixed number of batches, and, when training ends, outputs the network model parameters obtained from the last fixed batch;
(1.4) after the client finishes training, the model parameter update at that moment is sent to the aggregation server;
(1.5) the aggregation server aggregates the received model parameters, and finally, the weighted average is used for the received model parameters, which is specifically shown as the following formula:
$$\omega_t = \sum_{k=1}^{K} \frac{n_k}{n}\,\omega_t^{k} \qquad (1)$$
In formula (1), t represents the time of the aggregation averaging step, n_k represents the local data volume of the k-th participant, n is the total data volume of all K participants, and $\omega_t^{k}$ represents the parameters of the k-th local model at time t; these parameters comprise the weight parameters and the bias parameters.
The server then sends the aggregated model parameters $\omega_t$ to all participants;
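As an illustration only, a minimal sketch of the weighted-average aggregation of formula (1) might look as follows; the function name, the use of NumPy arrays, and the dictionary layout of the parameters are assumptions made for the example and are not specified by the invention.

```python
import numpy as np

def aggregate(client_params, client_sizes):
    """Weighted average of client model parameters as in formula (1).

    client_params: list of dicts mapping parameter name -> np.ndarray
                   (weights and biases of each local model at time t)
    client_sizes:  list of local data volumes n_k, one per participant
    """
    total = float(sum(client_sizes))                  # n = sum of all n_k
    aggregated = {}
    for name in client_params[0]:
        aggregated[name] = sum(
            (n_k / total) * params[name]              # (n_k / n) * local parameters
            for params, n_k in zip(client_params, client_sizes)
        )
    return aggregated                                 # sent back to all participants

# Example with two clients holding 30 and 10 samples respectively:
# global_w = aggregate([{"w": np.ones(3)}, {"w": np.zeros(3)}], [30, 10])
```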
in the step (2), the specific operation of constructing the local loss update is as follows:
(2.1) during local training, judging whether model parameters stored in the previous training exist or not, and if so, updating the local loss as shown in the following formula;
$$\mathcal{L}(x, y) = -\sum_{j} y_j \log p_j(x) \qquad (2)$$
where x represents a sample, y represents its label, $p_j(x)$ represents the model's predicted probability for label j, and j represents a variable traversing all labels;
(2.2) during local training, judging whether model parameters stored in the previous training exist or not, and if not, updating the local loss as shown in the following formula;
$$\mathcal{L} = b\,\mathcal{L}_{\mathrm{local}} + (1-b)\,\frac{1}{Q}\sum_{q=1}^{Q}\mathcal{L}_q \qquad (3)$$
where $\mathcal{L}_{\mathrm{local}}$ represents the loss value obtained by the current local training, $\frac{1}{Q}\sum_{q=1}^{Q}\mathcal{L}_q$ represents the average value of all the current local losses, q represents the variable traversing all the current clients, and b is a mixing coefficient;
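The branching logic of step (2) can be sketched as follows; this is a minimal illustration under the assumption that formula (2) is an ordinary classification loss and that b = 0.5 as stated in the claims, with the function name and argument layout chosen only for the example.

```python
def local_loss(current_loss, all_client_losses, has_saved_params, b=0.5):
    """Local loss update of step (2).

    current_loss:      loss value obtained by the current local training
    all_client_losses: latest loss values of all current clients
    has_saved_params:  True if model parameters saved from a previous
                       training exist (checked before local training)
    b:                 mixing coefficient (b = 0.5 in claim 1)
    """
    if has_saved_params:
        # Case (2.1): keep the ordinary local classification loss, formula (2)
        return current_loss
    # Case (2.2): blend in the average of all current local losses, formula (3)
    avg_loss = sum(all_client_losses) / len(all_client_losses)
    return b * current_loss + (1.0 - b) * avg_loss
```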
in the step (3), the specific operation steps of building the federal learning local increment self-attention mechanism model are as follows:
(3.1) The latest model update $\omega_t$ is obtained from the server; the local data set is $D_k$; the data set $D_k$ is randomly divided into batches of size B and put into the network model for training;
(3.2) A convolution layer is constructed using a classical convolutional neural network (CNN), and a self-attention mechanism is added locally to strengthen the memory of incremental learning. As the data arrives incrementally, a weight is calculated according to the importance of each specific part of the input and its correlation with the output, a relevance score is assigned to the input elements, and noisy parts are ignored. The self-attention mechanism retains key information, allows features to be extracted and selected better, continuously captures the correlation of local features, and improves the performance of the local model; the specific formula is shown below;
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V \qquad (4)$$
where Q represents the query matrix, K the key matrix, V the value matrix, and $\sqrt{d_k}$ represents a scaling factor that prevents the inner product of Q and K from becoming too large;
(3.3) updating model parameters in local batch increment, and storing model parameters in fixed batch; and then the current local model parameters are sent to the global model.
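A minimal sketch of such a self-attention layer, placed after the convolutional feature extractor, might look as follows; the PyTorch implementation, the projection dimension d_k and the input shape are assumptions made for illustration rather than details given by the invention.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """Scaled dot-product self-attention of formula (4): softmax(Q K^T / sqrt(d_k)) V."""

    def __init__(self, dim, d_k=64):
        super().__init__()
        self.d_k = d_k
        self.query = nn.Linear(dim, d_k)   # projects features to the query matrix Q
        self.key = nn.Linear(dim, d_k)     # projects features to the key matrix K
        self.value = nn.Linear(dim, d_k)   # projects features to the value matrix V

    def forward(self, x):
        # x: (batch, num_elements, dim), e.g. flattened CNN feature maps
        q, k, v = self.query(x), self.key(x), self.value(x)
        scores = torch.matmul(q, k.transpose(-2, -1)) / (self.d_k ** 0.5)
        weights = F.softmax(scores, dim=-1)   # relevance score for each input element
        return torch.matmul(weights, v)       # key information retained by attention
```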
The beneficial effects are that: compared with the prior art, in which the traditional machine learning training method collects data and transmits it to a central point where a high-performance cluster directly uses the old data of incremental learning to help process new data without considering the privacy of user data, the invention provides a federal learning training method based on a batch increment mode. It adds the average value of all the current local model losses to the loss of the local model, which helps to reduce the effect of the rapid forgetting caused by incremental learning; in addition, a self-attention mechanism is used in a classical convolutional neural network, so that key information is retained through the attention mechanism, features are extracted and selected better, and incremental learning memory is strengthened.
Drawings
FIG. 1 is a schematic diagram of the operation of the proposed federal learning framework with incremental learning according to the present invention;
FIG. 2 is a flow chart of operations performed by constructing a local loss update in accordance with the present invention;
FIG. 3 is a flow chart of the operation of the present invention by building a model of federally learned local incremental self-attention mechanisms;
fig. 4 is a general operational flow diagram of the present invention.
Detailed Description
The invention is further described with reference to the drawings and the detailed description below; in the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application, but the present application may be embodied in many other forms than described herein, and similar modifications may be made by those skilled in the art without departing from the spirit of the present application and, therefore, the present application is not limited to the specific implementations disclosed below.
The invention relates to a federal learning training method based on a batch increment mode, which comprises the following specific operation steps:
(1) Aiming at the time and space losses caused by retraining after a training interruption in the batch learning algorithm used in a traditional federal learning framework, a federal learning framework with incremental learning is built;
the federal learning framework with incremental learning still retains the historical forgetting problem inherent in incremental learning;
(2) Aiming at the historical forgetting problem existing in the federal learning framework, a targeted loss function optimization model is selected by constructing a local loss update;
(3) A federal learning local incremental self-attention mechanism model is built to strengthen the memory of the federal learning framework with incremental learning for training data, and finally the accuracy of classification tasks in the federal learning training model is improved;
as shown in fig. 1, in the step (1), the specific operation steps of building the federal learning framework with incremental learning are as follows:
(1.1) Before the client trains, it checks whether model parameters saved from a previous training exist; if they do, these parameters are loaded into the global model;
(1.2) the global model sends the aggregated federation model to each client;
(1.3) during local training, each client trains its client model in batch increments, saves the model parameters after every fixed number of batches, and, when training ends, outputs the network model parameters obtained from the last fixed batch;
(1.4) after the client finishes training, the model parameter update at that moment is sent to the aggregation server;
(1.5) the aggregation server aggregates the received model parameters, and finally, the weighted average is used for the received model parameters, which is specifically shown as the following formula:
$$\omega_t = \sum_{k=1}^{K} \frac{n_k}{n}\,\omega_t^{k} \qquad (1)$$
wherein t represents the time of the aggregation averaging step, n_k represents the local data volume of the k-th participant, n is the total data volume of all K participants, and $\omega_t^{k}$ represents the parameters (including the weight parameters and the bias parameters) of the k-th local model at time t; the server then sends the aggregated model parameters $\omega_t$ to all participants;
as shown in fig. 2, in the step (2), the specific operation steps of updating by constructing the local loss are as follows:
(2.1) during local training, judging whether model parameters stored in the previous training exist or not, and if so, updating the local loss as shown in the following formula;
$$\mathcal{L}(x, y) = -\sum_{j} y_j \log p_j(x) \qquad (2)$$
wherein x represents a sample, y represents its label, $p_j(x)$ represents the model's predicted probability for label j, and j represents a variable traversing all labels;
(2.2) during local training, judging whether model parameters stored in the previous training exist or not, and if not, updating the local loss as shown in the following formula;
$$\mathcal{L} = b\,\mathcal{L}_{\mathrm{local}} + (1-b)\,\frac{1}{Q}\sum_{q=1}^{Q}\mathcal{L}_q \qquad (3)$$
wherein $\mathcal{L}_{\mathrm{local}}$ represents the loss value obtained by the current local training, $\frac{1}{Q}\sum_{q=1}^{Q}\mathcal{L}_q$ represents the average value of all the current local losses, q represents the variable traversing all the current clients, and b is a mixing coefficient;
as shown in fig. 3, in the step (3), the specific operation steps of building the federal learning local increment self-attention mechanism model are as follows:
(3.1) The latest model update $\omega_t$ is obtained from the server; the data set is $D_k$ (the CIFAR-10 public data set); the data set $D_k$ is randomly divided into batches of size B (B = 10); stochastic gradient descent is used with an initial learning rate of 0.1, a momentum of 0.1 and a weight decay coefficient of 0.01; the batches are then put into the network model for training (a training sketch using these settings is given after step (3.3));
(3.2) A convolution layer is constructed using a classical convolutional neural network (CNN), and a self-attention mechanism is added locally to strengthen the memory of incremental learning. As the data arrives incrementally, a weight is calculated according to the importance of each specific part of the input and its correlation with the output, a relevance score is assigned to the input elements, and noisy parts are ignored. The self-attention mechanism retains key information, allows features to be extracted and selected better, continuously captures the correlation of local features, and improves the performance of the local model, as shown in the following formula;
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V \qquad (4)$$
where Q represents the query matrix, K the key matrix, V the value matrix, and $\sqrt{d_k}$ represents a scaling factor that prevents the inner product of Q and K from becoming too large;
(3.3) updating model parameters in local batch increment, and storing model parameters in fixed batch; then, the current local model parameters are sent to the global model;
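As a further illustration, the local batch-incremental training described in steps (3.1)-(3.3) could be sketched as follows, using the hyperparameters stated in step (3.1); the checkpoint file name, the save interval and the number of epochs are assumptions introduced for the example only.

```python
import os
import torch
from torch.utils.data import DataLoader

def local_train(model, dataset, loss_fn, checkpoint="local_ckpt.pt",
                batch_size=10, save_every=10, epochs=1):
    """Batch-incremental local training of step (3); checkpoint name and
    save interval are illustrative assumptions, not given by the invention."""
    # Resume from parameters saved by a previous training, if they exist (step 1.1)
    if os.path.exists(checkpoint):
        model.load_state_dict(torch.load(checkpoint))

    # Optimizer settings as described in step (3.1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                                momentum=0.1, weight_decay=0.01)
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)  # B = 10

    for _ in range(epochs):
        for i, (x, y) in enumerate(loader, start=1):
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
            if i % save_every == 0:                    # save after every fixed batch count
                torch.save(model.state_dict(), checkpoint)

    torch.save(model.state_dict(), checkpoint)         # parameters of the last fixed batch
    return model.state_dict()                          # update sent to the aggregation server
```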
As shown in fig. 4, the federal learning training method based on a batch increment mode of the invention addresses the time and space losses caused by retraining after an interruption of a traditional batch learning algorithm and avoids data leakage.
The invention applies incremental learning within the framework, so that users can progressively update knowledge while previous knowledge is corrected and strengthened; the updated knowledge adapts to newly arrived data, the whole data set does not need to be relearned, and the privacy and security of user data are ensured. The method solves the time and space losses caused in a federal learning framework by retraining after an interruption of a traditional batch learning algorithm; then, in order to preliminarily address the historical forgetting problem caused by incremental learning, the average value of all the current local model losses is given to each local model; finally, an incremental self-attention mechanism is added to the network model to help strengthen the memory of correlated knowledge and improve the accuracy of classification tasks.
A conventional batch learning algorithm suffers time and space losses from retraining after an interruption. When a small data set is trained, the loss required for retraining is small; however, in practical applications such as artificial intelligence, the required data volume is extremely large, and retraining then brings a corresponding loss. The data set used in the invention consists of bank customer complaints, classified into 12 categories, including debt collection, consumer loans, mortgages, credit cards, credit reports, student loans, bank accounts or services, payday loans, money transfers, other financial services, and prepaid cards. The invention provides a federal learning training method based on a batch increment mode, which effectively solves the interruption problem of the traditional batch learning algorithm, although the built federal learning framework with incremental learning still retains the historical forgetting problem of incremental learning; aiming at this problem, a targeted loss function optimization model is selected by constructing a local loss update, and a federal learning local incremental self-attention mechanism model is then built to strengthen the memory of the framework for training data and improve the accuracy of classification tasks on the bank customer complaint data set.
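For concreteness, partitioning such a labelled complaint data set into per-client batch increments could be sketched as follows; the number of clients, the round-robin assignment and the random seed are assumptions for the example, while the batch size B = 10 follows step (3.1).

```python
import random

def make_batch_increments(samples, num_clients=4, batch_size=10, seed=0):
    """Split labelled (text, label) complaint samples into per-client batch increments."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    # Assign samples to clients round-robin, then cut each client's share into
    # fixed-size batches that are fed to the local model one increment at a time.
    per_client = [shuffled[c::num_clients] for c in range(num_clients)]
    return [
        [share[i:i + batch_size] for i in range(0, len(share), batch_size)]
        for share in per_client
    ]

# Example: increments = make_batch_increments(complaints, num_clients=4)
# increments[k][b] is the b-th batch of client k.
```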
Specific examples: taking customer complaints of banks as an example: the specific operation steps are as follows:
(1) Aiming at the time and space losses caused by retraining the bank customer complaint data set after a training interruption in the batch learning algorithm used in a traditional federal learning framework, a federal learning framework with incremental learning is built;
the federal learning framework with incremental learning still retains the historical forgetting problem inherent in incremental learning;
(2) Aiming at the historical forgetting problem existing in the federal learning framework, a targeted loss function optimization model is selected by constructing a local loss update;
(3) A federal learning local incremental self-attention mechanism model is then built to strengthen the memory of the federal learning framework with incremental learning when training the bank customer complaint data, finally improving the accuracy of classification tasks on the bank customer complaint data set in the federal learning training model.
The federal learning training method based on a batch increment mode of the invention addresses the time and space losses caused by retraining the bank customer complaint data set after an interruption of a traditional batch learning algorithm and avoids leakage of the bank customer complaint data.
The invention applies incremental learning within the framework, so that users can progressively update knowledge while previous knowledge is corrected and strengthened; the updated knowledge adapts to newly arrived data, the whole data set does not need to be relearned, and the privacy and security of user data are ensured. The method solves the time and space losses caused in a federal learning framework by retraining the bank customer complaint data set after an interruption of a traditional batch learning algorithm; then, in order to preliminarily address the historical forgetting problem caused by incremental learning, the average value of all the current local model losses is given to each local model; finally, an incremental self-attention mechanism is added to the network model to help strengthen the memory of correlated knowledge and improve the accuracy of classification tasks on the bank customer complaint data set.

Claims (2)

1. The federal learning training method based on the batch increment mode is characterized by comprising the following specific operation steps:
(1) Building a federal learning framework with incremental learning;
in the step (1), the specific operation steps for building the federal learning framework with incremental learning are as follows:
(1.1) before training, the client checks whether the model parameters stored in the previous training exist, and if the model parameters in the previous training exist, the model parameters are loaded into the global model;
(1.2) the global model sends the aggregate federation model to each client;
(1.3) each client adopts batch increment training to each client model in the local training process, saves model parameters of fixed batch number training, and outputs network model parameters obtained by the last fixed batch training after training;
(1.4) after the training of the client is finished, the model parameter update at the moment is sent to the aggregation server;
(1.5) the aggregation server aggregates the received model parameters, and finally, the weighted average is used for the received model parameters, which is specifically shown as the following formula:
$$\omega_t = \sum_{k=1}^{K} \frac{n_k}{n}\,\omega_t^{k} \qquad (1)$$
in formula (1), t represents the time of the aggregation averaging step, n_k represents the local data volume of the k-th participant, n is the total data volume of all K participants, and $\omega_t^{k}$ represents the parameters of the local model at time t; the server then sends the aggregated model parameters $\omega_t$ to all participants;
(2) Selecting a targeted loss function optimization model by constructing local loss update;
in the step (2), the specific operation of constructing the local loss update is as follows:
(2.1) during local training, judging whether model parameters stored in the previous training exist or not, and if so, updating the local loss as shown in the following formula;
$$\mathcal{L}(x, y) = -\sum_{j} y_j \log p_j(x) \qquad (2)$$
in formula (2), x represents a sample, y represents its label, $p_j(x)$ represents the model's predicted probability for label j, and j represents a variable traversing all labels;
(2.2) if not, updating the local loss as shown in the following formula;
$$\mathcal{L} = b\,\mathcal{L}_{\mathrm{local}} + (1-b)\,\frac{1}{Q}\sum_{q=1}^{Q}\mathcal{L}_q \qquad (3)$$
in formula (3), b = 0.5, $\mathcal{L}_{\mathrm{local}}$ represents the loss value obtained by the current local training, $\frac{1}{Q}\sum_{q=1}^{Q}\mathcal{L}_q$ represents the average value of all the current local losses, and q represents the variable traversing all the current clients;
(3) Then, building a federal learning local increment self-attention mechanism model to strengthen the memory of the federal learning framework with increment learning in training data;
in the step (3), the specific operation steps of building the federal learning local increment self-attention mechanism model are as follows:
(3.1) the latest model update $\omega_t$ is obtained from the server; the data set is $D_k$; the data set $D_k$ is randomly divided into batches of size B and put into the network model for training;
(3.2) constructing a convolution layer, and adding a self-attention mechanism locally by adopting a convolution neural network CNN, wherein the self-attention mechanism is shown in the following formula;
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V \qquad (4)$$
in formula (4), Q represents the query matrix, K the key matrix, V the value matrix, and $\sqrt{d_k}$ represents a scaling factor that prevents the inner product of Q and K from becoming too large;
(3.3) updating model parameters in local batch increment, and storing model parameters in fixed batch; and then the current local model parameters are sent to the global model.
2. A federal learning training method based on batch increment mode according to claim 1, wherein the parameters of the local model at the time t include weight parameters and bias parameters.
CN202110768514.1A 2021-07-07 2021-07-07 Federal learning training method based on batch increment mode Active CN113554181B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110768514.1A CN113554181B (en) 2021-07-07 2021-07-07 Federal learning training method based on batch increment mode

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110768514.1A CN113554181B (en) 2021-07-07 2021-07-07 Federal learning training method based on batch increment mode

Publications (2)

Publication Number Publication Date
CN113554181A CN113554181A (en) 2021-10-26
CN113554181B (en) 2023-06-23

Family

ID=78131397

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110768514.1A Active CN113554181B (en) 2021-07-07 2021-07-07 Federal learning training method based on batch increment mode

Country Status (1)

Country Link
CN (1) CN113554181B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111708640A (en) * 2020-06-23 2020-09-25 苏州联电能源发展有限公司 Edge calculation-oriented federal learning method and system
CN112232528A (en) * 2020-12-15 2021-01-15 之江实验室 Method and device for training federated learning model and federated learning system
CN112507219A (en) * 2020-12-07 2021-03-16 中国人民大学 Personalized search system based on federal learning enhanced privacy protection
CN112949837A (en) * 2021-04-13 2021-06-11 中国人民武装警察部队警官学院 Target recognition federal deep learning method based on trusted network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11443240B2 (en) * 2019-09-06 2022-09-13 Oracle International Corporation Privacy preserving collaborative learning with domain adaptation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111708640A (en) * 2020-06-23 2020-09-25 苏州联电能源发展有限公司 Edge calculation-oriented federal learning method and system
CN112507219A (en) * 2020-12-07 2021-03-16 中国人民大学 Personalized search system based on federal learning enhanced privacy protection
CN112232528A (en) * 2020-12-15 2021-01-15 之江实验室 Method and device for training federated learning model and federated learning system
CN112949837A (en) * 2021-04-13 2021-06-11 中国人民武装警察部队警官学院 Target recognition federal deep learning method based on trusted network

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A Federated Incremental Learning Algorithm Based on Dual Attention Mechanism; Hu Kai et al.; Applied Sciences; Vol. 12, No. 19; 1-19 *
Hongrui Shi et al. Towards Federated Learning with Attention Transfer to Mitigate System and Data Heterogeneity of Clients. EdgeSys '21: Proceedings of the 4th International Workshop on Edge Systems, Analytics and Networking. 2021, 61-66. *
A Survey of 3D Reconstruction Algorithms (三维重建算法研究综述); Zhang Yanwen et al.; Journal of Nanjing University of Information Science & Technology (Natural Science Edition); Vol. 12, No. 5; 591-602 *
COVID-19 Chest CT Image Segmentation Based on Federated Learning and Blockchain (基于联邦学习和区块链的新冠肺炎胸部CT图像分割); Wang Shengsheng et al.; Journal of Jilin University (Engineering and Technology Edition); Vol. 51, No. 6; 2164-2173 *

Also Published As

Publication number Publication date
CN113554181A (en) 2021-10-26

Similar Documents

Publication Publication Date Title
CN112364943B (en) Federal prediction method based on federal learning
CN113905391B (en) Integrated learning network traffic prediction method, system, equipment, terminal and medium
CN110717816A (en) Artificial intelligence technology-based global financial risk knowledge graph construction method
CN111179089B (en) Money laundering transaction identification method, device and equipment
CN114511576B (en) Image segmentation method and system of scale self-adaptive feature enhanced deep neural network
CN112292696B (en) Method and device for determining action selection policy of execution device
CN109767225B (en) Network payment fraud detection method based on self-learning sliding time window
CN110378575B (en) Overdue event refund collection method and device and computer readable storage medium
CN112464058B (en) Telecommunication Internet fraud recognition method based on XGBoost algorithm
CN112561691B (en) Client trust prediction method, device, equipment and storage medium
CN112639841A (en) Sampling scheme for policy search in multi-party policy interaction
CN117994635B (en) Federal element learning image recognition method and system with enhanced noise robustness
CN112200665A (en) Method and device for determining credit limit
CN113554181B (en) Federal learning training method based on batch increment mode
CN107256231A (en) A kind of Team Member's identification equipment, method and system
CN110475033A (en) Intelligent dialing method, device, equipment and computer readable storage medium
CN117036001A (en) Risk identification processing method, device and equipment for transaction service and storage medium
CN116702976A (en) Enterprise resource prediction method and device based on modeling dynamic enterprise relationship
US20230342351A1 (en) Change management process for identifying inconsistencies for improved processing efficiency
TW202040479A (en) Automatic fund depositing system and automatic fund depositing method
CN116823264A (en) Risk identification method, risk identification device, electronic equipment, medium and program product
CN112508608B (en) Popularization activity configuration method, system, computer equipment and storage medium
CN115019359A (en) Cloud user identity recognition task allocation and parallel processing method
CN112613986A (en) Capital backflow identification method, device and equipment
CN111179070A (en) Loan risk timeliness prediction system and method based on LSTM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant