CN115034333A - Federated learning method, federated learning device and federated learning system - Google Patents

Info

Publication number
CN115034333A
CN115034333A (application CN202210759145.4A)
Authority
CN
China
Prior art keywords
model
business
business model
correlation
member device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210759145.4A
Other languages
Chinese (zh)
Inventor
李龙飞
周俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202210759145.4A priority Critical patent/CN115034333A/en
Publication of CN115034333A publication Critical patent/CN115034333A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Embodiments of the present specification provide a federated learning method, a federated learning apparatus and a federated learning system for training a business model via at least two first member devices and a second member device. Each first member device has local data, and the second member device maintains the business model to be trained. In federated learning, after receiving the current business model from the second member device, each first member device trains the received model using its local data, determines a model correlation between the locally trained business model and the received business model, and provides its locally trained model update amount to the second member device only when the model correlation satisfies a predetermined condition. The second member device updates the business model based on the model update amounts received from the respective first member devices.

Description

Federated learning method, federated learning device and federated learning system
Technical Field
The embodiments of the present description generally relate to the field of artificial intelligence, and in particular to a federated learning method, a federated learning apparatus, and a federated learning system for training a business model.
Background
Machine learning techniques are widely applied in various business scenarios, where a machine learning model serves as a business model to perform prediction services such as classification prediction and business risk prediction. In many cases, training the business model requires business data from multiple data owners. A plurality of data owners (e.g., an e-commerce company, a courier company, and a bank) each own part of the training sample data used to train the business model. These data owners wish to use one another's data to train the business model jointly, but are unwilling to hand their respective data over to other data owners, so as to prevent their own data from being leaked.
In view of this situation, federated learning methods that protect data security have been proposed, which allow multiple data owners to cooperatively train a business model while ensuring the data security of each data owner.
Disclosure of Invention
The embodiments of this specification provide a federated learning method, a federated learning apparatus and a federated learning system for training a business model. After each first member device having local data finishes training its local business model, it determines the influence of the local model update on the global model update, based on the model parameter values of the locally trained business model and of the received business model. Only when this influence satisfies a predetermined condition does the device provide the model update amount of the locally trained business model to the second member device to participate in the global business model update. The communication overhead between the first member devices and the second member device can thus be reduced without harming the training accuracy of the business model, which in turn improves the training efficiency of the federated learning system and allows the system to scale to more participants.
According to an aspect of the embodiments herein, there is provided a method for training a business model via at least two first member devices and a second member device, each first member device having local data and the second member device maintaining the business model to be trained, the method being applied to a first member device and comprising: receiving a current business model from the second member device; training the received current business model using local data; determining a model correlation between the locally trained business model and the received business model based on the model parameter values of the two models; and, in response to the model correlation being not less than a first predetermined threshold, providing the model update amount of the trained business model to the second member device for use in updating the business model.
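The client-side flow claimed above can be sketched in a few lines. This is an illustrative sketch only: `local_train`, `correlation_fn`, and `upload` are hypothetical stand-ins for the device's actual training, correlation, and transport code, and a plain parameter difference stands in for the model update amount.

```python
import numpy as np

def client_round(global_params, local_train, correlation_fn, threshold, upload):
    """One round at a first member device: train locally, then provide the
    model update amount only if the model correlation is not less than the
    first predetermined threshold."""
    local_params = local_train(global_params)           # train on local data
    corr = correlation_fn(local_params, global_params)  # model correlation
    if corr >= threshold:                               # "not less than" check
        upload(local_params - global_params)            # send update amount only
        return True
    return False                                        # skip upload this round
```

With a correlation below the threshold the update amount never leaves the device, which is the source of the claimed communication savings.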
Optionally, in an example of the above aspect, the model correlation may include at least one of: a first model correlation based on model distance; and a second model correlation based on model divergence.
Optionally, in an example of the above aspect, the model divergence is determined based on the relative variation of each model parameter of the locally trained business model.
Optionally, in one example of the above aspect, the model divergence may be determined according to the following formula:
D = (1/d) · Σ_{j=1}^{d} |p_j − p̃_j| / |p̃_j|

wherein d represents the model parameter dimension of the business model, p_j represents the model parameter value of the jth model parameter in the locally trained business model, and p̃_j represents the model parameter value of the jth model parameter in the received business model.
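Since the formula itself appears only as an image in the source, the following sketch assumes the relative-change form described by the surrounding text; the small `eps` guard against division by zero is an added assumption.

```python
import numpy as np

def model_divergence(local_params, received_params, eps=1e-12):
    """Second-model-correlation ingredient: mean relative change of each
    model parameter of the locally trained model versus the received one."""
    local_params = np.asarray(local_params, dtype=float)
    received_params = np.asarray(received_params, dtype=float)
    d = local_params.size  # model parameter dimension d
    rel = np.abs(local_params - received_params) / (np.abs(received_params) + eps)
    return float(rel.sum() / d)
```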
Optionally, in an example of the above aspect, the first model correlation and the second model correlation have weights, and when the model correlation includes both the first model correlation and the second model correlation, the model correlation is obtained by performing weighted summation on the first model correlation and the second model correlation based on the respective weights.
Optionally, in an example of the above aspect, the method may further include: in response to the model correlation being not less than the first predetermined threshold, determining a model update trend correlation between the locally trained business model and the received business model, the model update trend correlation indicating whether the model update trend of the locally trained business model conforms to the global model update trend of the business model. In this case, providing the model update amount to the second member device may include: providing the model update amount of the trained business model to the second member device in response to the model correlation being not less than the first predetermined threshold and the model update trend correlation being not less than a second predetermined threshold.
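The optional two-threshold variant reduces to a simple conjunction. A minimal sketch; the function name and threshold arguments are illustrative, not from the patent:

```python
def should_upload(model_corr, trend_corr, first_threshold, second_threshold):
    """Upload the model update amount only when the model correlation and the
    model update trend correlation each clear their ("not less than") threshold."""
    return model_corr >= first_threshold and trend_corr >= second_threshold
```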
Optionally, in an example of the above aspect, the model update trend correlation is determined based on the variation trends of the respective model parameters of the locally trained business model and of the received business model.
Optionally, in an example of the above aspect, the model update trend correlation is determined according to the following formula:
T = (1/d) · Σ_{j=1}^{d} s_j

s_j = sign(Δp_j · Δp̃_j)

wherein d represents the model parameter dimension of the business model, sign() is the sign function, Δp_j represents the model parameter variation value of the jth model parameter in the locally trained business model, and Δp̃_j represents the model parameter variation value of the jth model parameter in the received business model.
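Again the formula is an image in the source; the sketch below assumes the sign-agreement form the surrounding symbols suggest: average, over the d parameters, the sign of the product of the local and global parameter variations, so agreement contributes +1 and disagreement −1.

```python
import numpy as np

def trend_correlation(local_delta, received_delta):
    """Model update trend correlation: mean of sign(dp_j * dp~_j) over the
    d model parameters (a parameter with zero change contributes 0)."""
    local_delta = np.asarray(local_delta, dtype=float)
    received_delta = np.asarray(received_delta, dtype=float)
    d = local_delta.size
    s = np.sign(local_delta * received_delta)
    return float(s.sum() / d)
```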
According to another aspect of the embodiments herein, there is provided a method for training a business model via at least two first member devices and a second member device, each first member device having local data and the second member device maintaining the business model to be trained, the method being applied to the second member device and comprising: sending the current business model to each first member device; receiving local model update amounts from those first member devices determined, after their current model training processes end, to have a model correlation not less than a first predetermined threshold; and updating the current business model according to the received local model update amounts, the updated business model serving as the current business model for the next round of the model training process.
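On the second member device side, one round can be sketched as follows. The plain averaging of the received update amounts and the learning-rate factor `lr` are illustrative assumptions; the text only requires that the business model be updated from the received update amounts.

```python
import numpy as np

def server_round(current_params, received_updates, lr=1.0):
    """Apply the local model update amounts received from the first member
    devices that passed the correlation check; the result becomes the
    current business model of the next training round."""
    if not received_updates:                 # no device cleared the threshold
        return current_params
    mean_update = np.mean(received_updates, axis=0)
    return current_params + lr * mean_update
```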
According to another aspect of the embodiments herein, there is provided a federated learning apparatus for training a business model via at least two first member devices and a second member device, each first member device having local data and the second member device maintaining the business model to be trained, the federated learning apparatus being applied to a first member device and comprising: a model receiving unit that receives the current business model from the second member device; a model training unit that trains the received current business model using local data; a model correlation determining unit that determines the model correlation between the locally trained business model and the received business model based on the model parameter values of the two models; and a model update amount providing unit that, in response to the model correlation being not less than a first predetermined threshold, provides the model update amount of the trained business model to the second member device for use in updating the business model.
Optionally, in an example of the above aspect, the model correlation comprises at least one of: a first model correlation based on model distance; and a second model correlation based on model divergence.
Optionally, in an example of the above aspect, the first model correlation and the second model correlation have weights, and when the model correlation includes both the first model correlation and the second model correlation, the model correlation determination unit performs weighted summation on the first model correlation and the second model correlation based on the respective weights to obtain the model correlation.
Optionally, in an example of the above aspect, the federated learning apparatus may further include: a model update trend correlation determining unit that, in response to the model correlation being not less than the first predetermined threshold, determines the model update trend correlation between the locally trained business model and the received business model, the model update trend correlation indicating whether the model update trend of the locally trained business model conforms to the global model update trend of the business model. In response to the model correlation being not less than the first predetermined threshold and the model update trend correlation being not less than a second predetermined threshold, the model update amount providing unit provides the model update amount of the trained business model to the second member device for use in updating the business model.
According to another aspect of the embodiments herein, there is provided a federated learning apparatus for training a business model via at least two first member devices and a second member device, each first member device having local data and the second member device maintaining the business model to be trained, the federated learning apparatus being applied to the second member device and comprising: a model sending unit that sends the current business model to each first member device; a model update amount receiving unit that receives local model update amounts from those first member devices determined, after their current model training processes end, to have a model correlation not less than a first predetermined threshold; and a model updating unit that updates the current business model according to the received local model update amounts, the updated business model serving as the current business model for the next round of the model training process.
According to another aspect of the embodiments herein, there is provided a federated learning system, including: at least two first member devices, each first member device having local data and including the federated learning apparatus described above; and a second member device that maintains the business model to be trained and includes the federated learning apparatus described above.
According to another aspect of the embodiments herein, there is provided a federated learning device for training a business model via at least two first member devices and a second member device, comprising: at least one processor, a memory coupled with the at least one processor, and a computer program stored in the memory, the at least one processor executing the computer program to implement the method for training a business model via at least two first member devices and a second member device as described above.
According to another aspect of embodiments herein, there is provided a computer-readable storage medium storing executable instructions that, when executed, cause a processor to perform the method for training a business model via at least two first and second member devices as above.
According to another aspect of embodiments of the present description, there is provided a computer program product comprising a computer program executed by a processor to implement the method for training a business model via at least two first and second member devices as described above.
Drawings
A further understanding of the nature and advantages of the present disclosure may be realized by reference to the following drawings. In the drawings, similar components or features may have the same reference numerals.
FIG. 1 illustrates an example schematic diagram of a federated learning system for federated learning of business models.
Fig. 2 illustrates an example architectural diagram of a federated learning system in accordance with an embodiment of the present description.
FIG. 3 illustrates an example flow diagram of a federated learning method in accordance with an embodiment of the present description.
Fig. 4 shows a block diagram of the federated learning apparatus on the first member device side in accordance with an embodiment of the present specification.
Fig. 5 shows a block diagram of the federated learning apparatus on the second member device side in accordance with an embodiment of the present specification.
Fig. 6 illustrates a schematic diagram of a computer-implemented federated learning device on the first member device side in accordance with an embodiment of the present description.
Fig. 7 illustrates a schematic diagram of a computer-implemented federated learning device on the second member device side in accordance with an embodiment of the present description.
Detailed Description
The subject matter described herein will now be discussed with reference to example embodiments. It should be understood that these embodiments are discussed only to enable those skilled in the art to better understand the subject matter described herein and are not intended to limit the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as needed. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with respect to some examples may also be combined in other examples.
As used herein, the term "include" and its variants are open-ended terms meaning "including, but not limited to". The term "based on" means "based at least in part on". The terms "one embodiment" and "an embodiment" mean "at least one embodiment". The term "another embodiment" means "at least one other embodiment". The terms "first", "second", and the like may refer to different or the same objects. Other definitions, whether explicit or implicit, may be included below. The definition of a term is consistent throughout the specification unless the context clearly dictates otherwise.
In this specification, the term "business model" refers to a machine learning model applied in a business scenario to perform a business prediction service, such as a machine learning model for classification prediction or business risk prediction. Examples of machine learning models may include, but are not limited to: linear regression models, logistic regression models, neural network models, decision tree models, support vector machines, and the like. Examples of neural network models may include, but are not limited to, deep neural network (DNN) models, convolutional neural network (CNN) models, BP neural networks, and the like.
The specific implementation of the business model depends on the business scenario in which it is applied. For example, in an application scenario where the business model is used to classify users, the business model is implemented as a user classification model; accordingly, user classification prediction can be performed on the characteristic data of the user to be classified according to the business model. In an application scenario where the business model is used for business risk prediction of business transactions occurring on a business system, the business model is implemented as a business risk prediction model; accordingly, business risk prediction can be performed on the characteristic data of the business transactions according to the business model.
With the development of artificial intelligence technology, machine learning models are widely applied in various business scenarios to perform prediction services such as classification prediction and business risk prediction; for example, business models are widely used in financial fraud detection, recommendation systems, image recognition, and the like. To achieve better model performance, more training data is needed to train the business model. In fields such as healthcare and finance, different enterprises or institutions hold different data samples; jointly training on these data can greatly improve the accuracy of the business model and thereby bring considerable economic benefit to the enterprises.
In view of the foregoing, a federal learning scheme has been proposed. In a federated learning scheme, a plurality of data owners co-train a business model with the assistance of a server. FIG. 1 illustrates an example schematic diagram of a federated learning system 100 for federated learning of business models.
As shown in FIG. 1, the federated learning system 100 includes a server 110 and a plurality of data owners (clients) 120, such as data owner A 120-1, data owner B 120-2, and data owner C 120-3. The server 110 has a business model 10. When performing federated learning, the server 110 issues the business model 10 to each data owner, and each data owner performs model computation based on its local data to obtain a local model prediction value. Each data owner then determines gradient information of the business model based on the computed model prediction values and label values, and provides the gradient information to the server 110. The server 110 updates the business model using the gradient information acquired from the respective data owners.
In a federated learning system, data communication between the data owners and the server typically goes through a network. Since the bandwidth of the network or of the devices at the data owners is usually limited, the communication overhead between the data owners and the server becomes a limit on the number of participants the federated learning system can support.
In view of the above, embodiments of the present disclosure provide a federated learning method, a federated learning apparatus, and a federated learning system for training a business model. After each first member device having local data completes training of its local business model, it determines a model correlation between the local business model and the global business model based on the model parameter values of the locally trained business model and of the received business model. Only when the local business model has a large model correlation with the global business model does the device provide the model update amount of the trained business model to the second member device to participate in the global business model update. This reduces the communication overhead between the first member devices and the second member device without harming the training accuracy of the business model, thereby improving the training efficiency of the federated learning system and allowing the system scale of the federated learning system to grow.
The federated learning method, the federated learning apparatus, and the federated learning system for training a business model according to the embodiments of the present specification will be described in detail below with reference to the accompanying drawings.
FIG. 2 illustrates an example architecture of a federated learning system 200 for training business models in accordance with embodiments of the present specification.
As shown in FIG. 2, the federated learning system 200 includes at least two first member devices 210 and a second member device 220. In FIG. 2, three first member devices 210-1 through 210-3 are shown; in other embodiments of the present description, more or fewer first member devices 210 may be included. The at least two first member devices 210 and the second member device 220 may communicate with each other over a network 230 such as, but not limited to, the internet or a local area network.
In embodiments of the present description, the first member device 210 may be a device or a device side for locally collecting data samples, such as a smart terminal device, a server device, an edge device, and the like. In this specification, the term "first member device" and the term "data owner" may be used interchangeably. The business model to be trained is maintained on the second member device 220. For example, the second member device 220 may be a terminal device or a server device at the model provider. The business model maintained on the second member device 220 may also be referred to as a global business model. In this specification, the term "second member device" and the term "model provider" may be used interchangeably.
In embodiments of the present description, the local data for the first member devices 210-1 through 210-3 may include traffic data collected locally by the respective member devices. The business data may include characteristic data of the business object. Examples of business objects may include, but are not limited to, users, goods, events, or relationships. Accordingly, the business data may include, for example, but is not limited to, locally collected user characteristic data, commodity characteristic data, event characteristic data, or relationship characteristic data, such as user characteristic data, business process data, financial transaction data, commodity transaction data, medical health data, and the like. Business data can be applied to business models for model prediction, model training, and other suitable multiparty data joint processing, for example.
In this specification, the service data may include service data based on text data, image data, and/or voice data. Accordingly, the business model may be applied to business risk identification, business classification, or business decision, etc., based on text data, image data, and/or voice data. For example, the local data may be medical data collected by a hospital, and the business model may be used to perform disease examinations or disease diagnoses. Alternatively, the collected local data may include user characteristic data. Accordingly, the business model may be applied to business risk identification, business classification, business recommendation or business decision, etc. based on user characteristic data. Examples of business models may include, but are not limited to, face recognition models, disease diagnosis models, business risk prediction models, service recommendation models, and so forth.
In this description, the local data owned by the first member devices 210 constitutes the training sample data of the business model. The local data owned by each first member device is kept secret from the second member device and cannot be learned, in whole or in part, by the other first member devices.
In one practical example, each first member device 210 may be, for example, a data storage server or an intelligent terminal device of a business application party or a business application association party, such as a local data storage server or an intelligent terminal device of a different financial institution or medical institution. The second member device may be, for example, a server of a service provider or service operator, such as a server of a third party payment platform for providing payment services.
In this description, each of first member device 210 and second member device 220 may be any suitable electronic device having computing capabilities. The electronic devices include, but are not limited to: personal computers, server computers, workstations, desktop computers, laptop computers, notebook computers, mobile electronic devices, smart phones, tablet computers, cellular phones, Personal Digital Assistants (PDAs), handheld devices, messaging devices, wearable electronic devices, consumer electronic devices, and the like.
Further, the first member devices 210-1, 210-2, 210-3 and the second member device 220 each have a federated learning apparatus. The federated learning apparatuses residing at the first member devices 210-1, 210-2, 210-3 and the second member device 220 may perform network communication via the network 230 for data interaction, thereby cooperatively performing the model training process of the business model. The operation and structure of the federated learning apparatus will be described in detail below with reference to the accompanying drawings.
In some embodiments, the network 230 may be any one or more of a wired network or a wireless network. Examples of the network 230 may include, but are not limited to, a cable network, a fiber optic network, a telecommunications network, an intranet, the internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, near field communication (NFC), an intra-device bus, an intra-device line, and the like, or any combination thereof.
FIG. 3 illustrates a flow diagram of a federated learning method 300 for training business models in accordance with an embodiment of the present description. The flow chart shown in FIG. 3 covers one round of the training process in the federated learning procedure.
As shown in FIG. 3, at 310, each of the first member devices 210-1, 210-2, and 210-3 receives a current business model from the second member device 220; the received current business model serves as the business model to be trained in the current training round. Here, the current business model is the updated business model resulting from the previous round of processing at the second member device 220.
At 320, the received business model is trained using the respective local data at the respective first member devices 210-1 through 210-3, respectively. The local business model training process at each first member device may be implemented using any suitable model training process known in the art.
After each of the first member devices 210-1 to 210-3 completes its local model training, at 330 it determines a model correlation between the local business model and the business model received from the second member device, based on the model parameter values of the locally trained business model and of the received business model, and makes a model correlation decision based on the determined model correlation, e.g., by comparing the determined model correlation with a predetermined threshold. If the determined model correlation is not less than the predetermined threshold, the locally trained business model is considered to have a large model correlation with the received business model. If the determined model correlation is less than the predetermined threshold, the locally trained business model is considered to have a small model correlation with the received business model.
The determined "model correlation" reflects the degree of influence of the local business model update on the business model update at the second member device. The business model update at the second member device is the global business model update (the update of the whole business model) of the current round of the federated learning process. If the model correlation between the locally trained business model and the received business model is large, the local business model update has a large influence on the global business model update, so the model update amount of the locally trained business model needs to be uploaded to the second member device to participate in the current round of the global business model update. If the model correlation between the locally trained business model and the received business model is small, the local business model update has a small influence on the global business model update, so the model update amount of the locally trained business model need not be uploaded to the second member device to participate in the current round of the global business model update.
In some embodiments, the model correlation degree may include, but is not limited to, at least one of: a first model correlation degree based on model distance; and a second model correlation degree based on model divergence.
For the first model correlation degree based on model distance, the first member device may calculate a model distance between the locally trained business model and the current business model received from the second member device. For example, the model parameters of a business model may be represented as a model parameter vector, and the vector distance between the model parameter vector of the locally trained business model and that of the received current business model may be calculated as the model distance. Examples of vector distances include, but are not limited to: Euclidean distance, Manhattan distance, Hamming distance, Chebyshev distance, Minkowski distance, cosine distance, etc. The first model correlation degree is then determined from the obtained model distance: the model distance may be used directly as the first model correlation degree, or it may be normalized to obtain the first model correlation degree.
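As a sketch of the first model correlation degree, the following uses the Euclidean distance between the two parameter vectors and one possible normalization; the choice of distance metric and of normalization is illustrative, since the text leaves both open.

```python
import numpy as np

def first_model_relevance(local_params, received_params):
    """First model correlation degree based on model distance.

    Uses the Euclidean distance between the two model parameter vectors,
    normalized into [0, 1) so that a local model that moved further from the
    received global model yields a larger correlation degree. Both the metric
    and the normalization are illustrative choices.
    """
    distance = np.linalg.norm(local_params - received_params)  # Euclidean distance
    return distance / (1.0 + distance)  # 0 when identical, -> 1 as distance grows
```

With direct use of the distance (the other option the text mentions), the normalization line would simply be dropped.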
In this specification, the term "model divergence degree" is used to characterize the degree of divergence between models. In some embodiments, the model divergence degree between the locally trained business model and the received business model may be determined based on the relative values of the model parameter changes of the respective model parameters of the locally trained business model. For example, the model divergence degree may be determined according to the following formula:

$$\mathrm{Div}=\frac{1}{d}\sum_{j=1}^{d}\left|\frac{p_j-\hat{p}_j}{\hat{p}_j}\right|$$

where $d$ denotes the model parameter dimension of the business model, $p_j$ denotes the value of the $j$-th model parameter in the locally trained business model, and $\hat{p}_j$ denotes the value of the $j$-th model parameter in the received business model.
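A sketch of the second model correlation degree based on model divergence, computed as the mean relative change of each model parameter between the locally trained model and the received model. The small `eps` guard against zero-valued received parameters is our addition, not part of the patent.

```python
import numpy as np

def second_model_relevance(local_params, received_params, eps=1e-12):
    """Second model correlation degree based on model divergence.

    Computes the mean relative change of each of the d model parameters
    between the locally trained model and the received model. `eps` is an
    illustrative guard against division by zero, not part of the patent.
    """
    rel_change = np.abs(local_params - received_params) / (np.abs(received_params) + eps)
    return float(rel_change.mean())
```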
In some embodiments, when determining the model correlation degree, the first model correlation degree may be determined first, and the model correlation determination made based on it. If the first model correlation degree is not less than a predetermined threshold, it is determined that a large model correlation exists between the locally trained business model and the received business model; in this case there is no need to further determine the second model correlation degree or make a corresponding determination. If the first model correlation degree is less than the predetermined threshold, the second model correlation degree is determined and the model correlation determination is made based on it: if the second model correlation degree is not less than a predetermined threshold, a large model correlation is determined to exist; otherwise, a small model correlation is determined to exist between the locally trained business model and the received business model. It should be noted that the predetermined thresholds used in the first and second model correlation determinations may be the same or different.
In some embodiments, the second model correlation degree may instead be determined first, and the model correlation determination made based on it. If the second model correlation degree is not less than the predetermined threshold, a large model correlation is determined to exist between the locally trained business model and the received business model, and there is no need to further determine the first model correlation degree or make a corresponding determination. If the second model correlation degree is less than the predetermined threshold, the first model correlation degree is determined and the model correlation determination is made based on it: if the first model correlation degree is not less than the predetermined threshold, a large model correlation is determined to exist; otherwise, a small model correlation is determined to exist between the locally trained business model and the received business model.
In some embodiments, when making the model correlation determination, both the first model correlation degree and the second model correlation degree may be calculated, each with a pre-assigned weight. The model correlation degree is then obtained as the weighted sum of the first and second model correlation degrees based on their respective weights, and the model correlation determination is made based on the obtained model correlation degree.
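The weighted combination and threshold check can be sketched as follows; the 0.5 default weights are illustrative, since the text only states that the weights are pre-assigned.

```python
def combined_relevance(first_degree, second_degree, w1=0.5, w2=0.5):
    """Weighted sum of the first and second model correlation degrees.
    The 0.5 default weights are illustrative, not from the patent."""
    return w1 * first_degree + w2 * second_degree

def has_large_correlation(model_relevance, threshold):
    """A large model correlation exists when the determined degree is
    not less than the predetermined threshold."""
    return model_relevance >= threshold
```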
In response to determining that a small model correlation exists between the locally trained business model and the received business model, i.e., the determined model correlation degree is less than the first predetermined threshold, the model update amount of the locally trained business model is not sent to the second member device.
Returning to fig. 3, in response to determining that a large model correlation exists between the locally trained business model and the received business model, i.e., the determined model correlation degree is not less than the first predetermined threshold, at 340 the corresponding first member device determines a model update trend correlation degree between the locally trained business model and the received business model. The model update trend correlation degree indicates whether the model update trend of the locally trained business model conforms to the global model update trend of the business model.
In some embodiments, the model update trend correlation degree may be determined based on the trends of change of the respective model parameters of the locally trained business model and the received business model. For example, the model update trend correlation degree may be determined according to the following formula:
$$s_j=\operatorname{sign}(\Delta p_j)\cdot\operatorname{sign}(\Delta\hat{p}_j),\quad j=1,\dots,d$$

$$\mathrm{TrendCorr}=\frac{1}{d}\sum_{j=1}^{d}s_j$$

where $d$ denotes the model parameter dimension of the business model, $\operatorname{sign}(\cdot)$ is the sign function, $\Delta p_j$ denotes the change value of the $j$-th model parameter in the locally trained business model, and $\Delta\hat{p}_j$ denotes the change value of the $j$-th model parameter in the received business model.
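A sketch of the sign-based model update trend correlation: parameters whose local and global change values share a sign contribute +1, those with opposite signs contribute -1, and the average over all parameters measures agreement with the global update trend.

```python
import numpy as np

def trend_correlation(local_delta, received_delta):
    """Model update trend correlation degree (a sketch of the sign-based form).

    Each parameter contributes +1 when its local and global change values
    share a sign, -1 when they disagree, and 0 when either change is zero;
    the mean over all d parameters lies in [-1, 1].
    """
    s = np.sign(local_delta) * np.sign(received_delta)
    return float(s.mean())
```

A value near 1 means the local update moves almost every parameter in the same direction as the global update; comparing it against the second predetermined threshold gates the upload.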
In response to the determined model update trend correlation degree being not less than a second predetermined threshold, at 350 the corresponding first member device provides the model update amount of the trained business model to the second member device. In response to the determined model update trend correlation degree being less than the second predetermined threshold, the model update amount of the trained business model is not provided to the second member device.
After receiving the local model update amounts sent by the first member devices, the second member device updates the current business model using the received local model update amounts at 360. The updated business model serves as the current business model for the next round of the training process and is sent to each first member device to execute that round.
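On the second member device side, one simple way to apply the received local model update amounts is to average them into the current business model. The averaging and the step size are illustrative choices; the patent does not fix an aggregation rule.

```python
import numpy as np

def server_update(current_params, received_updates, step_size=1.0):
    """Second-member-device step (a sketch): apply the received local model
    update amounts to the current business model. Averaging the updates and
    the step_size factor are illustrative, not mandated by the patent.
    """
    if not received_updates:  # no first member device cleared the relevance gate
        return current_params
    mean_update = np.mean(received_updates, axis=0)
    return current_params + step_size * mean_update
```

The returned parameters become the current business model broadcast at the start of the next round.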
The federated learning process according to embodiments of the present specification has been described above with reference to figs. 1 through 3. Note that fig. 3 shows only an exemplary embodiment, which may be modified in other embodiments of the present specification. For example, the determination of the model update trend correlation degree, and the judgment based on it, may be omitted.
With the above federated learning process, after each first member device with local data finishes training its local business model, it determines the model correlation degree between the local business model and the global business model based on the locally trained business model and the received model parameter values of the business model. Only when a large model correlation exists does it provide the model update amount of the locally trained business model to the second member device to participate in the global business model update. This reduces the communication overhead between the first member devices and the second member device without harming the model training accuracy of the business model, thereby improving the model training efficiency of the federated learning system and allowing the system to scale to more member devices.
With this federated learning scheme, the model correlation degree is further specified as a first model correlation degree based on model distance and a second model correlation degree based on model divergence, which improves the accuracy of the model correlation determination and further ensures that the model training accuracy of the business model is not harmed.
With this federated learning scheme, when a large model correlation is determined based on the model correlation degree, it is further determined whether the model update trend of the locally trained business model conforms to the model update trend of the global business model at the second member device; the model update amount of the locally trained business model is sent to the second member device only when it does. This further ensures that the model training accuracy of the business model is not harmed.
Fig. 4 shows a block diagram of a federated learning apparatus 400 on the first member device side for training a business model in accordance with an embodiment of the present description. As shown in fig. 4, the federal learning device 400 includes a model receiving unit 410, a model training unit 420, a model correlation degree determination unit 430, a model update trend correlation degree determination unit 440, and a model update amount providing unit 450.
The model receiving unit 410 is configured to receive a current business model from the second member device. The operation of the model receiving unit 410 may refer to the operation described above with reference to 310 of fig. 3.
The model training unit 420 is configured to train the received current business model using local data. The operation of the model training unit 420 may refer to the operation described above with reference to 320 of FIG. 3.
The model relevance determining unit 430 is configured to determine a model relevance between the local business model and the business model received from the second member device based on the locally trained business model and the received model parameter values of the business model. The operation of the model correlation determination unit 430 may refer to the operation described above with reference to 330 of fig. 3.
The model update trend correlation determination unit 440 is configured to determine a model update trend correlation between the locally trained business model and the received business model in response to the determined model correlation being not less than a first predetermined threshold, the model update trend correlation indicating whether a model update trend of the locally trained business model conforms to a model update global trend of the business model. The operation of the model update tendency correlation determination unit 440 may refer to the operation described above with reference to 340 of fig. 3.
The model update amount providing unit 450 is configured to provide the trained model update amount of the business model to a second member device for the second member device to use for updating the business model in response to the determined model correlation degree not being less than the first predetermined threshold and the determined model update tendency correlation degree not being less than the second predetermined threshold. In response to the determined model correlation being less than the first predetermined threshold or the determined model update tendency correlation being less than the second predetermined threshold, the model update amount providing unit 450 does not provide the trained model update amount of the business model to the second member device. The operation of the model update amount providing unit 450 may refer to the operation described above with reference to 350 of fig. 3.
In some embodiments, the model correlation determined by the model correlation determination unit 430 may include at least one of: a first model relevance based on model distance; and a second model relevance based on the model divergence.
In some embodiments, the model relevance determining unit 430 may determine the first model correlation degree and make the model correlation determination based on it. If the first model correlation degree is not less than the predetermined threshold, the model correlation determination unit 430 determines that a large model correlation exists between the locally trained business model and the received business model; in this case there is no need to further determine the second model correlation degree or make a corresponding determination. If the first model correlation degree is less than the predetermined threshold, the model correlation determination unit 430 determines the second model correlation degree and makes the model correlation determination based on it: if the second model correlation degree is not less than the predetermined threshold, a large model correlation is determined to exist; otherwise, a small model correlation is determined to exist between the locally trained business model and the received business model.
In some embodiments, the model correlation determination unit 430 may instead determine the second model correlation degree first and make the model correlation determination based on it. If the second model correlation degree is not less than the predetermined threshold, the unit determines that a large model correlation exists between the locally trained business model and the received business model, and there is no need to further determine the first model correlation degree or make a corresponding determination. If the second model correlation degree is less than the predetermined threshold, the unit determines the first model correlation degree and makes the model correlation determination based on it: if the first model correlation degree is not less than the predetermined threshold, a large model correlation is determined to exist; otherwise, a small model correlation is determined to exist between the locally trained business model and the received business model.
In some embodiments, the model correlation determination unit 430 calculates a first model correlation and a second model correlation, the calculated first model correlation and second model correlation having weights, respectively. Subsequently, the model correlation determination unit 430 performs weighted summation on the first model correlation and the second model correlation based on the respective weights to obtain a model correlation, and performs model correlation determination based on the obtained model correlation.
In some embodiments, federal learning device 400 may not include model update trend correlation determination unit 440. In this case, in response to the determined model correlation being not less than the first predetermined threshold, the model update amount providing unit 450 provides the trained model update amount of the business model to the second member device for the second member device to use to update the business model. In response to the determined model correlation being less than the first predetermined threshold, the model update amount providing unit 450 does not provide the trained model update amount of the business model to the second member device.
Fig. 5 shows a block diagram of a federal learning device 500 for training a business model on the second member device side in accordance with an embodiment of the present description. As shown in fig. 5, the federal learning device 500 includes a model transmitting unit 510, a model update receiving unit 520, and a model updating unit 530.
The model transmitting unit 510 is configured to transmit the current business model to the respective first member devices. The operation of the model transmitting unit 510 may refer to the operation described above with reference to 310 of fig. 3.
The model update receiving unit 520 is configured to receive respective local model update amounts from the first member devices that are determined, after the end of their current model training processes, to have a model correlation degree not less than the first predetermined threshold. The operation of the model update receiving unit 520 may refer to the operation described above with reference to 350 of fig. 3.
The model updating unit 530 is configured to update the current business model according to the received local model update amounts, where the updated business model is used as the current business model for the next round of the model training process. The operation of the model updating unit 530 may refer to the operation described above with reference to 360 of fig. 3.
As described above with reference to fig. 1 to 5, the federal learning method for training the business model, the federal learning apparatus for training the business model, and the federal learning system according to the embodiments of the present specification are described. The above federal learning device for business model training can be implemented by hardware, software, or a combination of hardware and software.
FIG. 6 illustrates a schematic diagram of a computer-implemented federated learning apparatus 600 on the first member device side for training a business model, according to an embodiment of the present description. As shown in fig. 6, federal learning apparatus 600 may include at least one processor 610, storage (e.g., non-volatile storage) 620, memory 630 and communication interface 640, which are connected together via bus 660. The at least one processor 610 executes at least one computer program (i.e., the above-described elements implemented in software) stored or encoded in memory.
In one embodiment, a computer program is stored in the memory that, when executed, causes the at least one processor 610 to: receive a current business model from the second member device; train the received current business model using local data; determine, based on the locally trained business model and the received model parameter values of the business model, a model correlation degree between the locally trained business model and the received business model; and in response to the model correlation degree being not less than a first predetermined threshold, provide the model update amount of the trained business model to the second member device for use by the second member device in updating the business model.
It should be appreciated that the computer programs stored in the memory, when executed, cause the at least one processor 610 to perform the various operations and functions described above in connection with fig. 1-4 in the various embodiments of the present description.
Fig. 7 illustrates a schematic diagram of a federated learning apparatus 700 for training business models that is computer-implemented on a second member device side, according to an embodiment of the present description. As shown in fig. 7, federal learning device 700 may include at least one processor 710, storage (e.g., non-volatile storage) 720, memory 730, and communication interface 740, and at least one processor 710, storage 720, memory 730, and communication interface 740 are coupled together via bus 760. The at least one processor 710 executes at least one computer program (i.e., the above-described elements implemented in software) stored or encoded in memory.
In one embodiment, a computer program is stored in the memory that, when executed, causes the at least one processor 710 to: sending the current business model to each first member device; receiving respective local model update quantities from first member devices determined to have a model correlation not less than a first predetermined threshold after termination of respective current model training processes; and updating the current business model according to the received local model updating amount, wherein the updated business model is used as the current business model of the next round of model training process.
It should be appreciated that the computer programs stored in the memory, when executed, cause the at least one processor 710 to perform the various operations and functions described above in connection with fig. 1-3 and 5 in the various embodiments of the present specification.
According to one embodiment, a program product, such as a computer-readable medium (e.g., a non-transitory computer-readable medium), is provided. The computer-readable medium may have a computer program (i.e., the elements described above as being implemented in software) that, when executed by a processor, causes the processor to perform various operations and functions described above in connection with fig. 1-5 in various embodiments of the present specification. Specifically, a system or apparatus may be provided which is provided with a readable storage medium on which software program code implementing the functions of any of the above embodiments is stored, and causes a computer or processor of the system or apparatus to read out and execute instructions stored in the readable storage medium.
In this case, the program code itself read from the readable medium can realize the functions of any of the above-described embodiments, and thus the computer-readable code and the readable storage medium storing the computer-readable code constitute a part of the present invention.
Examples of the readable storage medium include floppy disks, hard disks, magneto-optical disks, optical disks (e.g., CD-ROMs, CD-R, CD-RWs, DVD-ROMs, DVD-RAMs, DVD-RWs), magnetic tapes, nonvolatile memory cards, and ROMs. Alternatively, the program code may be downloaded from a server computer or from the cloud via a communications network.
According to an embodiment, a computer program product is provided, which comprises a computer program, which when executed by a processor, causes the processor to perform the various operations and functions described above in connection with fig. 1-5 in the various embodiments of the present description.
It will be understood by those skilled in the art that various changes and modifications may be made in the above-disclosed embodiments without departing from the spirit of the invention. Accordingly, the scope of the invention should be determined from the following claims.
It should be noted that not all steps and units in the above flows and system structure diagrams are necessary, and some steps or units may be omitted according to actual needs. The execution order of the steps is not fixed, and can be determined as required. The apparatus structures described in the above embodiments may be physical structures or logical structures, that is, some units may be implemented by the same physical entity, or some units may be implemented by a plurality of physical entities, or some units may be implemented by some components in a plurality of independent devices.
In the above embodiments, the hardware units or modules may be implemented mechanically or electrically. For example, a hardware unit, module or processor may comprise permanently dedicated circuitry or logic (such as a dedicated processor, FPGA or ASIC) to perform the corresponding operations. The hardware units or processors may also include programmable logic or circuitry (e.g., a general purpose processor or other programmable processor) that may be temporarily configured by software to perform the corresponding operations. The specific implementation (mechanical, or dedicated permanent, or temporarily set) may be determined based on cost and time considerations.
The detailed description set forth above in connection with the appended drawings describes exemplary embodiments but does not represent all embodiments that may be practiced or fall within the scope of the claims. The term "exemplary" used throughout this specification means "serving as an example, instance, or illustration," and does not mean "preferred" or "advantageous" over other embodiments. The detailed description includes specific details for the purpose of providing an understanding of the described technology. However, the techniques may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described embodiments.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (18)

1. A method for training a business model via at least two first member devices, each having local data, and a second member device maintaining a business model to be trained, the method being applied to the first member devices, the method comprising:
receiving a current business model from a second member device;
training the received current business model using local data;
determining a model correlation degree between the locally trained business model and the received business model based on the locally trained business model and the model parameter values of the received business model; and
in response to the model correlation not being less than a first predetermined threshold, providing a model update amount of the trained business model to the second member device for use by the second member device in updating the business model.
2. The method of claim 1, wherein the model relevance comprises at least one of:
a first model relevance based on the model distance; and
a second model relevance based on the model divergence.
3. The method of claim 2, wherein the model divergence is determined based on relative values of model parameter variations of respective model parameters of the locally trained business model.
4. The method of claim 3, wherein the model divergence is determined according to the following formula:
$$\mathrm{Div}=\frac{1}{d}\sum_{j=1}^{d}\left|\frac{p_j-\hat{p}_j}{\hat{p}_j}\right|$$

wherein $d$ denotes the model parameter dimension of the business model, $p_j$ denotes the value of the $j$-th model parameter in the locally trained business model, and $\hat{p}_j$ denotes the value of the $j$-th model parameter in the received business model.
5. The method of claim 2, wherein the first and second model correlations have weights, and wherein the model correlations are weighted sums of the first and second model correlations based on the respective weights when the model correlations include both the first and second model correlations.
6. The method of claim 1, further comprising:
determining a model update trend correlation between the locally trained business model and the received business model in response to the model correlation being not less than a first predetermined threshold, the model update trend correlation indicating whether a model update trend of the locally trained business model conforms to a model update global trend of the business model,
in response to the model correlation not being less than a first predetermined threshold, providing a model update amount of the trained business model to the second member device for use by the second member device in updating the business model comprises:
in response to the model correlation degree not being less than a first predetermined threshold and the model updating trend correlation degree not being less than a second predetermined threshold, providing a model updating amount of the trained business model to the second member device for the second member device to use for updating the business model.
7. The method of claim 6, wherein the model update trend correlation is determined based on a locally trained business model and a trend of change of respective model parameters of the received business model.
8. The method of claim 7, wherein the model update trend correlation is determined according to the following equation:
$$s_j=\operatorname{sign}(\Delta p_j)\cdot\operatorname{sign}(\Delta\hat{p}_j),\quad j=1,\dots,d$$

$$\mathrm{TrendCorr}=\frac{1}{d}\sum_{j=1}^{d}s_j$$

wherein $d$ denotes the model parameter dimension of the business model, $\operatorname{sign}(\cdot)$ is the sign function, $\Delta p_j$ denotes the change value of the $j$-th model parameter in the locally trained business model, and $\Delta\hat{p}_j$ denotes the change value of the $j$-th model parameter in the received business model.
9. A method for training a business model via at least two first member devices, each having local data, and a second member device maintaining a business model to be trained, the method being applied to the second member device, the method comprising:
sending the current business model to each first member device;
receiving respective local model update quantities from first member devices determined to have a model correlation not less than a first predetermined threshold after termination of respective current model training processes; and
and updating the current business model according to the received local model updating amount, wherein the updated business model is used as the current business model in the next round of model training process.
10. A federal learning arrangement for training a business model via at least two first member devices and a second member device, each first member device having local data, the second member device maintaining the business model to be trained, the federal learning arrangement being applied to the first member devices, the federal learning arrangement comprising:
a model receiving unit that receives the current business model from the second member device;
a model training unit that trains the received current business model by using the local data;
a model correlation determination unit that determines a model correlation between the locally trained business model and the received business model based on model parameter values of the locally trained business model and of the received business model; and
a model update amount providing unit that, in response to the model correlation being not less than a first predetermined threshold, provides the model update amount of the trained business model to the second member device for the second member device to use in updating the business model.
11. The federal learning device as claimed in claim 10, wherein the model correlation includes at least one of:
a first model correlation based on model distance; and
a second model correlation based on model divergence.
12. The federal learning device as claimed in claim 11, wherein the first model correlation and the second model correlation have respective weights, and wherein, when the model correlation includes both the first model correlation and the second model correlation, the model correlation determination unit obtains the model correlation by a weighted summation of the first model correlation and the second model correlation based on the respective weights.
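The weighted combination of claims 11 and 12 can be sketched as below. The patent does not fix the concrete distance-based and divergence-based functions, so the inverse-L2-distance score and the cosine-style score here, as well as the names and default weights, are illustrative choices only.

```python
import numpy as np

def model_correlation(local_params, received_params, w_dist=0.5, w_div=0.5):
    """Weighted sum of a distance-based and a divergence-based model
    correlation between the locally trained and received business models.

    Assumed mappings: the first correlation shrinks as the L2 distance
    between parameter vectors grows; the second is the cosine similarity
    of the two vectors. Both equal 1.0 for identical models.
    """
    p = np.asarray(local_params, dtype=float)
    q = np.asarray(received_params, dtype=float)
    first = 1.0 / (1.0 + np.linalg.norm(p - q))  # model-distance based
    second = float(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)))  # divergence based
    return w_dist * first + w_div * second

# Identical parameter vectors score 1.0 under both components
print(model_correlation([1.0, 2.0], [1.0, 2.0]))  # → 1.0
```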
13. The federal learning device as claimed in claim 10, further comprising:
a model update trend correlation determination unit that determines a model update trend correlation between the locally trained business model and the received business model in response to the model correlation being not less than the first predetermined threshold, the model update trend correlation indicating whether the model update trend of the locally trained business model conforms to the global model update trend of the business model,
wherein, in response to the model correlation being not less than the first predetermined threshold and the model update trend correlation being not less than a second predetermined threshold, the model update amount providing unit provides the model update amount of the trained business model to the second member device for the second member device to use in updating the business model.
14. A federal learning device for training a business model via at least two first member devices and a second member device, each first member device having local data, the second member device maintaining the business model to be trained, the federal learning device being applied to the second member device, the federal learning device comprising:
a model sending unit that sends the current business model to each first member device;
a model update amount receiving unit that receives respective local model update amounts from first member devices determined, after the respective current model training process ends, to have a model correlation not less than a first predetermined threshold; and
a model updating unit that updates the current business model according to the received local model update amounts, the updated business model serving as the current business model in the next round of model training.
15. A federal learning system, comprising:
at least two first member devices, each first member device comprising the federal learning device as claimed in any of claims 10 to 13, each first member device having local data; and
a second member device that maintains a business model to be trained and that includes the federal learning device as claimed in claim 14.
16. A federal learning device for training a business model via at least two first member devices and a second member device, comprising:
at least one processor,
a memory coupled to the at least one processor, and
a computer program stored in the memory, the computer program being executable by the at least one processor to implement the method of any one of claims 1 to 8 or the method of claim 9.
17. A computer readable storage medium storing executable instructions that when executed cause a processor to perform the method of any one of claims 1 to 8 or the method of claim 9.
18. A computer program product comprising a computer program for execution by a processor to perform the method of any one of claims 1 to 8 or the method of claim 9.
CN202210759145.4A 2022-06-29 2022-06-29 Federal learning method, federal learning device and federal learning system Pending CN115034333A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210759145.4A CN115034333A (en) 2022-06-29 2022-06-29 Federal learning method, federal learning device and federal learning system


Publications (1)

Publication Number Publication Date
CN115034333A true CN115034333A (en) 2022-09-09

Family

ID=83126310


Country Status (1)

Country Link
CN (1) CN115034333A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325417A (en) * 2020-05-15 2020-06-23 支付宝(杭州)信息技术有限公司 Method and device for realizing privacy protection and realizing multi-party collaborative updating of business prediction model
CN111460443A (en) * 2020-05-28 2020-07-28 南京大学 Security defense method for data manipulation attack in federated learning
CN112181666A (en) * 2020-10-26 2021-01-05 华侨大学 Method, system, equipment and readable storage medium for equipment evaluation and federal learning importance aggregation based on edge intelligence
CN112181971A (en) * 2020-10-27 2021-01-05 华侨大学 Edge-based federated learning model cleaning and equipment clustering method, system, equipment and readable storage medium
CN113377797A (en) * 2021-07-02 2021-09-10 支付宝(杭州)信息技术有限公司 Method, device and system for jointly updating model
US20210342749A1 (en) * 2020-04-29 2021-11-04 International Business Machines Corporation Adaptive asynchronous federated learning
CN113962322A (en) * 2021-11-01 2022-01-21 浙江大学 Federal learning-based backdoor attack defense method and system and storable medium
CN114091617A (en) * 2021-11-29 2022-02-25 深圳前海微众银行股份有限公司 Federal learning modeling optimization method, electronic device, storage medium, and program product
CN114186237A (en) * 2021-10-26 2022-03-15 北京理工大学 Truth-value discovery-based robust federated learning model aggregation method
US20220158888A1 (en) * 2020-11-18 2022-05-19 Research & Business Foundation Sungkyunkwan University Method to remove abnormal clients in a federated learning model
CN114676849A (en) * 2022-03-24 2022-06-28 支付宝(杭州)信息技术有限公司 Method and system for updating model parameters based on federal learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIA Jianye; TIAN Xiwei; LIU Juan; ZHUANG Yingping: "Intelligent biomanufacturing in the era of artificial intelligence", Chinese Journal of Bioprocess Engineering (生物加工过程), no. 01, 15 January 2020 (2020-01-15) *

Similar Documents

Publication Publication Date Title
CN110929870B (en) Method, device and system for training neural network model
CN112580826B (en) Business model training method, device and system
CN108229419B (en) Method and apparatus for clustering images
CN113536383B (en) Method and device for training graph neural network based on privacy protection
CN108197592B (en) Information acquisition method and device
CN109740620A (en) Method for building up, device, equipment and the storage medium of crowd portrayal disaggregated model
CN112052942B (en) Neural network model training method, device and system
CN108460365B (en) Identity authentication method and device
CN108491812B (en) Method and device for generating face recognition model
CN111738438B (en) Method, device and system for training neural network model
WO2022142903A1 (en) Identity recognition method and apparatus, electronic device, and related product
CN111814910A (en) Abnormality detection method, abnormality detection device, electronic apparatus, and storage medium
CN111931628B (en) Training method and device of face recognition model and related equipment
CN111368983A (en) Business model training method and device and business model training system
CN114757364A (en) Federal learning method and device based on graph neural network and federate learning system
CN113705534A (en) Behavior prediction method, behavior prediction device, behavior prediction equipment and storage medium based on deep vision
CN114880513A (en) Target retrieval method and related device
CN113033824A (en) Model hyper-parameter determination method, model training method and system
CN112434746A (en) Pre-labeling method based on hierarchical transfer learning and related equipment thereof
WO2023142550A1 (en) Abnormal event detection method and apparatus, computer device, storage medium, computer program, and computer program product
CN112288088B (en) Business model training method, device and system
CN115034333A (en) Federal learning method, federal learning device and federal learning system
CN112966809B (en) Privacy protection-based two-party model prediction method, device and system
CN114493850A (en) Artificial intelligence-based online notarization method, system and storage medium
CN109829150B (en) Insurance claim text processing method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination