CN112418446A - Model processing method, system, device, medium and electronic equipment - Google Patents

Model processing method, system, device, medium and electronic equipment

Info

Publication number
CN112418446A
Authority
CN
China
Prior art keywords
model
inference
inference service
target
input data
Prior art date
Legal status
Granted
Application number
CN202011298789.5A
Other languages
Chinese (zh)
Other versions
CN112418446B (en)
Inventor
陈程
周子凯
余乐乐
解浚源
吴良超
常龙
张力哲
刘小兵
吴迪
Current Assignee
Lemon Inc Cayman Island
Original Assignee
Lemon Inc Cayman Island
Priority date
Filing date
Publication date
Application filed by Lemon Inc Cayman Island
Priority to CN202011298789.5A
Publication of CN112418446A
Priority to PCT/SG2021/050707 (WO2022108527A1)
Application granted
Publication of CN112418446B
Status: Active

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > G06N 20/00 Machine learning > G06N 20/20 Ensemble learning
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06F ELECTRIC DIGITAL DATA PROCESSING > G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity > G06F 21/60 Protecting data > G06F 21/602 Providing cryptographic facilities or services
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > G06N 5/00 Computing arrangements using knowledge-based models > G06N 5/04 Inference or reasoning models

Abstract

The present disclosure relates to a model processing method, system, apparatus, medium, and electronic device, the method comprising: acquiring a plurality of sub-models; splicing the plurality of sub-models to obtain a target model; and, upon receiving a model acquisition request for the target model sent by an inference service executor, sending the target model to the inference service executor so that the inference service executor obtains an inference result through the target model. With this technical scheme, the inference service executor can obtain the inference result directly through the complete target model, and the entire inference service can be completed locally at the inference service executor. No remote communication is needed among the model training participants to transmit data, which not only reduces communication overhead but also effectively avoids the instability of remote transmission caused by factors such as network routing, thereby ensuring normal operation of the inference service and improving its reliability.

Description

Model processing method, system, device, medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a model processing method, system, apparatus, medium, and electronic device.
Background
Federated machine learning is increasingly widely applied in the field of machine learning. It can solve the problems of data islands and data privacy, and effectively helps multiple organizations jointly train a model through a federated learning system while meeting the requirements of user privacy protection and data security. A federated learning model is generally composed of a plurality of sub-models; when an inference service is performed through the federated learning model, ensuring the reliability of the inference service is an important problem.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, the present disclosure provides a model processing method, the method comprising: acquiring a plurality of sub-models; splicing the plurality of sub-models to obtain a target model; and, upon receiving a model acquisition request for the target model sent by an inference service executor, sending the target model to the inference service executor so that the inference service executor obtains an inference result through the target model.
In a second aspect, the present disclosure provides a model processing method, the method comprising: sending a model acquisition request for a target model, wherein the target model is obtained by splicing a plurality of sub-models; and receiving the target model and obtaining an inference result through the target model.
In a third aspect, the present disclosure provides a model processing system comprising a model optimization platform and a model storage platform; the model optimization platform is used to acquire a plurality of sub-models, splice the plurality of sub-models to obtain a target model, and send the target model to the model storage platform; and the model storage platform is used to, upon receiving a model acquisition request for the target model sent by an inference service executor, send the target model to the inference service executor so that the inference service executor obtains an inference result through the target model.
In a fourth aspect, the present disclosure provides a model processing apparatus, the apparatus comprising: an obtaining module configured to obtain a plurality of sub-models; a splicing module configured to splice the plurality of sub-models to obtain a target model; and a target model sending module configured to, upon receiving a model acquisition request for the target model sent by an inference service executor, send the target model to the inference service executor so that the inference service executor obtains an inference result through the target model.
In a fifth aspect, the present disclosure provides a model processing apparatus, the apparatus comprising: an acquisition request sending module configured to send a model acquisition request for a target model, wherein the target model is obtained by splicing a plurality of sub-models; and an inference module configured to receive the target model and obtain an inference result through the target model.
In a sixth aspect, the present disclosure provides a computer readable medium having stored thereon a computer program which, when executed by a processing apparatus, performs the steps of the method provided by the first aspect of the present disclosure.
In a seventh aspect, the present disclosure provides a computer readable medium having stored thereon a computer program which, when executed by a processing apparatus, performs the steps of the method provided by the second aspect of the present disclosure.
In an eighth aspect, the present disclosure provides an electronic device comprising: a storage device having a computer program stored thereon; processing means for executing the computer program in the storage means to implement the steps of the method provided by the first aspect of the present disclosure.
In a ninth aspect, the present disclosure provides an electronic device comprising: a storage device having a computer program stored thereon; processing means for executing the computer program in the storage means to implement the steps of the method provided by the second aspect of the present disclosure.
Through the above technical scheme, the plurality of sub-models are spliced to obtain the target model, and the inference service executor can obtain the inference result through the target model. Because the target model is obtained by splicing the plurality of sub-models, the inference service executor can obtain the inference result directly through the complete target model, without each model training participant loading its own sub-model. The entire inference service process can be completed locally at the inference service executor, with no remote communication needed among the model training participants to transmit data. This not only reduces communication overhead but also effectively avoids the instability of remote transmission caused by factors such as network routing, ensuring normal operation of the inference service and improving its reliability.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale. In the drawings:
FIG. 1 is a diagram of a federated learning model in the related art.
FIG. 2 is a flow diagram illustrating a method of model processing in accordance with an exemplary embodiment.
FIG. 3 is a schematic diagram illustrating a target model in accordance with an exemplary embodiment.
FIG. 4 is a schematic diagram illustrating a model processing system in accordance with an exemplary embodiment.
FIG. 5 is a diagram illustrating an inference service executor obtaining inference results from a target model according to its model input data, according to an exemplary embodiment.
FIG. 6 is a flow diagram illustrating a method of model processing in accordance with an exemplary embodiment.
FIG. 7 is a block diagram illustrating a model processing device in accordance with an exemplary embodiment.
FIG. 8 is a block diagram illustrating a model processing device in accordance with an exemplary embodiment.
Fig. 9 is a schematic structural diagram of an electronic device according to an exemplary embodiment.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand that they should be read as "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
A federated learning system can train a common federated learning model by combining data from multiple data owners. Because the federated learning model is trained on the pooled data of multiple owners, the training data is more comprehensive and the model's accuracy is higher. A federated learning model is generally composed of a plurality of sub-models. FIG. 1 is a schematic diagram of a federated learning model in the related art. As shown in fig. 1, the federated learning model includes a sub-model A and a sub-model B. For example, the sub-model A corresponds to model training participant 1, and the model input data X, Y, Z of the sub-model A is data owned by model training participant 1; the sub-model B corresponds to model training participant 2, and the model input data M, N of the sub-model B is data owned by model training participant 2.
When an inference service is performed through the federated learning model, each model training participant loads its own sub-model: model training participant 1 loads sub-model A, and model training participant 2 loads sub-model B. As shown in fig. 1, model training participant 1 performs calculation through sub-model A according to the model input data X, Y, Z; model training participant 1 then needs to remotely send data through the sending node of sub-model A to the receiving node of sub-model B, and model training participant 2 obtains an inference result through sub-model B according to the data arriving at the receiving node and the model input data M, N. Thus, when an inference service is performed through the federated learning model, remote communication among multiple model training participants is needed to complete the whole inference service, i.e., the sending node and the receiving node transmit data by remote communication, and the communication overhead is high. Moreover, remote communication is easily affected by factors such as network routing; it is generally unstable and of low reliability, which makes the computation process of the inference service unstable. For example, if network congestion prevents a sending node from transmitting data to a receiving node in time, the progress of the entire inference service is affected. The present disclosure provides a model processing method, system, apparatus, medium, and electronic device to solve these problems in the related art.
Fig. 2 is a flow chart illustrating a model processing method according to an exemplary embodiment; as shown in fig. 2, the method may include S201 to S203.
In S201, a plurality of submodels are acquired.
In S202, the multiple submodels are spliced to obtain a target model.
Fig. 3 is a schematic diagram of a target model according to an exemplary embodiment. The target model shown in fig. 3 may be obtained from the federated learning model shown in fig. 1, in which the sending node of sub-model A and the receiving node of sub-model B have a connection relationship. As shown in fig. 3, the computing node of sub-model A that is connected to the sending node may be connected directly to the computing node of sub-model B that is connected to the receiving node, yielding the target model: a complete, whole model obtained by splicing sub-model A and sub-model B together. It should be noted that the present disclosure uses two sub-models for illustration; this is not a limitation of the embodiments of the present disclosure, and in practical applications the number of sub-models may be larger, which the present disclosure does not specifically limit.
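To make the splicing effect concrete, here is a minimal Python sketch; all function names and the placeholder arithmetic are invented for illustration and are not part of the disclosure. It shows how the spliced target model replaces the remote send/receive hop of fig. 1 with a direct local call:

```python
def sub_model_a(x, y, z):
    # Computation of sub-model A; in fig. 1 its output would leave
    # through a sending node for remote transmission.
    return x + y + z  # placeholder computation

def sub_model_b(a_out, m, n):
    # Computation of sub-model B; in fig. 1 a_out would arrive
    # through a receiving node.
    return a_out * m + n  # placeholder computation

def target_model(x, y, z, m, n):
    # Spliced target model: sub-model A's computing node feeds
    # sub-model B's computing node directly, with no sending or
    # receiving nodes in between.
    return sub_model_b(sub_model_a(x, y, z), m, n)

print(target_model(1, 2, 3, 4, 5))  # inference completes entirely locally
```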
In S203, in the case of receiving a model acquisition request for the target model sent by the inference service executor, the target model is sent to the inference service executor so that the inference service executor obtains an inference result through the target model.
An inference service may refer to a process in which a server performs calculation through a model according to input data and obtains a result. For example, taking prediction of a user's shopping intention, the user's current shopping intention can be inferred through the model according to the user's historical shopping behavior information, and an inference result matching the user's shopping intention and needs can then be provided to the user. Taking prediction of a user's search intention, the user's current search intention can be inferred through the model according to the user's historical click behavior information, and an inference result matching that search intention can then be provided.
In an alternative embodiment, one of the model training participants can serve as the inference service executor, load the target model, and obtain the inference result through the target model. Because the target model is obtained by splicing the plurality of sub-models, the inference service executor can obtain the inference result directly through the whole target model; the model training participants do not need to load their own sub-models separately, nor to transmit data to each other by remote communication, so the problem of unstable remote communication is effectively avoided.
It should be noted that, in the present disclosure, when operations of sending, receiving, and processing data by the inference service executor are mentioned, it should be understood that the inference service executor performs these operations through its server device.
Through the above technical scheme, the plurality of sub-models are spliced to obtain the target model, and the inference service executor can obtain the inference result through the target model. Because the target model is obtained by splicing the plurality of sub-models, the inference service executor can obtain the inference result directly through the complete target model, without each model training participant loading its own sub-model. The entire inference service process can be completed locally at the inference service executor, with no remote communication needed among the model training participants to transmit data. This not only reduces communication overhead but also effectively avoids the instability of remote transmission caused by factors such as network routing, ensuring normal operation of the inference service and improving its reliability.
In one embodiment, the model processing method shown in fig. 2 may be applied to a model processing apparatus including a splicing module. The model processing apparatus may be, for example, a cloud server; an obtaining module in the model processing apparatus obtains the plurality of sub-models, and the splicing module splices the plurality of sub-models to obtain the target model.
In another embodiment, the model processing method shown in fig. 2 may also be applied to a model processing system. Fig. 4 is a schematic diagram of a model processing system according to an exemplary embodiment. As shown in fig. 4, the model processing system may include a model optimization platform 401 and a model storage platform 402, and may further include a model training platform 403, a model meta-information storage platform 404, model training participant 1, and model training participant 2.
The model training platform 403 is used to train each sub-model, for example the sub-model A and the sub-model B, and to send the plurality of sub-models to the model optimization platform 401. The model meta-information storage platform 404 is used to store meta-information related to the models. The model optimization platform 401 is used to obtain the plurality of sub-models sent by the model training platform 403, splice them to obtain the target model, and send the target model to the model storage platform 402. The inference service executor may send a model acquisition request for the target model to the model storage platform 402, and the model storage platform 402 may send the target model to the inference service executor upon receiving the request. The inference service executor may be, for example, one of model training participant 1 and model training participant 2. Fig. 4 illustrates two model training participants and is not to be construed as a limitation on the disclosed embodiments.
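As a rough illustration of the fig. 4 message flow, the sketch below models the two platforms as Python classes; every class and method name here is invented for exposition and is not part of the patent:

```python
class ModelStoragePlatform:
    """Stores the spliced target model and serves it on request."""
    def __init__(self):
        self._models = {}

    def store(self, name, model):
        self._models[name] = model

    def handle_acquisition_request(self, name):
        # On a model acquisition request from the inference service
        # executor, return the whole spliced target model.
        return self._models[name]

class ModelOptimizationPlatform:
    """Obtains sub-models, splices them, and forwards the result."""
    def __init__(self, storage):
        self._storage = storage

    def process(self, sub_models):
        # Placeholder splice; a meta-information-driven splice is
        # sketched below, after the description of the splicing step.
        target_model = {"spliced_from": list(sub_models)}
        self._storage.store("target_model", target_model)

storage = ModelStoragePlatform()
ModelOptimizationPlatform(storage).process(["sub_model_a", "sub_model_b"])
print(storage.handle_acquisition_request("target_model"))
```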
In the present disclosure, splicing the plurality of sub-models in S202 may include: obtaining model meta-information, where the model meta-information may include connection relationship information between the sending node of a sub-model that has a sending node and the receiving nodes of the other sub-models that have a connection relationship with that sending node; and, according to the model meta-information, connecting the computing node connected to the sending node with the computing nodes of the other sub-models connected to the receiving nodes, so as to splice the plurality of sub-models.
Model meta-information refers to information describing a model, which may include connection relationship information between nodes. When the model processing method provided by the present disclosure is applied to the model processing system shown in fig. 4, the model optimization platform 401 may obtain the model meta-information from the model meta-information storage platform 404 and, according to it, connect the computing node of the sub-model connected to the sending node with the computing node of the other sub-model connected to the receiving node. Data can then be transmitted directly between two computing nodes that originally had to forward data through a sending node and a receiving node. When the inference service executor performs the inference service through the target model, the whole inference service process can be completed locally at the inference service executor without remote communication of data, which effectively avoids the instability of remote transmission caused by factors such as network routing, ensures normal operation of the inference service, and improves its reliability.
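A minimal sketch of the meta-information-driven splice, under assumptions: the edge-list graph representation and the (sending node, receiving node) pair format below are hypothetical stand-ins for whatever graph format the platforms actually use:

```python
def splice(edges, meta_info):
    """edges: (src, dst) pairs across all sub-models.
    meta_info: (sending_node, receiving_node) pairs to eliminate."""
    for send_node, recv_node in meta_info:
        # Computing nodes that feed the sending node in one sub-model.
        upstream = [s for s, d in edges if d == send_node]
        # Computing nodes fed by the receiving node in the other sub-model.
        downstream = [d for s, d in edges if s == recv_node]
        # Drop every edge touching the send/receive pair...
        edges = [(s, d) for s, d in edges
                 if send_node not in (s, d) and recv_node not in (s, d)]
        # ...and connect the computing nodes to each other directly.
        edges += [(u, v) for u in upstream for v in downstream]
    return edges

# Sub-model A: compute_a -> send ; sub-model B: recv -> compute_b
edges = [("compute_a", "send"), ("recv", "compute_b")]
print(splice(edges, [("send", "recv")]))  # [('compute_a', 'compute_b')]
```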
Several exemplary embodiments of determining the inference service executor and obtaining the inference result through the target model in the present disclosure are described below.
In an optional embodiment, the plurality of sub-models correspond one to one to a plurality of model training participants, each model training participant has its own model input data, and the inference service executor may be one of the plurality of model training participants; the inference service executor may obtain the inference result through the target model according to its own model input data.
In this embodiment, the inference service executor may be any one of the plurality of model training participants. Fig. 5 is a schematic diagram illustrating an inference service executor obtaining an inference result through the target model according to its own model input data, according to an exemplary embodiment; fig. 5 takes model training participant 1 as the inference service executor. As shown in fig. 5, after model training participant 1 acquires the target model, it can obtain the inference result through the target model according to its own model input data X, Y, Z.
For example, the inference service executor may be the one of the plurality of model training participants that needs the inference result, i.e., the inference result demander: if model training participant 1 needs the final inference result, model training participant 1 may serve as the inference service executor, acquire the target model, and perform the inference service. Alternatively, if the inference service executor is not the inference result demander, it may send the inference result to the inference result demander; for example, if model training participant 2 needs the inference result, model training participant 1 may send the inference result to model training participant 2.
With this scheme, the other model training participants need not transmit their model input data to the inference service executor; the inference service executor performs the inference service according to its own data, so the communication overhead is low. Moreover, the inference service executor completes the inference service without any remote communication of data among the model training participants, improving the stability of the inference service while reducing the communication overhead.
In another optional embodiment, the plurality of sub-models correspond one to one to a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is one of the plurality of model training participants. The inference service executor may obtain the inference result as follows:
receiving encrypted model input data sent by the model training participants other than the inference service executor; and obtaining the inference result through the target model according to the inference service executor's own model input data and the encrypted model input data of the other model training participants.
To protect data privacy and ensure data security, the other model training participants may encrypt their model input data before sending it to the inference service executor; the present disclosure does not specifically limit the encryption mode. The inference service executor can then obtain the inference result through the target model according to its own model input data and the encrypted model input data of the other model training participants, i.e., perform the inference service according to the model input data of every sub-model.
Optionally, the inference service executor may be the model training participant that needs to receive the smallest amount of encrypted model input data from the other model training participants. For example, if the inference service is performed by model training participant 1, model training participant 2 needs to send the model input data M, N of sub-model B to model training participant 1; if it is performed by model training participant 2, model training participant 1 needs to send the model input data X, Y, Z of sub-model A to model training participant 2. If the data volume of M, N is less than that of X, Y, Z, model training participant 1 needs to receive the least model input data and can serve as the inference service executor.
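The selection rule above reduces to an argmin over received data volume. A tiny sketch, assuming (hypothetically) that each participant's input size is known in bytes:

```python
def choose_executor(input_sizes):
    """input_sizes: participant name -> size of its own model input data.
    Returns the participant who would receive the least data, i.e. the
    one owning the largest share of the inputs."""
    total = sum(input_sizes.values())
    # Data a participant must receive = everyone else's input data.
    received = {p: total - own for p, own in input_sizes.items()}
    return min(received, key=received.get)

# X, Y, Z (participant 1) outweigh M, N (participant 2), so participant 1
# receives the least and should act as the inference service executor.
print(choose_executor({"participant_1": 300, "participant_2": 20}))
```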
With this scheme, when the inference service executor performs the inference service according to the model input data of every sub-model, choosing the model training participant that needs to receive the smallest data volume as the inference service executor reduces communication overhead to a certain extent. And since the inference service executor performs the calculation through the target model, no remote communication of data among the model training participants is needed, which improves the stability of the inference service process.
In yet another optional embodiment, the plurality of sub-models correspond one to one to a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is not a model training participant. The inference service executor may obtain the inference result as follows:
receiving the encrypted model input data sent by each model training participant; and obtaining the inference result through the target model according to the encrypted model input data of each model training participant.
In this embodiment, the inference service executor may not be a model training participant; it may be, for example, a cloud server provided in advance, and each model training participant may send its own model input data to the inference service executor in encrypted form. After obtaining the target model, the inference service executor can obtain the inference result through the target model according to the encrypted model input data of each model training participant.
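Since the disclosure leaves the encryption mode open, the following sketch arbitrarily uses symmetric Fernet encryption from the third-party "cryptography" package, with each participant assumed to pre-share its key with the executor out of band; every name and the toy target model are hypothetical:

```python
import json
from cryptography.fernet import Fernet

# Each model training participant encrypts its own model input data.
keys = {"participant_1": Fernet.generate_key(),
        "participant_2": Fernet.generate_key()}
payloads = {
    "participant_1": Fernet(keys["participant_1"]).encrypt(
        json.dumps({"X": 1, "Y": 2, "Z": 3}).encode()),
    "participant_2": Fernet(keys["participant_2"]).encrypt(
        json.dumps({"M": 4, "N": 5}).encode()),
}

# The non-participant executor (e.g., a cloud server) decrypts each
# payload and feeds the combined inputs to the spliced target model.
inputs = {}
for participant, token in payloads.items():
    inputs.update(json.loads(Fernet(keys[participant]).decrypt(token)))

def target_model(X, Y, Z, M, N):  # placeholder for the spliced model
    return (X + Y + Z) * M + N

print(target_model(**inputs))
```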
With this scheme, the inference service executor obtains the inference result through the target model without remote communication of data, which effectively avoids the instability of remote transmission caused by factors such as network routing, ensures normal operation of the inference service, and improves the stability and reliability of the inference service.
Fig. 6 is a flowchart illustrating a model processing method according to an exemplary embodiment, applicable to an inference service executor, i.e., to the server device of an inference service executor. As shown in fig. 6, the method may include S501 and S502.
In S501, a model acquisition request for a target model is sent, where the target model is obtained by splicing a plurality of sub-models.
The inference service executor may send the model acquisition request to the model storage platform, or to a model processing apparatus including a splicing module; the present disclosure does not limit this.
In S502, the target model is received, and the inference result is obtained through the target model.
Through this technical scheme, the inference service executor can send a model acquisition request for the target model, where the target model is obtained by splicing a plurality of sub-models, and can obtain the inference result through the target model. Because the plurality of sub-models are spliced together into the target model, when the inference service executor obtains the inference result through the target model, the whole inference service process can be completed locally at the inference service executor without remote communication of data. This reduces communication overhead, effectively avoids the instability of remote transmission caused by factors such as network routing, ensures normal operation of the inference service, and improves its reliability.
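An executor-side sketch of S501 and S502, under assumptions not fixed by the disclosure: the model storage platform is reachable over HTTP at a hypothetical URL, and the target model artifact is (purely for illustration) a pickled Python callable:

```python
import pickle
import urllib.request

MODEL_URL = "http://model-storage.example/models/target_model"  # hypothetical

def fetch_and_infer(model_input, url=MODEL_URL):
    # S501: send a model acquisition request for the target model.
    with urllib.request.urlopen(url) as resp:
        model_bytes = resp.read()
    # S502: receive the target model and obtain the inference result
    # locally; no remote hop between sub-models is needed.
    target_model = pickle.loads(model_bytes)
    return target_model(model_input)
```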
Optionally, the plurality of sub-models correspond one to one to a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is one of the plurality of model training participants; in S502, obtaining the inference result through the target model may include: obtaining the inference result through the target model according to the inference service executor's own model input data.
Optionally, the plurality of sub-models correspond one to one to a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is one of the plurality of model training participants; in S502, obtaining the inference result through the target model may include: receiving encrypted model input data sent by the model training participants other than the inference service executor; and obtaining the inference result through the target model according to the inference service executor's own model input data and the encrypted model input data of the other model training participants.
Optionally, the inference service executor is the model training participant that needs to receive the smallest amount of encrypted model input data from the other model training participants.
Optionally, the plurality of sub-models correspond one to one to a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is not a model training participant; in S502, obtaining the inference result through the target model may include: receiving the encrypted model input data sent by each model training participant; and obtaining the inference result through the target model according to the encrypted model input data of each model training participant.
As for the method applied to the inference service executor in the above embodiments, the specific manner in which each step is performed has been described in detail in the embodiments of the method applied to the model processing system or to the model processing apparatus including the splicing module, and will not be elaborated here.
The present disclosure also provides a model processing system, such as the model processing system shown in fig. 4, which may include a model optimization platform and a model storage platform.
The model optimization platform is used to acquire a plurality of sub-models, splice the plurality of sub-models to obtain a target model, and send the target model to the model storage platform; the model storage platform is used to, upon receiving a model acquisition request for the target model sent by the inference service executor, send the target model to the inference service executor so that the inference service executor obtains an inference result through the target model.
Optionally, the model optimization platform is configured to obtain model meta-information, where the model meta-information includes connection relationship information between the sending node of a sub-model having a sending node and the receiving nodes of other sub-models having a connection relationship with that sending node; and to connect, according to the model meta-information, the computing nodes of the sub-model connected to the sending node with the computing nodes of the other sub-models connected to the receiving nodes, so as to splice the plurality of sub-models.
With regard to the system in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 7 is a block diagram illustrating a model processing apparatus according to an exemplary embodiment. As shown in fig. 7, the model processing apparatus 600 may include:
an obtaining module 601 configured to obtain a plurality of sub-models;
a splicing module 602 configured to splice the plurality of sub-models to obtain a target model; and
a target model sending module 603 configured to, upon receiving a model acquisition request for the target model sent by an inference service executor, send the target model to the inference service executor so that the inference service executor obtains an inference result through the target model.
Optionally, the splicing module 602 may include: an obtaining sub-module configured to obtain model meta-information, the model meta-information including connection relationship information between the sending node of a sub-model having a sending node and the receiving nodes of other sub-models having a connection relationship with that sending node; and a splicing sub-module configured to connect, according to the model meta-information, the computing nodes of the sub-model connected to the sending node with the computing nodes of the other sub-models connected to the receiving nodes, so as to splice the plurality of sub-models.
Optionally, the plurality of sub-models correspond one to one to a plurality of model training participants, each model training participant has its own model input data, the inference service executor is one of the plurality of model training participants, and the inference service executor obtains the inference result through the target model according to its own model input data.
Optionally, the plurality of sub-models correspond one to one to a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is one of the plurality of model training participants; the inference service executor obtains the inference result by: receiving encrypted model input data sent by the model training participants other than the inference service executor; and obtaining the inference result through the target model according to the inference service executor's own model input data and the encrypted model input data of the other model training participants.
Optionally, the inference service executor is the model training participant that needs to receive the smallest amount of encrypted model input data from the other model training participants.
Optionally, the plurality of sub-models correspond one to one to a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is not a model training participant; the inference service executor obtains the inference result by: receiving the encrypted model input data sent by each model training participant; and obtaining the inference result through the target model according to the encrypted model input data of each model training participant.
Fig. 8 is a block diagram illustrating a model processing apparatus 700 applicable to an inference service executor, according to an exemplary embodiment. As shown in fig. 8, the model processing apparatus 700 may include:
an acquisition request sending module 701 configured to send a model acquisition request for a target model, where the target model is obtained by splicing a plurality of sub-models; and
an inference module 702 configured to receive the target model and obtain the inference result through the target model.
Optionally, the plurality of sub-models correspond one to one to a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is one of the plurality of model training participants; the inference module 702 may include: a first inference sub-module configured to obtain the inference result through the target model according to the inference service executor's own model input data.
Optionally, the plurality of sub-models correspond one to one to a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is one of the plurality of model training participants; the inference module 702 may include: a first receiving sub-module configured to receive the encrypted model input data sent by the model training participants other than the inference service executor; and a second inference sub-module configured to obtain the inference result through the target model according to the inference service executor's own model input data and the encrypted model input data of the other model training participants.
Optionally, the inference service executor is the model training participant that needs to receive the smallest amount of encrypted model input data from the other model training participants.
Optionally, the plurality of sub-models correspond one to one to a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is not a model training participant; the inference module 702 may include: a second receiving sub-module configured to receive the encrypted model input data sent by each model training participant; and a third inference sub-module configured to obtain the inference result through the target model according to the encrypted model input data of each model training participant.
Referring now to FIG. 9, shown is a schematic diagram of an electronic device 800 suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 9 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 9, the electronic device 800 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 801 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage means 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the electronic device 800 are also stored. The processing apparatus 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
Generally, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage 808 including, for example, magnetic tape, hard disk, etc.; and a communication device 809. The communication means 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. While fig. 9 illustrates an electronic device 800 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 809, or installed from the storage means 808, or installed from the ROM 802. The computer program, when executed by the processing apparatus 801, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a plurality of sub-models; splice the plurality of sub-models to obtain a target model; and, upon receiving a model acquisition request for the target model sent by an inference service executor, send the target model to the inference service executor so that the inference service executor obtains an inference result through the target model.
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: send a model acquisition request for a target model, wherein the target model is obtained by splicing a plurality of sub-models; and receive the target model and obtain an inference result through the target model.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a module does not in some cases constitute a limitation of the module itself; for example, the splicing module may also be described as a "sub-model splicing module".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Example 1 provides, in accordance with one or more embodiments of the present disclosure, a model processing method, the method comprising: acquiring a plurality of sub-models; splicing the plurality of sub-models to obtain a target model; and, upon receiving a model acquisition request for the target model sent by an inference service executor, sending the target model to the inference service executor so that the inference service executor obtains an inference result through the target model.
Example 2 provides the method of example 1, wherein splicing the plurality of sub-models comprises: obtaining model meta-information, wherein the model meta-information comprises connection relationship information between the sending node of a sub-model having a sending node and the receiving nodes of other sub-models having a connection relationship with that sending node; and connecting, according to the model meta-information, the computing nodes of the sub-model connected to the sending node with the computing nodes of the other sub-models connected to the receiving nodes, so as to splice the plurality of sub-models.
According to one or more embodiments of the present disclosure, example 3 provides the method of example 1, where a plurality of submodels correspond to a plurality of model training participants one to one, each of the model training participants has its own model input data, the inference service executor is one of the plurality of model training participants, and the inference service executor obtains the inference result through the target model according to its own model input data.
Example 4 provides the method of example 1, the plurality of submodels corresponding one-to-one to a plurality of model training participants, each of the model training participants having its own model input data, the inference service executor being one of the plurality of model training participants; the inference service executor obtains the inference result by the following method: receiving encrypted model input data sent by other model training participants except the inference service executor; and obtaining the inference result through the target model according to the model input data of the inference service executive party and the encrypted model input data of the other model training participants.
According to one or more embodiments of the present disclosure, example 5 provides the method of example 4, wherein the inference service executor is the model training participant that needs to receive the smallest amount of encrypted model input data from the other model training participants.
Example 6 provides the method of example 1, the plurality of submodels corresponding one-to-one to a plurality of model training participants, each of the model training participants having its own model input data, the inference service executor not being the model training participant; the inference service executor obtains the inference result by the following method: respectively receiving encrypted model input data sent by each model training participant; and obtaining the reasoning result through the target model according to the encrypted model input data of each model training participant.
Example 7 provides, according to one or more embodiments of the present disclosure, a model processing method, the method comprising: sending a model acquisition request for a target model, wherein the target model is obtained by splicing a plurality of sub-models; and receiving the target model and obtaining an inference result through the target model.
Example 8 provides the method of example 7, the plurality of submodels corresponding one-to-one to a plurality of model training participants, each of the model training participants having its own model input data, the inference service executor being one of the plurality of model training participants; the obtaining of the inference result through the target model includes: and obtaining the inference result through the target model according to the model input data of the inference service executive party.
Example 9 provides the method of example 7, the plurality of submodels corresponding one-to-one to a plurality of model training participants, each of the model training participants having its own model input data, the inference service executor being one of the plurality of model training participants; the obtaining of the inference result through the target model includes: receiving encrypted model input data sent by other model training participants except the inference service executor; and obtaining the inference result through the target model according to the model input data of the inference service executive party and the encrypted model input data of the other model training participants.
Example 10 provides the method of example 9, where the inference service executor is the model training participant that would need to receive the smallest amount of encrypted model input data from the other model training participants, in accordance with one or more embodiments of the present disclosure.
Example 11 provides the method of example 7, the plurality of submodels corresponding one-to-one to a plurality of model training participants, each of the model training participants having its own model input data, and the inference service executor not being a model training participant. The obtaining of the inference result through the target model includes: receiving the encrypted model input data sent by each model training participant; and obtaining the inference result through the target model according to the encrypted model input data of each model training participant.
Example 12 provides a model processing system including a model optimization platform and a model storage platform, according to one or more embodiments of the present disclosure. The model optimization platform is configured to acquire a plurality of submodels, splice the plurality of submodels to obtain a target model, and send the target model to the model storage platform. The model storage platform is configured to, upon receiving a model acquisition request for the target model sent by an inference service executor, send the target model to the inference service executor, so that the inference service executor obtains an inference result through the target model.
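A sketch of the example 12 division of labor, with in-memory stand-ins for both platforms; `splice` is the graph-level operation sketched after example 13 below, and all class and method names are illustrative assumptions:

    class ModelStoragePlatform:
        # Holds spliced target models and serves them on request.
        def __init__(self):
            self._models = {}

        def put(self, name, model):
            self._models[name] = model

        def handle_acquisition_request(self, name):
            # On a model acquisition request from the inference service
            # executor, hand back the whole target model; inference then
            # runs entirely locally at the executor.
            return self._models[name]

    class ModelOptimizationPlatform:
        # Acquires the submodels, splices them, and pushes the result on.
        def __init__(self, storage):
            self._storage = storage

        def build_and_publish(self, name, submodels, meta_info, splice):
            self._storage.put(name, splice(submodels, meta_info))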
Example 13 provides the system of example 12, the model optimization platform being configured to obtain model meta-information, where the model meta-information includes connection relationship information between the sending node of any submodel that has a sending node and the receiving nodes of the other submodels that have a connection relationship with that sending node; and the model optimization platform is configured to connect, according to the model meta-information, the computation node of the submodel that is connected to the sending node with the computation nodes of the other submodels that are connected to the receiving nodes, so as to splice the plurality of submodels.
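A graph-level sketch of the example 13 splicing. Each submodel is represented as a map from node name to its list of input node names, and `meta_info` is a list of (sending node, receiving node) pairs written with submodel-prefixed names; these representations are assumptions, since the disclosure speaks of computation graphs but fixes no format:

    def splice(submodels, meta_info):
        # Merge every submodel into one graph, prefixing node names with
        # the submodel tag to avoid collisions.
        graph = {}
        for tag, nodes in submodels.items():
            for name, inputs in nodes.items():
                graph[f"{tag}/{name}"] = [f"{tag}/{i}" for i in inputs]

        for send_node, recv_node in meta_info:
            # The computation node that fed the sending node (assumed to
            # have exactly one input) ...
            upstream = graph[send_node][0]
            # ... replaces the receiving node in every consumer, so the
            # send/receive pair drops out of the spliced target model.
            for name, inputs in graph.items():
                graph[name] = [upstream if i == recv_node else i for i in inputs]
            del graph[send_node], graph[recv_node]
        return graph

    # Illustrative: submodel A is x -> dense -> send, submodel B is recv -> head.
    # splice({"A": {"x": [], "dense": ["x"], "send": ["dense"]},
    #         "B": {"recv": [], "head": ["recv"]}},
    #        [("A/send", "B/recv")])
    # returns {"A/x": [], "A/dense": ["A/x"], "B/head": ["A/dense"]}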
Example 14 provides, in accordance with one or more embodiments of the present disclosure, a model processing apparatus, the apparatus including: an obtaining module configured to obtain a plurality of submodels; a splicing module configured to splice the plurality of submodels to obtain a target model; and a target model sending module configured to, upon receiving a model acquisition request for the target model sent by an inference service executor, send the target model to the inference service executor, so that the inference service executor obtains an inference result through the target model.
Example 15 provides, in accordance with one or more embodiments of the present disclosure, a model processing apparatus, the apparatus including: an acquisition request sending module configured to send a model acquisition request for a target model, where the target model is obtained by splicing a plurality of submodels; and an inference module configured to receive the target model and obtain an inference result through the target model.
Example 16 provides a computer readable medium having stored thereon a computer program that, when executed by a processing apparatus, implements the steps of the method of any of examples 1-6, in accordance with one or more embodiments of the present disclosure.
Example 17 provides a computer readable medium having stored thereon a computer program that, when executed by a processing apparatus, implements the steps of the method of any of examples 7-11, in accordance with one or more embodiments of the present disclosure.
Example 18 provides, in accordance with one or more embodiments of the present disclosure, an electronic device, comprising: a storage device having a computer program stored thereon; and a processing apparatus for executing the computer program in the storage device to carry out the steps of the method of any of examples 1-6.
Example 19 provides, in accordance with one or more embodiments of the present disclosure, an electronic device, comprising: a storage device having a computer program stored thereon; and a processing apparatus for executing the computer program in the storage device to carry out the steps of the method of any of examples 7-11.
The foregoing description is merely an illustration of the preferred embodiments of the present disclosure and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by interchanging the above features with (but not limited to) features with similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims. With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.

Claims (19)

1. A method of model processing, the method comprising:
acquiring a plurality of submodels;
splicing the plurality of submodels to obtain a target model;
and, upon receiving a model acquisition request for the target model sent by an inference service executor, sending the target model to the inference service executor, so that the inference service executor obtains an inference result through the target model.
2. The method of claim 1, wherein said splicing the plurality of submodels comprises:
obtaining model meta-information, wherein the model meta-information comprises connection relationship information between the sending node of a submodel that has a sending node and the receiving nodes of the other submodels that have a connection relationship with the sending node;
and connecting, according to the model meta-information, the computation node of the submodel that is connected to the sending node with the computation nodes of the other submodels that are connected to the receiving nodes, so as to splice the plurality of submodels.
3. The method of claim 1, wherein a plurality of submodels correspond one-to-one to a plurality of model training participants, each model training participant has its own model input data, the inference service executor is one of the plurality of model training participants, and the inference service executor obtains the inference result through the target model according to its own model input data.
4. The method of claim 1, wherein a plurality of submodels correspond one-to-one to a plurality of model training participants, each of the model training participants having its own model input data, the inference service executor being one of the plurality of model training participants;
the inference service executor obtains the inference result as follows:
receiving encrypted model input data sent by the model training participants other than the inference service executor;
and obtaining the inference result through the target model according to the model input data of the inference service executor and the encrypted model input data of the other model training participants.
5. The method of claim 4, wherein the inference service executor is the model training participant that would need to receive the smallest amount of encrypted model input data from the other model training participants.
6. The method of claim 1, wherein a plurality of submodels correspond one-to-one to a plurality of model training participants, each of the model training participants having its own model input data, and the inference service executor not being any of the model training participants;
the inference service executor obtains the inference result as follows:
receiving the encrypted model input data sent by each model training participant, respectively;
and obtaining the inference result through the target model according to the encrypted model input data of each model training participant.
7. A method of model processing, the method comprising:
sending a model acquisition request for a target model, wherein the target model is obtained by splicing a plurality of submodels;
and receiving the target model, and obtaining an inference result through the target model.
8. The method of claim 7, wherein a plurality of submodels correspond one-to-one to a plurality of model training participants, each of the model training participants having its own model input data, the inference service executor being one of the plurality of model training participants;
the obtaining of the inference result through the target model includes:
and obtaining the inference result through the target model according to the model input data of the inference service executor.
9. The method of claim 7, wherein a plurality of submodels correspond one-to-one to a plurality of model training participants, each of the model training participants having its own model input data, the inference service executor being one of the plurality of model training participants;
the obtaining of the inference result through the target model includes:
receiving encrypted model input data sent by the model training participants other than the inference service executor;
and obtaining the inference result through the target model according to the model input data of the inference service executor and the encrypted model input data of the other model training participants.
10. The method of claim 9, wherein the inference service executor is the model training participant that would need to receive the smallest amount of encrypted model input data from the other model training participants.
11. The method of claim 7, wherein a plurality of submodels correspond one-to-one to a plurality of model training participants, each of the model training participants has its own model input data, and the inference service executor is not any of the model training participants;
the obtaining of the inference result through the target model includes:
receiving the encrypted model input data sent by each model training participant, respectively;
and obtaining the inference result through the target model according to the encrypted model input data of each model training participant.
12. A model processing system is characterized by comprising a model optimization platform and a model storage platform;
the model optimization platform is configured to acquire a plurality of submodels, splice the plurality of submodels to obtain a target model, and send the target model to the model storage platform;
and the model storage platform is configured to, upon receiving a model acquisition request for the target model sent by the inference service executor, send the target model to the inference service executor, so that the inference service executor obtains an inference result through the target model.
13. The system of claim 12, wherein the model optimization platform is configured to obtain model meta-information, the model meta-information comprising connection relationship information between the sending node of a submodel that has a sending node and the receiving nodes of the other submodels that have a connection relationship with the sending node;
and the model optimization platform is configured to connect, according to the model meta-information, the computation node of the submodel that is connected to the sending node with the computation nodes of the other submodels that are connected to the receiving nodes, so as to splice the plurality of submodels.
14. A model processing apparatus, characterized in that the apparatus comprises:
an obtaining module configured to obtain a plurality of submodels;
a splicing module configured to splice the plurality of submodels to obtain a target model;
and a target model sending module configured to, upon receiving a model acquisition request for the target model sent by an inference service executor, send the target model to the inference service executor, so that the inference service executor obtains an inference result through the target model.
15. A model processing apparatus, characterized in that the apparatus comprises:
an acquisition request sending module configured to send a model acquisition request for a target model, wherein the target model is obtained by splicing a plurality of submodels;
and an inference module configured to receive the target model and obtain an inference result through the target model.
16. A computer-readable medium having stored thereon a computer program, wherein the program, when executed by a processing apparatus, implements the steps of the method of any one of claims 1 to 6.
17. A computer-readable medium having stored thereon a computer program, wherein the program, when executed by a processing apparatus, implements the steps of the method of any one of claims 7 to 11.
18. An electronic device, comprising:
a storage device having a computer program stored thereon;
processing means for executing the computer program in the storage device to carry out the steps of the method according to any one of claims 1 to 6.
19. An electronic device, comprising:
a storage device having a computer program stored thereon;
processing means for executing the computer program in the storage device to carry out the steps of the method according to any one of claims 7 to 11.
CN202011298789.5A 2020-11-18 2020-11-18 Model processing method, system, device, medium and electronic equipment Active CN112418446B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011298789.5A CN112418446B (en) 2020-11-18 2020-11-18 Model processing method, system, device, medium and electronic equipment
PCT/SG2021/050707 WO2022108527A1 (en) 2020-11-18 2021-11-16 Model processing method, system and apparatus, medium, and electronic device

Publications (2)

Publication Number Publication Date
CN112418446A (en) 2021-02-26
CN112418446B CN112418446B (en) 2024-04-09

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115374944B (en) * 2022-10-26 2023-04-18 小米汽车科技有限公司 Model reasoning method and device, electronic equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107871160A * 2016-09-26 2018-04-03 谷歌公司 Communication-efficient federated learning
CN110633805A (en) * 2019-09-26 2019-12-31 深圳前海微众银行股份有限公司 Longitudinal federated learning system optimization method, device, equipment and readable storage medium
CN111461874A (en) * 2020-04-13 2020-07-28 浙江大学 Credit risk control system and method based on federal mode
CN111460511A (en) * 2020-04-17 2020-07-28 支付宝(杭州)信息技术有限公司 Federal learning and virtual object distribution method and device based on privacy protection
CN111753996A (en) * 2020-06-24 2020-10-09 中国建设银行股份有限公司 Optimization method, device, equipment and storage medium of scheme determination model
CN111797999A (en) * 2020-07-10 2020-10-20 深圳前海微众银行股份有限公司 Longitudinal federal modeling optimization method, device, equipment and readable storage medium
CN111899076A (en) * 2020-08-12 2020-11-06 科技谷(厦门)信息技术有限公司 Aviation service customization system and method based on federal learning technology platform
CN111898769A (en) * 2020-08-17 2020-11-06 中国银行股份有限公司 Method and system for establishing user behavior period model based on horizontal federal learning

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112346870A (en) * 2020-11-18 2021-02-09 脸萌有限公司 Model processing method and system
CN112346870B (en) * 2020-11-18 2024-04-16 脸萌有限公司 Model processing method and system
CN112966825A (en) * 2021-04-13 2021-06-15 杭州欣禾圣世科技有限公司 Multi-model fusion parallel reasoning method, device and system based on python

Also Published As

Publication number Publication date
CN112418446B (en) 2024-04-09
WO2022108527A1 (en) 2022-05-27

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant