CN112418446B - Model processing method, system, device, medium and electronic equipment

Info

Publication number
CN112418446B
CN112418446B (application CN202011298789.5A)
Authority
CN
China
Prior art keywords
model, sub-models, reasoning, target
Prior art date
Legal status (an assumption, not a legal conclusion)
Active
Application number
CN202011298789.5A
Other languages
Chinese (zh)
Other versions
CN112418446A (en)
Inventor
陈程
周子凯
余乐乐
解浚源
吴良超
常龙
张力哲
刘小兵
吴迪
Current Assignee (the listed assignee may be inaccurate)
Lemon Inc Cayman Island
Original Assignee
Lemon Inc Cayman Island
Priority date (an assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Lemon Inc Cayman Island
Priority to CN202011298789.5A
Publication of CN112418446A
Priority to PCT/SG2021/050707 (WO2022108527A1)
Application granted
Publication of CN112418446B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G06N 20/20: Ensemble learning
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60: Protecting data
    • G06F 21/602: Providing cryptographic facilities or services
    • G06N 5/00: Computing arrangements using knowledge-based models
    • G06N 5/04: Inference or reasoning models

Abstract

The present disclosure relates to a model processing method, system, device, medium and electronic equipment. The method comprises: acquiring a plurality of sub-models; splicing the plurality of sub-models to obtain a target model; and, upon receiving a model acquisition request for the target model sent by an inference service executor, sending the target model to the inference service executor so that the inference service executor obtains an inference result through the target model. With this technical scheme, the inference service executor can obtain the inference result directly through the whole target model, and the entire inference service process can be completed locally by the inference service executor. No remote communication is needed among the multiple model training participants to transmit data, which reduces communication overhead, effectively avoids the instability of remote transmission caused by factors such as network routing, ensures normal operation of the inference service, and improves the reliability of the inference service.

Description

Model processing method, system, device, medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technology, and in particular to a model processing method, system, device, medium and electronic equipment.
Background
Federated machine learning, also called federated learning or joint learning, is widely applied in the field of machine learning. Federated learning can solve the problems of data silos and data privacy, effectively helping multiple institutions jointly train a model through a federated learning system while satisfying the requirements of user privacy protection and data security. A federated learning model is generally composed of a plurality of sub-models, and when an inference service is performed through the federated learning model, how to guarantee the reliability of the inference service is an important issue.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, the present disclosure provides a model processing method, the method comprising: acquiring a plurality of sub-models; splicing the plurality of sub-models to obtain a target model; and, upon receiving a model acquisition request for the target model sent by an inference service executor, sending the target model to the inference service executor so that the inference service executor obtains an inference result through the target model.
In a second aspect, the present disclosure provides a model processing method, the method comprising: sending a model acquisition request for a target model, wherein the target model is obtained by splicing a plurality of sub-models; and receiving the target model and obtaining an inference result through the target model.
In a third aspect, the present disclosure provides a model processing system comprising a model optimization platform and a model storage platform. The model optimization platform is configured to acquire a plurality of sub-models, splice the plurality of sub-models to obtain a target model, and send the target model to the model storage platform. The model storage platform is configured to, upon receiving a model acquisition request for the target model sent by an inference service executor, send the target model to the inference service executor so that the inference service executor obtains an inference result through the target model.
In a fourth aspect, the present disclosure provides a model processing apparatus, the apparatus comprising: an acquisition module configured to acquire a plurality of sub-models; a splicing module configured to splice the plurality of sub-models to obtain a target model; and a target model sending module configured to, upon receiving a model acquisition request for the target model sent by an inference service executor, send the target model to the inference service executor so that the inference service executor obtains an inference result through the target model.
In a fifth aspect, the present disclosure provides a model processing apparatus, the apparatus comprising: an acquisition request sending module configured to send a model acquisition request for a target model, wherein the target model is obtained by splicing a plurality of sub-models; and an inference module configured to receive the target model and obtain an inference result through the target model.
In a sixth aspect, the present disclosure provides a computer readable medium having stored thereon a computer program which, when executed by a processing device, implements the steps of the method provided by the first aspect of the present disclosure.
In a seventh aspect, the present disclosure provides a computer readable medium having stored thereon a computer program which, when executed by a processing device, implements the steps of the method provided by the second aspect of the present disclosure.
In an eighth aspect, the present disclosure provides an electronic device, comprising: a storage device having a computer program stored thereon; and a processing device for executing the computer program in the storage device to carry out the steps of the method provided by the first aspect of the present disclosure.
In a ninth aspect, the present disclosure provides an electronic device, comprising: a storage device having a computer program stored thereon; and a processing device for executing the computer program in the storage device to carry out the steps of the method provided by the second aspect of the present disclosure.
With the above technical scheme, the plurality of sub-models are spliced to obtain the target model, and an inference service executor can obtain an inference result through the target model. Because the target model is obtained by splicing the plurality of sub-models, the inference service executor can obtain the inference result directly through the whole target model; each model training participant is no longer required to load its own sub-model, and the entire inference service process can be completed locally by the inference service executor. No remote communication is needed among the multiple model training participants to transmit data, which reduces communication overhead, effectively avoids the instability of remote transmission caused by factors such as network routing, ensures normal operation of the inference service, and improves the reliability of the inference service.
Additional features and advantages of the present disclosure will be set forth in the detailed description which follows.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale. In the drawings:
FIG. 1 is a schematic diagram of a federal learning model in the related art.
FIG. 2 is a flow chart illustrating a method of model processing according to an exemplary embodiment.
FIG. 3 is a schematic diagram of a target model, according to an example embodiment.
FIG. 4 is a schematic diagram of a model processing system, according to an example embodiment.
Fig. 5 is a schematic diagram showing an inference service executor obtaining an inference result through a target model according to its own model input data, according to an exemplary embodiment.
FIG. 6 is a flow chart illustrating a method of model processing according to an exemplary embodiment.
Fig. 7 is a block diagram of a model processing apparatus according to an exemplary embodiment.
Fig. 8 is a block diagram of a model processing apparatus according to an exemplary embodiment.
Fig. 9 is a schematic diagram of an electronic device according to an exemplary embodiment.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" and "a plurality" in this disclosure are intended to be illustrative rather than limiting; those of ordinary skill in the art will appreciate that they are to be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
A federated learning system can combine the data of multiple data owners to train a common federated learning model. Because the federated learning model is trained with data integrated from multiple data owners, the training data is more comprehensive and the accuracy of the model is higher. The federated learning model is generally composed of a plurality of sub-models. Fig. 1 is a schematic diagram of a federated learning model in the related art. As shown in fig. 1, the federated learning model includes a sub-model A and a sub-model B. For example, the sub-model A corresponds to model training participant 1, and the model input data X, Y, Z of the sub-model A are data owned by model training participant 1; the sub-model B corresponds to model training participant 2, and the model input data M, N of the sub-model B are data owned by model training participant 2.
When an inference service is performed through the federated learning model, each model training participant loads its own sub-model: model training participant 1 loads the sub-model A, and model training participant 2 loads the sub-model B. As shown in fig. 1, model training participant 1 performs calculation through the sub-model A according to the model input data X, Y, Z, and then needs to remotely send data through the sending node of the sub-model A to the receiving node of the sub-model B; model training participant 2 then obtains an inference result through the sub-model B according to the data received by the receiving node and the model input data M, N. Therefore, when an inference service is performed through the federated learning model, multiple model training participants must communicate remotely to complete the whole inference service, that is, the sending node and the receiving node transmit data by remote communication, and the communication cost is high. Moreover, remote communication is susceptible to network routing and other factors, and is often not stable or reliable enough, so the calculation process of the inference service is not stable enough. For example, if a sending node cannot transmit data to a receiving node in time because of network congestion, the progress of the entire inference service is affected. The present disclosure provides a model processing method, system, device, medium and electronic equipment to solve these problems in the related art.
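For concreteness, the related-art flow just described can be sketched as follows. The sub-model functions, their placeholder arithmetic, and the remote_send helper are all illustrative assumptions, not taken from the patent:

```python
# Minimal sketch of the related-art flow in Fig. 1 (illustrative only:
# the real sub-models are trained model graphs, not simple functions,
# and the numbers below are placeholders).

def sub_model_a(x, y, z):
    # Participant 1's sub-model computes an intermediate value from its
    # own data; the value then leaves through the sending node.
    return 0.5 * x + 0.3 * y + 0.2 * z


def sub_model_b(received, m, n):
    # Participant 2's sub-model combines the value arriving at its
    # receiving node with participant 2's own data.
    return received + 0.7 * m + 0.3 * n


def remote_send(value):
    # Stand-in for the sending-node to receiving-node network hop; in the
    # related art this crosses the network and may stall or fail.
    return value


intermediate = sub_model_a(x=1.0, y=2.0, z=3.0)   # participant 1, locally
received = remote_send(intermediate)              # remote communication
result = sub_model_b(received, m=4.0, n=5.0)      # participant 2, locally
```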
Fig. 2 is a flowchart illustrating a model processing method according to an exemplary embodiment. As shown in fig. 2, the method may include S201 to S203.
In S201, a plurality of sub-models are acquired.
In S202, the plurality of sub-models are spliced to obtain a target model.
Fig. 3 is a schematic diagram illustrating a target model according to an exemplary embodiment. The target model shown in fig. 3 may be obtained from the federated learning model shown in fig. 1, where the sending node of the sub-model A and the receiving node of the sub-model B have a connection relationship. As shown in fig. 3, the computing node of the sub-model A connected to the sending node may be connected directly to the computing node of the sub-model B connected to the receiving node, yielding the target model: a whole, complete model obtained by splicing the sub-model A and the sub-model B together. It should be noted that the present disclosure is illustrated with two sub-models, but the embodiments of the present disclosure are not limited thereto; in practical applications the number of sub-models may be larger, and the present disclosure does not limit it.
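Continuing the illustrative sketch above, the spliced target model of fig. 3 reduces to a local composition of the two sub-models; this is an assumption-laden toy, not the patent's implementation:

```python
# The spliced target model: the sending/receiving node pair disappears
# and sub-model A's computing node feeds sub-model B's computing node
# directly, all in one process.

def target_model(x, y, z, m, n):
    intermediate = sub_model_a(x, y, z)   # no network hop any more
    return sub_model_b(intermediate, m, n)


result = target_model(x=1.0, y=2.0, z=3.0, m=4.0, n=5.0)
```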
In S203, upon receiving a model acquisition request for the target model sent by an inference service executor, the target model is sent to the inference service executor, so that the inference service executor obtains an inference result through the target model.
An inference service may refer to a process in which a server computes a result through a model according to input data. For example, to predict a user's shopping intention, the user's current shopping intention can be inferred through a model according to the user's historical shopping behavior, and inference results matching that shopping intention and demand can then be provided to the user. Similarly, to predict a user's search intention, the user's current search intention can be inferred through a model according to the user's historical click behavior, and inference results matching that search intention can be provided to the user.
In an alternative embodiment, one of the model training participants may serve as the inference service executor, loading the target model and obtaining the inference result through it. Because the target model is obtained by splicing the plurality of sub-models, the inference service executor can obtain the inference result directly through the whole target model; each model training participant is no longer required to load its own sub-model, no data needs to be transmitted remotely among the model training participants, and the instability of remote communication is thus effectively avoided.
It should be noted that when this disclosure refers to the inference service executor sending, receiving, or processing data, it may be understood that the inference service executor performs these operations through its server device.
With the above technical scheme, the plurality of sub-models are spliced to obtain the target model, and an inference service executor can obtain an inference result through the target model. Because the target model is obtained by splicing the plurality of sub-models, the inference service executor can obtain the inference result directly through the whole target model; each model training participant is no longer required to load its own sub-model, and the entire inference service process can be completed locally by the inference service executor. No remote communication is needed among the multiple model training participants to transmit data, which reduces communication overhead, effectively avoids the instability of remote transmission caused by factors such as network routing, ensures normal operation of the inference service, and improves the reliability of the inference service.
In one embodiment, the model processing method shown in fig. 2 may be applied to a model processing apparatus including a splicing module; the model processing apparatus may be, for example, a cloud server. The acquisition module in the model processing apparatus acquires a plurality of sub-models, and the splicing module splices the plurality of sub-models to obtain the target model.
In another embodiment, the model processing method shown in fig. 2 may also be applied to a model processing system. Fig. 4 is a schematic diagram of a model processing system according to an exemplary embodiment; as shown in fig. 4, the system may include a model optimization platform 401 and a model storage platform 402, and may further include a model training platform 403, a model meta-information storage platform 404, model training participant 1, and model training participant 2.
The model training platform 403 is configured to train each sub-model, for example the sub-model A and the sub-model B; the model meta-information storage platform 404 may be configured to store meta-information related to the models. The model training platform 403 may send the plurality of sub-models to the model optimization platform 401, which is configured to acquire them, splice them to obtain the target model, and send the target model to the model storage platform 402. The inference service executor may send a model acquisition request for the target model to the model storage platform 402, and the model storage platform 402 may send the target model to the inference service executor upon receiving the request. The inference service executor may, for example, be one of model training participant 1 and model training participant 2. Fig. 4 illustrates an example with two model training participants and is not to be construed as limiting the embodiments of the present disclosure.
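As a rough illustration of this division of labor, the two platforms might be wired as below. Every class, method, and name here is hypothetical (the patent does not prescribe an API), and the splice helper is sketched in the next example:

```python
# Illustrative wiring of the platforms in Fig. 4; all names are hypothetical.

class ModelStoragePlatform:
    """Stores target models and serves them on acquisition requests."""

    def __init__(self):
        self._models = {}

    def save(self, name, model):
        self._models[name] = model

    def handle_acquisition_request(self, name):
        # Return the whole target model to the inference service executor.
        return self._models[name]


class ModelOptimizationPlatform:
    """Splices sub-models into a target model and stores the result."""

    def __init__(self, meta_info_store, model_store):
        self.meta_info_store = meta_info_store
        self.model_store = model_store

    def build_target_model(self, sub_models):
        meta_info = self.meta_info_store.load_meta_info()
        target = splice(sub_models, meta_info)  # see the splice sketch below
        self.model_store.save("target_model", target)
        return target
```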
In the present disclosure, splicing the plurality of sub-models in S202 may include: obtaining model meta-information, which may include connection relationship information between the sending node of a sub-model having a sending node and the receiving nodes of other sub-models having connection relationships with that sending node; and, according to the model meta-information, connecting the computing node of the sub-model connected to the sending node with the computing nodes of the other sub-models connected to the receiving nodes, so as to splice the plurality of sub-models.
Model meta-information refers to information describing a model and may include connection relationship information between nodes. When the model processing method provided in the present disclosure is applied to the model processing system shown in fig. 4, the model optimization platform 401 may obtain the model meta-information from the model meta-information storage platform 404 and, according to it, connect the computing node of the sub-model connected to the sending node with the computing nodes of the other sub-models connected to the receiving nodes. In this way, data can be transmitted directly between two computing nodes that originally had to forward data through a sending node and a receiving node. When the inference service executor performs the inference service through the target model, the whole inference service process can be completed locally, without remote communication of data; the instability of remote transmission caused by factors such as network routing is effectively avoided, normal operation of the inference service is ensured, and the reliability of the inference service is improved.
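One way to picture this splicing, under the assumption that sub-models are stored as plain node-edge lists (a representation the patent does not prescribe), is the following sketch:

```python
# Sketch of splicing driven by model meta-information. Sub-models are
# represented as plain edge lists; this representation is an assumption
# made only for illustration.

def splice(sub_models, meta_info):
    """Merge sub-model graphs, bypassing send/receive node pairs.

    meta_info is a list of (send_node, recv_node) pairs recording which
    sending node of one sub-model connects to which receiving node of
    another sub-model.
    """
    # Collect all edges of all sub-models into one graph.
    edges = [e for sm in sub_models for e in sm["edges"]]
    for send_node, recv_node in meta_info:
        # Find the compute node feeding the send node and the compute
        # node fed by the receive node ...
        upstream = [u for (u, v) in edges if v == send_node]
        downstream = [v for (u, v) in edges if u == recv_node]
        # ... drop the send/receive pair ...
        edges = [(u, v) for (u, v) in edges
                 if send_node not in (u, v) and recv_node not in (u, v)]
        # ... and connect the compute nodes directly.
        edges += [(u, v) for u in upstream for v in downstream]
    return {"edges": edges}


# Example: sub-model A's compute node a2 feeds send node s; sub-model B's
# receive node r feeds compute node b1. After splicing, a2 -> b1 directly.
sub_a = {"edges": [("a1", "a2"), ("a2", "s")]}
sub_b = {"edges": [("r", "b1"), ("b1", "out")]}
print(splice([sub_a, sub_b], meta_info=[("s", "r")]))
# {'edges': [('a1', 'a2'), ('b1', 'out'), ('a2', 'b1')]}
```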
Several exemplary embodiments of determining the inference service executor and obtaining the inference result through the target model are described below.
In an alternative embodiment, the plurality of sub-models are in one-to-one correspondence with a plurality of model training participants, and each model training participant has its own model input data. The inference service executor may be one of the model training participants and may obtain the inference result through the target model according to its own model input data.
In this embodiment, the inference service executor may be any one of the model training participants. Fig. 5 is a schematic diagram showing an inference service executor obtaining an inference result through the target model according to its own model input data, according to an exemplary embodiment; fig. 5 takes model training participant 1 as the inference service executor. As shown in fig. 5, after model training participant 1 acquires the target model, it may obtain the inference result through the target model according to its own model input data X, Y, Z.
For example, the inference service executor may be the model training participant that needs the inference result, i.e. the inference result demander: if model training participant 1 needs the final inference result, it may act as the inference service executor, acquire the target model, and perform the inference service. Alternatively, if the inference service executor is not the inference result demander, it may send the inference result to the demander; for example, if model training participant 2 needs the inference result, model training participant 1 may send the inference result to model training participant 2.
With this scheme, the other model training participants need not transmit their model input data to the inference service executor, and the executor can perform the inference service with its own data, so the communication cost is low. In addition, the inference service is completed by the inference service executor alone, without remote communication of data among multiple model training participants, which reduces communication overhead while improving the stability of the inference service.
In another alternative embodiment, the plurality of sub-models are in one-to-one correspondence with a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is one of the model training participants. The inference service executor may obtain the inference result by:
receiving encrypted model input data sent by the model training participants other than the inference service executor; and obtaining the inference result through the target model according to the inference service executor's own model input data and the encrypted model input data of the other model training participants.
To protect data privacy and ensure data security, the other model training participants may encrypt their model input data and send it to the inference service executor in encrypted form; the encryption method is not specifically limited here. The inference service executor can then obtain the inference result through the target model according to its own model input data and the encrypted model input data of the other participants, that is, perform the inference service according to the model input data of every sub-model.
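A toy sketch of this variant follows, reusing the illustrative target_model from the earlier sketch. The encrypt/decrypt placeholders are not a real scheme, and whether inference runs on ciphertext or on decrypted values is left open by the text; this sketch decrypts at the executor purely for simplicity:

```python
# Placeholder "encryption" for illustration only; not real cryptography
# and not a secure design.

SHARED_KEY = 42  # illustrative; real deployments would negotiate keys


def encrypt(values, key):
    return [v + key for v in values]


def decrypt(values, key):
    return [v - key for v in values]


# Model training participant 2 encrypts M, N before sending them.
payload = encrypt([4.0, 5.0], SHARED_KEY)

# Model training participant 1, acting as the inference service executor,
# recovers M, N and runs the whole target model locally with its own X, Y, Z.
m, n = decrypt(payload, SHARED_KEY)
result = target_model(x=1.0, y=2.0, z=3.0, m=m, n=n)
```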
Alternatively, the inference service executor may be the model training participant that needs to receive the least amount of encrypted model input data from the other model training participants. For example, if model training participant 1 performs the inference service, model training participant 2 needs to send the model input data M, N of the sub-model B to participant 1; if model training participant 2 performs the inference service, participant 1 needs to send the model input data X, Y, Z of the sub-model A to participant 2. If the data amount of M, N is smaller than that of X, Y, Z, model training participant 1 needs to receive the least model input data and may act as the inference service executor.
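This selection rule amounts to a simple minimization, sketched below with illustrative data sizes:

```python
# Sketch of executor selection: each candidate executor would have to
# receive every other participant's model inputs, so choose the candidate
# minimizing the data it receives. Sizes are illustrative placeholders.
input_sizes = {
    "participant_1": 3,  # owns X, Y, Z
    "participant_2": 2,  # owns M, N
}


def data_to_receive(candidate):
    return sum(size for p, size in input_sizes.items() if p != candidate)


executor = min(input_sizes, key=data_to_receive)
assert executor == "participant_1"  # it only needs to receive M and N
```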
With this scheme, if the inference service executor performs the inference service according to the model input data of every sub-model, choosing as executor the model training participant with the smallest amount of model input data to receive reduces communication overhead to a certain extent. Moreover, the inference service executor performs the calculation through the target model, without remote communication of data among multiple model training participants, which improves the stability of the inference service process.
In yet another alternative embodiment, the plurality of sub-models are in one-to-one correspondence with a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is not a model training participant. The inference service executor may obtain the inference result by:
receiving the encrypted model input data sent by each model training participant; and obtaining the inference result through the target model according to the encrypted model input data of every model training participant.
In this embodiment, the inference service executor need not be a model training participant; it may be, for example, a preset cloud server, and each model training participant may send its own model input data to the inference service executor in encrypted form. After acquiring the target model, the inference service executor can obtain the inference result through the target model according to the encrypted model input data of every participant.
With this scheme, the inference service executor obtains the inference result through the target model without remote communication of data, effectively avoiding the instability of remote transmission caused by factors such as network routing, ensuring normal operation of the inference service, and thus improving the stability and reliability of the inference service.
Fig. 6 is a flowchart illustrating a model processing method according to an exemplary embodiment. The method is applicable to the server device of an inference service executor, that is, to the inference service executor; as shown in fig. 6, the method may include S501 and S502.
In S501, a model acquisition request for a target model is sent, the target model being obtained by splicing a plurality of sub-models.
The inference service executor may send the model acquisition request to the model storage platform, or to a model processing apparatus including a splicing module; the present disclosure does not limit this.
In S502, the target model is received, and an inference result is obtained through the target model.
With this technical scheme, the inference service executor can send a model acquisition request for the target model, which is obtained by splicing a plurality of sub-models, and can obtain an inference result through it. Because the plurality of sub-models are spliced together into the target model, the whole inference service process can be completed locally by the inference service executor without remote communication of data, which reduces communication costs, effectively avoids the instability of remote transmission caused by factors such as network routing, ensures normal operation of the inference service, and improves the reliability of the inference service.
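Put together, the executor-side flow of S501 and S502 might look like the following sketch; the storage client and the callable-model interface are both assumptions, since the patent does not specify transport or model formats:

```python
# Executor-side flow of S501 and S502. `storage` stands for any client of
# the model storage platform (e.g. the ModelStoragePlatform sketch above),
# and the target model is assumed to be callable on its inputs.

def run_inference_service(storage, model_name, inputs):
    # S501: send a model acquisition request for the target model.
    target = storage.handle_acquisition_request(model_name)
    # S502: receive the target model and obtain the inference result
    # entirely locally, with no cross-participant communication.
    return target(**inputs)


# For example, with the earlier sketches:
# result = run_inference_service(storage_platform, "target_model",
#                                {"x": 1.0, "y": 2.0, "z": 3.0,
#                                 "m": 4.0, "n": 5.0})
```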
Optionally, the plurality of sub-models are in one-to-one correspondence with a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is one of the model training participants. In this case, obtaining the inference result through the target model in S502 may include: obtaining the inference result through the target model according to the inference service executor's own model input data.
Optionally, the plurality of sub-models are in one-to-one correspondence with a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is one of the model training participants. In this case, obtaining the inference result through the target model in S502 may include: receiving encrypted model input data sent by the model training participants other than the inference service executor; and obtaining the inference result through the target model according to the inference service executor's own model input data and the encrypted model input data of the other model training participants.
Optionally, the inference service executor is the model training participant that needs to receive the least amount of encrypted model input data from the other model training participants.
Optionally, the plurality of sub-models are in one-to-one correspondence with a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is not a model training participant. In this case, obtaining the inference result through the target model in S502 may include: receiving the encrypted model input data sent by each model training participant; and obtaining the inference result through the target model according to the encrypted model input data of every model training participant.
The specific manner in which each step of the above method applied to the inference service executor is performed has been described in detail in the embodiments of the method applied to the model processing system or to the model processing apparatus including the splicing module, and will not be repeated here.
The present disclosure also provides a model processing system, such as the model processing system shown in fig. 4, which may include a model optimization platform and a model storage platform.
The model optimization platform is configured to acquire a plurality of sub-models, splice the plurality of sub-models to obtain a target model, and send the target model to the model storage platform; the model storage platform is configured to, upon receiving a model acquisition request for the target model sent by an inference service executor, send the target model to the inference service executor so that the inference service executor obtains an inference result through the target model.
Optionally, the model optimization platform is configured to obtain model meta-information, the model meta-information including connection relationship information between the sending node of a sub-model having a sending node and the receiving nodes of other sub-models having connection relationships with that sending node; and the model optimization platform is configured to connect, according to the model meta-information, the computing node of the sub-model connected to the sending node with the computing nodes of the other sub-models connected to the receiving nodes, so as to splice the plurality of sub-models.
The specific manner in which the various modules perform operations in the system of the above embodiments has been described in detail in the embodiments of the method and will not be repeated here.
Fig. 7 is a block diagram of a model processing apparatus according to an exemplary embodiment, and as shown in fig. 7, the model processing apparatus 600 may include:
an acquisition module 601 configured to acquire a plurality of sub-models;
a splicing module 602, configured to splice the plurality of sub-models to obtain a target model;
a target model sending module 603, configured to, upon receiving a model acquisition request for the target model sent by an inference service executor, send the target model to the inference service executor so that the inference service executor obtains an inference result through the target model.
Optionally, the splicing module 602 may include: an acquisition sub-module configured to obtain model meta-information, the model meta-information including connection relationship information between the sending node of a sub-model having a sending node and the receiving nodes of other sub-models having connection relationships with that sending node; and a splicing sub-module configured to connect, according to the model meta-information, the computing node of the sub-model connected to the sending node with the computing nodes of the other sub-models connected to the receiving nodes, so as to splice the plurality of sub-models.
Optionally, the plurality of sub-models are in one-to-one correspondence with a plurality of model training participants, each model training participant has its own model input data, the inference service executor is one of the model training participants, and the inference service executor obtains the inference result through the target model according to its own model input data.
Optionally, the plurality of sub-models are in one-to-one correspondence with a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is one of the model training participants; the inference service executor obtains the inference result by: receiving encrypted model input data sent by the model training participants other than the inference service executor; and obtaining the inference result through the target model according to the inference service executor's own model input data and the encrypted model input data of the other model training participants.
Optionally, the inference service executor is the model training participant that needs to receive the least amount of encrypted model input data from the other model training participants.
Optionally, the plurality of sub-models are in one-to-one correspondence with a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is not a model training participant; the inference service executor obtains the inference result by: receiving the encrypted model input data sent by each model training participant; and obtaining the inference result through the target model according to the encrypted model input data of every model training participant.
Fig. 8 is a block diagram of a model processing apparatus 700 according to an exemplary embodiment; the apparatus may be applied to an inference service executor. As shown in fig. 8, the model processing apparatus 700 may include:
an acquisition request sending module 701, configured to send a model acquisition request for a target model, where the target model is obtained by splicing a plurality of sub-models;
an inference module 702, configured to receive the target model and obtain an inference result through the target model.
Optionally, the plurality of sub-models are in one-to-one correspondence with a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is one of the model training participants; the inference module 702 may include: a first inference sub-module configured to obtain the inference result through the target model according to the inference service executor's own model input data.
Optionally, the plurality of sub-models are in one-to-one correspondence with a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is one of the model training participants; the inference module 702 may include: a first receiving sub-module configured to receive encrypted model input data sent by the model training participants other than the inference service executor; and a second inference sub-module configured to obtain the inference result through the target model according to the inference service executor's own model input data and the encrypted model input data of the other model training participants.
Optionally, the inference service executor is the model training participant that needs to receive the least amount of encrypted model input data from the other model training participants.
Optionally, the plurality of sub-models are in one-to-one correspondence with a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is not a model training participant; the inference module 702 may include: a second receiving sub-module configured to receive the encrypted model input data sent by each model training participant; and a third inference sub-module configured to obtain the inference result through the target model according to the encrypted model input data of every model training participant.
Referring now to fig. 9, a schematic diagram of an electronic device 800 suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 9 is merely an example, and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 9, the electronic device 800 may include a processing means (e.g., a central processor, a graphics processor, etc.) 801, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage means 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the electronic device 800 are also stored. The processing device 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
In general, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, etc.; storage 808 including, for example, magnetic tape, hard disk, etc.; communication means 809. The communication means 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. While fig. 9 shows an electronic device 800 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communication device 809, or installed from storage device 808, or installed from ROM 802. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 801.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire a plurality of sub-models; splice the plurality of sub-models to obtain a target model; and, upon receiving a model acquisition request for the target model sent by an inference service executor, send the target model to the inference service executor so that the inference service executor obtains an inference result through the target model.
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: send a model acquisition request for a target model, the target model being obtained by splicing a plurality of sub-models; and receive the target model and obtain an inference result through the target model.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or hardware. The name of a module does not, in some cases, constitute a limitation of the module itself; for example, the splicing module may also be described as a "sub-model splicing module".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In accordance with one or more embodiments of the present disclosure, example 1 provides a model processing method, the method comprising: acquiring a plurality of sub-models; splicing the plurality of sub-models to obtain a target model; and under the condition that a model acquisition request aiming at the target model and sent by an reasoning service executive party is received, the target model is sent to the reasoning service executive party, so that the reasoning service executive party obtains a reasoning result through the target model.
According to one or more embodiments of the present disclosure, example 2 provides the method of example 1, the stitching the plurality of submodels comprising: obtaining model meta information, wherein the model meta information comprises connection relation information between a sending node of a sub-model with a sending node and receiving nodes of other sub-models with connection relation with the sending node; and connecting the computing node of the sub-model connected with the sending node and the computing nodes of the other sub-models connected with the receiving node according to the model meta-information so as to splice the plurality of sub-models.
According to one or more embodiments of the present disclosure, example 3 provides the method of example 1, wherein the plurality of sub-models are in one-to-one correspondence with a plurality of model training participants, each model training participant has own model input data, the reasoning service executor is one of the plurality of model training participants, and the reasoning service executor obtains the reasoning result through the target model according to own model input data.
Example 4 provides the method of example 1, according to one or more embodiments of the present disclosure, the plurality of sub-models being in one-to-one correspondence with a plurality of model training participants, each of the model training participants having its own model input data, the inference service executor being one of the plurality of model training participants; the reasoning service executive side obtains the reasoning result by the following mode: receiving encrypted model input data sent by other model training participants except the reasoning service executive party; and obtaining the reasoning result through the target model according to the model input data of the reasoning service executive party and the encrypted model input data of the other model training participants.
In accordance with one or more embodiments of the present disclosure, example 5 provides the method of example 4, the inference service executor being a model training participant that needs to receive a minimum amount of data of the encrypted model input data of other model training participants.
Example 6 provides the method of example 1, according to one or more embodiments of the present disclosure, wherein a plurality of sub-models are in one-to-one correspondence with a plurality of model training participants, each model training participant having its own model input data, the inference service executor not being the model training participant; the reasoning service executive side obtains the reasoning result by the following mode: respectively receiving encrypted model input data sent by each model training participant; and training the encrypted model input data of the participants according to each model, and obtaining the reasoning result through the target model.
Example 7 provides a model processing method according to one or more embodiments of the present disclosure, the method comprising: sending a model acquisition request aiming at a target model, wherein the target model is obtained by splicing the plurality of sub-models; and receiving the target model, and obtaining an inference result through the target model.
Example 8 provides the method of example 7, according to one or more embodiments of the present disclosure, the plurality of sub-models being in one-to-one correspondence with a plurality of model training participants, each of the model training participants having its own model input data, the inference service executor being one of the plurality of model training participants; the obtaining the reasoning result through the target model comprises the following steps: and inputting data according to the model of the reasoning service executive party, and obtaining the reasoning result through the target model.
According to one or more embodiments of the present disclosure, example 9 provides the method of example 7, wherein the plurality of sub-models are in one-to-one correspondence with a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is one of the plurality of model training participants; obtaining the inference result through the target model comprises: receiving the encrypted model input data sent by the model training participants other than the inference service executor; and obtaining the inference result through the target model according to the inference service executor's own model input data and the encrypted model input data of the other model training participants.
In accordance with one or more embodiments of the present disclosure, example 10 provides the method of example 9, wherein the inference service executor is the model training participant that needs to receive the smallest amount of encrypted model input data from the other model training participants.
According to one or more embodiments of the present disclosure, example 11 provides the method of example 7, wherein the plurality of sub-models are in one-to-one correspondence with a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is not one of the model training participants; obtaining the inference result through the target model comprises: receiving the encrypted model input data sent by each model training participant; and obtaining the inference result through the target model according to the encrypted model input data of each model training participant.
According to one or more embodiments of the present disclosure, example 12 provides a model processing system comprising a model optimization platform and a model storage platform; the model optimization platform is configured to acquire a plurality of sub-models, splice the plurality of sub-models to obtain a target model, and send the target model to the model storage platform; the model storage platform is configured to, in a case where a model acquisition request for the target model sent by an inference service executor is received, send the target model to the inference service executor, so that the inference service executor obtains an inference result through the target model.
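To make the division of labour between the two platforms concrete, here is a hedged sketch of the request path of example 12; the class and method names (ModelStoragePlatform, handle_acquisition_request, and so on) are invented for illustration and do not come from the disclosure.

    # Hypothetical sketch of the example-12 system: the optimization
    # platform splices sub-models into a target model and pushes it to
    # the storage platform, which serves it to the executor on request.
    class ModelStoragePlatform:
        def __init__(self):
            self._models = {}

        def store(self, model_id, target_model):
            self._models[model_id] = target_model

        def handle_acquisition_request(self, model_id):
            # The whole target model is returned, so the inference
            # service executor can complete the entire inference locally.
            return self._models[model_id]

    class ModelOptimizationPlatform:
        def __init__(self, storage, splice_fn):
            self._storage = storage
            self._splice = splice_fn   # e.g. the splice() sketch shown earlier

        def build_target_model(self, model_id, sub_models, meta_info):
            target = self._splice(sub_models, meta_info)
            self._storage.store(model_id, target)
            return target

The executor then issues a single acquisition request and runs inference locally, which is what removes the per-request cross-party communication described above.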
In accordance with one or more embodiments of the present disclosure, example 13 provides the system of example 12, wherein the model optimization platform is configured to obtain model meta-information, the model meta-information comprising connection relationship information between the sending node of a sub-model that has a sending node and the receiving nodes of other sub-models connected to that sending node; and the model optimization platform is configured to connect, according to the model meta-information, the computing node of the sub-model that is connected to the sending node with the computing nodes of the other sub-models that are connected to the receiving nodes, so as to splice the plurality of sub-models.
According to one or more embodiments of the present disclosure, example 14 provides a model processing apparatus, the apparatus comprising: an acquisition module configured to acquire a plurality of sub-models; a splicing module configured to splice the plurality of sub-models to obtain a target model; and a target model sending module configured to, upon receiving a model acquisition request for the target model sent by an inference service executor, send the target model to the inference service executor, so that the inference service executor obtains an inference result through the target model.
According to one or more embodiments of the present disclosure, example 15 provides a model processing apparatus, the apparatus comprising: an acquisition request sending module configured to send a model acquisition request for a target model, the target model being obtained by splicing a plurality of sub-models; and an inference module configured to receive the target model and obtain an inference result through the target model.
According to one or more embodiments of the present disclosure, example 16 provides a computer-readable medium having stored thereon a computer program which, when executed by a processing device, implements the steps of the method of any of examples 1-6.
According to one or more embodiments of the present disclosure, example 17 provides a computer-readable medium having stored thereon a computer program which, when executed by a processing device, implements the steps of the method of any of examples 7-11.
Example 18 provides an electronic device, according to one or more embodiments of the present disclosure, comprising: a storage device having a computer program stored thereon; processing means for executing the computer program in the storage means to implement the steps of the method of any one of examples 1-6.
Example 19 provides an electronic device, according to one or more embodiments of the present disclosure, comprising: a storage device having a computer program stored thereon; processing means for executing the computer program in the storage means to implement the steps of the method of any one of examples 7-11.
The foregoing description is merely of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. Persons skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the specific combinations of features described above, and also covers other embodiments formed by any combination of those features, or of their equivalents, without departing from the concept of the disclosure, for example, embodiments in which the above features are interchanged with (but not limited to) technical features with similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are merely example forms of implementing the claims. The specific manner in which the various modules of the apparatus in the above embodiments perform their operations has been described in detail in connection with the method embodiments and will not be repeated here.

Claims (17)

1. A method of model processing, the method comprising:
acquiring a plurality of sub-models;
splicing the plurality of sub-models to obtain a target model;
in a case where a model acquisition request for the target model sent by an inference service executor is received, sending the target model to the inference service executor, so that the inference service executor obtains an inference result through the target model;
wherein splicing the plurality of sub-models comprises:
obtaining model meta-information, the model meta-information comprising connection relationship information between the sending node of a sub-model that has a sending node and the receiving nodes of other sub-models connected to that sending node; and
connecting, according to the model meta-information, the computing node of the sub-model that is connected to the sending node with the computing nodes of the other sub-models that are connected to the receiving nodes, so as to splice the plurality of sub-models.
2. The method of claim 1, wherein the plurality of sub-models are in one-to-one correspondence with a plurality of model training participants, each model training participant has its own model input data, the inference service executor is one of the plurality of model training participants, and the inference service executor obtains the inference result through the target model according to its own model input data.
3. The method of claim 1, wherein the plurality of sub-models are in one-to-one correspondence with a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is one of the plurality of model training participants;
the inference service executor obtains the inference result by:
receiving the encrypted model input data sent by the model training participants other than the inference service executor; and
obtaining the inference result through the target model according to the inference service executor's own model input data and the encrypted model input data of the other model training participants.
4. The method according to claim 3, wherein the inference service executor is the model training participant that needs to receive the smallest amount of encrypted model input data from the other model training participants.
5. The method of claim 1, wherein the plurality of sub-models are in one-to-one correspondence with a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is not one of the model training participants;
the inference service executor obtains the inference result by:
receiving the encrypted model input data sent by each model training participant; and
obtaining the inference result through the target model according to the encrypted model input data of each model training participant.
6. A method of model processing, the method comprising:
sending a model acquisition request for a target model, the target model being obtained by splicing a plurality of sub-models;
receiving the target model, and obtaining an inference result through the target model;
wherein the plurality of sub-models are spliced by:
obtaining model meta-information, the model meta-information comprising connection relationship information between the sending node of a sub-model that has a sending node and the receiving nodes of other sub-models connected to that sending node; and
connecting, according to the model meta-information, the computing node of the sub-model that is connected to the sending node with the computing nodes of the other sub-models that are connected to the receiving nodes, so as to splice the plurality of sub-models.
7. The method of claim 6, wherein the plurality of sub-models are in one-to-one correspondence with a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is one of the plurality of model training participants;
obtaining the inference result through the target model comprises:
obtaining the inference result through the target model according to the inference service executor's own model input data.
8. The method of claim 6, wherein the plurality of sub-models are in one-to-one correspondence with a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is one of the plurality of model training participants;
obtaining the inference result through the target model comprises:
receiving the encrypted model input data sent by the model training participants other than the inference service executor; and
obtaining the inference result through the target model according to the inference service executor's own model input data and the encrypted model input data of the other model training participants.
9. The method of claim 8, wherein the inference service executor is the model training participant that needs to receive the smallest amount of encrypted model input data from the other model training participants.
10. The method of claim 6, wherein the plurality of sub-models are in one-to-one correspondence with a plurality of model training participants, each model training participant has its own model input data, and the inference service executor is not one of the model training participants;
obtaining the inference result through the target model comprises:
receiving the encrypted model input data sent by each model training participant; and
obtaining the inference result through the target model according to the encrypted model input data of each model training participant.
11. A model processing system, characterized in that the system comprises a model optimization platform and a model storage platform;
the model optimization platform is configured to acquire a plurality of sub-models, splice the plurality of sub-models to obtain a target model, and send the target model to the model storage platform;
the model storage platform is configured to, in a case where a model acquisition request for the target model sent by an inference service executor is received, send the target model to the inference service executor, so that the inference service executor obtains an inference result through the target model;
the model optimization platform is configured to obtain model meta-information, the model meta-information comprising connection relationship information between the sending node of a sub-model that has a sending node and the receiving nodes of other sub-models connected to that sending node; and
the model optimization platform is configured to connect, according to the model meta-information, the computing node of the sub-model that is connected to the sending node with the computing nodes of the other sub-models that are connected to the receiving nodes, so as to splice the plurality of sub-models.
12. A model processing apparatus, characterized in that the apparatus comprises:
an acquisition module configured to acquire a plurality of sub-models;
a splicing module configured to splice the plurality of sub-models to obtain a target model; and
a target model sending module configured to, in a case where a model acquisition request for the target model sent by an inference service executor is received, send the target model to the inference service executor, so that the inference service executor obtains an inference result through the target model;
wherein the splicing module comprises:
an acquisition sub-module configured to obtain model meta-information, the model meta-information comprising connection relationship information between the sending node of a sub-model that has a sending node and the receiving nodes of other sub-models connected to that sending node; and
a splicing sub-module configured to connect, according to the model meta-information, the computing node of the sub-model that is connected to the sending node with the computing nodes of the other sub-models that are connected to the receiving nodes, so as to splice the plurality of sub-models.
13. A model processing apparatus, characterized in that the apparatus comprises:
an acquisition request sending module configured to send a model acquisition request for a target model, the target model being obtained by splicing a plurality of sub-models; and
an inference module configured to receive the target model and obtain an inference result through the target model;
wherein the plurality of sub-models are spliced by:
obtaining model meta-information, the model meta-information comprising connection relationship information between the sending node of a sub-model that has a sending node and the receiving nodes of other sub-models connected to that sending node; and
connecting, according to the model meta-information, the computing node of the sub-model that is connected to the sending node with the computing nodes of the other sub-models that are connected to the receiving nodes, so as to splice the plurality of sub-models.
14. A computer-readable medium having a computer program stored thereon, characterized in that the program, when executed by a processing device, carries out the steps of the method according to any one of claims 1-5.
15. A computer-readable medium having a computer program stored thereon, characterized in that the program, when executed by a processing device, carries out the steps of the method according to any one of claims 6-10.
16. An electronic device, comprising:
a storage device having a computer program stored thereon;
processing means for executing said computer program in said storage means to carry out the steps of the method according to any one of claims 1-5.
17. An electronic device, comprising:
a storage device having a computer program stored thereon;
processing means for executing said computer program in said storage means to carry out the steps of the method according to any one of claims 6-10.
CN202011298789.5A 2020-11-18 2020-11-18 Model processing method, system, device, medium and electronic equipment Active CN112418446B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011298789.5A CN112418446B (en) 2020-11-18 2020-11-18 Model processing method, system, device, medium and electronic equipment
PCT/SG2021/050707 WO2022108527A1 (en) 2020-11-18 2021-11-16 Model processing method, system and apparatus, medium, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011298789.5A CN112418446B (en) 2020-11-18 2020-11-18 Model processing method, system, device, medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN112418446A CN112418446A (en) 2021-02-26
CN112418446B true CN112418446B (en) 2024-04-09

Family

ID=74773394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011298789.5A Active CN112418446B (en) 2020-11-18 2020-11-18 Model processing method, system, device, medium and electronic equipment

Country Status (2)

Country Link
CN (1) CN112418446B (en)
WO (1) WO2022108527A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112346870B (en) * 2020-11-18 2024-04-16 脸萌有限公司 Model processing method and system
CN112966825B (en) * 2021-04-13 2023-05-23 杭州欣禾圣世科技有限公司 Multi-model fusion parallel reasoning method, device and system based on python
CN115374944B (en) * 2022-10-26 2023-04-18 小米汽车科技有限公司 Model reasoning method and device, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107871160A (en) * 2016-09-26 2018-04-03 谷歌公司 Communicate efficient joint study
CN110633805A (en) * 2019-09-26 2019-12-31 深圳前海微众银行股份有限公司 Longitudinal federated learning system optimization method, device, equipment and readable storage medium
CN111460511A (en) * 2020-04-17 2020-07-28 支付宝(杭州)信息技术有限公司 Federal learning and virtual object distribution method and device based on privacy protection
CN111461874A (en) * 2020-04-13 2020-07-28 浙江大学 Credit risk control system and method based on federal mode
CN111753996A (en) * 2020-06-24 2020-10-09 中国建设银行股份有限公司 Optimization method, device, equipment and storage medium of scheme determination model
CN111797999A (en) * 2020-07-10 2020-10-20 深圳前海微众银行股份有限公司 Longitudinal federal modeling optimization method, device, equipment and readable storage medium
CN111899076A (en) * 2020-08-12 2020-11-06 科技谷(厦门)信息技术有限公司 Aviation service customization system and method based on federal learning technology platform
CN111898769A (en) * 2020-08-17 2020-11-06 中国银行股份有限公司 Method and system for establishing user behavior period model based on horizontal federal learning

Also Published As

Publication number Publication date
CN112418446A (en) 2021-02-26
WO2022108527A1 (en) 2022-05-27

Similar Documents

Publication Publication Date Title
CN112418446B (en) Model processing method, system, device, medium and electronic equipment
CN110909521B (en) Online document information synchronous processing method and device and electronic equipment
CN110390493B (en) Task management method and device, storage medium and electronic equipment
CN113190871B (en) Data protection method and device, readable medium and electronic equipment
CN110765752B (en) Test question generation method and device, electronic equipment and computer readable storage medium
CN111460432B (en) On-line document authority control method, device, equipment and computer readable medium
CN116700907B (en) Service call demand document storage method, device, electronic equipment and readable medium
CN116663609A (en) Model training method, device, equipment and storage medium
CN111596992A (en) Navigation bar display method and device and electronic equipment
CN112346870B (en) Model processing method and system
CN113722738B (en) Data protection method, device, medium and electronic equipment
CN112434064B (en) Data processing method, device, medium and electronic equipment
CN112346661B (en) Data processing method and device and electronic equipment
CN111756833B (en) Node processing method, node processing device, electronic equipment and computer readable medium
CN111538717B (en) Data processing method, device, electronic equipment and computer readable medium
CN111680754B (en) Image classification method, device, electronic equipment and computer readable storage medium
CN110941683B (en) Method, device, medium and electronic equipment for acquiring object attribute information in space
CN111258582B (en) Window rendering method and device, computer equipment and storage medium
CN112036821B (en) Quantization method, quantization device, quantization medium and quantization electronic equipment based on grid map planning private line
CN112036822B (en) Interaction method and device based on color ropes, medium and electronic equipment
CN117132245B (en) Method, device, equipment and readable medium for reorganizing online article acquisition business process
CN111078259B (en) Audio packaging method and device, electronic equipment and storage medium
CN116846021A (en) Robot pile-up verification method, device, equipment and medium
CN117112023A (en) Application program packaging method, device, equipment and medium
CN117474065A (en) Training method and device for multitasking model, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant