WO2021159685A1 - Task processing method, system, and device, and medium


Info

Publication number
WO2021159685A1
Authority
WO
WIPO (PCT)
Prior art keywords
task processing
training
data
task
processing terminal
Application number
PCT/CN2020/110469
Other languages
French (fr)
Chinese (zh)
Inventor
周曦
姚志强
Original Assignee
云从科技集团股份有限公司
Application filed by 云从科技集团股份有限公司
Publication of WO2021159685A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/03 Credit; Loans; Processing thereof

Abstract

The present invention provides a task processing method, system, device, and medium. In the method, each task processing terminal separately obtains a corresponding decomposed training task, and each task processing terminal executes the decomposed training task based on local training data and on random numbers or parameters obtained from other task processing terminals. After one or more participants initiate training tasks, the invention decomposes all the training tasks via the task processing terminals, and each terminal then executes its decomposed task using its local training data together with the random numbers and parameters obtained from the other terminals. The method can thus aggregate, or assist in aggregating, multi-party data while protecting each party's private data in scenarios where multiple data providers participate and the participants do not trust one another.

Description

Task processing method, system, device, and medium
Technical Field
The present invention relates to the field of data processing technology, and in particular to a task processing method, system, device, and medium.
Background
Some enterprises and institutions hold data that may include both private and non-private data. Certain enterprises or institutions wish to use such data for analysis, evaluation, and so on. For example, a financial credit institution may use it to evaluate an enterprise's qualifications, operating status, loan risk, and the like. However, such data may constitute sensitive private data of the enterprises or institutions that hold it, and most are reluctant to share it directly with financial institutions, the government, or the public. How to achieve data sharing while protecting data privacy is therefore an urgent problem to be solved.
Summary of the Invention
In view of the above shortcomings of the prior art, the purpose of the present invention is to provide a task processing method, system, device, and medium that solve the problems existing in the prior art.
To achieve the above and other related objectives, the present invention provides a task processing method comprising the following steps:
each task processing terminal separately obtains a corresponding decomposed training task;
each task processing terminal executes the decomposed training task based on local training data and random numbers or parameters obtained from other task processing terminals.
Optionally, before each task processing terminal separately obtains the corresponding decomposed training task, the method further includes:
after receiving a training task request transmitted by a task processing terminal, the task processing platform decomposes the training task to obtain decomposed training tasks and allocates the decomposed training tasks to the corresponding task processing terminals.
Optionally, after the decomposed training task is executed, the method further includes: outputting a shared model.
Optionally, after each task processing terminal obtains the corresponding decomposed training task, each task processing terminal reads local training data;
each task processing terminal obtains random numbers or encryption parameters from other task processing terminals and, based on the random numbers or encryption parameters, decrypts the encrypted data transmitted from the other task processing terminals to obtain decrypted data;
learning and training are performed on the local training data and the decrypted data.
Optionally, after each task processing terminal obtains the corresponding decomposed training task, each task processing terminal reads local training data;
each task processing terminal obtains training-process random numbers or training-process parameters from other task processing terminals;
learning and training are performed according to the training-process random numbers or training-process parameters and the local training data.
Optionally, after each task processing terminal obtains the corresponding decomposed training task, each task processing terminal reads local training data;
each task processing terminal obtains training parameters from other task processing terminals;
learning and training are performed according to the training parameters and the local training data.
Optionally, the training parameters include the number of layers of a convolutional neural network and the convolution kernel size.
Optionally, each task processing terminal obtains random numbers or parameters from other task processing terminals one or more times.
Optionally, the learning algorithm used for the learning and training includes at least one of the following: linear regression, logistic regression, tree models, deep neural networks, and graph neural networks.
Optionally, the training data includes at least one of the following: social security data, housing provident fund data, fixed asset data, and current asset data.
Optionally, the current asset data includes at least one of the following: deposit data and loan data.
Optionally, the encrypted data is non-private data.
The present invention also provides a task processing system, comprising:
an acquisition module, used by each task processing terminal to obtain the corresponding decomposed training task, to obtain local training data, and to obtain random numbers or parameters from other task processing terminals;
an execution module, used to execute the decomposed training task.
Optionally, before each task processing terminal separately obtains the corresponding decomposed training task, the system further provides that:
after receiving a training task request transmitted by a task processing terminal, the task processing platform decomposes the training task to obtain decomposed training tasks and allocates the decomposed training tasks to the corresponding task processing terminals.
Optionally, after the decomposed training task is executed, the system further provides for outputting a shared model.
Optionally, after each task processing terminal obtains the corresponding decomposed training task, each task processing terminal reads local training data;
each task processing terminal obtains random numbers or encryption parameters from other task processing terminals and, based on the random numbers or encryption parameters, decrypts the encrypted data transmitted from the other task processing terminals to obtain decrypted data;
learning and training are performed on the local training data and the decrypted data.
Optionally, after each task processing terminal obtains the corresponding decomposed training task, each task processing terminal reads local training data;
each task processing terminal obtains training-process random numbers or training-process parameters from other task processing terminals;
learning and training are performed according to the training-process random numbers or training-process parameters and the local training data.
Optionally, after each task processing terminal obtains the corresponding decomposed training task, each task processing terminal reads local training data;
each task processing terminal obtains training parameters from other task processing terminals;
learning and training are performed according to the training parameters and the local training data.
Optionally, the training parameters include the number of layers of a convolutional neural network and the convolution kernel size.
Optionally, each task processing terminal obtains random numbers or parameters from other task processing terminals one or more times.
Optionally, the learning algorithm used for the learning and training includes at least one of the following: linear regression, logistic regression, tree models, deep neural networks, and graph neural networks.
Optionally, the training data includes at least one of the following: social security data, housing provident fund data, fixed asset data, and current asset data.
Optionally, the current asset data includes at least one of the following: deposit data and loan data.
Optionally, the encrypted data is non-private data.
The present invention also provides a task processing device, in which:
after each task processing terminal separately obtains the corresponding decomposed training task, each task processing terminal executes the decomposed training task based on local training data and random numbers or parameters obtained from other task processing terminals.
The present invention also provides a device, comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the device to perform one or more of the methods described above.
The present invention also provides one or more machine-readable media having instructions stored thereon that, when executed by one or more processors, cause a device to perform one or more of the methods described above.
As described above, the task processing method, system, device, and medium provided by the present invention have the following beneficial effects: after one or more participants initiate training tasks, the present invention can decompose all the training tasks through the task processing terminals, and each task processing terminal then executes its decomposed training task based on local training data and the random numbers and parameters obtained from other task processing terminals. That is, in a scenario where multiple data providers participate and the participants do not trust one another, the method can aggregate, or assist in aggregating, multi-party data while protecting each party's private data.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a data sharing learning method according to an embodiment;
FIG. 2 is a schematic flowchart of a data sharing learning method according to another embodiment;
FIG. 3 is a schematic diagram of the hardware structure of a data sharing learning system according to an embodiment;
FIG. 4 is a schematic diagram of the hardware structure of a data sharing learning system according to another embodiment;
FIG. 5 is a schematic diagram of the hardware structure of a terminal device according to an embodiment;
FIG. 6 is a schematic diagram of the hardware structure of a terminal device according to another embodiment.
Description of Reference Numerals
M10      Acquisition module
M20      Execution module
1100     Input device
1101     First processor
1102     Output device
1103     First memory
1104     Communication bus
1200     Processing component
1201     Second processor
1202     Second memory
1203     Communication component
1204     Power supply component
1205     Multimedia component
1206     Voice component
1207     Input/output interface
1208     Sensor component
Detailed Description of the Embodiments
The embodiments of the present invention are described below by way of specific examples; those skilled in the art can readily understand other advantages and effects of the present invention from the disclosure of this specification. The present invention can also be implemented or applied through other, different specific embodiments, and the details of this specification can be modified or changed in various ways based on different viewpoints and applications without departing from the spirit of the present invention. It should be noted that, where no conflict arises, the following embodiments and the features in the embodiments can be combined with one another.
It should be noted that the figures provided in the following embodiments merely illustrate the basic concept of the present invention in a schematic manner; they show only the components related to the present invention rather than the number, shape, and size of components in an actual implementation. In practice, the form, quantity, and proportion of each component may be changed at will, and the component layout may also be more complex.
Referring to FIG. 1, the present invention provides a task processing method comprising the following steps:
S100: each task processing terminal separately obtains a corresponding decomposed training task;
S200: each task processing terminal executes the decomposed training task based on local training data and random numbers or parameters obtained from other task processing terminals.
As described above, after one or more participants initiate training tasks, the method can decompose all the training tasks through the task processing terminals, and each task processing terminal then executes its decomposed training task based on local training data and the random numbers and parameters obtained from other task processing terminals. In other words, in a scenario where multiple data providers participate and the participants do not trust one another, the method can aggregate, or assist in aggregating, multi-party data while protecting each party's private data. The participants include banks, enterprises, government units, organizations, and the like.
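A minimal Python sketch of steps S100 and S200 from the perspective of a single task processing terminal follows. All names here (TaskTerminal, fetch_subtask, share_randoms_or_params, execute) are illustrative assumptions, not identifiers from the patent.

```python
# Hypothetical shape of one task processing terminal running S100/S200.
class TaskTerminal:
    def __init__(self, terminal_id, platform, peers, local_data):
        self.terminal_id = terminal_id
        self.platform = platform      # the task processing platform
        self.peers = peers            # the other task processing terminals
        self.local_data = local_data  # training data that never leaves here

    def run(self):
        # S100: obtain the decomposed training task assigned to this terminal.
        subtask = self.platform.fetch_subtask(self.terminal_id)
        # S200: collect random numbers or parameters from the other terminals,
        # then execute the subtask on local data plus the received values.
        received = [peer.share_randoms_or_params() for peer in self.peers]
        return subtask.execute(self.local_data, received)
```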
In an exemplary embodiment, before each task processing terminal separately obtains the corresponding decomposed training task, the method further includes: after receiving a training task request transmitted by a task processing terminal, the task processing platform decomposes the training task to obtain decomposed training tasks and allocates the decomposed training tasks to the corresponding task processing terminals.
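The platform-side half of this step might look like the sketch below. The one-subtask-per-party decomposition rule is an assumption; the patent does not fix a concrete rule.

```python
# Hypothetical task processing platform: receives a training task request,
# decomposes it, and allocates the pieces to the registered terminals.
class TaskPlatform:
    def __init__(self, terminal_ids):
        self.terminal_ids = terminal_ids
        self.assignments = {}

    def handle_request(self, training_task):
        # Decompose: one subtask per registered terminal, each covering
        # that party's share of the overall task.
        for tid in self.terminal_ids:
            self.assignments[tid] = {"task": training_task, "party": tid}

    def fetch_subtask(self, terminal_id):
        # Allocate: each terminal pulls the subtask assigned to it.
        return self.assignments[terminal_id]
```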
In an exemplary embodiment, after the decomposed training task is executed, the method further includes outputting a shared model. Through the shared model, patterns can be discovered automatically in historical data and applied to unknown data, helping users make better data-driven decisions, for example making predictions based on historical data.
In an exemplary embodiment, after each task processing terminal obtains the corresponding decomposed training task, each task processing terminal reads local training data;
each task processing terminal obtains random numbers or encryption parameters from other task processing terminals and, based on those random numbers or encryption parameters, decrypts the encrypted data transmitted from the other task processing terminals to obtain decrypted data;
learning and training are then performed on the local training data and the decrypted data.
Specifically, the encrypted data in the embodiments of the present application includes non-private data. In this embodiment, data that institutions or units such as individuals, enterprises, and governments have not disclosed to the public is referred to as private data, while data that the public can learn or obtain through ordinary channels is referred to as non-private data. In the embodiments of the present application, any data of a participant that would leak user privacy never leaves that participant's local environment, ensuring that, under a secure multi-party computation scheme, the participants perform shared learning and training by exchanging, under an established protocol, only information that does not leak privacy.
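As a minimal sketch of the decrypt-with-a-shared-random-number idea, the example below uses additive masking over a finite field as the "encryption": the sender masks a non-private value with a random number, and the receiver recovers it once the random number has been shared. The concrete cipher is an assumption; the patent does not specify one.

```python
import secrets

MOD = 2**61 - 1  # a large prime modulus, so the mask leaks nothing

def mask(value: int) -> tuple[int, int]:
    r = secrets.randbelow(MOD)            # random number shared with the peer
    return (value + r) % MOD, r           # (encrypted data, random number)

def unmask(masked: int, r: int) -> int:
    return (masked - r) % MOD             # decrypt using the shared random number

ciphertext, r = mask(12345)               # sender-side terminal
assert unmask(ciphertext, r) == 12345     # receiver-side terminal, after getting r
```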
In an exemplary embodiment, after each task processing terminal obtains the corresponding decomposed training task, each task processing terminal reads local training data;
each task processing terminal obtains training-process random numbers or training-process parameters from other task processing terminals;
learning and training are performed according to the training-process random numbers or training-process parameters and the local training data.
Specifically, by obtaining the random numbers or parameters used during training of the shared learning model, the embodiments of the present application ensure that the steps performed and the settings made during training are consistent across terminals, so that the trained model applies to the common training data. The training-process parameters include the number of layers of the convolutional neural network, the convolution kernel size of the convolutional neural network, and so on. As an example, the convolutional neural network may have 50 layers and a 5×5 convolution kernel.
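One simple way to realize "consistent steps via a shared training-process random number" is for the terminals to agree on a common RNG seed, so that, for example, parameter initialization comes out identical everywhere. The seed-exchange mechanism sketched below is an assumption for illustration.

```python
import random

def init_weights(n_params: int, shared_seed: int) -> list[float]:
    rng = random.Random(shared_seed)   # same seed => same initialization
    return [rng.uniform(-0.1, 0.1) for _ in range(n_params)]

party_a = init_weights(4, shared_seed=42)   # terminal A
party_b = init_weights(4, shared_seed=42)   # terminal B, after receiving the seed
assert party_a == party_b                   # both terminals start identically
```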
In an exemplary embodiment, after each task processing terminal obtains the corresponding decomposed training task, each task processing terminal reads local training data;
each task processing terminal obtains training parameters from other task processing terminals;
learning and training are performed according to the training parameters and the local training data.
Specifically, in this embodiment each terminal obtains only local training data and then combines it with the training parameters of the other task terminals, ensuring that the initially configured training conditions are consistent. The training parameters include the number of layers of the convolutional neural network, the convolution kernel size of the convolutional neural network, and so on. As an example, the convolutional neural network may have 20 layers and a 3×3 convolution kernel.
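A sketch of adopting peer-supplied training parameters when building the local training configuration follows; the 20-layer / 3×3 values echo the example above, while the message format and the purely local fields are assumptions.

```python
received_params = {"num_layers": 20, "kernel_size": (3, 3)}  # from a peer terminal

def build_local_config(peer_params: dict) -> dict:
    return {
        "num_layers": peer_params["num_layers"],    # agreed across parties
        "kernel_size": peer_params["kernel_size"],  # agreed across parties
        "batch_size": 64,                           # local-only choice
        "epochs": 10,                               # local-only choice
    }

print(build_local_config(received_params))
```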
According to the above embodiments, each task processing terminal obtains random numbers or parameters from other task processing terminals one or more times.
In an exemplary embodiment, the data of at least two participants is exchanged one or more times, the exchange including at least one of the following: random numbers and encryption parameters.
In an exemplary embodiment, the method specifically includes:
initiating one or more training tasks;
decomposing and coordinating the one or more training tasks;
exchanging the data of at least two participants one or more times according to the decomposed and coordinated training tasks, so that the data of one or more participants can be shared with the data of one or more other participants. The method further includes obtaining the shared learning model after each participant completes one or more rounds of learning. Through the shared learning model, patterns can be discovered automatically in historical data and applied to unknown data, helping users make better data-driven decisions, for example making predictions based on historical data.
In some exemplary embodiments, the learning algorithm used for the learning and training includes at least one of the following: linear regression, logistic regression, tree models, deep neural networks, and graph neural networks. By managing the learning algorithms, the robustness of the learning algorithm can be improved in multiple ways and data security can be enhanced. The shared learning model can also be trained and optimized to improve its performance and generalization ability; the aspects of such training optimization include at least one of the following: evaluation definition, algorithm strategy selection, data set division, and parameter tuning.
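As a toy illustration of selecting among these algorithm families, the sketch below maps the names to off-the-shelf scikit-learn estimators as stand-ins; the mapping and the library choice are assumptions, not part of the patent.

```python
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

ALGORITHMS = {
    "linear_regression": LinearRegression,
    "logistic_regression": LogisticRegression,
    "tree_model": DecisionTreeClassifier,
    "deep_neural_network": MLPClassifier,
    # Graph neural networks would need a dedicated library (e.g. PyTorch
    # Geometric) and are omitted from this sketch.
}

def make_model(name: str):
    return ALGORITHMS[name]()   # e.g. make_model("tree_model")
```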
In some exemplary embodiments, the data of the one or more participants includes at least one of the following: social security data, housing provident fund data, fixed asset data, and current asset data. The current asset data includes at least one of the following: deposit data and loan data.
Specifically, in one embodiment, if a bank needs to issue a loan to an enterprise, it must perform loan qualification analysis, risk assessment, and the like on that enterprise to ensure that the loan does not become a bad debt. A local government agency holds the enterprise's social security data, another bank holds data on one or more other loans of the enterprise, and another enterprise holds data on its ordinary business dealings with the enterprise. The social security data includes the number of the enterprise's employees enrolled in social security, each employee's social security contribution base, and so on; the loan data includes the enterprise's loan amounts, loan dates, repayment amounts, repayment dates, and so on; the business data includes payment channels, payment dates, and so on. Under conditions of mutual distrust, and while ensuring that private data is not leaked, the local government agency, the other bank, and the other enterprise each act as a participant and contribute the data they hold to shared learning, yielding a shared learning model. Based on this model, the first bank can perform loan qualification analysis and risk assessment on the enterprise to support its lending decision, for example predicting the enterprise's future operating income, repayment capacity, and so on.
In summary, after one or more participants initiate training tasks, the present invention can decompose all the training tasks through the task processing terminals, and each task processing terminal then executes its decomposed training task based on local training data and the random numbers and parameters obtained from other task processing terminals. That is, in a scenario involving multiple participants who do not trust one another, the method can aggregate, or assist in aggregating, multi-party data while protecting each party's private data. The method can also exchange the non-private data of one or more participants with the non-private data of the remaining participants, so that the non-private data of some participants can be shared with that of others; one or more rounds of learning are then performed on the shared non-private data to realize shared learning and obtain a shared learning model. At the same time, through the shared learning model, the method can automatically discover patterns in historical data and apply them to unknown data, helping users make better data-driven decisions, for example making predictions based on historical data.
As shown in FIG. 2, a task processing method is also provided, comprising:
S1: one or more participants provide non-private data and deploy machine learning modules;
S2: one or more participants initiate one or more machine learning training tasks;
S3: the model platform decomposes and coordinates the one or more machine learning training tasks;
S4: the model platform issues one or more training tasks to each participant;
S5: each participant reads its local training data, together with the random numbers or parameters obtained from other task processing terminals, into its local machine learning module;
S6: the machine learning modules of the participants perform one or more parameter exchanges, so that the data (including training data and non-private data) of one or more participants can be shared with the data (including training data and non-private data) of one or more other participants;
S7: one or more rounds of learning are performed based on the shared data (including training data and non-private data); after one or more rounds of shared learning and training are completed, the shared learning model of each participant is obtained.
Here, the model platform triggers and coordinates the learning and training tasks, while the local machine learning module of each participant receives the decomposed and coordinated machine learning tasks issued by the model platform and performs machine learning based on local training data and the random numbers or parameters obtained from the other participants.
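The following end-to-end sketch walks through S1 to S7 with a toy one-parameter model and plain averaging as the per-round parameter exchange. All names and the averaging rule are illustrative assumptions; a real deployment would mask the exchanged values as described earlier rather than share them in the clear.

```python
class Participant:
    def __init__(self, name, local_data):
        self.name, self.local_data = name, local_data   # S1: local data stays local
        self.weight = 0.0                               # toy one-parameter model

    def local_step(self):
        # S5: train on local data (toy update: move toward the local mean).
        mean = sum(self.local_data) / len(self.local_data)
        self.weight += 0.5 * (mean - self.weight)

def run_training(participants, rounds=3):
    # S3/S4: the "model platform" here is just this coordinating loop.
    for _ in range(rounds):
        for p in participants:
            p.local_step()
        # S6: one parameter exchange per round (plain averaging).
        avg = sum(p.weight for p in participants) / len(participants)
        for p in participants:
            p.weight = avg
    # S7: every participant ends up holding the shared model.
    return {p.name: p.weight for p in participants}

print(run_training([Participant("bank", [1.0, 2.0]),
                    Participant("gov", [3.0, 4.0])]))
```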
According to the above, the method can exchange the non-private data of one or more participants with the non-private data of the remaining participants, so that the non-private data of some participants can be shared with that of others; one or more rounds of learning are then performed on the shared non-private data to realize shared learning and obtain a shared learning model. That is, in a scenario where multiple data providers participate and the participants do not trust one another, the method can aggregate, or assist in aggregating, multi-party data while protecting each party's private data. At the same time, through the shared learning model, the method can automatically discover patterns in historical data and apply them to unknown data, helping users make better data-driven decisions, for example making predictions based on historical data.
As shown in FIG. 3, a task processing system includes:
an acquisition module M10, used by each task processing terminal to obtain the corresponding decomposed training task, to obtain local training data, and to obtain random numbers or parameters from other task processing terminals;
an execution module M20, used to execute the decomposed training task.
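A skeleton sketch of the two modules as the system enumerates them follows; the method names and collaborators (platform, store, peers) are hypothetical.

```python
class AcquisitionModule:                 # M10
    def get_subtask(self, platform, terminal_id):
        return platform.fetch_subtask(terminal_id)

    def get_local_data(self, store):
        return store.read()              # local training data

    def get_peer_values(self, peers):
        # random numbers or parameters from the other terminals
        return [peer.share_randoms_or_params() for peer in peers]

class ExecutionModule:                   # M20
    def execute(self, subtask, local_data, peer_values):
        """Run the decomposed training task; body omitted in this sketch."""
        ...
```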
In an exemplary embodiment, before each task processing terminal separately obtains the corresponding decomposed training task, the system further provides that: after receiving a training task request transmitted by a task processing terminal, the task processing platform decomposes the training task to obtain decomposed training tasks and allocates the decomposed training tasks to the corresponding task processing terminals.
In an exemplary embodiment, after the decomposed training task is executed, the system further provides for outputting a shared model. Through the shared model, patterns can be discovered automatically in historical data and applied to unknown data, helping users make better data-driven decisions, for example making predictions based on historical data.
In an exemplary embodiment, after each task processing terminal obtains the corresponding decomposed training task, each task processing terminal reads local training data;
each task processing terminal obtains random numbers or encryption parameters from other task processing terminals and, based on those random numbers or encryption parameters, decrypts the encrypted data transmitted from the other task processing terminals to obtain decrypted data;
learning and training are then performed on the local training data and the decrypted data.
Specifically, the encrypted data in the embodiments of the present application includes non-private data. In this embodiment, data that institutions or units such as individuals, enterprises, and governments have not disclosed to the public is referred to as private data, while data that the public can learn or obtain through ordinary channels is referred to as non-private data. In the embodiments of the present application, any data of a participant that would leak user privacy never leaves that participant's local environment, ensuring that, under a secure multi-party computation scheme, the participants perform shared learning and training by exchanging, under an established protocol, only information that does not leak privacy.
In an exemplary embodiment, after each task processing terminal obtains the corresponding decomposed training task, each task processing terminal reads local training data;
each task processing terminal obtains training-process random numbers or training-process parameters from other task processing terminals;
learning and training are performed according to the training-process random numbers or training-process parameters and the local training data.
Specifically, by obtaining the random numbers or parameters used during training of the shared learning model, the embodiments of the present application ensure that the steps performed and the settings made during training are consistent across terminals, so that the trained model applies to the common training data. The training-process parameters include the number of layers of the convolutional neural network, the convolution kernel size of the convolutional neural network, and so on. As an example, the convolutional neural network may have 50 layers and a 5×5 convolution kernel.
In an exemplary embodiment, after each task processing terminal obtains the corresponding decomposed training task, each task processing terminal reads local training data;
each task processing terminal obtains training parameters from other task processing terminals;
learning and training are performed according to the training parameters and the local training data.
Specifically, in this embodiment each terminal obtains only local training data and then combines it with the training parameters of the other task terminals, ensuring that the initially configured training conditions are consistent. The training parameters include the number of layers of the convolutional neural network, the convolution kernel size of the convolutional neural network, and so on. As an example, the convolutional neural network may have 20 layers and a 3×3 convolution kernel.
According to the above embodiments, each task processing terminal obtains random numbers or parameters from other task processing terminals one or more times.
In an exemplary embodiment, the data of at least two participants is exchanged one or more times, the exchange including at least one of the following: random numbers and encryption parameters.
In an exemplary embodiment, the system specifically provides for:
initiating one or more training tasks;
decomposing and coordinating the one or more training tasks;
exchanging the data of at least two participants one or more times according to the decomposed and coordinated training tasks, so that the data of one or more participants can be shared with the data of one or more other participants. The system further provides for obtaining the shared learning model after each participant completes one or more rounds of learning. Through the shared learning model, patterns can be discovered automatically in historical data and applied to unknown data, helping users make better data-driven decisions, for example making predictions based on historical data.
In some exemplary embodiments, the learning algorithm used for the learning and training includes at least one of the following: linear regression, logistic regression, tree models, deep neural networks, and graph neural networks. By managing the learning algorithms, the robustness of the learning algorithm can be improved in multiple ways and data security can be enhanced. The shared learning model can also be trained and optimized to improve its performance and generalization ability; the aspects of such training optimization include at least one of the following: evaluation definition, algorithm strategy selection, data set division, and parameter tuning.
In some exemplary embodiments, the data of the one or more participants includes at least one of the following: social security data, housing provident fund data, fixed asset data, and current asset data. The current asset data includes at least one of the following: deposit data and loan data.
Specifically, in one embodiment, if a bank needs to issue a loan to an enterprise, it must perform loan qualification analysis, risk assessment, and the like on that enterprise to ensure that the loan does not become a bad debt. A local government agency holds the enterprise's social security data, another bank holds data on one or more other loans of the enterprise, and another enterprise holds data on its ordinary business dealings with the enterprise. The social security data includes the number of the enterprise's employees enrolled in social security, each employee's social security contribution base, and so on; the loan data includes the enterprise's loan amounts, loan dates, repayment amounts, repayment dates, and so on; the business data includes payment channels, payment dates, and so on. Under conditions of mutual distrust, and while ensuring that private data is not leaked, the local government agency, the other bank, and the other enterprise each act as a participant and contribute the data they hold to shared learning, yielding a shared learning model. Based on this model, the first bank can perform loan qualification analysis and risk assessment on the enterprise to support its lending decision, for example predicting the enterprise's future operating income, repayment capacity, and so on.
In summary, after one or more participants initiate training tasks, the present invention can decompose all the training tasks through the task processing terminals, and each task processing terminal then executes its decomposed training task based on local training data and the random numbers and parameters obtained from other task processing terminals. That is, in a scenario involving multiple participants who do not trust one another, the method can aggregate, or assist in aggregating, multi-party data while protecting each party's private data. The method can also exchange the non-private data of one or more participants with the non-private data of the remaining participants, so that the non-private data of some participants can be shared with that of others; one or more rounds of learning are then performed on the shared non-private data to realize shared learning and obtain a shared learning model. At the same time, through the shared learning model, the method can automatically discover patterns in historical data and apply them to unknown data, helping users make better data-driven decisions, for example making predictions based on historical data.
As shown in FIG. 4, a data sharing learning system is also provided, comprising a control module and learning modules; the control module resides in the model platform, and a learning module resides in each participant. The learning modules of the participants perform shared machine learning, triggered and coordinated by the control module, by exchanging random numbers or encryption parameters. Each participant deploys its own machine learning module, and the participants initiate one or more machine learning training tasks; after receiving the one or more training tasks, the control module decomposes and coordinates them and issues one or more training tasks to each participant; each participant reads its own non-private data, together with the non-private data from other participants, into its local machine learning module, so that the non-private data of one or more participants can be shared with the non-private data of one or more other participants; one or more rounds of learning are performed on the shared non-private data; after one or more rounds of shared learning and training are completed, the shared learning model of each participant is obtained.
Here, the model platform triggers and coordinates the learning and training tasks, while the local machine learning module of each participant receives the decomposed and coordinated machine learning tasks issued by the model platform and performs machine learning based on local training data and the random numbers or parameters obtained from the other participants.
According to the above, the system can exchange the non-private data of one or more participants with the non-private data of the remaining participants, so that the non-private data of some participants can be shared with that of others; one or more rounds of learning are then performed on the shared non-private data to realize shared learning and obtain a shared learning model. That is, in a scenario where multiple data providers participate and the participants do not trust one another, the system can aggregate, or assist in aggregating, multi-party data while protecting each party's private data. At the same time, through the shared learning model, the system can automatically discover patterns in historical data and apply them to unknown data, helping users make better data-driven decisions, for example making predictions based on historical data.
An embodiment of the present application also provides a task processing device in which:
after each task processing terminal separately obtains the corresponding decomposed training task, each task processing terminal, based on local training data and random numbers or parameters obtained from other task processing terminals,
executes the decomposed training task.
In this embodiment, the task processing device executes the system or method described above; for specific functions and technical effects, reference may be made to the foregoing embodiments, which are not repeated here.
An embodiment of the present application also provides a device, which may include: one or more processors; and one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the device to perform the method described in FIG. 1. In practice, the device may be a terminal device or a server. Examples of terminal devices include smart phones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, in-vehicle computers, desktop computers, set-top boxes, smart televisions, wearable devices, and so on; the embodiments of the present application do not limit the specific device.
An embodiment of the present application also provides a non-volatile readable storage medium storing one or more modules (programs) which, when applied to a device, cause the device to execute the instructions for the steps of the method described in FIG. 1 of the embodiments of the present application.
FIG. 5 is a schematic diagram of the hardware structure of a terminal device according to an embodiment of the present application. As shown, the terminal device may include an input device 1100, a first processor 1101, an output device 1102, a first memory 1103, and at least one communication bus 1104. The communication bus 1104 implements the communication connections between the components. The first memory 1103 may include a high-speed RAM and may also include non-volatile memory (NVM), such as at least one disk memory; the first memory 1103 may store various programs for performing various processing functions and implementing the method steps of this embodiment.
Optionally, the first processor 1101 may be implemented as, for example, a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a controller, a microcontroller, a microprocessor, or another electronic element, and is coupled to the input device 1100 and the output device 1102 through a wired or wireless connection.
Optionally, the input device 1100 may include a variety of input devices, for example at least one of a user-facing user interface, a device-facing device interface, a programmable software interface, a camera, and a sensor. Optionally, the device-facing device interface may be a wired interface for data transmission between devices, or a hardware plug-in interface for data transmission between devices (for example, a USB interface or a serial port). Optionally, the user-facing user interface may be, for example, user-facing control buttons, a voice input device for receiving voice input, or a touch-sensing device for receiving touch input from the user (for example, a touch screen or touch pad with touch-sensing capability). Optionally, the programmable software interface may be, for example, an entry through which the user edits or modifies a program, such as an input pin interface or input interface of a chip. The output device 1102 may include output devices such as a display and a loudspeaker.
In this embodiment, the processor of the terminal device includes functions for executing each module of the speech recognition apparatus in each device; for specific functions and technical effects, reference may be made to the foregoing embodiments, which are not repeated here.
图6为本申请的一个实施例提供的终端设备的硬件结构示意图。图6是对图5在实现过程中的一个具体的实施例。如图所示,本实施例的终端设备可以包括第二处理器1201以及第二存储器1202。FIG. 6 is a schematic diagram of the hardware structure of a terminal device provided by an embodiment of the application. Fig. 6 is a specific embodiment of Fig. 5 in the implementation process. As shown in the figure, the terminal device of this embodiment may include a second processor 1201 and a second memory 1202.
第二处理器1201执行第二存储器1202所存放的计算机程序代码,实现上述实施例中图1所述方法。The second processor 1201 executes the computer program code stored in the second memory 1202 to implement the method described in FIG. 1 in the foregoing embodiment.
第二存储器1202被配置为存储各种类型的数据以支持在终端设备的操作。这些数据的示例包括用于在终端设备上操作的任何应用程序或方法的指令,例如消息,图片,视频等。第二存储器1202可能包含随机存取存储器(random access memory,简称RAM),也可能还包括非易失性存储器(non-volatile memory),例如至少一个磁盘存储器。The second memory 1202 is configured to store various types of data to support operations on the terminal device. Examples of these data include instructions for any application or method operating on the terminal device, such as messages, pictures, videos, and so on. The second memory 1202 may include a random access memory (random access memory, RAM for short), and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
Optionally, the second processor 1201 is provided in the processing component 1200. The terminal device may further include a communication component 1203, a power supply component 1204, a multimedia component 1205, a voice component 1206, an input/output interface 1207, and/or a sensor component 1208. The specific components included in the terminal device are set according to actual requirements, which is not limited in this embodiment.
The processing component 1200 generally controls the overall operation of the terminal device. The processing component 1200 may include one or more second processors 1201 to execute instructions to complete all or part of the steps of the foregoing data processing method. In addition, the processing component 1200 may include one or more modules to facilitate interaction between the processing component 1200 and other components. For example, the processing component 1200 may include a multimedia module to facilitate interaction between the multimedia component 1205 and the processing component 1200.
The power supply component 1204 provides power for the various components of the terminal device. The power supply component 1204 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the terminal device.
The multimedia component 1205 includes a display screen that provides an output interface between the terminal device and the user. In some embodiments, the display screen may include a liquid crystal display (LCD) and a touch panel (TP). If the display screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation.
The voice component 1206 is configured to output and/or input voice signals. For example, the voice component 1206 includes a microphone (MIC). When the terminal device is in an operating mode, such as a voice recognition mode, the microphone is configured to receive external voice signals. The received voice signal may be further stored in the second memory 1202 or sent via the communication component 1203. In some embodiments, the voice component 1206 further includes a speaker for outputting voice signals.
The input/output interface 1207 provides an interface between the processing component 1200 and peripheral interface modules. The peripheral interface modules may be click wheels, buttons, and the like. These buttons may include, but are not limited to, a volume button, a start button, and a lock button.
The sensor component 1208 includes one or more sensors for providing various aspects of state assessment for the terminal device. For example, the sensor component 1208 can detect the on/off state of the terminal device, the relative positioning of components, and the presence or absence of contact between the user and the terminal device. The sensor component 1208 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, including detecting the distance between the user and the terminal device. In some embodiments, the sensor component 1208 may further include a camera and the like.
The communication component 1203 is configured to facilitate wired or wireless communication between the terminal device and other devices. The terminal device may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one embodiment, the terminal device may include a SIM card slot for inserting a SIM card, so that the terminal device can log in to a GPRS network and establish communication with a server via the Internet.
It can be seen from the above that the communication component 1203, the voice component 1206, the input/output interface 1207, and the sensor component 1208 involved in the embodiment of FIG. 6 can all serve as implementations of the input device in the embodiment of FIG. 5.
The above embodiments merely illustrate the principles and effects of the present invention by way of example, and are not intended to limit the present invention. Anyone familiar with this technology may modify or change the above embodiments without departing from the spirit and scope of the present invention. Therefore, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall still be covered by the claims of the present invention.

Claims (27)

  1. A task processing method, characterized by comprising:
    each task processing terminal separately obtaining a corresponding decomposed training task; and
    each task processing terminal executing the decomposed training task based on local training data and random numbers or parameters obtained from other task processing terminals.
  2. The task processing method according to claim 1, characterized in that, before each task processing terminal separately obtains the corresponding decomposed training task, the method further comprises:
    after receiving a training task request transmitted by a task processing terminal, a task processing platform decomposing the training task to obtain decomposed training tasks, and allocating the decomposed training tasks to the corresponding task processing terminals.
  3. The task processing method according to claim 1, characterized in that, after the decomposed training task is executed, the method further comprises: outputting a shared model.
  4. The task processing method according to claim 1, characterized in that, after each task processing terminal separately obtains the corresponding decomposed training task, each task processing terminal reads local training data;
    each task processing terminal obtains random numbers or encryption parameters from other task processing terminals, and, based on the random numbers or encryption parameters, decrypts the encrypted data transmitted from the other task processing terminals to obtain decrypted data; and
    learning and training are performed by using the local training data and the decrypted data.
  5. The task processing method according to claim 1, characterized in that, after each task processing terminal separately obtains the corresponding decomposed training task, each task processing terminal reads local training data;
    each task processing terminal obtains model-training-process random numbers or model-training-process parameters from other task processing terminals; and
    learning and training are performed according to the model-training-process random numbers or model-training-process parameters and the local training data.
  6. The task processing method according to claim 1, characterized in that, after each task processing terminal separately obtains the corresponding decomposed training task, each task processing terminal reads local training data;
    each task processing terminal obtains training parameters from other task processing terminals; and
    learning and training are performed according to the training parameters and the local training data.
  7. The task processing method according to claim 6, characterized in that the training parameters include the number of layers of a convolutional neural network and a convolution kernel size.
  8. The task processing method according to claim 1, characterized in that each task processing terminal obtains random numbers or parameters from other task processing terminals one or more times.
  9. The task processing method according to any one of claims 4 to 6, characterized in that the learning algorithm used for the learning and training includes at least one of the following: linear regression, logistic regression, a tree model, a deep neural network, and a graph neural network.
  10. The task processing method according to any one of claims 4 to 6, characterized in that the training data includes at least one of the following: social security data, provident fund data, fixed asset data, and current asset data.
  11. The task processing method according to claim 10, characterized in that the current asset data includes at least one of the following: deposit data and loan data.
  12. The task processing method according to claim 4, characterized in that the encrypted data is non-private data.
  13. A task processing system, characterized by comprising:
    an acquisition module, configured for each task processing terminal to separately obtain a corresponding decomposed training task, and for each task processing terminal to obtain local training data and to obtain random numbers or parameters from other task processing terminals; and
    an execution module, configured to execute the decomposed training task.
  14. The task processing system according to claim 13, characterized in that, before each task processing terminal separately obtains the corresponding decomposed training task:
    after receiving a training task request transmitted by a task processing terminal, a task processing platform decomposes the training task to obtain decomposed training tasks, and allocates the decomposed training tasks to the corresponding task processing terminals.
  15. The task processing system according to claim 13, characterized in that, after the decomposed training task is executed, a shared model is output.
  16. The task processing system according to claim 13, characterized in that, after each task processing terminal separately obtains the corresponding decomposed training task, each task processing terminal reads local training data;
    each task processing terminal obtains random numbers or encryption parameters from other task processing terminals, and, based on the random numbers or encryption parameters, decrypts the encrypted data transmitted from the other task processing terminals to obtain decrypted data; and
    learning and training are performed by using the local training data and the decrypted data.
  17. The task processing system according to claim 13, characterized in that, after each task processing terminal separately obtains the corresponding decomposed training task, each task processing terminal reads local training data;
    each task processing terminal obtains model-training-process random numbers or model-training-process parameters from other task processing terminals; and
    learning and training are performed according to the model-training-process random numbers or model-training-process parameters and the local training data.
  18. The task processing system according to claim 13, characterized in that, after each task processing terminal separately obtains the corresponding decomposed training task, each task processing terminal reads local training data;
    each task processing terminal obtains training parameters from other task processing terminals; and
    learning and training are performed according to the training parameters and the local training data.
  19. The task processing system according to claim 18, characterized in that the training parameters include the number of layers of a convolutional neural network and a convolution kernel size.
  20. The task processing system according to claim 13, characterized in that each task processing terminal obtains random numbers or parameters from other task processing terminals one or more times.
  21. The task processing system according to any one of claims 16 to 18, characterized in that the learning algorithm used for the learning and training includes at least one of the following: linear regression, logistic regression, a tree model, a deep neural network, and a graph neural network.
  22. The task processing system according to any one of claims 16 to 18, characterized in that the training data includes at least one of the following: social security data, provident fund data, fixed asset data, and current asset data.
  23. The task processing system according to claim 22, characterized in that the current asset data includes at least one of the following: deposit data and loan data.
  24. The task processing system according to claim 16, characterized in that the encrypted data is non-private data.
  25. A task processing device, characterized in that:
    after each task processing terminal separately obtains a corresponding decomposed training task, each task processing terminal executes the decomposed training task based on local training data and random numbers or parameters obtained from other task processing terminals.
  26. A device, characterized by comprising:
    one or more processors; and
    one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the device to perform the method according to one or more of claims 1-12.
  27. One or more machine-readable media, characterized in that instructions are stored thereon that, when executed by one or more processors, cause a device to perform the method according to one or more of claims 1-12.
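To make the exchange in claims 1 and 4 concrete, below is a minimal sketch of one masking round, assuming an additive one-time mask over numeric feature vectors; the claims do not fix a concrete cipher, and names such as `TaskTerminal` and `encrypt_for` are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

class TaskTerminal:
    """Illustrative task processing terminal holding local training data."""

    def __init__(self, name, local_data):
        self.name = name
        self.local_data = np.asarray(local_data, dtype=np.float64)
        self.sent_masks = {}  # random numbers shared out-of-band with peers

    def encrypt_for(self, peer):
        """Mask outgoing data with fresh random numbers (the claim-4 'random numbers')."""
        mask = rng.normal(size=self.local_data.shape)
        self.sent_masks[peer] = mask
        return self.local_data + mask  # ciphertext: raw values never leave in the clear

    @staticmethod
    def decrypt(ciphertext, mask):
        """Receiver side: recover the data using the shared random numbers."""
        return ciphertext - mask

# One round: terminal A sends masked data to terminal B, which decrypts it
# with the mask A shared over a separate channel, then trains locally.
a = TaskTerminal("A", [1.0, 2.0, 3.0])
b = TaskTerminal("B", [4.0, 5.0, 6.0])
ciphertext = a.encrypt_for("B")
recovered = TaskTerminal.decrypt(ciphertext, a.sent_masks["B"])
training_inputs = np.vstack([b.local_data, recovered])  # claim-4 training inputs
print(training_inputs)
```

Because the mask is drawn fresh for each transfer and shared only with the intended peer, the masked values stay hidden from any party that does not hold the mask, which is consistent with claim 12's restriction that only non-private data is exchanged in this decryptable form.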
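Claims 2 and 14 have the task processing platform split a training request and hand each piece to a corresponding terminal. The sketch below is a hypothetical version of that decomposition step; the request layout (`feature_owners`, `rounds`) is an assumption, since the claims leave the decomposition strategy open.

```python
from dataclasses import dataclass

@dataclass
class SubTask:
    terminal_id: str
    features: list[str]  # columns this terminal trains on locally
    rounds: int          # how many training rounds it should run

def decompose(request: dict) -> list[SubTask]:
    """Split one training request into per-terminal sub-tasks (the claim-2 step)."""
    return [
        SubTask(terminal_id=tid, features=cols, rounds=request["rounds"])
        for tid, cols in request["feature_owners"].items()
    ]

# A request in which two data providers each own part of the feature space.
request = {
    "rounds": 10,
    "feature_owners": {
        "bank": ["deposit", "loan"],
        "agency": ["social_security", "provident_fund"],
    },
}
for sub in decompose(request):
    print(f"allocate {sub.features} for {sub.rounds} rounds to {sub.terminal_id}")
```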
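Claims 6-7 (and 18-19) state that the exchanged training parameters include the number of convolutional layers and the convolution kernel size. Below is a minimal sketch of a terminal building its local model from such received parameters, using PyTorch as an assumed framework; the channel width and the classification head are illustrative choices, not claimed.

```python
import torch.nn as nn

def build_cnn(received: dict) -> nn.Sequential:
    """Build a local model from training parameters received from a peer terminal."""
    layers, in_ch = [], received.get("in_channels", 1)
    for _ in range(received["num_conv_layers"]):  # shared layer count (claim 7)
        layers += [
            nn.Conv2d(in_ch, 16,
                      kernel_size=received["kernel_size"],  # shared kernel size
                      padding=received["kernel_size"] // 2),
            nn.ReLU(),
        ]
        in_ch = 16
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2)]
    return nn.Sequential(*layers)

# Parameters as they might arrive from another task processing terminal.
model = build_cnn({"num_conv_layers": 3, "kernel_size": 5, "in_channels": 1})
print(model)
```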
PCT/CN2020/110469 2020-02-14 2020-08-21 Task processing method, system, and device, and medium WO2021159685A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010093590.2A CN111339553A (en) 2020-02-14 2020-02-14 Task processing method, system, device and medium
CN202010093590.2 2020-02-14

Publications (1)

Publication Number Publication Date
WO2021159685A1 true WO2021159685A1 (en) 2021-08-19

Family

ID=71181527

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/110469 WO2021159685A1 (en) 2020-02-14 2020-08-21 Task processing method, system, and device, and medium

Country Status (2)

Country Link
CN (1) CN111339553A (en)
WO (1) WO2021159685A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111339553A (en) * 2020-02-14 2020-06-26 云从科技集团股份有限公司 Task processing method, system, device and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108712260A (en) * 2018-05-09 2018-10-26 曲阜师范大学 The multi-party deep learning of privacy is protected to calculate Proxy Method under cloud environment
US20190334716A1 (en) * 2018-04-27 2019-10-31 The University Of Akron Blockchain-empowered crowdsourced computing system
CN110399742A (en) * 2019-07-29 2019-11-01 深圳前海微众银行股份有限公司 A kind of training, prediction technique and the device of federation's transfer learning model
CN110472747A (en) * 2019-08-16 2019-11-19 第四范式(北京)技术有限公司 For executing the distributed system and its method of multimachine device learning tasks
CN111339553A (en) * 2020-02-14 2020-06-26 云从科技集团股份有限公司 Task processing method, system, device and medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7904804B1 (en) * 2001-11-20 2011-03-08 Vignette Software Llc System and method for web sites in hierarchical relationship to share assets
JP4710932B2 (en) * 2008-07-09 2011-06-29 ソニー株式会社 Learning device, learning method, and program
CN109299487B (en) * 2017-07-25 2023-01-06 展讯通信(上海)有限公司 Neural network system, accelerator, modeling method and device, medium and system
KR102036968B1 (en) * 2017-10-19 2019-10-25 한국과학기술원 Confident Multiple Choice Learning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190334716A1 (en) * 2018-04-27 2019-10-31 The University Of Akron Blockchain-empowered crowdsourced computing system
CN108712260A (en) * 2018-05-09 2018-10-26 曲阜师范大学 The multi-party deep learning of privacy is protected to calculate Proxy Method under cloud environment
CN110399742A (en) * 2019-07-29 2019-11-01 深圳前海微众银行股份有限公司 A kind of training, prediction technique and the device of federation's transfer learning model
CN110472747A (en) * 2019-08-16 2019-11-19 第四范式(北京)技术有限公司 For executing the distributed system and its method of multimachine device learning tasks
CN111339553A (en) * 2020-02-14 2020-06-26 云从科技集团股份有限公司 Task processing method, system, device and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHEN TIANJIAN: "Federated Learning Inside: Introduction to Ant Financial Shared Learning", ZHUANLAN.ZHIHU, CN, XP009529730, Retrieved from the Internet <URL:https://zhuanlan.zhihu.com/p/71896430> *
MIAO HAN: "Shared learning: Ant Financial Data Island Solution", ALIYUN DEVELOPER COMMUNITY > ANT FINANCIAL TECHNOLOGY, CN, XP009529731, Retrieved from the Internet <URL:https://developer.aliyun.com/article/714993> *

Also Published As

Publication number Publication date
CN111339553A (en) 2020-06-26

Similar Documents

Publication Publication Date Title
WO2021159684A1 (en) Data processing method, system and platform, and device and machine-readable medium
WO2020108046A1 (en) Cross-block chain interaction method and system, computer device, and storage medium
TWI734041B (en) Method and device for data audit
CN106651303B (en) Intelligent contract processing method and system based on template
US10142316B2 (en) Computerized method and system for managing an email input facility in a networked secure collaborative exchange environment
US20170270527A1 (en) Assessing trust to facilitate blockchain transactions
CN111008709A (en) Federal learning and data risk assessment method, device and system
WO2021239070A1 (en) Method for creating node group in consortium blockchain network, and node group-based transaction method
CN104915835A (en) Credit account creating method, system and method
WO2021000575A1 (en) Data interaction method and apparatus, and electronic device
TW202107313A (en) Model parameter determination method and apparatus, and electronic device
WO2020233137A1 (en) Method and apparatus for determining value of loss function, and electronic device
WO2020108152A1 (en) Method, device and electronic equipment for preventing misuse of identity data
EP4198783A1 (en) Federated model training method and apparatus, electronic device, computer program product, and computer-readable storage medium
CN109993528A (en) It is a kind of for managing the method and apparatus of committal charge
TW202103149A (en) Data exchange for multi-party computation
CN116468543A (en) Credit risk assessment method, device, equipment and medium based on federal learning
WO2021159685A1 (en) Task processing method, system, and device, and medium
Garcia Bringas et al. BlockChain platforms in financial services: current perspective
CN112507323A (en) Model training method and device based on unidirectional network and computing equipment
CN116015840B (en) Data operation auditing method, system, equipment and storage medium
CN107528822A (en) A kind of business performs method and device
CN116186755A (en) Privacy calculating method, device, terminal equipment and storage medium
CN115640613A (en) Privacy data distributed control method and system based on RPA and electronic terminal
CN114741446A (en) Data uplink method, device, terminal and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20918542

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20918542

Country of ref document: EP

Kind code of ref document: A1