WO2022121030A1 - Center party selection method, storage medium and system - Google Patents
Center party selection method, storage medium and system
- Publication number
- WO2022121030A1 (PCT/CN2020/140832)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- network
- node
- term
- task
- partner
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/08—Learning methods
Definitions
- the present invention relates to joint processing of multi-party data, and in particular to a center party selection method, storage medium and system.
- Cooperative deep learning is a process in which the partners train a deep learning model as a collective; the collective outperforms any individual partner.
- in an IoT scenario, each terminal device acts as a partner
- each terminal device has a different usage environment and different local data characteristics, resulting in a different understanding and cognition of the same deep learning task. Therefore, for a deep learning task in the IoT scenario, in order to improve the accuracy and generalization of the deep learning model, each terminal device trains the deep learning model with its own local data, the models are then aggregated through interaction and sharing, and after many iterations the entire deep learning task is completed and a well-performing joint model is obtained.
- the central party undertakes the task of aggregating model parameters from each client, and the choice of device to act as the central party has a great impact on the cooperative learning task.
- central parties with different performance have different computing capabilities and different probabilities of downtime.
- an inappropriate central party will lead to the failure of cooperative learning tasks.
- in the initial stage of the task, a suitable central party must first be designated before the cooperative deep learning task can start; in addition, in the IoT scenario the central party may be a terminal device with limited resources. Therefore, compared with the well-provisioned central party usually designated in distributed computing, the probability of an abnormality in the central party in the IoT scenario is relatively high.
- the purpose of the present invention is to provide a center party selection method, storage medium and system, which can solve the above problems.
- a central party selection method in a cooperative deep learning task includes initial selection of the central party in the initial stage of the task and update selection of the central party in the task progress stage, wherein the central party is initially selected based on the performance score values of the partners during the initial term
- the initial selection is used for deep learning model aggregation.
- while the task is in progress, the central party is re-selected based on the performance score values of the surviving nodes in the current-term network, and the model aggregation task continues with the updated central party.
- the initial selection of the central party in the initial stage of the task includes the following steps:
- in step S140, the partners join the network one after another, each providing its performance score value and recording its local IP address and port number;
- step S150 uses the total number of partners, the coefficient and the time window to determine whether partner n is the last node to join the network; if partner n is the last node to join the network, go to step S160, otherwise return to step S140;
- in step S170, the node with the highest total performance score from step S160 is selected as the optimal node Node_Center and serves as the initial central party of this cooperative deep learning task for deep learning model aggregation;
- the partners are terminal devices that perform model training tasks
- the performance evaluation indicators of the partners include cpu, memory, and power
- the method for judging whether partner n is the last node to join the network is: if the number of partners that have joined accounts for more than a preset ratio of the total number N of partners from step S110, and no new partner joins within the time window, then partner n is the last node; otherwise joining continues until the condition is met (a code sketch of this judgment follows the preset ratio below).
- the preset ratio is 70%, 80% or 90%.
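A minimal Python sketch of this last-node judgment is given below. It assumes the caller tracks the number of joined partners and the time of the most recent join, and it treats the coefficient coef from step S130 as the preset ratio; these bookkeeping details are illustrative and not prescribed by the patent.

```python
import time

def is_last_node(joined_count: int, total_partners: int, coef: float,
                 last_join_time: float, time_interval: float) -> bool:
    """Return True when the most recently joined partner counts as the last node.

    The condition mirrors the judgment above: the joined partners already
    exceed coef * N of the declared total N, and no new partner has joined
    within the time window time_interval.
    """
    enough_partners = joined_count > coef * total_partners
    window_quiet = (time.time() - last_join_time) >= time_interval
    return enough_partners and window_quiet

# Illustrative use: N = 10 declared partners, preset ratio 80%, 30-second window.
# The caller re-evaluates this whenever a partner joins or the window expires,
# looping back to the join step (S140) while it stays False.
```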
- the update selection of the central party in the task progress stage includes the following steps:
- in step S210, each node in the network dynamically senses an abnormality of the central party
- in step S220, each node in the network determines the surviving nodes in the network
- in step S230, each node in the network compares the total performance scores of the nodes in the network
- in step S240, each node in the network selects the optimal node Node_Center as the central party of this cooperative deep learning task
- the present invention also provides a computer-readable storage medium on which computer instructions are stored, and when the computer instructions are executed, the steps of the aforementioned method are performed.
- the present invention also provides a cooperative learning system based on the dynamic updating of the central party.
- the initial central party of the system is communicatively connected with each partner and runs the steps of the aforementioned method.
- the system includes:
- a partner determination module, which determines the available partners and, from the network connection state, the metrics and corresponding weights used for performance comparison
- a performance evaluation module, which calculates a performance score value from each partner's metrics and corresponding weights to evaluate that partner's performance
- an optimal-selection module, which autonomously selects the optimal partner in the network as the central party for model aggregation;
- a communication transmission module, through which the central party establishes connections with all current partners
- a dynamic update learning module, which determines whether the current task is in its initial state; if so, it starts the model aggregation task of cooperative learning and predicts the risk of the central party; if not, it enters central party update, and the task continues until it ends.
- the beneficial effect of the present invention is that the central party selection scheme in a cooperative deep learning task provided by this solution includes initial selection of the central party, based on the performance score values of the partners during the initial term, for deep learning model aggregation; while the task is in progress, the central party is re-selected based on the performance score values of the surviving nodes in the current-term network, and the model aggregation task continues with the updated central party. This provides a stable central party for the learning method or model aggregation; when the central party becomes abnormal, the optimal data provider participating in the cooperative learning task is quickly selected as the new central party, allowing a fast hand-over so that model training can continue to run.
- Fig. 1 is a flow chart of central party selection in the initial stage of cooperative deep learning according to the present invention
- Fig. 2 is a flow chart of central party selection in the task progress stage of cooperative deep learning according to the present invention.
- the terms "system", "device", "unit" and "module" are used to distinguish different components, elements, parts, sections or assemblies at different levels; other words may be substituted if they achieve the same purpose.
- industries such as economy, culture, education, medical care and public administration are flooded with large amounts of information data.
- data processing and analysis such as data analysis, data mining and trend forecasting are widely used in more and more scenarios.
- through data cooperation, multiple data owners can obtain better data processing results.
- more accurate model parameters can be obtained through multi-party cooperative learning.
- the method of dynamically updating the cooperative learning of the central party can be applied to a scenario in which all parties cooperate to train a machine learning model for use by multiple parties under the condition of ensuring data security of all parties.
- multiple data parties have their own data, and they want to jointly use each other's data for unified modeling (e.g., classification models, linear regression models, logistic regression models, etc.), but do not want their own data (especially private data) to be leaked.
- Internet savings institution A has a batch of user data
- bank B has another batch of user data.
- a training sample set determined from the user data of A and B can be used to train a machine learning model with better performance.
- Both A and B are willing to participate in model training through each other's user data, but for some reasons A and B do not want their user data information to be leaked, or at least do not want to let the other party know their user data information.
- cooperative learning can be performed using a federated learning approach.
- Federated learning enables efficient machine learning among multiple participants or computing nodes. It allows the parties to train a model without their training samples leaving the local device; only the trained model or the computed gradients are transferred, which protects the privacy of the training samples held by each party.
- federated learning is often used in situations where the model is computationally intensive and has many parameters.
- because large amounts of data are transmitted, the communication load is relatively heavy; therefore, in federated learning scenarios it is often necessary to adopt some method to reduce the communication load during transmission.
- during each iterative update of the model, the cooperative learning task state updated by the central server (including the trained model gradient values or model parameters) may be compressed.
- by resuming and continuing the updated task, client model training proceeds without interruption and without retraining, thereby reducing the communication load.
- abnormal conditions of the central server are predicted, ensuring the stability of the model.
- a center party selection method in a cooperative deep learning task includes initial selection of the central party in the task initial stage and update selection of the central party in the task progress stage.
- the central party is initially selected based on the performance score values of the partners during the initial term, for deep learning model aggregation.
- while the task is in progress, the central party is re-selected based on the performance score values of the surviving nodes in the current-term network, and the model aggregation task continues with the updated central party.
- the initial selection of the central party in the initial stage of the task includes the following steps.
- the "global term” is relative to the "local term" on each terminal device.
- every partner in the cooperative learning task has the right to make its own choice. After the previous central party becomes abnormal, the partners perceive this one after another and then look for the optimal terminal device to serve as the new central party.
- the terminal devices participating in this cooperative learning task join one after another in preparation for cooperation.
- the central party needs to communicate with each terminal device to realize knowledge sharing, and the communication requires knowing the IP address and port number of each terminal device.
- step S150 uses the total number of partners, the coefficient and the time window to determine whether partner n is the last node to join the network; if partner n is the last node to join the network, go to step S160, otherwise return to step S140;
- in step S150, the method for judging whether partner n is the last node to join the network is: if the number of partners that have joined accounts for more than the preset ratio of the total number N of partners from step S110, and no new partner joins within the time window, then partner n is the last node; otherwise joining continues until the condition is met.
- the reason for this judgment is that in the IoT scenario the liveness of each device cannot be guaranteed; it is enough for a preset proportion of the devices to join the network for the task to run normally, rather than requiring all N partners to join.
- the preset ratio includes but is not limited to 70%, 80% or 90%, preferably 80%.
- in step S170, the node with the highest total performance score from step S160 is selected as the optimal node Node_Center and serves as the initial central party of this cooperative deep learning task for deep learning model aggregation;
- the update selection of the central party in the task progress stage includes the following steps:
- in step S210, each node in the network dynamically senses an abnormality of the central party
- in step S220, each node in the network determines the surviving nodes in the network
- in step S230, each node in the network compares the total performance scores of the nodes in the network
- in step S240, each node in the network selects the optimal node Node_Center as the central party of this cooperative deep learning task
- in step S250, each node in the network determines in turn whether the global network parameter glob_term (global term) equals the node's local term local_term.
- each partner participating in the cooperative deep learning task refers to various terminal devices, such as laptop computers, mobile phones and other devices that can perform model training tasks. Different devices have different computing and processing capabilities due to their different resources, such as cpu, memory, power, etc.
- the performance score value of each terminal device is determined by its performance indicators and the corresponding weights, for example
- Score = ω1 · Xcpu + ω2 · Xmemory + ω3 · Xenergy.
- this performance score value is used to evaluate a device's performance (a computation sketch is given after the notes on operation speed and memory below).
- Operation speed is an important indicator to measure computer performance. Commonly referred to as computer operation speed (average operation speed), it refers to the number of instructions that can be executed per second, and is generally described by "million instructions/second”.
- internal memory, also referred to as main memory, is the memory that the CPU can access directly.
- the programs to be executed and the data to be processed are stored in the main memory.
- the size of the internal memory reflects the ability of the computer to store information in real time. The larger the memory capacity, the more powerful the system, and the larger the amount of data that can be processed.
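A minimal Python sketch of how such a performance score could be computed, assuming the three metrics have already been normalized to a common [0, 1] scale; the particular weights and the normalization are illustrative and not prescribed beyond the weighted-sum form Score = ω1 · Xcpu + ω2 · Xmemory + ω3 · Xenergy.

```python
def performance_score(x_cpu: float, x_memory: float, x_energy: float,
                      w_cpu: float = 0.4, w_memory: float = 0.3,
                      w_energy: float = 0.3) -> float:
    """Weighted sum of normalized cpu, memory and energy metrics.

    x_cpu, x_memory and x_energy are assumed to be normalized to [0, 1],
    for example by dividing each device's raw value (instructions per second,
    memory capacity, remaining battery) by the maximum observed among partners.
    """
    assert abs(w_cpu + w_memory + w_energy - 1.0) < 1e-9
    return w_cpu * x_cpu + w_memory * x_memory + w_energy * x_energy

# Illustrative use; the weights would be negotiated among the partners in step S110.
score = performance_score(x_cpu=0.5, x_memory=0.8, x_energy=0.9)
```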
- the central party in the cooperative learning task is used to aggregate the model, and the "term" indicates which central party in sequence the current central party is.
- at the start of the task, the "term" parameter is initialized to 0. When a central party is selected for the first time after the task starts, the "term" changes from 0 to 1, indicating that this is the first central party. If the central party becomes abnormal during the subsequent tasks and can no longer be used for model aggregation, a new central party must be re-selected; the new central party is then the second central party, and the "term" changes from 1 to 2.
- by analogy, whenever a new central party is produced, the "term" parameter is incremented by one.
- after joining the network co_DL, each node is managed by the temporary node list Existing_Node; the designated central node Node_Center is also stored in this list. The permanent node list Center_Info manages the central party information, including the current global "term" of the network and the central party identifier.
- the present invention also provides a computer-readable storage medium on which computer instructions are stored, and when the computer instructions are executed, the steps of the aforementioned method are performed.
- examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
- computer-readable media does not include transitory computer-readable media, such as modulated data signals and carrier waves.
- a cooperative learning system based on dynamically updating the central party: the initial central party of the system is communicatively connected with each partner and runs the steps of the aforementioned method, wherein the system includes:
- a partner determination module, which determines the available partners and, from the network connection state, the metrics and corresponding weights used for performance comparison
- a performance evaluation module, which calculates a performance score value from each partner's metrics and corresponding weights to evaluate that partner's performance
- an optimal-selection module, which autonomously selects the optimal partner in the network as the central party for model aggregation;
- a communication transmission module, through which the central party establishes connections with all current partners
- a dynamic update learning module, which determines whether the current task is in its initial state; if so, it starts the model aggregation task of cooperative learning and predicts the risk of the central party; if not, it enters central party update, and the task continues until it ends.
- the systems and modules thereof described in one or more implementations of this specification can be implemented in a variety of ways.
- the system and its modules may be implemented in hardware, software, or a combination of software and hardware.
- the hardware part can be realized by using dedicated logic;
- the software part can be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware.
- the methods and systems described above may be implemented using computer-executable instructions and/or embodied in processor control code, provided for example on a carrier medium such as a disk, CD or DVD-ROM, on programmable memory such as read-only memory (firmware), or on a data carrier such as an optical or electronic signal carrier.
- the system and its modules of the present application can be implemented not only by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, but also by software executed by various types of processors, or by a combination of such hardware circuits and software (e.g., firmware).
- aspects of this application may be illustrated and described in terms of several patentable categories or situations, including any new and useful process, machine, product or composition of matter, or any new and useful improvement thereof. Accordingly, various aspects of the present application may be performed entirely by hardware, entirely by software (including firmware, resident software, microcode, etc.), or by a combination of hardware and software.
- the above hardware or software may be referred to as a "data block”, “module”, “engine”, “unit”, “component” or “system”.
- aspects of the present application may be embodied as a computer product comprising computer readable program code embodied in one or more computer readable media.
- a computer storage medium may contain a propagated data signal with the computer program code embodied therein, for example, on baseband or as part of a carrier wave.
- the propagated signal may take a variety of forms, including electromagnetic, optical, etc., or a suitable combination.
- a computer storage medium may be any computer-readable medium, other than a computer-readable storage medium, that can communicate, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device.
- Program code on a computer storage medium may be transmitted over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or a combination of any of the foregoing.
- the computer program code required for the operation of the various parts of this application may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python etc., conventional procedural programming languages such as C language, VisualBasic, Fortran2003, Perl, COBOL2002, PHP, ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages, etc.
- the program code may run entirely on the user's computer, or as a stand-alone software package on the user's computer, or partly on the user's computer and partly on a remote computer, or entirely on the remote computer or processing device.
- the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or wide area network (WAN), or connected to an external computer (e.g., through the Internet), or used in a cloud computing environment, or as a service, e.g., software as a service (SaaS).
- LAN local area network
- WAN wide area network
- SaaS software as a service
- the embodiments of the present application may be provided as methods, apparatuses, systems or computer program products. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Biomedical Technology (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Medical Informatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Computer And Data Communications (AREA)
Abstract
A center party selection method in a cooperative deep learning task, a storage medium and a system. The scheme includes initially selecting the central party based on the performance score values of the partners during the initial term and using it for deep learning model aggregation; while the task is in progress, the central party is re-selected based on the performance score values of the surviving nodes in the current-term network, and the model aggregation task continues with the updated central party. This provides a stable central party for the learning method or model aggregation; when the central party becomes abnormal, the optimal data provider participating in the cooperative learning task is quickly selected as the new central party, allowing a fast hand-over so that the model training process can continue to run. The method can be widely applied in industries such as economy, culture, education, medical care and public administration.
Description
The present invention relates to joint processing of multi-party data, and in particular to a center party selection method, storage medium and system.
Cooperative deep learning is a process in which the partners train a deep learning model as a collective, and the collective outperforms any individual partner. Specifically, in the Internet-of-Things (IoT) scenario, the terminal devices (partners) have different usage environments and different local data characteristics, so their understanding and cognition of the same deep learning task differ. Therefore, for a given deep learning task in the IoT scenario, in order to improve the accuracy and generalization of the deep learning model, each terminal device trains a deep learning model with its own local data, the models are then aggregated through interaction and sharing, and finally, after many iterations, the entire deep learning task is completed and a well-performing joint model is obtained.
Unlike federated learning, which starts from the perspective of privacy protection, cooperative deep learning mainly addresses the fact that the terminal devices (partners) have different usage environments and different local data characteristics, which lead to different understanding and cognition of the same deep learning task. To improve the accuracy and generalization of the deep learning task, each terminal device therefore trains a deep learning model with its local data, i.e. acquires local knowledge, and then shares that local knowledge within the group; by combining and aggregating multiple pieces of local knowledge that satisfy the independence requirement, collective intelligence is obtained and a well-performing deep learning model is learned.
In a cooperative deep learning task there are two roles: the central party and the clients. The central party undertakes the task of aggregating the model parameters from the clients, and the choice of device to act as the central party has a great impact on the cooperative learning task: central parties with different performance have different computing capabilities and different probabilities of downtime, and in the worst case an unsuitable central party makes the cooperative learning task impossible.
Therefore, in the initial stage of the task, a suitable central party must first be designated before the cooperative deep learning task can start. In addition, in the IoT scenario the central party may be a terminal device with limited resources, so compared with the well-provisioned central party usually designated in distributed computing, the probability that the central party becomes abnormal in the IoT scenario is relatively high.
While the task is in progress, if the central party responsible for model aggregation becomes abnormal, for example crashes or loses connectivity, the cooperative learning task is in danger of being interrupted, and a new, suitable central party must be re-designated to continue the model aggregation task.
In summary, in a cooperative learning task, how to select a suitable central party in the initial stage of the task, and how the system re-designates a suitable central party when the central party becomes abnormal, are key technical problems that urgently need to be solved.
Summary of the Invention
In order to overcome the deficiencies of the prior art, the purpose of the present invention is to provide a center party selection method, storage medium and system that can solve the above problems.
A center party selection method in a cooperative deep learning task includes initial selection of the central party in the initial stage of the task and update selection of the central party in the task progress stage, wherein the central party is initially selected based on the performance score values of the partners during the initial term for deep learning model aggregation, and, while the task is in progress, the central party is re-selected based on the performance score values of the surviving nodes in the current-term network and the model aggregation task continues with the updated central party.
Preferably, the initial selection of the central party in the initial stage of the task includes the following steps:
S110: a cooperative deep learning task is initiated, the total number N of partners participating in the task is determined, and the metrics used for performance comparison and their corresponding weights are negotiated;
S120: each partner initializes its local term parameter, local_term = 0, and calculates its own performance score value using the metrics and weights of step S110;
S130: the global network parameters are initialized, including the global term glob_term = 0, the central party identifier center = 0, the node index n = 0, the coefficient coef and the time window time_interval;
S140: the partners join the network one after another, each providing its performance score value and recording its local IP address and port number;
S150: the total number of partners, the coefficient and the time window are used to determine whether partner n is the last node to join the network; if partner n is the last node to join the network, go to step S160, otherwise return to step S140;
S160: the last node to join the network compares the total performance scores of the nodes in the network;
S170: the node with the highest total performance score in step S160 is selected as the optimal node Node_Center and serves as the initial central party of this cooperative deep learning task for deep learning model aggregation;
S180: the last node to join the network updates the global network parameters center = Node_Center and glob_term = glob_term + 1, and each node in the network updates its local parameter local_term = local_term + 1;
S190: selection of a suitable central party for the initial stage of the task is complete.
Preferably, the partners are terminal devices that perform model training tasks; the performance evaluation indicators of a partner include cpu, memory and energy, and the partner's performance score value is Score = ω1 · Xcpu + ω2 · Xmemory + ω3 · Xenergy, where Xcpu, Xmemory and Xenergy are the metrics for cpu, memory and energy respectively, and ω1, ω2 and ω3 are the weights for cpu, memory and energy.
Preferably, in step S150, the method for judging whether partner n is the last node to join the network is: if the number of partners that have joined accounts for more than a preset ratio of the total number N of partners in step S110, and no new partner joins within the time window, then partner n is the last node; otherwise joining continues until the condition is met.
The preset ratio is 70%, 80% or 90%.
Preferably, the update selection of the central party in the task progress stage includes the following steps:
S210: each node in the network dynamically senses that the central party is abnormal;
S220: each node in the network determines the surviving nodes in the network;
S230: each node in the network compares the total performance scores of the nodes in the network;
S240: each node in the network selects the optimal node Node_Center as the central party of this cooperative deep learning task;
S250: each node in the network determines in turn whether the global network parameter glob_term (global term) equals the node's local term local_term; if so, no central party has yet been elected in the network, and the first node to make this determination updates the global network parameters center = Node_Center and glob_term = glob_term + 1 and updates its local parameter local_term = local_term + 1; if not, a central party has already been elected in the network, and the node only updates its local parameter local_term = local_term + 1.
The present invention also provides a computer-readable storage medium on which computer instructions are stored; when the computer instructions are executed, the steps of the aforementioned method are performed.
The present invention also provides a cooperative learning system based on dynamically updating the central party. The initial central party of the system is communicatively connected with each partner and runs the steps of the aforementioned method. The system includes:
a partner determination module, which determines the available partners and, from the network connection state, the metrics and corresponding weights used for performance comparison;
a performance evaluation module, which calculates a performance score value from each partner's metrics and corresponding weights to evaluate that partner's performance;
an optimal-selection module, which autonomously selects the optimal partner in the network as the central party for model aggregation according to the evaluated performance;
a communication transmission module, through which the central party establishes connections with all current partners;
a dynamic update learning module, which determines whether the current task is in its initial state; if so, it starts the model aggregation task of cooperative learning and predicts the risk of the central party; if not, it enters central party update, and the task continues until it ends.
Compared with the prior art, the beneficial effect of the present invention is that the central party selection scheme in a cooperative deep learning task provided by this solution includes initially selecting the central party based on the performance score values of the partners during the initial term and using it for deep learning model aggregation; while the task is in progress, the central party is re-selected based on the performance score values of the surviving nodes in the current-term network, and the model aggregation task continues with the updated central party. This provides a stable central party for the learning method or model aggregation; when the central party becomes abnormal, the optimal data provider participating in the cooperative learning task is quickly selected as the new central party, allowing a fast hand-over so that model training can continue to run.
Fig. 1 is a flow chart of central party selection in the initial stage of cooperative deep learning according to the present invention;
Fig. 2 is a flow chart of central party selection in the task progress stage of cooperative deep learning according to the present invention.
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
It should be understood that the terms "system", "device", "unit" and/or "module" used in this specification are a way of distinguishing different components, elements, parts, sections or assemblies at different levels. However, other words may be substituted if they achieve the same purpose.
As used in this specification and the claims, unless the context clearly indicates otherwise, the words "a", "an", "one" and/or "the" do not specifically refer to the singular and may also include the plural. Generally speaking, the terms "include" and "comprise" only indicate that the explicitly identified steps and elements are included; these steps and elements do not constitute an exclusive list, and a method or device may also include other steps or elements.
Flow charts are used in this specification to illustrate the operations performed by the system according to the embodiments of this specification. It should be understood that the preceding or following operations are not necessarily performed exactly in order; the steps may instead be processed in reverse order or simultaneously. Other operations may also be added to these processes, or one or more steps may be removed from them.
Industries such as economy, culture, education, medical care and public administration are flooded with large amounts of information data, and data processing and analysis such as data analysis, data mining and trend forecasting are widely applied in more and more scenarios. Through data cooperation, multiple data owners can obtain better data processing results; for example, more accurate model parameters can be obtained through multi-party cooperative learning.
In some embodiments, the cooperative learning method with a dynamically updated central party can be applied to scenarios in which multiple parties cooperatively train a machine learning model for use by all parties while the data security of each party is ensured. In such a scenario, multiple data parties have their own data and want to jointly use each other's data for unified modeling (for example, classification models, linear regression models, logistic regression models, etc.), but do not want their own data (especially private data) to be leaked. For example, Internet savings institution A has one batch of user data and bank B has another batch of user data; a training sample set determined from the user data of A and B can be used to train a machine learning model with better performance. Both A and B are willing to participate in model training using each other's user data, but for various reasons A and B do not want their user data to be leaked, or at least do not want the other party to learn their user data.
In some embodiments, cooperative learning may be carried out using a federated learning approach. Federated learning enables efficient machine learning among multiple participants or computing nodes. It allows the parties to train a model without their training samples leaving the local device; only the trained model or the computed gradients are transferred, which protects the privacy of the training samples held by each party.
In some embodiments, federated learning is often applied when the model is computationally intensive and has many parameters. In embodiments of this scenario, the amount of data transmitted during federated learning is large, so the communication load is relatively heavy; therefore, in federated learning scenarios it is often necessary to adopt some method to reduce the communication load during transmission.
In some embodiments of this specification, during each iterative update of the model, the cooperative learning task state updated by the central server (including the trained model gradient values or model parameters) can be compressed. Specifically, by resuming and continuing the updated task, client model training proceeds without interruption and without retraining, thereby reducing the communication load. At the same time, abnormal conditions of the central server are predicted, which ensures the stability of the model.
First Embodiment
A center party selection method in a cooperative deep learning task includes initial selection of the central party in the initial stage of the task and update selection of the central party in the task progress stage.
The central party is initially selected based on the performance score values of the partners during the initial term for deep learning model aggregation; while the task is in progress, the central party is re-selected based on the performance score values of the surviving nodes in the current-term network, and the model aggregation task continues with the updated central party.
Referring to Fig. 1, the initial selection of the central party in the initial stage of the task includes the following steps (a code sketch of the election in steps S160 to S180 is given after step S190).
S110: a cooperative deep learning task is initiated, the total number N of partners participating in the task is determined, and the metrics used for performance comparison and their corresponding weights are negotiated.
S120: each partner initializes its local term parameter, local_term = 0, and calculates its own performance score value (score) using the metrics and weights of step S110.
S130: the global network parameters are initialized, including the global term glob_term = 0, the central party identifier center = 0, the node index n = 0, the coefficient coef and the time window time_interval.
The "global term" is defined relative to the "local term" on each terminal device. For the election of the central party, every partner in the cooperative learning task has the right to make its own choice. After the previous central party becomes abnormal, the partners perceive this one after another and then look for the optimal terminal device to serve as the new central party. To prevent a new central party from being produced repeatedly, when a partner is about to make its own choice of the new central party, it first checks the current "global term" to see whether this parameter equals its own "local term". If they are equal, no new central party has yet been produced; if the current "global term" is already greater than this partner's "local term", another partner has already found the new central party, so this partner does not make any change to the new central party and only updates its "local term" to the value of the "global term", keeping each partner's "local term" consistent with the "global term".
S140: the partners join the network one after another, each providing its performance score value (score) and recording its local IP address (ip) and port number (port); the successive joining is expressed as partner n = n + 1, meaning that the terminal devices participating in this cooperative learning task join one after another in preparation for cooperation.
It should be noted that the central party needs to communicate with each terminal device to realize knowledge sharing, and this communication requires knowing the IP address and port number of each terminal device.
S150: the total number of partners, the coefficient and the time window are used to determine whether partner n is the last node to join the network; if partner n is the last node to join the network, go to step S160, otherwise return to step S140.
In step S150, the method for judging whether partner n is the last node to join the network is: if the number of partners that have joined accounts for more than a preset ratio of the total number N of partners in step S110, and no new partner joins within the time window, then partner n is the last node; otherwise joining continues until the condition is met. The reason for this judgment is that in the IoT scenario the liveness of each device cannot be guaranteed; it is enough for a preset proportion of the devices to join the network for the task to run normally, rather than requiring all N partners to join.
The preset ratio includes but is not limited to 70%, 80% or 90%, preferably 80%.
S160: the last node to join the network compares the total performance scores of the nodes in the network.
S170: the node with the highest total performance score in step S160 is selected as the optimal node Node_Center and serves as the initial central party of this cooperative deep learning task for deep learning model aggregation.
S180: the last node to join the network updates the global network parameters center = Node_Center and glob_term = glob_term + 1, and each node in the network updates its local parameter local_term = local_term + 1.
S190: selection of a suitable central party for the initial stage of the task is complete.
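A minimal Python sketch of steps S160 to S180 as they might run on the last node to join, assuming the scores and addresses provided in step S140 were collected into a plain dictionary; the dictionary layout and the example values are illustrative, while score, Node_Center, center, glob_term and local_term are the names used in the description.

```python
# Scores and addresses collected during S140 (layout and values are illustrative).
existing_nodes = {
    "node-1": {"score": 0.62, "ip": "192.168.1.11", "port": 9001},
    "node-2": {"score": 0.81, "ip": "192.168.1.12", "port": 9001},
    "node-3": {"score": 0.47, "ip": "192.168.1.13", "port": 9001},
}

# Global network parameters initialized in S130, plus this node's local term.
center_info = {"center": 0, "glob_term": 0}
local_term = 0

# S160: the last node to join compares the total performance scores.
# S170: the node with the highest score becomes the optimal node Node_Center.
node_center = max(existing_nodes, key=lambda n: existing_nodes[n]["score"])

# S180: the last node updates the global parameters; every node then
# increments its local term so that local_term matches glob_term again.
center_info["center"] = node_center
center_info["glob_term"] += 1
local_term += 1

print(node_center)  # "node-2" is elected as the initial central party
```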
The partners are terminal devices that perform model training tasks; the performance evaluation indicators of a partner include cpu, memory and energy, and the partner's performance score value is Score = ω1 · Xcpu + ω2 · Xmemory + ω3 · Xenergy, where Xcpu, Xmemory and Xenergy are the metrics for cpu, memory and energy respectively, and ω1, ω2 and ω3 are the weights for cpu, memory and energy.
Referring to Fig. 2, the update selection of the central party in the task progress stage includes the following steps (a code sketch of the term check in step S250 follows these steps).
S210: each node in the network dynamically senses that the central party is abnormal.
S220: each node in the network determines the surviving nodes in the network.
S230: each node in the network compares the total performance scores of the nodes in the network.
S240: each node in the network selects the optimal node Node_Center as the central party of this cooperative deep learning task.
S250: each node in the network determines in turn whether the global network parameter glob_term (global term) equals the node's local term local_term.
If so, i.e. glob_term equals local_term, no central party has yet been elected in the network; the first node to make this determination updates the global network parameters center = Node_Center and glob_term = glob_term + 1, and updates its local parameter local_term = local_term + 1.
If not, i.e. glob_term is greater than local_term, a central party has already been elected in the network, and the node only updates its local parameter local_term = local_term + 1.
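A minimal Python sketch of the step S250 check as one node might perform it, assuming the node has already chosen its candidate node_center in step S240 and can read and write the shared global parameters; the function signature is illustrative, while center, glob_term, local_term and Node_Center are the names used in the description.

```python
def handle_center_update(node_center, center_info, local_term):
    """Step S250: decide whether this node publishes the new central party.

    center_info is the shared global state {"center": ..., "glob_term": ...};
    local_term is this node's own term counter. Returns the updated local_term.
    """
    if center_info["glob_term"] == local_term:
        # No node has announced a new central party for this term yet:
        # publish our candidate and advance the global term.
        center_info["center"] = node_center
        center_info["glob_term"] += 1
    # In both branches the node increments its local term, so it ends the
    # round with local_term equal to the (possibly just advanced) glob_term.
    return local_term + 1
```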
It should be noted that the partners participating in the cooperative deep learning task are terminal devices, such as laptop computers, mobile phones and other devices capable of performing model training tasks. Different devices have different resources, such as cpu, memory and battery, and therefore have different computing and processing capabilities. The performance score value of each terminal device is determined by its performance indicators and the corresponding weights, for example Score = ω1 · Xcpu + ω2 · Xmemory + ω3 · Xenergy.
Within the network architecture, this performance score value (score) is used to evaluate a device's performance.
Operation speed is an important indicator of computer performance. The commonly cited computer operation speed (average operation speed) refers to the number of instructions that can be executed per second, and is generally expressed in millions of instructions per second.
Internal memory, also referred to as main memory, is the memory that the CPU can access directly; the programs to be executed and the data to be processed are stored in main memory. The size of the internal memory reflects the computer's ability to store information in real time: the larger the memory capacity, the more powerful the system and the larger the amount of data that can be processed.
The central party in the cooperative learning task is used to aggregate the model, and the "term" indicates which central party in sequence the current central party is. At the start of the task, the "term" parameter is initialized to 0. When a central party is selected for the first time after the task starts, the "term" changes from 0 to 1, indicating that this central party is the first central party. If the central party becomes abnormal later in the task and can no longer be used for model aggregation, a new central party must be re-selected; the new central party is then the second central party, and the "term" changes from 1 to 2. By analogy, whenever a new central party is produced, the "term" parameter is incremented by one.
After joining the network co_DL, each node in the cooperative deep learning task is managed by the temporary node list Existing_Node; the central party Node_Center, once designated, is also stored in this list. The permanent node list Center_Info manages the central party information, including the current global "term" of the network and the central party identifier.
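A minimal Python sketch of how the two node lists described above might be represented; the field layout is illustrative, while the names Existing_Node, Center_Info, Node_Center and co_DL, and the stored items (score, IP address, port, global term, central party identifier), come from the description.

```python
from dataclasses import dataclass, field

@dataclass
class NodeEntry:
    node_id: str
    score: float
    ip: str
    port: int

@dataclass
class CoDLNetwork:
    """Bookkeeping for the cooperative deep learning network co_DL."""
    # Temporary node list Existing_Node: every node that has joined the network.
    existing_node: list = field(default_factory=list)
    # Permanent node list Center_Info: the global term and the central party identifier.
    center_info: dict = field(default_factory=lambda: {"glob_term": 0, "center": None})

    def join(self, entry: NodeEntry) -> None:
        """A partner joins co_DL and is tracked in Existing_Node."""
        self.existing_node.append(entry)

    def designate_center(self, node_id: str) -> None:
        """Record the designated central party Node_Center and advance the global term."""
        self.center_info["center"] = node_id
        self.center_info["glob_term"] += 1
```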
Second Embodiment
The present invention also provides a computer-readable storage medium on which computer instructions are stored; when the computer instructions are executed, the steps of the aforementioned method are performed. For the method, refer to the detailed introduction in the preceding sections, which is not repeated here.
A person of ordinary skill in the art will understand that all or part of the steps of the methods in the above embodiments can be completed by a program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium. Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.
Third Embodiment
A cooperative learning system based on dynamically updating the central party, wherein the initial central party of the system is communicatively connected with each partner and runs the steps of the aforementioned method; the system includes the following modules (a skeleton sketch of these modules is given after this list):
a partner determination module, which determines the available partners and, from the network connection state, the metrics and corresponding weights used for performance comparison;
a performance evaluation module, which calculates a performance score value from each partner's metrics and corresponding weights to evaluate that partner's performance;
an optimal-selection module, which autonomously selects the optimal partner in the network as the central party for model aggregation according to the evaluated performance;
a communication transmission module, through which the central party establishes connections with all current partners;
a dynamic update learning module, which determines whether the current task is in its initial state; if so, it starts the model aggregation task of cooperative learning and predicts the risk of the central party; if not, it enters central party update, and the task continues until it ends.
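A minimal Python skeleton of the five modules, showing their interfaces only; the class and method names are illustrative, since the patent names the modules but does not define an API.

```python
class PartnerDeterminationModule:
    def available_partners(self, network_state):
        """Determine reachable partners plus the agreed metrics and weights."""
        raise NotImplementedError

class PerformanceEvaluationModule:
    def score(self, metrics, weights):
        """Weighted sum of a partner's metrics (the Score formula above)."""
        return sum(w * x for w, x in zip(weights, metrics))

class OptimalSelectionModule:
    def select_center(self, scored_partners):
        """Pick the highest-scoring partner as Node_Center."""
        return max(scored_partners, key=scored_partners.get)

class CommunicationTransmissionModule:
    def connect_all(self, center, partners):
        """The central party establishes connections with all current partners."""
        raise NotImplementedError

class DynamicUpdateLearningModule:
    def run(self, task):
        """Start aggregation and risk prediction, or enter central party update."""
        if task.is_initial():          # illustrative task interface
            task.start_aggregation()
            task.predict_center_risk()
        else:
            task.update_center()
```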
It should be understood that the system and its modules described in one or more embodiments of this specification can be implemented in various ways. For example, in some embodiments the system and its modules may be implemented in hardware, software, or a combination of software and hardware. The hardware part can be implemented using dedicated logic; the software part can be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. A person skilled in the art will understand that the methods and systems described above may be implemented using computer-executable instructions and/or processor control code, provided for example on a carrier medium such as a disk, CD or DVD-ROM, on programmable memory such as read-only memory (firmware), or on a data carrier such as an optical or electronic signal carrier. The system and its modules of the present application can be implemented not only by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, but also by software executed by various types of processors, or by a combination of such hardware circuits and software (for example, firmware).
It should be noted that different embodiments may produce different beneficial effects; in different embodiments, the beneficial effects that may be produced may be any one or a combination of those above, or any other beneficial effect that may be obtained.
It should be noted that the above description of the processing device and its modules is only for convenience of description and does not limit the present application to the scope of the illustrated embodiments. It will be understood that, after understanding the principle of the system, a person skilled in the art may combine the modules arbitrarily, or connect constituent subsystems to other modules, without departing from this principle.
The basic concepts have been described above. Obviously, to a person skilled in the art, the above detailed disclosure is only an example and does not constitute a limitation of the present application. Although not explicitly stated here, a person skilled in the art may make various modifications, improvements and corrections to the present application; such modifications, improvements and corrections are suggested in the present application and therefore still belong to the spirit and scope of the exemplary embodiments of the present application.
Meanwhile, specific words are used in the present application to describe its embodiments. For example, "one embodiment", "an embodiment" and/or "some embodiments" mean a certain feature, structure or characteristic related to at least one embodiment of the present application. Therefore, it should be emphasized and noted that "an embodiment", "one embodiment" or "an alternative embodiment" mentioned two or more times in different places in this specification does not necessarily refer to the same embodiment. In addition, certain features, structures or characteristics in one or more embodiments of the present application may be combined as appropriate.
In addition, a person skilled in the art will understand that aspects of the present application may be illustrated and described in terms of several patentable categories or situations, including any new and useful process, machine, product or composition of matter, or any new and useful improvement thereof. Accordingly, various aspects of the present application may be performed entirely by hardware, entirely by software (including firmware, resident software, microcode, etc.), or by a combination of hardware and software. The above hardware or software may be referred to as a "data block", "module", "engine", "unit", "component" or "system". In addition, aspects of the present application may be embodied as a computer product located in one or more computer-readable media, the product comprising computer-readable program code.
A computer storage medium may contain a propagated data signal with the computer program code embodied therein, for example on baseband or as part of a carrier wave. The propagated signal may take a variety of forms, including electromagnetic, optical, etc., or a suitable combination. A computer storage medium may be any computer-readable medium, other than a computer-readable storage medium, that can communicate, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device. Program code on a computer storage medium may be transmitted over any suitable medium, including radio, cable, fiber-optic cable, RF or similar media, or any combination of the foregoing.
The computer program code required for the operation of the various parts of this application may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET and Python, conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP and ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages. The program code may run entirely on the user's computer, as a stand-alone software package on the user's computer, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or processing device. In the latter case, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or wide area network (WAN), or connected to an external computer (for example, through the Internet), or used in a cloud computing environment, or as a service, for example software as a service (SaaS).
Some embodiments use numbers describing quantities of components and properties; it should be understood that such numbers used in the description of embodiments are, in some examples, modified by the qualifiers "about", "approximately" or "substantially". Unless otherwise stated, "about", "approximately" or "substantially" indicates that the stated number allows a variation of ±20%. Accordingly, in some embodiments the numerical parameters used in the specification and claims are approximations, which may change depending on the characteristics required by individual embodiments. In some embodiments, numerical parameters should take the specified significant digits into account and adopt a general method of retaining digits. Although the numerical ranges and parameters used to confirm the breadth of their ranges in some embodiments of this application are approximations, in specific embodiments such numerical values are set as precisely as practicable.
It should also be noted that the terms "comprise", "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or device that includes the element.
A person skilled in the art should understand that the embodiments of the present application may be provided as a method, an apparatus, a system or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (8)
- A center party selection method in a cooperative deep learning task, characterized in that the method includes initial selection of the central party in the initial stage of the task and update selection of the central party in the task progress stage, wherein the central party is initially selected based on the performance score values of the partners during the initial term for deep learning model aggregation, and, while the task is in progress, the central party is re-selected based on the performance score values of the surviving nodes in the current-term network and the model aggregation task continues with the updated central party.
- The method according to claim 1, characterized in that the initial selection of the central party in the initial stage of the task includes the following steps: S110: a cooperative deep learning task is initiated, the total number N of partners participating in the task is determined, and the metrics used for performance comparison and their corresponding weights are negotiated; S120: each partner initializes its local term parameter, local_term = 0, and calculates its own performance score value using the metrics and weights of step S110; S130: the global network parameters are initialized, including the global term glob_term = 0, the central party identifier center = 0, the node index n = 0, the coefficient coef and the time window time_interval; S140: the partners join the network one after another, each providing its performance score value and recording its local IP address and port number; S150: the total number of partners, the coefficient and the time window are used to determine whether partner n is the last node to join the network; if partner n is the last node to join the network, go to step S160, otherwise return to step S140; S160: the last node to join the network compares the total performance scores of the nodes in the network; S170: the node with the highest total performance score in step S160 is selected as the optimal node Node_Center and serves as the initial central party of this cooperative deep learning task for deep learning model aggregation; S180: the last node to join the network updates the global network parameters center = Node_Center and glob_term = glob_term + 1, and each node in the network updates its local parameter local_term = local_term + 1; S190: selection of a suitable central party for the initial stage of the task is complete.
- The method according to claim 2, characterized in that the partners are terminal devices that perform model training tasks, the performance evaluation indicators of a partner include cpu, memory and energy, and the partner's performance score value is Score = ω1 · Xcpu + ω2 · Xmemory + ω3 · Xenergy, where Xcpu, Xmemory and Xenergy are the metrics for cpu, memory and energy respectively, and ω1, ω2 and ω3 are the weights for cpu, memory and energy.
- The method according to claim 2, characterized in that, in step S150, the method for judging whether partner n is the last node to join the network is: if the number of partners that have joined accounts for more than a preset ratio of the total number N of partners in step S110, and no new partner joins within the time window, then partner n is the last node; otherwise joining continues until the condition is met.
- The method according to claim 4, characterized in that the preset ratio is 70%, 80% or 90%.
- The method according to any one of claims 1 to 5, characterized in that the update selection of the central party in the task progress stage includes the following steps: S210: each node in the network dynamically senses that the central party is abnormal; S220: each node in the network determines the surviving nodes in the network; S230: each node in the network compares the total performance scores of the nodes in the network; S240: each node in the network selects the optimal node Node_Center as the central party of this cooperative deep learning task; S250: each node in the network determines in turn whether the global network parameter glob_term (global term) equals the node's local term local_term; if so, no central party has yet been elected in the network, and the first node to make this determination updates the global network parameters center = Node_Center and glob_term = glob_term + 1 and updates its local parameter local_term = local_term + 1; if not, a central party has already been elected in the network, and the node only updates its local parameter local_term = local_term + 1.
- A computer-readable storage medium on which computer instructions are stored, characterized in that, when the computer instructions are executed, the steps of the method according to any one of claims 1 to 6 are performed.
- A cooperative learning system based on dynamically updating the central party, wherein the initial central party of the system is communicatively connected with each partner and runs the steps of the method according to any one of claims 1 to 6, characterized in that the system includes: a partner determination module, which determines the available partners and, from the network connection state, the metrics and corresponding weights used for performance comparison; a performance evaluation module, which calculates a performance score value from each partner's metrics and corresponding weights to evaluate that partner's performance; an optimal-selection module, which autonomously selects the optimal partner in the network as the central party for model aggregation according to the evaluated performance; a communication transmission module, through which the central party establishes connections with all current partners; and a dynamic update learning module, which determines whether the current task is in its initial state; if so, it starts the model aggregation task of cooperative learning and predicts the risk of the central party; if not, it enters central party update, and the task continues until it ends.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011458168.9A CN112686369B (zh) | 2020-12-10 | 2020-12-10 | Center party selection method, storage medium and system
CN202011458168.9 | 2020-12-10 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022121030A1 (zh) | 2022-06-16
Family
ID=75449179
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/140832 WO2022121030A1 (zh) | 2020-12-10 | 2020-12-29 | Center party selection method, storage medium and system
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112686369B (zh) |
WO (1) | WO2022121030A1 (zh) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111538608A (zh) * | 2020-04-30 | 2020-08-14 | 深圳前海微众银行股份有限公司 | 预防终端设备宕机的方法、终端设备及存储介质 |
- 2020-12-10 CN CN202011458168.9A patent/CN112686369B/zh active Active
- 2020-12-29 WO PCT/CN2020/140832 patent/WO2022121030A1/zh active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103676649A (zh) * | 2013-10-09 | 2014-03-26 | 江苏师范大学 | 局部自适应小波神经网络训练系统、设备及方法 |
US20200005191A1 (en) * | 2018-06-28 | 2020-01-02 | International Business Machines Corporation | Ranking and updating machine learning models based on data inputs at edge nodes |
CN110929880A (zh) * | 2019-11-12 | 2020-03-27 | 深圳前海微众银行股份有限公司 | 一种联邦学习方法、装置及计算机可读存储介质 |
CN111475853A (zh) * | 2020-06-24 | 2020-07-31 | 支付宝(杭州)信息技术有限公司 | 一种基于分布式数据的模型训练方法及系统 |
CN111768008A (zh) * | 2020-06-30 | 2020-10-13 | 平安科技(深圳)有限公司 | 联邦学习方法、装置、设备和存储介质 |
CN111966698A (zh) * | 2020-07-03 | 2020-11-20 | 华南师范大学 | 一种基于区块链的可信联邦学习方法、系统、装置及介质 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117009095A (zh) * | 2023-10-07 | 2023-11-07 | 湘江实验室 | 一种隐私数据处理模型生成方法、装置、终端设备及介质 |
CN117009095B (zh) * | 2023-10-07 | 2024-01-02 | 湘江实验室 | 一种隐私数据处理模型生成方法、装置、终端设备及介质 |
CN118138589A (zh) * | 2024-05-08 | 2024-06-04 | 中国科学院空天信息创新研究院 | 服务簇调度方法、装置、设备及介质 |
Also Published As
Publication number | Publication date |
---|---|
CN112686369A (zh) | 2021-04-20 |
CN112686369B (zh) | 2024-02-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022121030A1 (zh) | 中心方选择方法、存储介质和系统 | |
WO2021208914A1 (zh) | 基于网络调度的算力共享方法及相关产品 | |
CN113011602B (zh) | 一种联邦模型训练方法、装置、电子设备和存储介质 | |
WO2021249490A1 (zh) | 区块链网络中的通信方法、业务数据传输方法 | |
US20200105273A1 (en) | Routing Voice Commands to Virtual Assistants | |
WO2023092792A1 (zh) | 联邦学习建模优化方法、电子设备、存储介质及程序产品 | |
US9740753B2 (en) | Using spheres-of-influence to characterize network relationships | |
WO2022121026A1 (zh) | 更新中心方的合作式学习方法、存储介质、终端和系统 | |
JP2019518292A (ja) | 会話サンプルに基づいて自然言語機械学習を使用してユーザ要求に応答する技術 | |
WO2021000696A1 (zh) | 一种添加好友的方法与设备 | |
US20140140331A1 (en) | Schemes for connecting to wireless network | |
Zhou et al. | A time-ordered aggregation model-based centrality metric for mobile social networks | |
US10666803B2 (en) | Routing during communication of help desk service | |
WO2022111042A1 (zh) | 一种多节点分布式训练方法、装置、设备及可读介质 | |
WO2023284387A1 (zh) | 基于联邦学习的模型训练方法、装置、系统、设备和介质 | |
WO2024109454A1 (zh) | 一种关联网络的标签传播方法、装置及计算机可读存储介质 | |
TW202113716A (zh) | 基於信用的交互處理方法以及裝置 | |
WO2023201663A1 (zh) | 感知代理sbp终止方法及装置、电子设备及存储介质 | |
WO2023193572A1 (zh) | 一种数据管理方法、装置、服务器和存储介质 | |
US9330359B2 (en) | Degree of closeness based on communication contents | |
US10063502B2 (en) | Generation of a communication request based on visual selection | |
CN114780465A (zh) | 可共享远程直接数据存取链接的创建方法及装置 | |
CN113342759A (zh) | 内容共享方法、装置、设备以及存储介质 | |
US9569802B2 (en) | Invitation management based on invitee's behavior | |
CN114221736A (zh) | 数据处理的方法、装置、设备及介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20964959 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20964959 Country of ref document: EP Kind code of ref document: A1 |