CN109558909A - Joint deep learning method based on data distribution - Google Patents

Joint deep learning method based on data distribution

Info

Publication number
CN109558909A
CN109558909A (application number CN201811482576.0A)
Authority
CN
China
Prior art keywords
node
data
training
data distribution
learning method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811482576.0A
Other languages
Chinese (zh)
Other versions
CN109558909B (en)
Inventor
王智
胡成豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Graduate School Tsinghua University
Original Assignee
Shenzhen Graduate School Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Tsinghua University filed Critical Shenzhen Graduate School Tsinghua University
Priority to CN201811482576.0A priority Critical patent/CN109558909B/en
Priority to PCT/CN2019/071857 priority patent/WO2020113782A1/en
Publication of CN109558909A publication Critical patent/CN109558909A/en
Application granted granted Critical
Publication of CN109558909B publication Critical patent/CN109558909B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 - Network analysis or design
    • H04L41/142 - Network analysis or design using statistical or mathematical methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Analysis (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Algebra (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Cable Transmission Systems, Equalization Of Radio And Reduction Of Echo (AREA)

Abstract

The present invention proposes a machine deep learning method based on data distribution. According to the quantity and types of data each node possesses, the importance of the node in the training process is assessed and used to guide transmission and model aggregation. According to the data distribution of the participating nodes, the transmission frequencies of the nodes during training are made different, so that nodes with better data distributions propagate their own models as much as possible, while the others instead receive more models from other nodes. This not only reduces the impact of unbalanced data distribution on the training result but, because nodes with poorer data distributions propagate their own models as little as possible, also reduces network transmission without affecting the training effect.

Description

Joint deep learning method based on data distribution
Technical field
The present invention relates to a machine deep learning method based on data distribution.
Background art
With the continuous development of neural networks and deep learning technology, industry increasingly tends to use deep network models to solve problems such as image classification, face alignment and speech recognition. In order to obtain a satisfactory deep model, massive amounts of data are generally required for training. Traditional deep learning methods require all training data to be concentrated on one machine, and the model is trained using high-speed parallel computing equipment such as GPUs.
Since data grows far faster than the computing capability of a single machine, distributed deep learning methods have been proposed to train cooperatively on multiple machines. However, they essentially still require all training data to be collected at a data center first, after which computing power is shared within a small-scale local area network. In practice, data not only contains user privacy; in the eyes of enterprises, data directly represents value, so people obviously cannot hand such data over to a data center. In order to learn from and train on these valuable data, the most straightforward approach is to extend the distributed deep learning methods used in local area networks to the wide area network.
What traditional distributed deep learning methods transmit between individual nodes are the intermediate results of the model training process, such as model parameters and gradients, rather than the users' raw data. In practical applications this kind of algorithm has the following problems: 1. it needs the coordination of a parameter server and is prone to a single point of failure; 2. it requires the data of each node to be independent and identically distributed with respect to all data, which is almost impossible in reality, where data distribution is almost always unbalanced; 3. the training process incurs huge network overhead.
At present, there is still no deep learning method for differing data distributions that can efficiently and effectively solve these problems in a wide area network.
Summary of the invention
The problem to be solved by the present invention is to propose a machine deep learning method based on data distribution that reduces the impact of unbalanced data distribution on the training result and, without affecting the training effect, reduces network transmission.
To this end, the machine deep learning method based on data distribution proposed by the present invention includes the following steps: A, the nodes participating in training establish connections, determine the maximum number of training rounds, and exchange data information; B, the importance of each node is calculated; C, the transmission interval of each node is calculated according to its importance; D, the model parameters of all nodes are initialized and each node's round counter is set to 0; E, each node trains to obtain its temporary model parameters, and each node's round counter is incremented; F, for every node, according to its transmission interval, it is judged whether the current round is a sending round; if so, the node sends the temporary model parameters trained in this round to the other nodes; otherwise no transmission is performed; G, each node determines which nodes it will receive models from in this round and, after receiving all of them, aggregates them with its local model; H, it is judged whether the current round has reached the maximum number of training rounds; if not, return to step E; if so, training ends.
Some embodiments of the invention may also include the following improvements:
In step A, the data information exchanged includes the number of training samples a node possesses and the number of classes the node covers.
In step B, the importance pi of the i-th node is calculated from the following quantities:
qi is the number of training samples possessed by the i-th node and vi is the number of classes it covers; Q is the total number of training samples of all nodes combined, and V is the total number of classes covered by all nodes;
qi/Q represents the proportion of the overall data held by node i, and vi/V represents the coverage of node i's data classes; α balances the influence of quantity and type, preventing a node with a huge amount of data of a single type, or one with a wide variety of classes but very few samples, from obtaining an excessively high importance; α ranges from 0 to 1.
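For illustration only, a minimal Python sketch of this importance calculation. The original publication gives the formula as an image that is not reproduced here, so the convex combination below is an assumed form of it, and the function name is hypothetical:

```python
def node_importance(q_i, v_i, Q, V, alpha=0.5):
    """Importance of node i from its sample count q_i and class coverage v_i.

    Q is the total number of training samples over all nodes, V the total
    number of covered classes.  alpha in [0, 1] balances quantity vs. type;
    the convex combination below is an ASSUMED form of the patent's formula.
    """
    assert 0.0 <= alpha <= 1.0
    return alpha * (q_i / Q) + (1.0 - alpha) * (v_i / V)
```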
In step C, for node i, the transmission interval is τi = ⌈pmax/pi⌉,
where pmax is the maximum importance among all nodes.
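This interval rule can be written directly; a small sketch (the function name is hypothetical):

```python
import math

def transmission_interval(p_i, p_max):
    # Nodes with higher importance get a smaller interval, i.e. they send more often.
    return math.ceil(p_max / p_i)
```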
In step D, at initialization, the initial models of all nodes are made consistent, i.e. for any two nodes i, j, Wi(0) = Wj(0), where Wi(0) and Wj(0) are the initial model parameters of nodes i and j, respectively, in the initial round.
In step E, in any round t, an arbitrary node i performs one traditional training pass on its local model Wi(t) using its local data to obtain its temporary model parameters.
In step F, for an arbitrary node i, after the temporary model parameters are obtained, a judgment is made: if the current round number t is an integer multiple of node i's transmission interval τi, node i sends its temporary model parameters to the other nodes; if t is not an integer multiple of τi, no transmission is performed.
In step G, node i determines which nodes it will receive models from in this round; this set of nodes is denoted Γi,t. After receiving all models from Γi,t, it aggregates them with its local model:
that is, the received temporary model parameters and the local temporary model parameters are averaged, weighted by the importance of each node, and the result is node i's final result for this round of training.
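A rough sketch of this aggregation step, assuming model parameters are NumPy arrays; normalising the importance weights over the local node plus the senders in Γi,t is an assumption about the exact weighting, which the original formula image does not reproduce here:

```python
import numpy as np

def aggregate(local_params, received, importances, i):
    """Importance-weighted average of node i's local temporary parameters and
    the temporary parameters received from the nodes in Gamma_{i,t}.

    received    : dict {node_id: parameter array}
    importances : dict {node_id: importance p_j}
    """
    ids = [i] + list(received.keys())
    weights = np.array([importances[j] for j in ids], dtype=float)
    weights /= weights.sum()                       # assumed normalisation
    stacked = np.stack([local_params] + list(received.values()))
    return np.tensordot(weights, stacked, axes=1)  # weighted average over senders
```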
Nodes interact with each other in P2P form.
When a node goes offline or a new node is added, steps A-C are re-executed on all nodes, step D is executed on the newly added node, and then all nodes continue with steps E-H.
In step A, nodes are grouped according to their distribution feature vectors, so that the deviation between the data of all nodes in each group and the overall data is less than a threshold β, where 0 ≤ β ≤ 1, and the nodes in the same group have the same transmission interval.
The present invention also proposes a machine deep learning device based on data distribution, comprising computer software for executing the above method.
After adopting the above technical scheme, the present invention has the following advantages: according to the quantity and types of data each node possesses, the importance of the node in the training process is assessed and used to guide transmission and model aggregation; and according to the data distribution of the participating nodes, the transmission frequencies of the nodes during training are made different, so that nodes with better data distributions propagate their own models as much as possible, while the others instead receive more models from other nodes. In this way, not only is the impact of unbalanced data distribution on the training result reduced but, because nodes with poorer data distributions propagate their own models as little as possible, network transmission can also be reduced without affecting the training effect.
Brief description of the drawings
Fig. 1 is a schematic diagram of the overall flow of an embodiment of the present invention.
Fig. 2 is a system schematic diagram of an embodiment of the present invention.
Fig. 3 is a schematic flow diagram of the grouping in an embodiment of the present invention.
Specific embodiment
Embodiment one
As shown in Fig. 1, the learning method of this embodiment is as follows:
1. The nodes participating in training establish connections, determine the maximum number of training rounds T, and exchange data information. Suppose a total of n nodes participate in training, where the i-th node possesses qi training samples covering vi classes. The total number of training samples of all nodes combined is denoted Q, and the total number of covered classes is V; the importance pi of node i can then be calculated from these quantities:
qi/Q represents the proportion of the overall data held by node i, and vi/V represents the coverage of its data classes. α balances the influence of quantity and type, preventing a node with a huge amount of data of a single type, or one with a wide variety of classes but very few samples, from obtaining an excessively high importance; α ranges from 0 to 1. It can be seen that the more data a node has and the more complete its class coverage, the better its data distribution is considered to be and the higher its importance.
2. The transmission interval of each node is calculated according to its importance; for node i, the transmission interval is τi = ⌈pmax/pi⌉,
where pmax is the maximum importance among all nodes: dividing pmax by node i's importance pi and rounding up makes the transmission interval as close to inversely proportional to importance as possible. It is easy to see that the node with the greatest importance has the smallest transmission interval and therefore transmits most frequently.
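For illustration, with purely hypothetical importance values the intervals work out as follows:

```python
import math

importances = [0.6, 0.3, 0.2]              # illustrative values only
p_max = max(importances)
intervals = [math.ceil(p_max / p) for p in importances]
print(intervals)                           # -> [1, 2, 3]
```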
3. The model parameters W of all nodes are initialized and the current training round is set to t = 0; the initial models of all nodes are made consistent, i.e. for any two nodes i, j, Wi(0) = Wj(0). Training then proceeds as follows:
3.1 In round t, an arbitrary node i performs one traditional training pass on its local model Wi(t) using its local data to obtain its temporary model parameters, for example by using gradient descent to step the parameters along the negative gradient of the local loss.
The gradient descent method here can also be replaced by other optimisation methods.
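A minimal sketch of one such local pass using plain mini-batch SGD; the gradient function, learning rate and batch size are placeholders for whatever model and loss a deployment actually uses:

```python
import numpy as np

def local_sgd_pass(w, data, labels, grad_fn, lr=0.01, batch_size=32):
    """One pass of mini-batch SGD over the node's local data.

    w       : current local model parameters W_i(t) (NumPy array)
    grad_fn : callable (w, x_batch, y_batch) -> gradient; a placeholder for
              the actual model/loss, which the patent does not prescribe
    Returns the temporary model parameters of this round.
    """
    w = w.copy()
    idx = np.random.permutation(len(data))
    for start in range(0, len(data), batch_size):
        batch = idx[start:start + batch_size]
        w -= lr * grad_fn(w, data[batch], labels[batch])
    return w
```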
3.2 For an arbitrary node i, after the temporary model parameters are obtained, a judgment is made:
if the current round number t is an integer multiple of node i's transmission interval τi, node i sends its temporary model parameters to the other nodes; if t is not an integer multiple of τi, no transmission is performed.
3.3 Since the transmission intervals are globally shared, node i can determine which nodes it will receive models from in this round; this set of nodes is denoted Γi,t. After receiving all models from Γi,t, it aggregates them with its local model.
The received temporary model parameters and the local temporary model parameters are averaged, weighted by the importance of each node, and the result is node i's final result for this round of training. It is then judged whether the current round has reached the maximum number of training rounds: if t < T, return to step 3.1; otherwise training ends.
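Putting the pieces together, a rough end-to-end sketch of this embodiment's loop, reusing the hypothetical helpers sketched above (node_importance, transmission_interval, local_sgd_pass, aggregate); the node bookkeeping layout is an assumption:

```python
import math

def train(nodes, Q, V, T, grad_fn, alpha=0.5):
    """Sketch of embodiment one.  `nodes` is a dict
    {node_id: {"q": sample count, "v": classes covered, "data": X, "labels": y, "w": W_i(0)}}
    where every node starts from identical parameters W_i(0)."""
    # Steps 1-2: importance and transmission interval per node.
    p = {i: node_importance(n["q"], n["v"], Q, V, alpha) for i, n in nodes.items()}
    tau = {i: transmission_interval(p[i], max(p.values())) for i in nodes}
    for t in range(1, T + 1):
        # Step 3.1: one local training pass per node.
        temp = {i: local_sgd_pass(n["w"], n["data"], n["labels"], grad_fn)
                for i, n in nodes.items()}
        # Step 3.2: only nodes whose interval divides t broadcast their model.
        senders = {i for i in nodes if t % tau[i] == 0}
        # Step 3.3: importance-weighted aggregation with whatever was received.
        for i, n in nodes.items():
            received = {j: temp[j] for j in senders if j != i}
            n["w"] = aggregate(temp[i], received, p, i)
    return {i: n["w"] for i, n in nodes.items()}
```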
Embodiment two
The main idea of this embodiment: it was found that, in the scheme of embodiment one, the training effect is far more sensitive to the distribution of data classes than to the distribution of data quantity; when the data classes covered by some node are extremely scarce, letting it send on its own can seriously harm the training effect. Therefore the idea of grouping is used: each working node (Worker) sends its own data quantity and class information to a group controller (Group Controller), which uses this information to combine several nodes into training groups (Group), so that the deviation between the data of all nodes in each group and the overall data types is not too large. The group controller can be served by any working node; when a new working node joins or an existing node goes offline, the group controller performs the grouping again. Nodes in the same group have the same transmission interval, so that every transmission is effective. The characteristics of the overall data distribution and the data distribution characteristics of each node can each be represented by a vector (the representation is given in the specific implementation below), so the cosine similarity between vectors can be used to measure the deviation of class distributions. A threshold β (0 ≤ β ≤ 1) is set as the basis for grouping: when β is 1, the class distribution of each node in a group must be exactly consistent with that of all data; for example, if the ratio of each class in all data is 1:1, then the ratio of each class of data within the group must also be 1:1. Conversely, when β is 0, there is no requirement on the class distribution, and any single node can form a group on its own. The calculation of the transmission interval (system schematic in Fig. 2) still depends on importance; only the way it is calculated changes. A specific embodiment follows.
1. The nodes participating in training establish connections, determine the maximum number of training rounds T and the class-distribution deviation threshold β (0 ≤ β ≤ 1), and exchange data information. Suppose a total of n nodes participate in training and the training data can be divided into v classes, where the i-th node possesses qi,1 … qi,v training samples of the respective classes; the data distribution characteristics of the i-th node can then be represented by the vector Di = (qi,1, …, qi,v). Similarly, the distribution feature vector of the data of all nodes combined is called the target distribution feature vector D. Nodes are grouped according to their distribution feature vectors as follows (a code sketch is given after steps a) and b)):
a) For an arbitrary node i, if the cosine similarity between Di and D is not less than β, node i forms a group on its own; Di is written as its magnitude times a unit vector, and node i's importance pi is then computed from it.
b) The remaining nodes are combined. For each node whose similarity is insufficient, the other remaining nodes are traversed to find nodes that raise the similarity and form a combination with them, stopping once the similarity of the combined data meets the requirement, so that the distribution of the combined data of every combination C satisfies the threshold. The importance of each node i in combination C is then derived from the distribution vector of all data in C divided among its members, where |C| is the number of nodes in C. The grouping flow is shown schematically in Fig. 3.
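A rough sketch of this grouping procedure; the greedy merge order below is an assumption, since the text only requires that each finished group's combined distribution meet the similarity threshold β:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def group_nodes(dist_vectors, target, beta):
    """Greedy grouping of nodes by class-distribution similarity.

    dist_vectors : dict {node_id: vector D_i of per-class sample counts}
    target       : vector D of per-class counts over all nodes combined
    beta         : similarity threshold in [0, 1]
    """
    groups, leftover = [], []
    for nid, d in dist_vectors.items():
        # a) a node whose own distribution already matches the target forms its own group
        if cosine(d, target) >= beta:
            groups.append([nid])
        else:
            leftover.append(nid)
    # b) greedily merge the remaining nodes until each group's combined
    #    distribution is close enough to the target (merge order is an assumption)
    while leftover:
        group = [leftover.pop(0)]
        combined = dist_vectors[group[0]].astype(float)
        while leftover and cosine(combined, target) < beta:
            nxt = max(leftover, key=lambda j: cosine(combined + dist_vectors[j], target))
            leftover.remove(nxt)
            group.append(nxt)
            combined += dist_vectors[nxt]
        groups.append(group)  # appended even if leftover ran out before the threshold
    return groups
```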
The embodiments of the present invention have the following advantages:
1. The training process does not need the participation of a parameter server; the nodes communicate directly with each other, so no single point of failure arises.
2. The training process takes the distribution of data into account, using models trained on nodes with more complete data distributions to guide the overall training direction, which improves training efficiency.
3. Nodes with poorer data distributions send less frequently, which reduces the overall network overhead.
The present invention can be used in a variety of scenarios, for example collaborative training between enterprises: when many enterprises need a common model, the model can be trained jointly and efficiently without sharing data directly. Another example is collaborative training between an enterprise and its users: the enterprise can use the data held by users to refine its model, while users need not worry about leaking private data; and so on.
The above is an illustration of the present invention and should not be regarded as limiting it. Those skilled in the art may make some variations under the inspiration of the present invention, and such variations fall within the protection scope of the present invention.

Claims (10)

1. A machine deep learning method based on data distribution, characterized by comprising the following steps:
A. the nodes participating in training establish connections, determine the maximum number of training rounds, and exchange data information;
B. the importance of each node is calculated;
C. the transmission interval of each node is calculated according to its importance;
D. the model parameters of all nodes are initialized, and each node's round counter is set to 0;
E. each node trains to obtain its temporary model parameters, and each node's round counter is incremented;
F. for every node, according to its transmission interval, it is judged whether the current round is a sending round; if so, the node sends the temporary model parameters trained in this round to the other nodes; otherwise no transmission is performed;
G. each node determines which nodes it will receive models from in this round and, after receiving all of them, aggregates them with its local model;
H. it is judged whether the current round has reached the maximum number of training rounds; if not, return to step E; if so, training ends.
2. The machine deep learning method based on data distribution according to claim 1, characterized in that: in step A, the data information exchanged includes the number of training samples a node possesses and the number of classes the node covers; nodes are grouped according to their distribution feature vectors, so that the deviation between the data of all nodes in each group and the overall data is less than a threshold, and the nodes in the same group have the same transmission interval.
3. The machine deep learning method based on data distribution according to claim 2, characterized in that: in step B, the importance pi of the i-th node is calculated from the following quantities:
qi is the number of training samples possessed by the i-th node and vi is the number of classes it covers; Q is the total number of training samples of all nodes combined, and V is the total number of classes covered by all nodes;
qi/Q represents the proportion of the overall data held by node i, and vi/V represents the coverage of node i's data classes; α balances the influence of quantity and type, preventing a node with a huge amount of data of a single type, or one with a wide variety of classes but very few samples, from obtaining an excessively high importance; α ranges from 0 to 1.
4. The machine deep learning method based on data distribution according to claim 3, characterized in that: in step C, for node i, the transmission interval is τi = ⌈pmax/pi⌉,
where pmax is the maximum importance among all nodes.
5. The machine deep learning method based on data distribution according to claim 1, characterized in that: in step D, at initialization, the initial models of all nodes are made consistent, i.e. for any two nodes i, j, Wi(0) = Wj(0), where Wi(0) and Wj(0) are the initial model parameters of nodes i and j, respectively, in the initial round.
6. The machine deep learning method based on data distribution according to claim 1, characterized in that: in step E, in any round t, an arbitrary node i performs one traditional training pass on its local model Wi(t) using its local data to obtain its temporary model parameters.
7. The machine deep learning method based on data distribution according to claim 6, characterized in that: in step F, for an arbitrary node i, after the temporary model parameters are obtained, a judgment is made: if the current round number t is an integer multiple of node i's transmission interval τi, node i sends its temporary model parameters to the other nodes; if t is not an integer multiple of τi, no transmission is performed.
8. The machine deep learning method based on data distribution according to claim 7, characterized in that: in step G, node i determines which nodes it will receive models from in this round; this set of nodes is denoted Γi,t; after receiving all models from Γi,t, it aggregates them with its local model:
that is, the received temporary model parameters and the local temporary model parameters are averaged, weighted by the importance of each node, and the result is node i's final result for this round of training.
9. The machine deep learning method based on data distribution according to claim 1, characterized in that: nodes interact with each other in P2P form; when a node goes offline or a new node is added, steps A-C are re-executed on all nodes, step D is executed on the newly added node, and then all nodes continue with steps E-H.
10. A machine deep learning device based on data distribution, comprising computer software for execution to implement the method according to any one of claims 1 to 9.
CN201811482576.0A 2018-12-05 2018-12-05 Machine deep learning method based on data distribution Active CN109558909B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811482576.0A CN109558909B (en) 2018-12-05 2018-12-05 Machine deep learning method based on data distribution
PCT/CN2019/071857 WO2020113782A1 (en) 2018-12-05 2019-01-16 Data-distribution-based joint deep learning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811482576.0A CN109558909B (en) 2018-12-05 2018-12-05 Machine deep learning method based on data distribution

Publications (2)

Publication Number Publication Date
CN109558909A true CN109558909A (en) 2019-04-02
CN109558909B CN109558909B (en) 2020-10-23

Family

ID=65868926

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811482576.0A Active CN109558909B (en) 2018-12-05 2018-12-05 Machine deep learning method based on data distribution

Country Status (2)

Country Link
CN (1) CN109558909B (en)
WO (1) WO2020113782A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113033800A (en) * 2019-12-25 2021-06-25 Shenzhen Research Institute, The Hong Kong Polytechnic University Distributed deep learning method and device, parameter server and main working node
CN114650288A (en) * 2020-12-02 2022-06-21 中国科学院深圳先进技术研究院 Distributed training method and system, terminal device and computer readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105743783A (en) * 2016-04-12 2016-07-06 同济大学 Car-Networking Node Selecting Method based on BS-TS and Autoencoder Network, and Accessibility Routing Mechanism Thereof
CN107786958A (en) * 2017-10-12 2018-03-09 中国科学院合肥物质科学研究院 A kind of data fusion method based on deep learning model
WO2018171925A1 (en) * 2017-03-22 2018-09-27 International Business Machines Corporation Decision-based data compression by means of deep learning

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106033554A (en) * 2015-03-13 2016-10-19 中国科学院声学研究所 Big data processing method for two-stage depth learning model based on sectionalization
US10380500B2 (en) * 2015-09-24 2019-08-13 Microsoft Technology Licensing, Llc Version control for asynchronous distributed machine learning
CN106815644B (en) * 2017-01-26 2019-05-03 北京航空航天大学 Machine learning method and system
US20180300653A1 (en) * 2017-04-18 2018-10-18 Distributed Systems, Inc. Distributed Machine Learning System
CN107944566B (en) * 2017-11-28 2020-12-22 杭州云脑科技有限公司 Machine learning method, main node, working node and system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105743783A (en) * 2016-04-12 2016-07-06 同济大学 Car-Networking Node Selecting Method based on BS-TS and Autoencoder Network, and Accessibility Routing Mechanism Thereof
WO2018171925A1 (en) * 2017-03-22 2018-09-27 International Business Machines Corporation Decision-based data compression by means of deep learning
CN107786958A (en) * 2017-10-12 2018-03-09 中国科学院合肥物质科学研究院 A kind of data fusion method based on deep learning model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HUAI-ZHI WANG: "Deep learning based ensemble approach for probabilistic wind power", Applied Energy *
盛益强 et al.: "Coarse-grained distributed deep learning for personalized data mining", Network New Media Technology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113033800A (en) * 2019-12-25 2021-06-25 Shenzhen Research Institute, The Hong Kong Polytechnic University Distributed deep learning method and device, parameter server and main working node
CN113033800B (en) * 2019-12-25 2023-11-17 Shenzhen Research Institute, The Hong Kong Polytechnic University Distributed deep learning method and device, parameter server and main working node
CN114650288A (en) * 2020-12-02 2022-06-21 中国科学院深圳先进技术研究院 Distributed training method and system, terminal device and computer readable storage medium
CN114650288B (en) * 2020-12-02 2024-03-08 中国科学院深圳先进技术研究院 Distributed training method and system, terminal equipment and computer readable storage medium

Also Published As

Publication number Publication date
WO2020113782A1 (en) 2020-06-11
CN109558909B (en) 2020-10-23

Similar Documents

Publication Publication Date Title
Zhang et al. Adaptive federated learning on non-iid data with resource constraint
CN110851429B (en) Edge computing credible cooperative service method based on influence self-adaptive aggregation
CN105207821B (en) A kind of network synthesis performance estimating method of service-oriented
CN110263908A (en) Federal learning model training method, equipment, system and storage medium
CN110490335A (en) A kind of method and device calculating participant's contribution rate
CN109558909A (en) Joint deep learning method based on data distribution
CN105183543B (en) A kind of gunz based on mobile social networking calculates online method for allocating tasks
CN106454958B (en) A kind of network resource allocation method and device
CN104035987B (en) A kind of micro blog network user force arrangement method
CN103152436A (en) P2P (peer-to-peer) internet trust cloud model computing method based on interest group
CN113283778B (en) Layered convergence federal learning method based on security evaluation
CN114301935B (en) Reputation-based internet of things edge cloud collaborative federal learning node selection method
CN114553661A (en) Mobile user equipment clustering training method for wireless federal learning
Liu et al. Correlated analytic hierarchy process
CN113672684B (en) Layered user training management system and method for non-independent co-distributed data
CN115952532A (en) Privacy protection method based on federation chain federal learning
CN115292413A (en) Crowd sensing excitation method based on block chain and federal learning
CN110188123A (en) User matching method and equipment
CN109041065B (en) Node trust management method for two-hop multi-copy ad hoc network
CN107103416A (en) Based on the method and device for mutually commenting the role of result feedback regulation ability weight to distribute
CN110209704A (en) User matching method and equipment
Allahbakhsh et al. Harnessing implicit teamwork knowledge to improve quality in crowdsourcing processes
Schneebeli et al. A practical federated learning framework for small number of stakeholders
Zhang et al. Adaptive Digital Twin Placement and Transfer in Wireless Computing Power Network
CN106603657A (en) IMS-based video conference resource optimization method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant