CN109710289A - Method for updating a distributed parameter server based on a deep reinforcement learning algorithm - Google Patents

Method for updating a distributed parameter server based on a deep reinforcement learning algorithm

Info

Publication number
CN109710289A
CN109710289A
Authority
CN
China
Prior art keywords
helper
parameter
working node
node
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811568466.6A
Other languages
Chinese (zh)
Inventor
王堃
陆静远
孙雁飞
亓晋
岳东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN201811568466.6A
Publication of CN109710289A
Pending (current legal status)

Landscapes

  • Computer And Data Communications (AREA)

Abstract

The present invention discloses a method for updating a distributed parameter server based on a deep reinforcement learning algorithm, comprising the following steps: S1, deploying an agent on each working node and establishing a neural network model; S2, the working nodes obtain updated parameters from the parameter server; S3, the task scheduler of each working-node group determines the helper and helpee identities of the working nodes; S4, each working node takes a batch of input data from its local training set; S5, a helper backs up part of the input data of its helpee in advance, and when the helpee is a slow node, the helper computes part of that input data for it; S6, each working node computes parameter gradients and sends them to the parameter server; S7, the parameter server performs a global update according to the parameter gradients sent by the working nodes and calculates new parameters. The present invention effectively solves the problem of some nodes working slowly and improves the convergence time of the parameter server.

Description

Method for updating a distributed parameter server based on a deep reinforcement learning algorithm
Technical field
The present invention relates to a server updating method, and in particular to a method for updating a distributed parameter server based on a deep reinforcement learning algorithm, and belongs to the technical field of machine learning.
Background technique
In recent years, with the continuous development of computing and artificial intelligence, machine learning has become a mainstream technique for extracting potentially valuable results from large amounts of information. However, to process large data sets and extract high-dimensional models from them, machine learning applications may require a large amount of computation, and a single computer can no longer satisfy current demands. This has led to the emergence of distributed machine learning, that is, running machine learning applications on a computer cluster.
Nowadays, machine learning applications are widely used in web search, spam detection, recommender systems, computational advertising, document analysis and so on. These applications automatically learn models from examples, referred to as training data, and usually consist of three parts: feature extraction, an objective function and learning. Feature extraction mainly processes raw training data, such as documents, images and user query logs, to obtain feature vectors, where each feature captures an attribute of the training data.
The parameter server, as a general paradigm for distributed machine learning training and storage, is widely used in both academia and industry. In a parameter server system, server nodes collect and share parameters across all working nodes, and working nodes read parameters from the server nodes and independently update their training results. The whole system is built around a series of training iterations: an iteration completes when all working nodes have trained on a batch of data and returned updated parameters. Under this model, slow task processes become the biggest bottleneck limiting parameter server performance. Because it is difficult to determine precisely when a task process will suddenly slow down, the countermeasure mainly used at present is work offloading, which transfers work from nodes that are slower than expected to idle nodes.
Specifically, each working node goes through the following four steps in each iteration (a code sketch of one such iteration follows the list):
Step 1: the working node obtains the updated model parameters of the machine learning model from the parameter server. The number of model parameters during training can reach 10^9 to 10^12, and this large number of parameters is fetched frequently by all working nodes.
Step 2: the working node takes a batch of training data from its local training set, performs the computation and calculates the parameter gradients. Since the amount of actual training data can range between 1 TB and 1 PB, it is particularly important to distribute the training data reasonably across the working nodes.
Step 3: each working node sends its calculated parameter gradients to the parameter server.
Step 4: the parameter server calculates new parameters according to the parameter gradients sent by the multiple working nodes.
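By way of illustration only, the following Python sketch shows one such iteration from the working node's point of view. The parameter-server object with pull_parameters() and push_gradients() calls and the simple linear model are assumptions made for this sketch, not the claimed implementation.

```python
import numpy as np

def worker_iteration(parameter_server, local_training_set, batch_size=32):
    """One iteration of a working node, following steps 1-4 above."""
    # Step 1: obtain the updated model parameters from the parameter server.
    weights = parameter_server.pull_parameters()          # hypothetical call

    # Step 2: take one batch of training data from the local training set and
    # compute the parameter gradients (here for a linear model with squared
    # loss, purely for illustration).
    idx = np.random.choice(len(local_training_set["x"]), batch_size)
    x, y = local_training_set["x"][idx], local_training_set["y"][idx]
    error = x @ weights - y
    gradients = x.T @ error / batch_size

    # Step 3: send the locally computed parameter gradients to the server.
    parameter_server.push_gradients(gradients)            # hypothetical call

    # Step 4 runs on the server side: it aggregates the gradients from all
    # working nodes and computes the new global parameters.
```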
Among these steps, step 2 and step 4 are the key points for improving parameter server performance, and the problem of tasks suddenly slowing down in step 2 is what some existing parameter server frameworks focus on. There are currently many methods for dealing with slow nodes, among which having other nodes help the slow node has received particular attention. However, because slow tasks appear largely at random and the cluster changes dynamically, it is comparatively difficult to design a reasonable helping strategy.
In summary, how to propose a novel server updating method which, building on the prior art, gives full play to the advantages of the prior art while overcoming its drawbacks has become an urgent problem to be solved by those skilled in the art.
Summary of the invention
In view of the above drawbacks of the prior art, the purpose of the present invention is to propose a method for updating a distributed parameter server based on a deep reinforcement learning algorithm, comprising the following steps:
S1, deploying an agent on each working node and establishing a neural network model;
S2, the working nodes obtain updated parameters from the parameter server;
S3, the task scheduler of each working-node group determines the helper and helpee identities of the working nodes by means of the deep reinforcement learning algorithm;
S4, each working node takes a batch of input data from its local training set;
S5, a helper backs up part of the input data of its helpee in advance, and when the helpee is judged to be a slow node, the helper computes part of that input data for the slow node;
S6, each working node computes parameter gradients and sends them to the parameter server;
S7, the parameter server performs a global update according to the parameter gradients sent by the working nodes and calculates new parameters.
Preferably, the architecture of the distributed parameter server based on the deep reinforcement learning algorithm comprises a parameter server layer and a dispatch layer;
the parameter server layer comprises a server group and working groups, each working group runs the same machine learning application, and within a working group each working node can play the role of either a helper or a helpee;
the dispatch layer assigns roles to the working nodes based on a reinforcement learning algorithm.
Preferably, S1 specifically comprises the following steps: a DRL agent is deployed in each working group composed of nodes running the same application, and the neural network model is established in the machine learning system TensorFlow.
Preferably, S2 specifically comprises the following steps:
S21, initializing the critic network, the actor network and the state;
S22, selecting an action according to the current state, and observing the reward and the next state;
S23, storing the current transition sample in the experience replay buffer;
S24, training the critic network by minimizing a loss function, so that the score given by the critic network moves closer to the return actually given by the environment;
S25, the actor network adjusts and updates its own policy according to the score given by the critic network.
Preferably, in S3, the state transitions and rewards are random and Markovian; the state is the dynamic condition of the cluster, including the state of the CPU, the disk, the memory and the network; the action is the helper and helpee role decision for the working nodes in a working-node group; and the reward is defined according to the change of the iteration time per round.
Preferably, S5 specifically comprises the following steps:
S51, the helper sends a message to its helpee to query the work progress;
S52, the helpee sends a progress report to the helper;
S53, if the helper finds that the progress of the helpee is too slow, the helpee is diagnosed as a slow task;
S54, the helper sends a message to the slow task offering help;
S55, the helper helps the slow task compute part of its input data.
Preferably, the progress being too slow in S53 means that the actual progress is more than 20% behind the expected progress.
Preferably, the global update in S7 is performed by an averaging operation.
Compared with the prior art, the advantages of the present invention are mainly reflected in the following aspects:
The method for updating a distributed parameter server based on a deep reinforcement learning algorithm proposed by the present invention effectively solves the problem of some nodes working slowly and improves the convergence time of the parameter server, which greatly helps to improve the task-processing efficiency of the parameter server and the output of effective training results.
In addition, the present invention also provides a reference for other related problems in the same field, can be extended and developed on this basis, and can be applied to the technical solutions of other server updating methods in the same field, and therefore has very broad application prospects.
The embodiments of the present invention are described in further detail below with reference to the accompanying drawings, so that the technical solution of the present invention is easier to understand and grasp.
Detailed description of the invention
Fig. 1 is an architecture diagram of the distributed parameter server based on the deep reinforcement learning algorithm of the present invention;
Fig. 2 is a schematic diagram of task redistribution in the present invention.
Specific embodiment
As shown in Fig. 1, the present invention discloses a method for updating a distributed parameter server based on a deep reinforcement learning algorithm, comprising the following steps:
S1, deploying an agent on each working node and establishing a neural network model;
S2, the working nodes obtain updated parameters from the parameter server;
S3, the task scheduler of each working-node group determines the helper and helpee identities of the working nodes by means of the deep reinforcement learning algorithm;
S4, each working node takes a batch of input data from its local training set;
S5, a helper backs up part of the input data of its helpee in advance, and when the helpee is judged to be a slow node, the helper computes part of that input data for the slow node;
S6, each working node computes parameter gradients and sends them to the parameter server;
S7, the parameter server performs a global update according to the parameter gradients sent by the working nodes and calculates new parameters.
This framework supports common communication protocols such as the stale synchronous protocol and the bulk synchronous protocol. The purpose of the dispatch layer is to detect slow tasks and to take over part of the workload of slow nodes, so that the slow nodes of the current round of iteration do not fall far behind and do not negatively affect parameter convergence. Compared with the bulk synchronous protocol, therefore, combining the stale synchronous protocol with the dispatch layer can alleviate the slow-task problem more effectively.
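As a minimal illustration of the staleness idea (the bound of 3 iterations is an arbitrary example, not a value disclosed by the invention): under the stale synchronous protocol a fast worker may run ahead of the slowest worker by at most a bounded number of iterations, whereas the bulk synchronous protocol forces every worker to wait for all others at every iteration.

```python
def may_start_next_iteration(worker_clock, slowest_worker_clock, staleness_bound=3):
    """Stale synchronous protocol: a worker may begin its next iteration only
    if it is at most `staleness_bound` iterations ahead of the slowest worker;
    the bulk synchronous protocol is the special case staleness_bound = 0."""
    return worker_clock - slowest_worker_clock <= staleness_bound
```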
The architecture of the distributed parameter server based on the deep reinforcement learning algorithm comprises a parameter server layer and a dispatch layer.
The parameter server layer comprises a server group and working groups. Each working group runs the same machine learning application, and within a working group each working node can play the role of either a helper or a helpee. A helper is defined as a working node designated as qualified to provide assistance to certain other working nodes; a helpee is correspondingly defined as a working node that can receive help from certain other working nodes. Input data is distributed to the working nodes before each round of iteration, and a helper backs up in advance part of the input data of the nodes it may need to help.
The dispatch layer assigns roles to the working nodes based on a reinforcement learning algorithm. Specifically, the state transitions and rewards are random and Markovian; the state is the dynamic condition of the cluster (such as the state of the CPU, disk, memory and network); the action is the role assignment of the working nodes; and the reward is defined according to the change of the iteration time per round. In order to collect more data, an agent is deployed in each working-node group, and the distributed reinforcement learning of multiple agents accelerates the training of the model.
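The following sketch is one possible concrete reading of this state/action/reward definition; the metric names and the encoding of roles are assumptions made for illustration only.

```python
import numpy as np

def build_state(cpu_load, disk_io, mem_usage, net_usage):
    """State: the dynamic condition of the cluster, one metric vector per
    resource (CPU, disk, memory, network) over the nodes of the group."""
    return np.concatenate([cpu_load, disk_io, mem_usage, net_usage])

def decode_action(action_vector):
    """Action: a role assignment over the working-node group; here a node is
    treated as a helper when its entry is positive, otherwise as a helpee."""
    return ["helper" if a > 0 else "helpee" for a in action_vector]

def reward(previous_iteration_time, current_iteration_time):
    """Reward: defined by the change in iteration time per round; a shorter
    round after the new role assignment yields a positive reward."""
    return previous_iteration_time - current_iteration_time
```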
Point-to-point (P2P) communication is used to identify slow working nodes. A working node that may need help periodically reports its task progress to its helper node. If the task progress is slower than expected, the helper node sends a message to the slow node telling it that the helper will start computing part of its data. In this way, when the parameters are updated there are no excessively stale parameter gradients that would affect convergence.
The synchronization protocol in the parameter server can support either the stale synchronous protocol or the bulk synchronous protocol.
S1 specifically comprises the following steps: a DRL agent is deployed in each working group composed of nodes running the same application, and the neural network model is established in the machine learning system TensorFlow. That is, in the technical solution of the present invention the deep reinforcement learning algorithm is deployed on the working nodes and is learned by parallel DRL agents.
The specific implementation process of the deep reinforcement learning algorithm in the present invention, i.e. S2, comprises the following steps (a code sketch follows these steps):
S21, initializing the critic network, the actor network and the state;
S22, selecting an action according to the current state, and observing the reward and the next state;
S23, storing the current transition sample in the experience replay buffer;
S24, training the critic network by minimizing a loss function, so that the score given by the critic network moves closer to the return actually given by the environment;
S25, the actor network adjusts and updates its own policy according to the score given by the critic network.
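A compact actor-critic sketch keyed to steps S21-S25 is given below. It uses TensorFlow 2 in a DDPG-like form (a deterministic actor scored by a critic, with an experience replay buffer); the network sizes, learning rates and state/action dimensions are illustrative assumptions, not values disclosed by the invention.

```python
import random
from collections import deque

import numpy as np
import tensorflow as tf

STATE_DIM, ACTION_DIM, GAMMA = 8, 4, 0.99        # illustrative sizes only

# S21: initialise the critic network, the actor network and the state.
actor = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(STATE_DIM,)),
    tf.keras.layers.Dense(ACTION_DIM, activation="tanh"),
])
critic = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(STATE_DIM + ACTION_DIM,)),
    tf.keras.layers.Dense(1),
])
actor_opt = tf.keras.optimizers.Adam(1e-4)
critic_opt = tf.keras.optimizers.Adam(1e-3)

# S22/S23: after each interaction with the cluster (select an action from the
# current state, observe the reward and the next state), the transition
# (state, action, reward, next_state) is appended to the replay buffer.
replay_buffer = deque(maxlen=10000)

def train_step(batch_size=32):
    if len(replay_buffer) < batch_size:
        return
    batch = random.sample(replay_buffer, batch_size)
    s, a, r, s2 = (np.array(x, dtype=np.float32) for x in zip(*batch))

    # S24: train the critic by minimising a TD loss, pulling its score towards
    # the return actually observed from the environment.
    target_q = r[:, None] + GAMMA * critic(tf.concat([s2, actor(s2)], axis=1))
    with tf.GradientTape() as tape:
        q = critic(tf.concat([s, a], axis=1))
        critic_loss = tf.reduce_mean(tf.square(target_q - q))
    grads = tape.gradient(critic_loss, critic.trainable_variables)
    critic_opt.apply_gradients(zip(grads, critic.trainable_variables))

    # S25: the actor adjusts its policy according to the critic's score, i.e.
    # it updates its weights to maximise the critic's value of its own actions.
    with tf.GradientTape() as tape:
        actor_loss = -tf.reduce_mean(critic(tf.concat([s, actor(s)], axis=1)))
    grads = tape.gradient(actor_loss, actor.trainable_variables)
    actor_opt.apply_gradients(zip(grads, actor.trainable_variables))
```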
In S3, the state transitions and rewards are random and Markovian; the state is the dynamic condition of the cluster, including the state of the CPU, the disk, the memory and the network; the action is the helper and helpee role decision for the working nodes in a working-node group. A working node provides help to some other working node and in turn hands part of its own task to others, so that all working nodes make similar progress. The reward is defined according to the change of the iteration time per round.
It should be noted that, in the technical solution of the present invention, the same or similar reference signs correspond to the same or similar components.
As shown in Fig. 2, the specific implementation process of task redistribution, i.e. S5, comprises the following steps (a code sketch follows these steps):
S51, the helper sends a message to its helpee to query the work progress.
S52, the helpee sends a progress report to the helper.
S53, if the helper finds that the progress of the helpee is too slow, the helpee is diagnosed as a slow task; progress being too slow means that the actual progress is more than 20% behind the expected progress.
S54, the helper sends a message to the slow task offering help.
S55, the helper helps the slow task compute part of its input data.
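By way of illustration, steps S51 to S55 can be read as the following point-to-point exchange. The message names, the form of the progress report and the helper-side calls are assumptions made for this sketch; only the 20% threshold is taken from S53.

```python
SLOW_THRESHOLD = 0.20   # S53: actual progress more than 20% behind expected progress

def check_helpee(helper, helpee):
    """One round of the helper/helpee exchange of steps S51-S55."""
    helper.send(helpee, "PROGRESS_QUERY")              # S51: query work progress
    report = helpee.report_progress()                  # S52: e.g. {"done": 0.35, "expected": 0.60}

    lag = (report["expected"] - report["done"]) / report["expected"]
    if lag >= SLOW_THRESHOLD:                          # S53: diagnose a slow task
        helper.send(helpee, "OFFER_HELP")              # S54: offer help
        # S55: the helper computes part of the helpee's input data, which it
        # has already backed up before the iteration started.
        helper.compute(helper.backup_data[helpee.node_id])
```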
When tasks are redistributed, the nodes communicate in a point-to-point manner rather than through a central node, which makes it easier to compare task progress with the other working nodes and thus to discover slow tasks faster.
The global update in S7 can be performed by, for example, an averaging operation. Since the parameters corresponding to the local data are only a small fraction of the global parameters, once a round of local training is finished and updates for all of these parameters have been obtained, the updates are uploaded to the parameter server. The averaging operation means that the parameter server averages the parameters uploaded by the different nodes in order to update the parameters held by the parameter server.
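A minimal sketch of such an averaging operation follows; the data layout is an assumption, and the point is only that the server takes the element-wise mean of what the working nodes upload and uses it as the new global parameters.

```python
import numpy as np

def global_update(uploaded_parameters):
    """S7: average the parameter vectors uploaded by the working nodes and use
    the element-wise mean as the new parameters held by the server."""
    return np.mean(np.stack(uploaded_parameters, axis=0), axis=0)

# Example with three working nodes uploading their locally updated parameters:
new_parameters = global_update([np.array([1.0, 2.0]),
                                np.array([1.2, 1.8]),
                                np.array([0.9, 2.1])])
# new_parameters -> array([1.03333333, 1.96666667])
```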
The method for updating a distributed parameter server based on a deep reinforcement learning algorithm proposed by the present invention effectively solves the problem of some nodes working slowly and improves the convergence time of the parameter server, which greatly helps to improve the task-processing efficiency of the parameter server and the output of effective training results.
In addition, the present invention also provides a reference for other related problems in the same field, can be extended and developed on this basis, and can be applied to the technical solutions of other server updating methods in the same field, and therefore has very broad application prospects.
It is obvious to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments, and that the present invention can be realized in other specific forms without departing from its spirit or essential characteristics. Therefore, from whichever point of view, the embodiments should be regarded as exemplary and non-restrictive, and the scope of the present invention is defined by the appended claims rather than by the above description. It is therefore intended that all changes falling within the meaning and scope of equivalents of the claims are embraced by the present invention, and no reference sign in the claims shall be construed as limiting the claim concerned.
In addition, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only one independent technical solution. This manner of description is adopted merely for the sake of clarity; those skilled in the art should regard the specification as a whole, and the technical solutions in the various embodiments may also be suitably combined to form other embodiments that can be understood by those skilled in the art.

Claims (8)

1. A method for updating a distributed parameter server based on a deep reinforcement learning algorithm, characterized by comprising the following steps:
S1, deploying an agent on each working node and establishing a neural network model;
S2, the working nodes obtain updated parameters from the parameter server;
S3, the task scheduler of each working-node group determines the helper and helpee identities of the working nodes by means of the deep reinforcement learning algorithm;
S4, each working node takes a batch of input data from its local training set;
S5, a helper backs up part of the input data of its helpee in advance, and when the helpee is judged to be a slow node, the helper computes part of that input data for the slow node;
S6, each working node computes parameter gradients and sends them to the parameter server;
S7, the parameter server performs a global update according to the parameter gradients sent by the working nodes and calculates new parameters.
2. The method for updating a distributed parameter server based on a deep reinforcement learning algorithm according to claim 1, characterized in that: the architecture of the distributed parameter server based on the deep reinforcement learning algorithm comprises a parameter server layer and a dispatch layer;
the parameter server layer comprises a server group and working groups, each working group runs the same machine learning application, and within a working group each working node can play the role of either a helper or a helpee;
the dispatch layer assigns roles to the working nodes based on a reinforcement learning algorithm.
3. The method for updating a distributed parameter server based on a deep reinforcement learning algorithm according to claim 1, characterized in that S1 specifically comprises the following steps: a DRL agent is deployed in each working group composed of nodes running the same application, and the neural network model is established in the machine learning system TensorFlow.
4. The method for updating a distributed parameter server based on a deep reinforcement learning algorithm according to claim 1, characterized in that S2 specifically comprises the following steps:
S21, initializing the critic network, the actor network and the state;
S22, selecting an action according to the current state, and observing the reward and the next state;
S23, storing the current transition sample in the experience replay buffer;
S24, training the critic network by minimizing a loss function, so that the score given by the critic network moves closer to the return actually given by the environment;
S25, the actor network adjusts and updates its own policy according to the score given by the critic network.
5. The method for updating a distributed parameter server based on a deep reinforcement learning algorithm according to claim 1, characterized in that: in S3, the state transitions and rewards are random and Markovian; the state is the dynamic condition of the cluster, including the state of the CPU, the disk, the memory and the network; the action is the helper and helpee role decision for the working nodes in a working-node group; and the reward is defined according to the change of the iteration time per round.
6. The method for updating a distributed parameter server based on a deep reinforcement learning algorithm according to claim 1, characterized in that S5 specifically comprises the following steps:
S51, the helper sends a message to its helpee to query the work progress;
S52, the helpee sends a progress report to the helper;
S53, if the helper finds that the progress of the helpee is too slow, the helpee is diagnosed as a slow task;
S54, the helper sends a message to the slow task offering help;
S55, the helper helps the slow task compute part of its input data.
7. The method for updating a distributed parameter server based on a deep reinforcement learning algorithm according to claim 1, characterized in that: the progress being too slow in S53 means that the actual progress is more than 20% behind the expected progress.
8. The method for updating a distributed parameter server based on a deep reinforcement learning algorithm according to claim 1, characterized in that: the global update in S7 is performed by an averaging operation.
CN201811568466.6A 2018-12-21 2018-12-21 Method for updating a distributed parameter server based on a deep reinforcement learning algorithm Pending CN109710289A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811568466.6A CN109710289A (en) 2018-12-21 2018-12-21 Method for updating a distributed parameter server based on a deep reinforcement learning algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811568466.6A CN109710289A (en) 2018-12-21 2018-12-21 Method for updating a distributed parameter server based on a deep reinforcement learning algorithm

Publications (1)

Publication Number Publication Date
CN109710289A true CN109710289A (en) 2019-05-03

Family

ID=66257066

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811568466.6A Pending CN109710289A (en) 2018-12-21 2018-12-21 Method for updating a distributed parameter server based on a deep reinforcement learning algorithm

Country Status (1)

Country Link
CN (1) CN109710289A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020198854A1 (en) * 2001-03-30 2002-12-26 Berenji Hamid R. Convergent actor critic-based fuzzy reinforcement learning apparatus and method
CN108446770A (en) * 2017-02-16 2018-08-24 中国科学院上海高等研究院 Sampling-based slow node processing system and method for distributed machine learning
CN107018184A (en) * 2017-03-28 2017-08-04 华中科技大学 Distributed deep neural network cluster grouping synchronization optimization method and system
CN108829441A (en) * 2018-05-14 2018-11-16 中山大学 Parameter update optimization system for distributed deep learning
CN109032630A (en) * 2018-06-29 2018-12-18 电子科技大学 Method for updating global parameters in a parameter server

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
AARON HARLAP ET AL.: "Addressing the Straggler Problem for Iterative Convergent Parallel ML", 《PROCEEDINGS OF THE SEVENTH ACM SYMPOSIUM ON CLOUD COMPUTING》 *
NENAVATH SRINIVAS NAIK ET AL.: "Performance Improvement of MapReduce Framework in Heterogeneous Context using Reinforcement Learning", 《PROCEDIA COMPUTER SCIENCE》 *
LIU QUAN ET AL.: "A Survey of Deep Reinforcement Learning" (深度强化学习综述), 《计算机学报》 (Chinese Journal of Computers) *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110490319A (en) * 2019-07-30 2019-11-22 成都蓉奥科技有限公司 Distributed deep reinforcement learning based on fused neural network parameters
CN110490319B (en) * 2019-07-30 2020-06-26 成都蓉奥科技有限公司 Distributed deep reinforcement learning method based on fusion neural network parameters
CN110580196B (en) * 2019-09-12 2021-04-06 北京邮电大学 Multi-task reinforcement learning method for realizing parallel task scheduling
CN110580196A (en) * 2019-09-12 2019-12-17 北京邮电大学 Multi-task reinforcement learning method for realizing parallel task scheduling
CN111147541A (en) * 2019-11-18 2020-05-12 广州文远知行科技有限公司 Node processing method, device and equipment based on parameter server and storage medium
CN113033800A (en) * 2019-12-25 2021-06-25 香港理工大学深圳研究院 Distributed deep learning method and device, parameter server and main working node
CN113033800B (en) * 2019-12-25 2023-11-17 香港理工大学深圳研究院 Distributed deep learning method and device, parameter server and main working node
CN111131080A (en) * 2019-12-26 2020-05-08 电子科技大学 Distributed deep learning flow scheduling method, system and equipment
CN111612155A (en) * 2020-05-15 2020-09-01 湖南大学 Distributed machine learning system and communication scheduling method suitable for same
CN111612155B (en) * 2020-05-15 2023-05-05 湖南大学 Distributed machine learning system and communication scheduling method suitable for same
CN111698327A (en) * 2020-06-12 2020-09-22 中国人民解放军国防科技大学 Distributed parallel reinforcement learning model training method and system based on chat room architecture
CN111698327B (en) * 2020-06-12 2022-07-01 中国人民解放军国防科技大学 Distributed parallel reinforcement learning model training method and system based on chat room architecture
CN112488324A (en) * 2020-12-24 2021-03-12 南京大学 Version control-based distributed machine learning model updating method
CN112488324B (en) * 2020-12-24 2024-03-22 南京大学 Version control-based distributed machine learning model updating method

Similar Documents

Publication Publication Date Title
CN109710289A (en) The update method of distributed parameters server based on deeply learning algorithm
CN114721833B (en) Intelligent cloud coordination method and device based on platform service type
TWI547817B (en) Method, system and apparatus of planning resources for cluster computing architecture
CN103680496A (en) Deep-neural-network-based acoustic model training method, hosts and system
CN102281290A (en) Emulation system and method for a PaaS (Platform-as-a-service) cloud platform
CN105224959A (en) The training method of order models and device
CN108564164A (en) A kind of parallelization deep learning method based on SPARK platforms
CN107169143B (en) Efficient mass public opinion data information cluster matching method
CN113794748B (en) Performance-aware service function chain intelligent deployment method and device
CN108009642A (en) Distributed machines learning method and system
CN105407162A (en) Cloud computing Web application resource load balancing algorithm based on SLA service grade
CN106021512A (en) Page refresh method and apparatus
CN103325371A (en) Voice recognition system and method based on cloud
CN109587247A (en) A kind of energy platform of internet of things communication means for supporting communication
CN109754090A (en) It supports to execute distributed system and method that more machine learning model predictions service
CN111309472A (en) Online virtual resource allocation method based on virtual machine pre-deployment
CN113762512A (en) Distributed model training method, system and related device
CN108966316A (en) Show the method, device and equipment of multimedia resource, prediction connection waiting time
CN109214512A (en) A kind of parameter exchange method, apparatus, server and the storage medium of deep learning
CN115358395A (en) Knowledge graph updating method and device, storage medium and electronic device
CN103780640A (en) Multimedia cloud calculating simulation method
CN104346380B (en) Data reordering method and system based on MapReduce model
CN115688495B (en) Distributed LVC simulation system collaborative planning method, server and storage medium
CN104580498B (en) A kind of adaptive cloud management platform
CN104503846B (en) A kind of resource management system based on cloud computing system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190503

RJ01 Rejection of invention patent application after publication