CN105183531A - Distributed development platform and calculation method of same - Google Patents

Distributed development platform and calculation method of same

Info

Publication number
CN105183531A
CN105183531A
Authority
CN
China
Prior art keywords
mpi
cluster
dfs
computing
computer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410273009.XA
Other languages
Chinese (zh)
Inventor
徐君
李航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201410273009.XA priority Critical patent/CN105183531A/en
Publication of CN105183531A publication Critical patent/CN105183531A/en
Pending legal-status Critical Current


Abstract

An embodiment of the invention provides a distributed development platform and a computing method thereof. The distributed development platform comprises a computer cluster, a distributed file system (DFS) deployed on the computer cluster, and an MPI cluster formed by the message passing interface (MPI) deployed on each computer in the computer cluster. The DFS provides shared storage space for each computer in the computer cluster, and the shared storage space stores the operating data required when the MPI cluster runs a program. The MPI cluster divides a program submitted to the MPI cluster into multiple MPI tasks and distributes them to each MPI in the MPI cluster. After receiving its MPI task, the computer hosting each MPI performs computation on the operating data stored in the DFS according to the MPI task distributed to it and the program submitted to the MPI cluster, and stores the computation result in the DFS, so that each computer in the computer cluster can access the result.

Description

Distributed development platform and computing method thereof
Technical field
Embodiments of the present invention relate to the field of computing, and more specifically, to a distributed development platform and a computing method thereof.
Background
Distributed computing platforms are indispensable tools for processing big data. Among existing distributed platforms, one class provides automatic data distribution and fault-tolerant data transmission and is simple to program against, but programs run with relatively low efficiency; the other class requires the developer to customize the data-distribution and fault-tolerance strategies, is complicated to program against, and relies on parallel computation to make programs run more efficiently.
A suitable scheme is therefore needed that provides automatic data distribution and fault-tolerant data transmission, simplifies programming, and at the same time achieves efficient parallel computation.
Summary of the invention
An embodiment of the present invention provides a distributed development platform and a computing method thereof, which can achieve automatic data distribution and fault-tolerant data transmission, simplify programming, and achieve efficient parallel computation.
According to a first aspect, a distributed development platform is provided. The distributed development platform comprises: a computer cluster, a distributed file system (DFS) deployed on the computer cluster, and an MPI cluster formed by the message passing interface (MPI) deployed on each computer in the computer cluster. The DFS is configured to provide shared storage space for each computer in the computer cluster, and the shared storage space stores the operating data required when the MPI cluster runs a program. The MPI cluster is configured to divide a program submitted to the MPI cluster into multiple MPI tasks and distribute them to each MPI in the MPI cluster; after receiving its MPI task, the computer hosting each MPI in the MPI cluster performs computation on the operating data stored in the DFS according to the MPI task distributed to it and the program submitted to the MPI cluster, and stores the computation result in the DFS, so that each computer in the computer cluster can access the result.
With reference to the first aspect, in a first possible implementation, the DFS is further configured to use one computer in the computer cluster as the master node of the DFS and the other computers as slave nodes of the DFS; and the MPI cluster is further configured to use one computer in the computer cluster as the master node of the MPI cluster and the other computers as slave nodes of the MPI cluster.
With reference to the first possible implementation of the first aspect, in a second possible implementation, the master node of the DFS and the master node of the MPI cluster are the same computer; or the master node of the DFS and the master node of the MPI cluster are different computers.
With reference to the first aspect or either of the foregoing possible implementations, in a third possible implementation, the DFS is built on the Network File System (NFS) or the Hadoop Distributed File System (HDFS).
With reference to the first aspect or any one of its first to third possible implementations, in a fourth possible implementation, the MPI cluster is built with MPICH.
According to a second aspect, a computing method of a distributed computing platform is provided. The distributed computing platform comprises a computer cluster, a distributed file system (DFS) deployed on the computer cluster, and an MPI cluster formed by the message passing interface (MPI) deployed on each computer in the computer cluster, where the DFS stores the operating data required when the MPI cluster runs a program. The computing method comprises: dividing, by the MPI cluster, a program submitted to the MPI cluster into multiple MPI tasks and distributing them to each MPI in the MPI cluster; performing, by the computer hosting each MPI in the MPI cluster, computation on the operating data stored in the DFS according to the MPI task distributed to it and the program submitted to the MPI cluster; and storing, by each computer in the computer cluster, the computation result in the DFS, so that each computer in the computer cluster can access the result.
With reference to the second aspect, in a first possible implementation, the DFS is further configured to use one computer in the computer cluster as the master node of the DFS and the other computers as slave nodes of the DFS; and the MPI cluster is further configured to use one computer in the computer cluster as the master node of the MPI cluster and the other computers as slave nodes of the MPI cluster.
With reference to the first possible implementation of the second aspect, in a second possible implementation, the master node of the DFS and the master node of the MPI cluster are the same computer; or the master node of the DFS and the master node of the MPI cluster are different computers.
With reference to the second aspect or either of the foregoing possible implementations, in a third possible implementation, the DFS is built on the Network File System (NFS) or the Hadoop Distributed File System (HDFS).
With reference to the second aspect or any one of its first to third possible implementations, in a fourth possible implementation, the MPI cluster is built with MPICH.
Based on the above technical solutions, the distributed development platform and the computing method thereof in the embodiments of the present invention use a distributed file system to achieve automatic data distribution and fault-tolerant data transmission, which simplifies programming, and at the same time use the MPI environment to implement the parallel-computation logic, which achieves efficient parallel computation.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the following briefly introduces the accompanying drawings used in the description of the embodiments or the prior art. Apparently, the accompanying drawings described below show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a system block diagram of a distributed development platform according to an embodiment of the present invention.
Fig. 2 is a flowchart of a computing method of a distributed development platform according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a network deployment structure according to an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a distributed development platform according to an embodiment of the present invention.
Description of embodiments
The following clearly and completely describes the technical solutions of the embodiments of the present invention with reference to the accompanying drawings. Apparently, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a system block diagram of a distributed development platform according to an embodiment of the present invention. Multiple computers form a computer cluster, and each computer is provided with a local file system and a computing system. As shown in Fig. 1, a distributed file system (DFS) is deployed on the computer cluster and provides the logical mechanism through which all computers in the cluster can simultaneously access (read/write) the stored data. A message passing interface (MPI) is likewise deployed on the computer cluster and provides the parallel-computation mechanism for all computers in the cluster. A computer in the cluster accesses (reads/writes) global data through the DFS, accesses (reads/writes) local data on its local hard disk through its local file system, and has its MPI processes controlled through the MPI cluster, with the logic of the parallel program implemented through MPI communication and synchronization.
Fig. 2 is a flowchart of a computing method of a distributed platform according to an embodiment of the present invention. The method of Fig. 2 is performed by the distributed platform. The distributed computing platform comprises a computer cluster, a distributed file system (DFS) deployed on the computer cluster, and an MPI cluster formed by the message passing interface (MPI) deployed on each computer in the computer cluster; the DFS stores the operating data required when the MPI cluster runs a program. The computing method comprises the following steps.
201. The MPI cluster divides the program submitted to the MPI cluster into multiple MPI tasks and distributes them to the computer hosting each MPI in the MPI cluster.
202. The computer hosting each MPI in the MPI cluster performs computation on the operating data stored in the DFS according to the MPI task distributed to it and the program submitted to the MPI cluster.
203. The computer hosting each MPI in the MPI cluster stores its computation result in the DFS, so that each computer in the computer cluster can access the result.
In this embodiment of the present invention, using the distributed file system to achieve automatic data distribution and fault-tolerant data transmission simplifies programming, while the MPI environment implements the parallel-computation logic, so efficient parallel computation can be achieved.
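To make steps 201 to 203 concrete, the following minimal C++/MPI sketch (not part of the original filing) illustrates the split-compute-store pattern they describe. The shared mount point /dfs, the file naming and the rank-to-task mapping are illustrative assumptions; in practice the task division and data layout depend on the program submitted to the MPI cluster.

    // Sketch only: the generic split-compute-store pattern on a DFS-backed MPI cluster.
    // Assumes the DFS is visible on every computer at an illustrative mount point /dfs.
    #include <mpi.h>
    #include <string>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // which MPI task this computer runs
        MPI_Comm_size(MPI_COMM_WORLD, &size);   // how many MPI tasks the cluster runs

        // Step 201: the submitted program is divided into 'size' MPI tasks;
        // here each rank simply takes the task whose index equals its rank.
        // Step 202: each rank computes on the operating data assigned to it in the shared DFS.
        std::string in_path  = "/dfs/input/part-"  + std::to_string(rank);
        std::string out_path = "/dfs/output/part-" + std::to_string(rank);
        // ... open in_path, run the per-task computation, write the result to out_path ...

        // Step 203: once every rank has stored its result in the DFS,
        // any computer in the cluster can read it back from /dfs/output/.
        MPI_Barrier(MPI_COMM_WORLD);
        MPI_Finalize();
        return 0;
    }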
Optionally, in this embodiment of the present invention, the DFS system may be built in various ways, for example on the Network File System (NFS) or the Hadoop Distributed File System (HDFS).
Optionally, in this embodiment of the present invention, the MPI cluster may be built in various ways, for example with MPICH.
Optionally, the DFS uses one computer in the computer cluster as the master node of the DFS and the other computers as slave nodes of the DFS; the MPI cluster uses one computer in the computer cluster as the master node of the MPI cluster and the other computers as slave nodes of the MPI cluster.
Further, the master node of the DFS and the master node of the MPI cluster may be the same computer, or they may be different computers.
The method of this embodiment of the present invention is further described below with reference to a specific example.
Step 1: deploy the computer cluster.
Fig. 3 is a schematic diagram of a network deployment structure according to an embodiment of the present invention. In Fig. 3, the computer cluster comprises five computers connected to one another over Ethernet, and each computer runs an operating system such as Linux. It should be understood that the computer cluster may also contain more than five computers, or fewer than five.
Step 2: deploy the DFS system and the MPI system.
To deploy the DFS system, the computers in the computer cluster are installed with DFS software, one computer is selected from the cluster as the Master node of the DFS system, and the other computers serve as Slave nodes of the DFS system, so as to configure the DFS cluster. For example, in Fig. 3, the HDFS software of Hadoop can be installed on M1, S1, S2, S3 and S4, with M1 selected as the HDFS Master node (NameNode) and S1, S2, S3 and S4 as the HDFS Slave nodes (DataNodes), and the HDFS cluster is configured accordingly. The specific configuration of the HDFS cluster follows the prior art and is not repeated here.
In the HDFS cluster, the NameNode is the master server and manages the HDFS metadata, which describes the basic information of the data files in HDFS; the DataNodes are storage servers that store the data blocks of the files. A large file in HDFS is divided into multiple blocks for storage, and the default size of each block is 64 MB. Each block may have multiple replicas, stored on different DataNodes. Each data file, or more precisely each block, may have one or more replicas, for example 1, 2, 3 or more; in general, the number of replicas does not exceed the number of DataNodes in the HDFS. In a preferred scheme, the nodes of the HDFS cluster may also include a SecondaryNameNode, which backs up the NameNode metadata so that it can be recovered when the NameNode fails. Of course, the HDFS cluster may also adopt a more complicated deployment pattern as required, for example a distributed-NameNode pattern, which is not limited in this embodiment of the present invention; this embodiment describes the method using only the Master-Slave pattern as an example.
To deploy the MPI system, the computers in the computer cluster are installed with MPI software, one computer is selected from the cluster as the Master node of the MPI system, and the other computers serve as Slave nodes of the MPI system. The Master node of the DFS system and the Master node of the MPI system may be the same node or different nodes; preferably, to simplify the maintenance and management of hardware and software, the Master node of the DFS system is selected as the Master node of the MPI system when the MPI cluster is configured. For example, in Fig. 3, the MPICH2 software can be installed on M1, S1, S2, S3 and S4, with M1 selected as the MPI Master node and S1, S2, S3 and S4 as the MPI Slave nodes, and the MPI cluster is configured accordingly. The specific configuration of the MPI cluster follows the prior art and is not repeated here.
Step 3: write the MPI program for the task and execute it.
Suppose the task to be solved is to count, for a given collection of text, the number of times each word appears.
First, the data file of the collection is uploaded to HDFS, so that HDFS automatically splits the data file into blocks and stores them, in a distributed manner, on different Slave nodes (DataNodes).
Suppose the data file of the collection is input.txt with a size of 640 MB, and that file data in HDFS has only one copy, with no redundant backup. After input.txt is uploaded to HDFS, HDFS divides it into ten 64 MB blocks and stores them distributed across S1, S2, S3 and S4.
Next, the MPI Master node divides the word-frequency counting task into multiple MPI tasks and distributes them to the MPI Slave nodes.
Then, each MPI Slave node counts the words in the blocks assigned to its machine, obtains the local word frequencies on that computer, and stores them in HDFS.
Finally, the MPI Master node reads the results of the nodes from HDFS and obtains the global result.
Suppose the file whose word frequencies are to be counted is "input.txt"; an MPI pseudocode program for performing the word-frequency counting in this embodiment of the present invention is as follows:
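As the pseudocode listing itself is not shown here, the following C++ sketch against the MPI C bindings is offered purely as an illustration of the steps just described: each rank counts the words in the block assigned to it, writes its local frequencies to the DFS, and the MPI Master (rank 0) merges the per-rank files into the global result. The /dfs paths, the one-block-per-rank mapping and the output format are assumptions made for the example, not the patent's own listing.

    // Sketch only: word-frequency counting over DFS-resident blocks, one block per rank.
    // Paths such as /dfs/... and the "block<rank>" naming are illustrative assumptions.
    #include <mpi.h>
    #include <fstream>
    #include <string>
    #include <unordered_map>

    // Count how often each word occurs in one file.
    std::unordered_map<std::string, long> count_words(const std::string& path) {
        std::unordered_map<std::string, long> counts;
        std::ifstream in(path);
        std::string word;
        while (in >> word) ++counts[word];
        return counts;
    }

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // Each rank processes the block of input.txt assigned to it (for simplicity,
        // one block per rank, including the master) and stores its local word
        // frequencies in the DFS.
        std::string block_path = "/dfs/input.txt.block" + std::to_string(rank);
        std::unordered_map<std::string, long> local = count_words(block_path);

        std::ofstream part("/dfs/wordcount.part" + std::to_string(rank));
        for (const auto& kv : local) part << kv.first << ' ' << kv.second << '\n';
        part.close();

        // Wait until every rank has written its partial result to the DFS.
        MPI_Barrier(MPI_COMM_WORLD);

        // The MPI Master (rank 0) reads every partial result from the DFS and
        // merges them into the global word frequencies.
        if (rank == 0) {
            std::unordered_map<std::string, long> global;
            for (int r = 0; r < size; ++r) {
                std::ifstream part_in("/dfs/wordcount.part" + std::to_string(r));
                std::string word;
                long n = 0;
                while (part_in >> word >> n) global[word] += n;
            }
            std::ofstream out("/dfs/wordcount.result");
            for (const auto& kv : global) out << kv.first << ' ' << kv.second << '\n';
        }

        MPI_Finalize();
        return 0;
    }

On the deployment of Fig. 3, such a program would be compiled against MPICH and started from the MPI Master node M1 with something along the lines of mpiexec -n 5 ./wordcount; the exact invocation depends on how the MPICH host file is configured.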
In practical applications, a file in HDFS may have multiple replicas. The MPI Slave node may then check, during execution, whether the data of a block has already been processed, and process the block only if it has not; alternatively, when storing its statistics, the MPI Slave node may attach the block's metadata information to identify the statistics, and the MPI Master node, when reading the statistics from HDFS for the final aggregation, checks whether the block has already been added to the final aggregated result.
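As one possible rendering of the second strategy above (an assumption for illustration, not the filing's code), each rank could write the identifier of the block it processed as the first line of its partial-result file, and the Master could keep a set of block identifiers it has already merged so that replicas of the same block are counted only once. The following fragment would replace the merge loop in the sketch above:

    // Sketch only: de-duplicating partial results by block identifier during the final merge.
    #include <fstream>
    #include <string>
    #include <unordered_map>
    #include <unordered_set>

    // Merge one partial-result file into 'global', unless its block was already merged.
    void merge_part(const std::string& path,
                    std::unordered_set<std::string>& merged_blocks,
                    std::unordered_map<std::string, long>& global) {
        std::ifstream in(path);
        std::string block_id;
        if (!std::getline(in, block_id)) return;            // first line: block metadata
        if (!merged_blocks.insert(block_id).second) return; // replica of a block already merged
        std::string word;
        long n = 0;
        while (in >> word >> n) global[word] += n;
    }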
Of course, other more complicated situations may also arise, and the MPI program can be written and executed according to the actual situation.
In this embodiment of the present invention, the automatic data distribution and automatic transmission provided by the distributed file system, together with the parallel-computation capability of MPI, make it possible to quickly achieve automatic data distribution and fault-tolerant data transmission, simplify programming, and at the same time achieve efficient parallel computation.
Fig. 4 is a schematic structural diagram of a distributed development platform 400 according to an embodiment of the present invention. As shown in Fig. 4, the distributed development platform 400 may comprise a computer cluster 401, an MPI cluster 402 and a DFS 403. The MPI cluster 402 is formed by the message passing interface (MPI) deployed on each computer in the computer cluster, and the DFS 403 is also deployed on the computer cluster 401.
The DFS 403 is configured to provide shared storage space for each computer in the computer cluster 401, and the shared storage space stores the operating data required when the MPI cluster runs a program.
The MPI cluster 402 is configured to divide a program submitted to the MPI cluster 402 into multiple MPI tasks and distribute them to each MPI in the MPI cluster 402. After receiving its MPI task, the computer hosting each MPI in the MPI cluster 402 performs computation on the operating data stored in the DFS 403 according to the MPI task distributed to it and the program submitted to the MPI cluster 402, and stores the computation result in the DFS 403, so that each computer in the computer cluster 401 can access the result.
In this embodiment of the present invention, the distributed development platform 400 uses the distributed file system to achieve automatic data distribution and fault-tolerant data transmission, which simplifies programming, while the MPI environment implements the parallel-computation logic, so efficient parallel computation can be achieved.
Optionally, in this embodiment of the present invention, the DFS 403 may be built in various ways, for example on NFS or HDFS.
Optionally, in this embodiment of the present invention, the MPI cluster 402 may be built in various ways, for example with MPICH.
Optionally, the DFS 403 uses one computer in the computer cluster as the master node of the DFS 403 and the other computers as slave nodes of the DFS 403; the MPI cluster 402 uses one computer in the computer cluster as the master node of the MPI cluster 402 and the other computers as slave nodes of the MPI cluster 402.
Further, the master node of the DFS 403 and the master node of the MPI cluster 402 may be the same computer, or they may be different computers.
In addition, the distributed development platform 400 can also perform the method of Fig. 2 and implement the functions of the distributed development platform in the embodiments shown in Fig. 2 and Fig. 3, which are not repeated here in this embodiment of the present invention.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether the functions are performed by hardware or by software depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present invention.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses and methods may be implemented in other ways. For example, the described apparatus embodiments are merely illustrative; for example, the division into units is merely a division by logical function, and there may be other divisions in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. Furthermore, the mutual couplings or direct couplings or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
When the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the part of the technical solutions of the present invention that contributes to the prior art, or the technical solutions themselves, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The foregoing descriptions are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A distributed computing platform, characterized by comprising: a computer cluster, a distributed file system (DFS) deployed on the computer cluster, and an MPI cluster formed by the message passing interface (MPI) deployed on each computer in the computer cluster, wherein
the DFS is configured to provide shared storage space for each computer in the computer cluster, and the shared storage space stores the operating data required when the MPI cluster runs a program; and
the MPI cluster is configured to divide a program submitted to the MPI cluster into multiple MPI tasks and distribute them to each MPI in the MPI cluster, wherein, after receiving its MPI task, the computer hosting each MPI in the MPI cluster performs computation on the operating data stored in the DFS according to the MPI task distributed to it and the program submitted to the MPI cluster, and stores the computation result in the DFS, so that each computer in the computer cluster can access the result.
2. The distributed computing platform according to claim 1, characterized in that
the DFS is further configured to use one computer in the computer cluster as the master node of the DFS and the other computers as slave nodes of the DFS; and
the MPI cluster is further configured to use one computer in the computer cluster as the master node of the MPI cluster and the other computers as slave nodes of the MPI cluster.
3. The distributed computing platform according to claim 2, characterized in that
the master node of the DFS and the master node of the MPI cluster are the same computer; or
the master node of the DFS and the master node of the MPI cluster are different computers.
4. The distributed computing platform according to any one of claims 1 to 3, characterized in that the DFS is built on the Network File System (NFS) or the Hadoop Distributed File System (HDFS).
5. The distributed computing platform according to any one of claims 1 to 4, characterized in that the MPI cluster is built with MPICH.
6. A computing method of a distributed computing platform, characterized in that the distributed computing platform comprises a computer cluster, a distributed file system (DFS) deployed on the computer cluster, and an MPI cluster formed by the message passing interface (MPI) deployed on each computer in the computer cluster, the DFS storing the operating data required when the MPI cluster runs a program, and the computing method comprises:
dividing, by the MPI cluster, a program submitted to the MPI cluster into multiple MPI tasks and distributing them to the computer hosting each MPI in the MPI cluster;
performing, by the computer hosting each MPI in the MPI cluster, computation on the operating data stored in the DFS according to the MPI task distributed to it and the program submitted to the MPI cluster; and
storing, by each computer in the computer cluster, the computation result in the DFS, so that each computer in the computer cluster can access the result.
7. The method according to claim 6, characterized in that
the DFS is further configured to use one computer in the computer cluster as the master node of the DFS and the other computers as slave nodes of the DFS; and
the MPI cluster is further configured to use one computer in the computer cluster as the master node of the MPI cluster and the other computers as slave nodes of the MPI cluster.
8. The method according to claim 7, characterized in that
the master node of the DFS and the master node of the MPI cluster are the same computer; or
the master node of the DFS and the master node of the MPI cluster are different computers.
9. The method according to any one of claims 6 to 8, characterized in that the DFS is built on the Network File System (NFS) or the Hadoop Distributed File System (HDFS).
10. The method according to any one of claims 6 to 9, characterized in that the MPI cluster is built with MPICH.
CN201410273009.XA 2014-06-18 2014-06-18 Distributed development platform and calculation method of same Pending CN105183531A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410273009.XA CN105183531A (en) 2014-06-18 2014-06-18 Distributed development platform and calculation method of same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410273009.XA CN105183531A (en) 2014-06-18 2014-06-18 Distributed development platform and calculation method of same

Publications (1)

Publication Number Publication Date
CN105183531A true CN105183531A (en) 2015-12-23

Family

ID=54905629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410273009.XA Pending CN105183531A (en) 2014-06-18 2014-06-18 Distributed development platform and calculation method of same

Country Status (1)

Country Link
CN (1) CN105183531A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108170530A (en) * 2017-12-26 2018-06-15 北京工业大学 A kind of Hadoop Load Balancing Task Scheduling methods based on mixing meta-heuristic algorithm
CN108604202A (en) * 2016-05-12 2018-09-28 华为技术有限公司 The working node of parallel processing system (PPS) is rebuild
WO2022047632A1 (en) * 2020-09-01 2022-03-10 华为技术有限公司 Data computation method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103428217A (en) * 2013-08-19 2013-12-04 中国航空动力机械研究所 Method and system for dispatching distributed parallel computing job
CN103780655A (en) * 2012-10-24 2014-05-07 阿里巴巴集团控股有限公司 Message transmission interface task and resource scheduling system and method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103780655A (en) * 2012-10-24 2014-05-07 阿里巴巴集团控股有限公司 Message transmission interface task and resource scheduling system and method
CN103428217A (en) * 2013-08-19 2013-12-04 中国航空动力机械研究所 Method and system for dispatching distributed parallel computing job

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108604202A (en) * 2016-05-12 2018-09-28 华为技术有限公司 The working node of parallel processing system (PPS) is rebuild
CN108170530A (en) * 2017-12-26 2018-06-15 北京工业大学 A kind of Hadoop Load Balancing Task Scheduling methods based on mixing meta-heuristic algorithm
CN108170530B (en) * 2017-12-26 2021-08-17 北京工业大学 Hadoop load balancing task scheduling method based on mixed element heuristic algorithm
WO2022047632A1 (en) * 2020-09-01 2022-03-10 华为技术有限公司 Data computation method and device

Similar Documents

Publication Publication Date Title
CN107870845B (en) Management method and system for micro-service architecture application
CN102317923B (en) Storage system
US10078552B2 (en) Hierarchic storage policy for distributed object storage systems
CN104115447A (en) Allowing destroy scheme configuration method and device under cloud computing architecture
CN102938784A (en) Method and system used for data storage and used in distributed storage system
CN103797462A (en) Method, system, and device for creating virtual machine
EP3513296B1 (en) Hierarchical fault tolerance in system storage
CN107430603A (en) The system and method for MPP database
CN104346479A (en) Database synchronization method and database synchronization device
CN102282544A (en) Storage system
CN102202087A (en) Method for identifying storage equipment and system thereof
CN105095103A (en) Storage device management method and device used for cloud environment
CN104598316A (en) Storage resource distribution method and device
CN108197159A (en) Digital independent, wiring method and device based on distributed file system
CN103399781A (en) Cloud server and virtual machine management method thereof
CN105205143A (en) File storage and processing method, device and system
CN103455346A (en) Application program deployment method, deployment main control computer, deployment client side and cluster
CN105528454A (en) Log treatment method and distributed cluster computing device
CN105474177A (en) Distributed processing system, distributed processing device, distributed processing method, and distributed processing program
CN102282545B (en) Storage system
CN105183531A (en) Distributed development platform and calculation method of same
CN105872635A (en) Video resource distribution method and device
CN110851143A (en) Source code deployment method, device, equipment and storage medium
CN105074660A (en) Deploying data-path-related plugins
CN103530206A (en) Data recovery method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20151223