CN103502941B - Parallel processing method and device - Google Patents

Parallel processing method and device

Info

Publication number
CN103502941B
CN103502941B (application CN201280000701.4A)
Authority
CN
China
Prior art keywords
task
information
multiple tasks
business information
parallel processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201280000701.4A
Other languages
Chinese (zh)
Other versions
CN103502941A (en)
Inventor
李震 (Li Zhen)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of CN103502941A
Application granted granted Critical
Publication of CN103502941B
Legal status: Active (current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The present invention relates to the field of computer technology, and in particular to a parallel processing method and device. The parallel processing method includes: receiving multiple task processing requests, and determining, according to a service identifier carried in each task processing request, business information corresponding to the task; and, according to the business information corresponding to the task, processing the multiple tasks in parallel using multiple task steps, where the number of the multiple task steps is greater than or equal to 2. With the parallel processing method and device of the embodiments of the present invention, tasks are processed in parallel using multiple task steps; moreover, the business information used to process a task is determined from the service identifier, so the user's task run parameters do not have to be submitted repeatedly, which simplifies multi-step processing.

Description

Parallel processing method and device
Technical field
The embodiments of the present invention relate to the field of computer technology, and in particular to a parallel processing method and device.
Background of the invention
With the development of the Internet, we have entered an era of information explosion, and parallel processing of large volumes of information can improve processing efficiency.
At present, the better-known parallel processing systems include the Hadoop system (a distributed system architecture) and the EMR (Elastic MapReduce) system.
However, for the parallel processing of tasks, the above Hadoop or EMR systems must process strictly according to the two steps map and reduce, where map means processing the original files according to map rules and outputting intermediate results, and reduce means merging the intermediate results according to reduce rules. If a task requires multi-step processing (more than 2 steps), the task must be submitted multiple times, and the user's task run parameters must be entered each time, to complete the multi-step processing; this is therefore relatively complicated for the user.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a parallel processing method and device that simplify multi-step processing.
In one aspect, an embodiment of the present invention provides a parallel processing method, including:
receiving multiple task processing requests, and determining, according to a service identifier carried in each task processing request, business information corresponding to the task; and
according to the business information corresponding to the task, processing the multiple tasks in parallel using multiple task steps, where the number of the multiple task steps is greater than or equal to 2.
In another aspect, an embodiment of the present invention provides a parallel processing apparatus, including:
a receiving unit, configured to receive multiple task processing requests and determine, according to a service identifier carried in each task processing request, business information corresponding to the task; and
a processing unit, configured to process the multiple tasks in parallel using multiple task steps according to the business information corresponding to the task, where the number of the multiple task steps is greater than or equal to 2.
With the parallel processing method and device of the embodiments of the present invention, tasks are processed in parallel using multiple task steps; moreover, the business information used to process a task is determined from the service identifier, so the user's task run parameters do not have to be submitted repeatedly, which simplifies multi-step processing.
Brief Description Of Drawings
Fig. 1 is a schematic flowchart of a parallel processing method according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a parallel processing apparatus according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a parallel processing apparatus in application scenario 1 according to an embodiment of the present invention;
Fig. 4 is a first schematic diagram of task step relations in a parallel processing method according to an embodiment of the present invention;
Fig. 5 is a second schematic diagram of task step relations in a parallel processing method according to an embodiment of the present invention;
Fig. 6 is a schematic flowchart of a parallel processing method in application scenario 1 according to an embodiment of the present invention.
Modes for carrying out the invention
As shown in Fig. 1, an embodiment of the present invention provides a parallel processing method, including:
11. Receiving multiple task processing requests, and determining, according to a service identifier (ID) carried in each task processing request, business information corresponding to the task.
12. According to the business information corresponding to the task, processing the multiple tasks in parallel using multiple task steps, where the number of the multiple task steps is greater than or equal to 2.
Here, the multiple task steps can be understood as processing a task using a number of steps greater than or equal to 2.
With the parallel processing method of this embodiment, tasks are processed in parallel using multiple task steps; moreover, the business information used to process a task is determined from the service identifier, so the user's task run parameters do not have to be submitted repeatedly and multi-step processing is implemented simply. This overcomes the defect of Hadoop and EMR systems, which require the task to be submitted multiple times, with the user's task run parameters entered each time, to complete multi-step processing.
In the parallel processing method of this embodiment, before the multiple task processing requests are received, the method may further include:
obtaining a user-defined service definition file;
parsing the service definition file to obtain the business information; and
generating a service identifier and establishing a correspondence between the service identifier and the business information.
In this embodiment, the user-defined service definition file can serve as a processing template for a class of services, and thus as the basis on which multiple tasks of that class of services are run. When a task is submitted, providing the service identifier is enough to determine the business information, so the task run parameters do not have to be passed in every time; this reduces the operations the user performs when running tasks and makes the system easier to use. A minimal sketch of this registration flow is given below.
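For illustration only, the following minimal Python sketch shows one way such a registration step could look. It is not the implementation of this embodiment: the JSON file format, the field names (task_definition, task_split, step_association, task_steps) and the in-memory SERVICE_REGISTRY are assumptions; the embodiment only requires that the service definition file be parsed, a service identifier be generated, and the correspondence between the identifier and the business information be stored (for example in a database).

```python
import json
import uuid

# In-memory stand-in for the database that stores service-ID -> business-info mappings.
SERVICE_REGISTRY = {}

def register_service_definition(definition_text: str) -> str:
    """Parse a user-defined service definition file (JSON here for simplicity),
    extract the business information, generate a service ID, and store the mapping."""
    business_info = json.loads(definition_text)

    # The business information is expected to carry the four kinds of data
    # described above: task definition info, task split info,
    # task step association info, and per-step task step info.
    for key in ("task_definition", "task_split", "step_association", "task_steps"):
        if key not in business_info:
            raise ValueError(f"service definition is missing '{key}'")

    service_id = uuid.uuid4().hex          # generated service identifier
    SERVICE_REGISTRY[service_id] = business_info
    return service_id                      # returned to the user for later task submissions

def lookup_business_info(service_id: str) -> dict:
    """Resolve the service ID carried in a task processing request."""
    return SERVICE_REGISTRY[service_id]
```

In this sketch a user would call register_service_definition once per service and reuse the returned service ID when submitting tasks of that service.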
Specifically, the business information may include:
Task definition information: used to define the fault-tolerance level, the computation model, and so on of a task.
Task split information: used to split a task into multiple task steps.
Task step association information: used to define the processing order among the multiple task steps.
Task step information: used to define the run information of each task step, where the run information includes resource information, the user program, user settings, and so on.
Optionally, the run information may further include the processing mode of the multiple task steps, the processing mode being a serial processing mode or a parallel processing mode.
When the processing mode of the multiple task steps is the serial processing mode, all outputs of a preceding task step among the multiple task steps serve, after passing an integrity check, as the input of the following task step among the multiple task steps. That is, each task step produces multiple outputs, and only after all of those outputs have passed the integrity check can they enter the next task step as its input.
When the processing mode of the multiple task steps is the parallel processing mode, any one output of a preceding task step among the multiple task steps is used directly as input of the following task step among the multiple task steps. That is, each task step still produces multiple outputs, but they do not all need to pass an integrity check; any one output of a task step can enter the next task step as input.
It can be seen that multiple task steps can be processed in parallel, which improves processing capacity and overcomes the defect of Hadoop and EMR systems, which require exactly two steps processed in strict serial order. A sketch of the two chaining modes follows.
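As an illustration of the difference between the two processing modes, the following Python sketch chains the output of one task step into the next; the function names and the placeholder integrity check are assumptions made for this sketch, not part of the claimed method.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Iterable, List

def passes_integrity_check(outputs: List[bytes]) -> bool:
    # Placeholder check: in a real system this would verify that the
    # preceding step's output set is complete and consistent.
    return all(o is not None for o in outputs)

def run_serial_mode(step: Callable[[List[bytes]], List[bytes]],
                    outputs: List[bytes]) -> List[bytes]:
    """Serial mode: the next step starts only after ALL outputs of the
    preceding step have passed the integrity check."""
    if not passes_integrity_check(outputs):
        raise RuntimeError("preceding step's outputs failed the integrity check")
    return step(outputs)

def run_parallel_mode(step: Callable[[bytes], bytes],
                      outputs: Iterable[bytes]) -> List[bytes]:
    """Parallel mode: each individual output of the preceding step is fed
    to the next step as soon as it is available, with no integrity check."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(step, outputs))
```

In the serial mode the next step sees the complete, checked output set; in the parallel mode each output is forwarded as soon as it exists, so consecutive steps can overlap in time.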
Further, the manner of processing a task using multiple task steps according to the business information corresponding to the task may include:
splitting the task into multiple task steps according to the task split information in the business information;
obtaining the user program of each task step according to the task step information in the business information, and applying for resources for the task step; and
occupying the resources applied for and calling the user program to process the task according to the task step association information in the business information, until the processing of the multiple task steps is completed according to the processing order among the multiple task steps. A sketch of this scheduling loop is given below.
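The following Python sketch illustrates this scheduling loop under stated assumptions: the TaskStep and ResourceManager types and the processing_order list stand in for the task split information, task step information, and task step association information; the embodiment itself does not prescribe these data structures.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class TaskStep:
    name: str
    user_program: Callable[[List[bytes]], List[bytes]]
    resource_spec: Dict[str, int]          # e.g. {"cpu": 2, "memory_mb": 4096}

class ResourceManager:
    def apply(self, spec: Dict[str, int]) -> str:
        # Return an identifier of matching resources (stubbed out here).
        return f"resource-for-{spec.get('cpu', 1)}cpu"
    def release(self, resource_id: str) -> None:
        pass

def run_task(task_input: List[bytes],
             steps: Dict[str, TaskStep],
             processing_order: List[str],
             rm: ResourceManager) -> List[bytes]:
    """Run the task's steps in the order defined by the step association info."""
    data = task_input
    for step_name in processing_order:      # e.g. ["step1", "step2", "step3"]
        step = steps[step_name]
        resource_id = rm.apply(step.resource_spec)   # apply for resources
        try:
            data = step.user_program(data)           # call the user program
        finally:
            rm.release(resource_id)                  # release when the step ends
    return data
```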
Optionally, the parallel processing method of this embodiment may further include:
processing tasks with higher priority first according to a priority ranking of the tasks; or
adjusting the priorities of the tasks and processing tasks with higher priority first.
The manner of adjusting the priority of a task may include performing priority adjustment according to the waiting time of the task and/or the completion time of the task, as sketched below.
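A minimal sketch of such an adjustment is shown below; the weighting of waiting time and expected completion time is an arbitrary illustrative choice, since the embodiment only states that these quantities may be used.

```python
import heapq
import time
from dataclasses import dataclass, field
from typing import List

@dataclass(order=True)
class QueuedTask:
    priority: float
    task_id: str = field(compare=False)
    submitted_at: float = field(compare=False)
    expected_completion_s: float = field(compare=False)

def adjust_priority(task: QueuedTask, now: float) -> float:
    waiting_time = now - task.submitted_at
    # Longer-waiting tasks and shorter tasks get boosted (lower value = higher priority).
    return task.priority - 0.01 * waiting_time + 0.001 * task.expected_completion_s

def next_task(queue: List[QueuedTask]) -> QueuedTask:
    now = time.time()
    for t in queue:
        t.priority = adjust_priority(t, now)
    heapq.heapify(queue)            # re-order the queue after adjustment
    return heapq.heappop(queue)     # the highest-priority task is processed first
```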
As shown in Fig. 2, corresponding to the parallel processing method of the above embodiment, an embodiment of the present invention provides a parallel processing apparatus, including:
a receiving unit 21, configured to receive multiple task processing requests and determine, according to a service identifier (ID) carried in each task processing request, business information corresponding to the task; and
a processing unit 22, configured to process the multiple tasks in parallel using multiple task steps according to the business information corresponding to the task, where the number of the multiple task steps is greater than or equal to 2.
With the parallel processing apparatus of this embodiment, tasks are processed in parallel using multiple task steps; moreover, the business information used to process a task is determined from the service identifier, so the user's task run parameters do not have to be submitted repeatedly and multi-step processing is implemented simply. This overcomes the defect of Hadoop and EMR systems, which require the task to be submitted multiple times, with the user's task run parameters entered each time, to complete multi-step processing.
The parallel processing apparatus of this embodiment may further include:
an acquiring unit, configured to obtain a user-defined service definition file;
a parsing unit, configured to parse the service definition file, obtain the business information, generate a service identifier, and establish a correspondence between the service identifier and the business information; and
a storage unit, configured to store the correspondence between the service identifier and the business information.
Specifically, the business information may include:
Task definition information: used to define the fault-tolerance level, the computation model, and so on of a task.
Task split information: used to split a task into multiple task steps.
Task step association information: used to define the processing order among the multiple task steps.
Task step information: used to define the run information of each task step, where the run information includes resource information, the user program, user settings, and so on.
Further, the processing unit 22 may specifically be configured to:
split the task into multiple task steps according to the task split information in the business information;
obtain the user program of each task step according to the task step information in the business information, and apply for resources for the task step; and
occupy the resources applied for and call the user program to process the task according to the task step association information in the business information, until the processing of the multiple task steps is completed according to the processing order among the multiple task steps.
Optionally, the run information may further include the processing mode of the multiple task steps, the processing mode being a serial processing mode or a parallel processing mode, and the processing unit 22 may further be configured to:
when the processing mode of the multiple task steps is the serial processing mode, use all outputs of a preceding task step among the multiple task steps, after they pass an integrity check, as the input of the following task step among the multiple task steps; and
when the processing mode of the multiple task steps is the parallel processing mode, use any one output of a preceding task step among the multiple task steps directly as input of the following task step among the multiple task steps.
Optionally, the processing unit 22 may further be configured to:
process tasks with higher priority first according to a priority ranking of the tasks, or adjust the priorities of the tasks and process tasks with higher priority first.
The manner of adjusting the priority of a task may include performing priority adjustment according to the waiting time of the task and/or the completion time of the task.
The parallel processing apparatus of this embodiment can be understood with reference to the parallel processing method of the above embodiment; identical content is not repeated here.
As shown in Fig. 3, the structure of a parallel processing apparatus of an embodiment of the present invention in an application scenario is as follows.
Web Service 31 is responsible for receiving and forwarding users' web requests; for example, it receives a user's request to define a service and forwards it to the service definition module 32. Web Service is an application service and can be understood with reference to the prior art; it is not described further here.
The service definition module 32 is responsible for providing an interface that allows the user to define the service definition file. The service definition file contains the business information. The business information may include task definition information, task split information, task step association information, and task step information. The business information is the basis on which the task scheduler schedules and processes tasks; the task scheduler and the business information are described in detail below.
The task parsing module 33 (referred to below as the service parsing module) is responsible for receiving the user-defined service definition file (of a file type such as xml or json), parsing it to obtain the user-defined business information, saving the business information in the database 34, and returning the service identifier (ID) corresponding to the business information. The database 34 may be a distributed database; distributed databases can be understood with reference to the prior art and are not described further here.
The task scheduler 35 is responsible for receiving the task processing requests sent by Web Service 31. The task scheduler 35 can be used to adapt to different computation models according to the needs of the service, for example the map/reduce model of a Hadoop system, or a scheduling model with multiple steps (2 or more steps). The task scheduler 35 can also be used for priority ranking or priority adjustment, for splitting tasks, for allocating resources to tasks, and for task control.
The resource manager 36 is responsible for satisfying the resource application and release requests of the task scheduler 35. Specifically, the main functions of the resource manager 36 may include resource management, resource matching, and automatic scaling of resources.
The task running module 37 is responsible for processing tasks; it calls the processing programs developed by the user and handles the tasks distributed by the task scheduler 35.
The cluster management module 38 is responsible for the automatic deployment and monitoring of the cluster that processes parallel tasks.
The bottom layer supports various heterogeneous hardware 39 such as physical machines and VMs (Virtual Machines). Physical machines may include PCs, workstations, various application servers, and so on.
Specifically, the business information may include task definition information, task split information, task step association information, and task step information.
(1) The task definition information includes the fault-tolerance level FaultTolerance and the computation model ProgramModel, for example:
FaultTolerance="Normal"
ProgramModel="CMR"
Here, CMR stands for Cloud MapReduce, which can be understood as a multi-step computation model.
Optionally, the computation model may also be the computation model of a Hadoop or EMR system, implementing two-step processing, so that the parallel processing apparatus of this embodiment is compatible with two-step processing.
(2) Task split information, for example:
The task submitted by the user is split according to the user's task split information (or a split function provided by the system by default may be used), and the split results are then processed by the subsequent task steps.
(3) Task step association information: multiple task steps (more than 2) can be defined, rather than only the two steps map and reduce of a Hadoop system, and the processing order among the multiple task steps, i.e. the task step relations, is defined. This overcomes the defect of Hadoop and EMR systems, which require the task to be submitted multiple times, with the user's task run parameters entered each time, to complete multi-step processing. The task step relations are defined, for example, as follows:
Here, StepRatio is the proportional weight of each Step in the task's running progress; adjusting StepRatio adjusts how the task's running progress is computed.
Fig. 4 shows one task step relation: according to the task step relations, the task scheduler schedules the task, and the result produced after one Step runs is used as the input of the next Step, which then continues to run; for example, Step2 is entered after Step1, and then Step3 is entered.
Fig. 5 shows a task step relation containing a branch; for example, Step21 and Step22 are located side by side after Step1.
(4) Task step information: the task step information may include the resource information for running each Step, the user program, and other user settings, for example:
Here, "%" is the symbol for the division operation.
The scheduler applies for resources according to the task step information and then processes the task. A hypothetical sketch of a service definition covering items (1) to (4) is given after this paragraph.
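Because the original example content of the service definition file is not reproduced here, the following is a purely hypothetical illustration, written as a Python dict, of how business information covering items (1) to (4) might be organized; every field name and value is an assumption except FaultTolerance, ProgramModel and StepRatio, which are named in the text above.

```python
# Hypothetical service definition, written as a Python dict for illustration.
# The text states that the service definition file may be xml or json; the
# concrete field names below are assumptions, not the patented file format.
EXAMPLE_SERVICE_DEFINITION = {
    # (1) Task definition information
    "task_definition": {
        "FaultTolerance": "Normal",
        "ProgramModel": "CMR",            # Cloud MapReduce: multi-step model
    },
    # (2) Task split information
    "task_split": {
        "splitter": "user_splitter.py",   # or a system-default split function
        "chunk_size_mb": 64,
    },
    # (3) Task step association information (cf. Fig. 4 / Fig. 5)
    "step_association": {
        "order": [["step1"], ["step21", "step22"], ["step3"]],  # branch after step1
        "StepRatio": {"step1": 0.2, "step21": 0.3, "step22": 0.3, "step3": 0.2},
        "mode": "parallel",               # or "serial" (integrity check between steps)
    },
    # (4) Task step information: resources, user program, user settings per step
    "task_steps": {
        "step1": {"resources": {"cpu": 2, "memory_mb": 2048},
                  "user_program": "slice.py"},
        "step21": {"resources": {"cpu": 4, "memory_mb": 4096},
                   "user_program": "transcode_video.py"},
        "step22": {"resources": {"cpu": 2, "memory_mb": 2048},
                   "user_program": "transcode_audio.py"},
        "step3": {"resources": {"cpu": 1, "memory_mb": 1024},
                  "user_program": "merge.py"},
    },
}
```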
As shown in Fig. 6, with reference to the parallel processing apparatus shown in Fig. 3, the parallel processing method of the embodiment of the present invention is illustrated by taking a media transcoding service as an example, in which multiple transcoding tasks can be submitted and each transcoding task includes three steps, slicing, transcoding, and merging, corresponding to step1, step2, and step3 respectively. The method includes:
Step 61: The user logs in.
Step 62: The service definition module receives the user's request to submit a user-defined service definition file and returns the service definition page.
Step 63: The service definition module completes the service definition and submits the generated service definition file to the service parsing module.
Step 64: The service parsing module receives the service definition file, parses it, and obtains the user-defined business information.
Step 65: The service parsing module stores the business information in the database and returns the service ID of the business information.
Step 66: The user submits a task, and Web Service receives the task processing request submitted by the user. In this implementation, the task processing request may include the user's input and output and the service ID to be used.
Step 67: Web Service forwards the task processing request to the task scheduler.
Step 68: The task scheduler finds the user-defined business information according to the task processing request, obtains that business information from the database, and returns a submission success to the user. In this embodiment, the task processing request carries the service ID.
Step 69: The task scheduler obtains the user's application program according to the task split information in the business information.
Specifically, the task scheduler splits the task into multiple small tasks, such as step1, step2, and step3, according to the task split information, so that parallel processing can proceed faster. A sketch of such a request and split is shown below.
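For illustration, the following Python sketch shows what such a task processing request and the step1 split might look like; the request fields, storage paths, and slicing granularity are assumptions, since the embodiment only requires that the request carry the user's input/output and the service ID.

```python
# Illustrative only: the request fields and the slicing granularity are assumptions.
task_request = {
    "service_id": "a1b2c3d4",              # hypothetical ID returned in step 65
    "input": "s3://media/in/movie.mp4",     # hypothetical input location
    "output": "s3://media/out/movie_720p.mp4",
}

def split_transcoding_task(request, slice_seconds=60):
    """Split one transcoding task into per-slice sub-tasks for step1 (slicing),
    which then feed step2 (transcoding) and step3 (merging)."""
    # In a real system, slice boundaries would come from probing the media file;
    # here we simply assume a 30-minute source split into 60-second slices.
    total_seconds = 1800
    return [
        {"step": "step1", "source": request["input"],
         "start": s, "duration": slice_seconds}
        for s in range(0, total_seconds, slice_seconds)
    ]

sub_tasks = split_transcoding_task(task_request)
print(len(sub_tasks), "slices queued for step1")   # 30 slices
```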
Step 610: The task scheduler applies to the resource manager for resources.
According to the run information in the task step information of step1, the task scheduler applies to the resource manager for the resources it needs (including the specification and image of the machine to run on, and the resources occupied when the user's task runs: CPU, memory, virtual memory, disk, network bandwidth, and so on).
The resource manager returns the identifier of matching resources according to the information provided by the task scheduler. A sketch of such a resource application is shown below.
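The following sketch illustrates such a resource application under stated assumptions; the specification fields and the simple matching rule are chosen for illustration and are not the resource manager's actual algorithm.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class ResourceSpec:
    machine_image: str
    cpu: int
    memory_mb: int
    disk_gb: int
    bandwidth_mbps: int

class ResourceManager:
    def __init__(self, pool: Dict[str, ResourceSpec]):
        self.pool = pool                       # resource ID -> available spec
    def apply(self, wanted: ResourceSpec) -> Optional[str]:
        """Return the ID of a resource that matches the requested spec, if any."""
        for rid, spec in self.pool.items():
            if (spec.machine_image == wanted.machine_image
                    and spec.cpu >= wanted.cpu
                    and spec.memory_mb >= wanted.memory_mb
                    and spec.disk_gb >= wanted.disk_gb
                    and spec.bandwidth_mbps >= wanted.bandwidth_mbps):
                return rid
        return None                            # no match: caller may wait or scale out

rm = ResourceManager({"node-7": ResourceSpec("transcode-image", 8, 16384, 200, 1000)})
resource_id = rm.apply(ResourceSpec("transcode-image", cpu=4, memory_mb=4096,
                                    disk_gb=50, bandwidth_mbps=100))
print(resource_id)                             # "node-7"
```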
Optionally, the task scheduler can select high-priority tasks for parallel processing according to the priority ranking of the tasks, or the task scheduler can adjust the priorities.
Step 611: The task is run on the resources applied for.
Specifically, when a resource is started, a task running module can be started on it for task running and management. The task scheduler sends the task information to the task running module on that resource.
Step 612: The task running module obtains the user application program of step1 from the file storage unit.
Step 613: The task running module runs step1.
Once step1 produces a result, the task scheduler can find step2 according to the task step association information, again apply to the resource manager for the resources corresponding to step2, and run the task on those resources, and so on until step3 finishes.
Because the processing has multiple steps, steps 68 to 613 may be involved for every step, until the whole task has been issued successfully.
Optionally, when scheduling task steps, the task scheduler needs to specify the inputs and outputs of the intermediate steps, and the service definition file defines, for each step, whether its output is emitted only after the step completes. If a step is defined to output immediately without an integrity check, then each result output by a step can be used directly as input of the next step, and the next step can continue to run, which implements parallel processing of the steps. Conversely, if an integrity check is defined as required, then the batch of intermediate results output by each Step needs to be processed to some extent, and only after the output is complete does the next step proceed, which implements serial processing of the task.
Optionally, the service definition module, the service parsing module, the task scheduler, and the resource manager may be deployed on the same server or on different servers. The task running module and the file storage unit may be deployed on the same or different physical machines or VMs.
It should also be noted that, after submitting a service definition file once (steps 61 to 65), the user can reuse it for running similar tasks; that is, when the user submits multiple tasks of the same type, those tasks can share the service definition file submitted in steps 61 to 65.
The user can submit multiple tasks, and Web Service forwards them to the task scheduler for parallel processing, so that one service definition file is used by multiple tasks and the tasks run in parallel. For example, steps 660 to 6130 below are similar to steps 66 to 613:
Step 660: The user submits a task, and Web Service receives the user's task processing request (including the user's input and output and the service ID to be used).
Step 670: Web Service forwards the request (including the user's input and output and the service ID to be used) to the task scheduler.
Step 680: The task scheduler finds the user-defined business information according to the information in the request (the service ID), obtains that business information from the database, and returns a submission success to the user.
Step 690: The task scheduler obtains the user's application program according to the task split information in the business information.
Step 6100: The task scheduler applies to the resource manager for resources. The resource manager returns the identifier of matching resources according to the information provided by the task scheduler.
Step 6110: The task scheduler notifies the task running module of the resources applied for step1.
Step 6120: The task running module obtains the user application program of step1 from the file storage unit.
Step 6130: The task running module runs step1.
It can be seen that tasks are processed in parallel using multiple task steps, which overcomes the defect of Hadoop and EMR systems, which require the task to be submitted multiple times, with the user's task run parameters entered each time, to complete multi-step processing.
Moreover, the user can submit multiple tasks, so that one service definition file is used by multiple tasks of the same type and the tasks run in parallel.
Those of ordinary skill in the art can understand that all or part of the flows in the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the flows of the embodiments of each of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.

Claims (4)

  1. A parallel processing method, characterized by comprising:
    receiving multiple task processing requests, and determining, according to a service identifier carried in each task processing request, business information corresponding to the task;
    according to the business information corresponding to the task, processing the multiple tasks in parallel using multiple task steps, wherein the number of the multiple task steps is greater than or equal to 2;
    wherein the business information comprises:
    task definition information: for defining a fault-tolerance level and a computation model of the task;
    task split information: for splitting the task into multiple task steps;
    task step association information: for defining a processing order among the multiple task steps;
    and task step information: for defining run information of each task step, the run information comprising resource information, a user program, and user settings;
    further, the processing of the multiple tasks in parallel using multiple task steps according to the business information corresponding to the task specifically comprises:
    splitting the task into multiple task steps according to the task split information in the business information;
    obtaining the user program of a task step according to the task step information in the business information, and applying for resources for the task step;
    occupying the resources applied for and calling the user program to process the task according to the task step association information in the business information, until the processing of the multiple task steps is completed according to the processing order among the multiple task steps;
    and the run information further comprises: a processing mode of the multiple task steps, the processing mode being a serial processing mode or a parallel processing mode,
    wherein, when the processing mode of the multiple task steps is the serial processing mode, all outputs of a preceding task step among the multiple task steps serve, after passing an integrity check, as an input of a following task step among the multiple task steps;
    and when the processing mode of the multiple task steps is the parallel processing mode, any one output of a preceding task step among the multiple task steps serves directly as an input of a following task step among the multiple task steps.
  2. The method according to claim 1, characterized in that, before the receiving of the multiple task processing requests, the method further comprises:
    obtaining a user-defined service definition file;
    parsing the service definition file to obtain the business information;
    and generating a service identifier and establishing a correspondence between the service identifier and the business information.
  3. A parallel processing apparatus, characterized by comprising:
    a receiving unit, configured to receive multiple task processing requests and determine, according to a service identifier carried in each task processing request, business information corresponding to the task;
    a processing unit, configured to process the multiple tasks in parallel using multiple task steps according to the business information corresponding to the task, wherein the number of the multiple task steps is greater than or equal to 2;
    wherein the business information comprises:
    task definition information: for defining a fault-tolerance level and a computation model of the task;
    task split information: for splitting the task into multiple task steps;
    task step association information: for defining a processing order among the multiple task steps;
    and task step information: for defining run information of each task step, the run information comprising resource information, a user program, and user settings;
    wherein the processing unit is configured to process the multiple tasks in parallel using multiple task steps according to the business information corresponding to the task, which specifically comprises:
    splitting the task into multiple task steps according to the task split information in the business information;
    obtaining the user program of a task step according to the task step information in the business information, and applying for resources for the task step;
    occupying the resources applied for and calling the user program to process the task according to the task step association information in the business information, until the processing of the multiple task steps is completed according to the processing order among the multiple task steps;
    and the run information further comprises: a processing mode of the multiple task steps, the processing mode being a serial processing mode or a parallel processing mode, the processing unit being further specifically configured to:
    when the processing mode of the multiple task steps is the serial processing mode, use all outputs of a preceding task step among the multiple task steps, after they pass an integrity check, as an input of a following task step among the multiple task steps;
    and when the processing mode of the multiple task steps is the parallel processing mode, use any one output of a preceding task step among the multiple task steps directly as an input of a following task step among the multiple task steps.
  4. The apparatus according to claim 3, characterized in that the apparatus further comprises:
    an acquiring unit, configured to obtain a user-defined service definition file;
    a parsing unit, configured to parse the service definition file, obtain the business information, generate a service identifier, and establish a correspondence between the service identifier and the business information;
    and a storage unit, configured to store the correspondence between the service identifier and the business information.
CN201280000701.4A 2012-03-19 2012-03-19 Parallel processing method and device Active CN103502941B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2012/072545 WO2013138982A1 (en) 2012-03-19 2012-03-19 A parallel processing method and apparatus

Publications (2)

Publication Number Publication Date
CN103502941A CN103502941A (en) 2014-01-08
CN103502941B true CN103502941B (en) 2017-11-17

Family

ID=49221785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201280000701.4A Active CN103502941B (en) 2012-03-19 2012-03-19 Parallel processing method and device

Country Status (2)

Country Link
CN (1) CN103502941B (en)
WO (1) WO2013138982A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107016480B (en) * 2016-01-28 2021-02-02 五八同城信息技术有限公司 Task scheduling method, device and system
CN107818097B (en) * 2016-09-12 2020-06-30 平安科技(深圳)有限公司 Data processing method and device
CN108898482B (en) * 2018-07-09 2021-02-26 中国建设银行股份有限公司 Multi-product signing method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5812809A (en) * 1989-09-04 1998-09-22 Mitsubishi Denki Kabushiki Kaisha Data processing system capable of execution of plural instructions in parallel
CN101110022A (en) * 2007-08-30 2008-01-23 济南卓信智能科技有限公司 Method for implementing workflow model by software
CN101770402A (en) * 2008-12-29 2010-07-07 中国移动通信集团公司 Map task scheduling method, equipment and system in MapReduce system
CN102033748A (en) * 2010-12-03 2011-04-27 中国科学院软件研究所 Method for generating data processing flow codes

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070140244A1 (en) * 2005-12-21 2007-06-21 International Business Machines Corporation Optimal algorithm for large message broadcast

Also Published As

Publication number Publication date
WO2013138982A1 (en) 2013-09-26
CN103502941A (en) 2014-01-08

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220209

Address after: 550025 Huawei cloud data center, jiaoxinggong Road, Qianzhong Avenue, Gui'an New District, Guiyang City, Guizhou Province

Patentee after: Huawei Cloud Computing Technologies Co.,Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Patentee before: HUAWEI TECHNOLOGIES Co.,Ltd.

TR01 Transfer of patent right