KR101764811B1 - Muti-scheduling method and apparatus in multi-processing environment - Google Patents


Info

Publication number
KR101764811B1
Authority
KR
South Korea
Prior art keywords
group
flow
scheduler
processor
processors
Prior art date
Application number
KR1020140079129A
Other languages
Korean (ko)
Other versions
KR20150114869A (en)
Inventor
정기웅
Original Assignee
주식회사 구버넷
정기웅
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 구버넷, 정기웅 filed Critical 주식회사 구버넷
Priority to CN201580003775.7A priority Critical patent/CN105900063B/en
Priority to JP2017520838A priority patent/JP2017521806A/en
Priority to US14/760,374 priority patent/US10530846B2/en
Priority to EP15811135.1A priority patent/EP3163442A4/en
Priority to PCT/KR2015/005914 priority patent/WO2015199366A1/en
Publication of KR20150114869A publication Critical patent/KR20150114869A/en
Application granted granted Critical
Publication of KR101764811B1 publication Critical patent/KR101764811B1/en

Abstract

A method and apparatus for multiple scheduling in a multiprocessing environment are disclosed. A multi-processing apparatus divides a plurality of processors into at least two groups and designates one of the processors as a scheduler for each group.

Description

[0001] The present invention relates to a multi-scheduling method and apparatus in a multi-processing environment.

The present invention relates to a multi-scheduling method and apparatus, and more particularly, to a method and apparatus for multiple scheduling using a plurality of schedulers in a multi-processing environment for parallel processing of packets.

A multiprocessing system processes a plurality of processes in parallel using a plurality of CPU (Central Processing Unit) cores. However, multiprocessing raises the problems of distributing the load evenly among the CPU cores, contention for resources shared between the cores, and degraded cache efficiency.

In particular, in a multiprocessing system that processes packets in parallel, it is desirable to process packets belonging to the same flow on the same CPU, maintaining the flow's affinity for that CPU, in order to increase packet processing efficiency. In this case, however, an excessive load may concentrate on a particular CPU, causing load imbalance across the CPUs as a whole and lowering the overall processing efficiency of the multiprocessing system.

In order to solve this problem, load balancing between the CPUs must be performed periodically. In that case, however, the CPU processing a given flow changes during the load balancing process, so flow affinity is lowered, a packet re-ordering process becomes necessary, and the packet processing efficiency of the multiprocessing system decreases.

As described above, in order to increase the processing efficiency of a multiprocessing system, it is necessary to improve flow affinity while also distributing the load appropriately.

Patent Publication No. 2013-0108609
Patent Publication No. 2011-0070772

SUMMARY OF THE INVENTION The present invention is directed to providing a multi-scheduling method and apparatus that mitigate the trade-off between flow affinity and load balancing and increase the utilization efficiency of all processors in a multiprocessing environment that performs parallel packet processing.

According to an aspect of the present invention, there is provided a method for multiple scheduling in a multi-processing apparatus, including: dividing a plurality of processors into at least two groups; and designating one of the processors as a scheduler for each group.

According to another aspect of the present invention, there is provided a method of scheduling multiple processes in a multi-processing apparatus, comprising: identifying a flow of a received packet; allocating the flow to any one of at least two groups according to a predetermined classification policy, in a situation where a plurality of processors are divided into at least two groups; and designating one of the processors in the group to which the flow is assigned as a scheduler.

According to still another aspect of the present invention, there is provided a multi-processing apparatus including: a packet assignment table including information on the processor assigned to each flow; a packet identification unit for identifying the flow of a received packet and allocating the packet to a processor with reference to the packet assignment table; a group division unit for dividing the plurality of processors into at least two groups; and a scheduler designation unit for designating, for each group, a scheduler that selects the processor to process the packets of each flow.

According to the present invention, it is possible to improve the performance of parallel processing by alleviating the trade-off between flow affinity and load distribution. Also, by using a plurality of dynamically designated schedulers, latency due to packet scheduling and queuing can be reduced. Also, various scheduling algorithms according to traffic attributes can be easily applied through a plurality of schedulers.

FIG. 1 is a diagram illustrating an example of a multiprocessing apparatus according to the present invention;
FIG. 2 is a diagram illustrating an example of a multiprocessing method according to the present invention;
FIG. 3 is a diagram illustrating an example of a classification policy for multi-scheduling according to the present invention;
FIG. 4 is a diagram illustrating a configuration of an example of a multiprocessing apparatus to which the multi-scheduling method according to the present invention is applied;
FIG. 5 is a diagram illustrating an example of a detailed configuration of a multi-scheduling unit according to the present invention;
FIG. 6 is a flowchart illustrating an example of a multi-scheduling method in a multi-processing environment according to the present invention; and
FIG. 7 is a flowchart illustrating another example of a multi-scheduling method in a multi-processing environment according to the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings.

FIG. 1 is a diagram illustrating an example of a multiprocessing apparatus according to the present invention.

Referring to FIG. 1, a multiprocessing apparatus 100 includes a packet identification unit 105, a packet transfer unit 140, a packet assignment table 130, a memory 104, a plurality of queues 110, 112, and 114, a plurality of processors 120, 122, and 124, and a control unit 150.

The packet identification unit 105 receives a packet from a wired or wireless network or another device and identifies the flow of the received packet. The packet identification unit 105 then determines, with reference to the packet assignment table 130, whether a processor has been assigned to the flow of the received packet.

The packet assignment table 130 includes information on the processor allocated to each packet flow. For example, the packet assignment table 130 may include information indicating that the first processor is assigned to process the first and second flows, and the second processor is assigned to process the third flow. The information stored in the packet assignment table 130 is generated and updated by the scheduler.
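The role of the packet assignment table can be sketched as a plain flow-to-processor map. The following is a minimal illustration only; the helper names (`assign`, `lookup`) and the flow/processor labels are assumptions, not structures prescribed by the patent:

```python
# Minimal sketch of a packet assignment table: a map from flow ID to the
# processor assigned to that flow. All names here are illustrative.
packet_assignment_table = {}

def assign(flow_id, processor_id):
    """Record that processor_id handles packets of flow_id (done by the scheduler)."""
    packet_assignment_table[flow_id] = processor_id

def lookup(flow_id):
    """Return the assigned processor, or None for an unknown (new) flow."""
    return packet_assignment_table.get(flow_id)

# The example from the text: the first processor handles the first and
# second flows, the second processor handles the third flow.
assign("flow1", "processor1")
assign("flow2", "processor1")
assign("flow3", "processor2")
```

A lookup miss (None) corresponds exactly to the new-flow case that triggers scheduler designation in the embodiments that follow.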

The memory 104 stores the packet received by the packet identification unit 105. The memory 104 may also store the flow information of the packet identified by the packet identification unit, the processor information obtained by referring to the packet assignment table, and the like.

The packet transfer unit 140 transfers packets stored in the memory to the queue of the corresponding processor. The packet transfer unit 140 may forward the packets stored in the memory to the processor queues sequentially, or may transmit them in an order that takes into account various conditions such as QoS (Quality of Service) and priority.

The queues 110, 112, and 114 receive, from the memory 104, the packets to be processed by the respective processors and store them. In the present embodiment, one queue is shown for each of the processors 120, 122, and 124. However, the present invention is not limited thereto: two or more queues may exist for one processor, or several processors may share one queue.

In addition, the queues 110, 112, and 114 may be implemented as first-in first-out (FIFO) structures, but are not limited thereto; they may be implemented as other types of structures, such as LIFO or priority-based structures, as long as they can store the packets to be delivered to the processors.

If there is no information on the packet's flow in the packet assignment table 130, the controller 150 designates one of the plurality of processors as a scheduler and transmits an interrupt request signal to it. The processor receiving the interrupt request signal selects a processor to process the packet and stores the related information in the packet assignment table.

The plurality of processors 120, 122, and 124 each process packets. In consideration of the efficiency of the multiprocessing system and a reduction in manufacturing cost, one of the plurality of processors (for example, processor 1 120) may be used as a scheduler rather than providing a separate scheduler for packet scheduling. A method of using one of the plurality of processors as a scheduler will be described with reference to FIG. 2. Of course, in this embodiment, a separate scheduler may also be provided in addition to the plurality of processors.

FIG. 2 is a diagram illustrating an example of a multiprocessing method according to the present invention.

Referring to FIG. 1 and FIG. 2 together, the packet identifying unit 105 analyzes the received packet to identify the packet flow (S200, S210). The packet identification unit 105 refers to the packet allocation table 130 and determines whether there is information of a processor to process a flow (S220). The packet, the flow information of the packet, the processor information, and the like are stored in the memory 104.

If the packet assignment table 130 contains information on the processor to process the flow (S230), the packet transfer unit 140 delivers the packet to the queue of that processor (S260). For example, if the received packet is identified by the packet identification unit 105 as belonging to the third flow, and the packet assignment table 130 indicates that the second processor is assigned to process the third flow, the packet transfer unit 140 transfers the packet to the queue 112 of the second processor 122.
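The table-hit path (S230 to S260) can be sketched as follows; the queue layout and the names are assumptions for illustration only:

```python
from collections import deque

# Sketch of the table-hit path: when the identified flow already has an
# assigned processor, the packet transfer unit appends the packet to that
# processor's queue; otherwise scheduling is needed first.
packet_assignment_table = {"flow3": "processor2"}
queues = {"processor1": deque(), "processor2": deque()}

def transfer(flow_id, packet):
    processor = packet_assignment_table.get(flow_id)
    if processor is None:
        return None                    # new flow: must be scheduled first
    queues[processor].append(packet)   # deliver to the assigned queue
    return processor

transfer("flow3", "pkt-A")  # the third flow lands in processor2's queue
```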

On the other hand, if the packet assignment table 130 contains no information on a processor to process the flow (i.e., the flow is a new flow) (S230), the controller 150 designates one of the processors as a scheduler and transmits an interrupt request signal to it (S240). The controller 150 may designate the processor with the smallest current load among the plurality of processors, designate a scheduler through a predetermined scheduler determination algorithm, or designate a predetermined processor as the scheduler. In this embodiment, processor 1 120 is designated as the scheduler.

Upon receiving the interrupt request signal, the processor 120 suspends its previous operation and performs a scheduling operation (S250). For example, the processor 120 designated as the scheduler selects a processor to process the new flow (S250) and stores information about the selected processor in the packet assignment table 130 (S260). Once an interrupt request has been forwarded to the processor designated as the scheduler, no interrupt request for processing another newly arriving packet is allowed until that interrupt is released.
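The new-flow path (S240 to S260) might look like the following sketch. The smallest-load rule, used here both for designating the scheduler and for the scheduler's own processor selection, is one of the options the text allows; the load figures are invented for illustration:

```python
# Sketch of the new-flow path: the controller designates the least-loaded
# processor as the scheduler; the scheduler then selects a processor for
# the new flow and records the choice in the assignment table.
loads = {"processor1": 3, "processor2": 1, "processor3": 5}
packet_assignment_table = {}

def designate_scheduler():
    # One option from the text: the processor with the smallest load.
    return min(loads, key=loads.get)

def schedule_new_flow(flow_id):
    scheduler = designate_scheduler()   # interrupt request goes to this one
    target = min(loads, key=loads.get)  # scheduler picks a processor
    packet_assignment_table[flow_id] = target
    loads[target] += 1                  # account for the new flow's load
    return scheduler, target

scheduler, target = schedule_new_flow("flow9")
```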

In addition, the processor 120 designated as the scheduler may perform load redistribution (re-balancing) among the processors, either periodically or when a specific event occurs (for example, when the load imbalance among the processors exceeds a predetermined level), by applying various conventional load balancing algorithms.

In the case of FIGS. 1 and 2, a single scheduler 120 is selected, no new interrupt is allowed by the system while it performs its task, and processing of another new flow is delayed until the requested interrupt is released. Furthermore, since load redistribution to eliminate load imbalance is performed across all of the processors, the trade-off between flow affinity and load distribution is intensified. This can be mitigated through the multiple scheduling of FIG. 4.

FIG. 3 is a diagram illustrating an example of a classification policy for multi-scheduling according to the present invention.

Referring to FIG. 3, the classification policy includes a policy for dividing a plurality of processors into groups. As shown in FIG. 4, in the case of multiple scheduling, a plurality of processors are divided into at least two groups, and scheduling is performed for each group. This requires a policy for dividing a plurality of processors into groups.

As an example of the classification policy, there is a packet-flow-based policy as shown in FIG. 3. Flows can be divided into two groups, A and B, based on attributes by which packet flows can be hierarchically classified. In this case, the plurality of processors may be divided into two groups according to which flow group the flows they are currently processing belong to.

As another example, there is a classification policy based on the load of each processor. The processors can be divided into a predetermined number of groups so that the load distribution of each group is even.
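Both classification policies can be sketched briefly. In the code below, hashing a hierarchical flow attribute and the greedy even-load split are illustrative assumptions; the patent does not fix particular algorithms:

```python
import zlib

def flow_based_group(flow_attr, num_groups=2):
    """Flow-based policy: map a hierarchical flow attribute (e.g. a
    high-level protocol or subnet label) deterministically to a group."""
    return zlib.crc32(flow_attr.encode()) % num_groups

def load_based_groups(processor_loads, num_groups=2):
    """Load-based policy: place each processor (heaviest first) into the
    currently lightest group so per-group load stays even."""
    groups = [[] for _ in range(num_groups)]
    totals = [0] * num_groups
    for proc, load in sorted(processor_loads.items(),
                             key=lambda kv: kv[1], reverse=True):
        i = totals.index(min(totals))   # pick the lightest group so far
        groups[i].append(proc)
        totals[i] += load
    return groups, totals

groups, totals = load_based_groups({"p1": 5, "p2": 3, "p3": 4, "p4": 2})
```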

There may be a plurality of classification policies for dividing the processors into groups. For example, a first policy may divide the processors into two groups on a flow basis, a second policy may divide them into three groups on a flow basis, and a third policy may divide them into at least two groups according to their degree of load.

The present invention is not limited to the embodiment of FIG. 3. Various classification policies for dividing the processors may be applied. The classification policy is set in advance, and the user can update the classification policy through a separate input / output interface. In FIG. 4, it is assumed that a criterion for dividing a plurality of processors, that is, a classification policy, is set in advance for multi-scheduling.

FIG. 4 is a block diagram illustrating an example of a multi-processing apparatus to which the multi-scheduling method according to the present invention is applied.

Referring to FIG. 4, the multiprocessing apparatus 400 includes a packet identification unit 410, a packet transfer unit 480, a packet assignment table 420, a multi-scheduling unit 430, a memory 440, a plurality of queues 450, and a plurality of processors 460, 462, 464, 466, 470, 472, and 474.

The packet identification unit 410, the packet transfer unit 480, the packet assignment table 420, the memory 440, the plurality of queues 450, and the processors 460, 462, 464, 466, 470, 472, and 474 all have the configuration and functions described with reference to FIG. 1. Therefore, that description will not be repeated here, and only the configuration and functions necessary for multi-scheduling according to the present embodiment will be described.

The multi-scheduling unit 430 determines whether to perform multi-scheduling based on state information of the multiprocessing system, such as the load distribution state, traffic attributes, and traffic processing capacity. In addition, when there are a plurality of classification policies as shown in FIG. 3, the multi-scheduling unit 430 determines which classification policy to apply based on that state information.

If the multi-scheduling unit 430 determines to perform multiple scheduling, it classifies the plurality of processors into at least two groups according to the classification policy and designates a scheduler to perform scheduling for each group. The detailed configuration of the multi-scheduling unit is shown in FIG. 5.

For example, as shown in FIG. 4, the multi-scheduling unit 430 divides the seven processors into two groups (first group: processors 1 to 4; second group: processors 5 to 7), and in each group one processor (466 and 474, respectively) is designated as the scheduler by a predetermined scheduler determination algorithm. When the processors are divided into groups by the multi-scheduling unit 430, the information about the processor groups may be stored in the packet assignment table 420.

For example, when the processors are grouped on a flow basis as shown in FIG. 3, the multi-scheduling unit 430 stores information on which group each flow belongs to in the packet assignment table 420. If no information on the flow of a newly received packet is present in the packet assignment table 420, the packet identification unit 410 identifies the group to which the new flow belongs and stores the packet, together with its identification information, in the memory 440. The multi-scheduling unit 430 designates a scheduler to process the packet within the corresponding group, according to the load level or a predetermined scheduler determination algorithm, and transmits an interrupt request signal to the designated scheduler. The processor designated as the scheduler performs a scheduling operation, such as selecting a processor within the group to process the flow, as described with reference to FIG. 2.

FIG. 5 is a diagram illustrating an example of a detailed configuration of a multi-scheduling unit according to the present invention.

Referring to FIG. 5, the multiple scheduling unit 430 includes a policy determination unit 500, a group division unit 510, and a scheduler designation unit 520.

The policy determination unit 500 determines whether to perform multiple scheduling using information on the various states of the multiprocessing environment, for example, the load distribution state, traffic attributes, traffic processing capacity, and the like. In addition, the policy determination unit 500 determines how to divide the plurality of processors when multiple scheduling is performed, and which policy to apply to each divided group.

For example, the policy determination unit 500 may decide to perform multi-scheduling when the total traffic processing capacity of the multiprocessing apparatus falls below a predetermined level, and may select a flow-based policy as shown in FIG. 3 as the classification policy.

The group division unit 510 divides the plurality of processors into at least two groups according to the classification policy determined by the policy determination unit 500.

The scheduler designation unit 520 designates one of the processors in each group as the scheduler, according to the load level or a predetermined scheduler selection algorithm. For example, the scheduler designation unit 520 may designate the processor with the smallest load in each group as that group's scheduler. Alternatively, a specific processor may be fixed as the scheduler, or the scheduler may be designated dynamically by applying various other selection methods.
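The smallest-load option for the scheduler designation unit can be sketched as follows, using the seven-processor, two-group layout of FIG. 4 (the load numbers are invented for illustration):

```python
# Sketch of the scheduler designation unit: for each pre-divided group,
# designate the processor with the smallest load as that group's
# scheduler. Loads and names are illustrative.
loads = {"p1": 4, "p2": 2, "p3": 3, "p4": 1, "p5": 6, "p6": 2, "p7": 5}
groups = {"group1": ["p1", "p2", "p3", "p4"],   # processors 1 to 4
          "group2": ["p5", "p6", "p7"]}         # processors 5 to 7

def designate_schedulers(groups, loads):
    # min() with a load key picks the least-loaded member of each group.
    return {name: min(members, key=lambda p: loads[p])
            for name, members in groups.items()}

schedulers = designate_schedulers(groups, loads)
```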

FIG. 6 is a flowchart illustrating an example of a multi-scheduling method in a multi-processing environment according to the present invention.

Referring to FIGS. 4 and 6, the multi-scheduling unit 430 obtains various status information, such as the traffic capacity, flow attributes, and load distribution state (S600). Based on this status information, the multi-scheduling unit 430 determines whether to perform multiple scheduling (S610). If multiple scheduling is to be performed, the multi-scheduling unit 430 divides the plurality of processors into at least two groups according to the classification policy (S620) and designates a processor to serve as the scheduler for each group (S630). As described with reference to FIG. 2, the processor designated for each group performs a scheduling operation for the corresponding packet in response to an interrupt request signal, but it schedules only the processors in the group to which it belongs, rather than all processors. Accordingly, scheduling can be performed independently and simultaneously in each of the two or more groups by the scheduler designated for that group. In other words, each group's scheduler receives its interrupt signal from the multi-scheduling unit 430 and can perform scheduling operations, such as selecting a processor for packet processing, regardless of whether the interrupts of the schedulers of the other groups have been released. Furthermore, different policies or algorithms can be applied independently to each divided group as needed.
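The key property of steps S620 and S630, namely that each group's scheduler only ever touches its own group's processors, can be sketched as follows (illustrative names and a simple least-loaded selection rule, both assumptions):

```python
# Sketch of independent per-group scheduling: each group's scheduler
# assigns flows only within its own group, so the groups need not wait
# on one another's interrupts.
groups = {"group1": ["p1", "p2", "p3", "p4"],
          "group2": ["p5", "p6", "p7"]}
loads = {p: 0 for members in groups.values() for p in members}
table = {}

def schedule_in_group(group, flow_id):
    """The group's scheduler selects a processor from its own group only."""
    target = min(groups[group], key=lambda p: loads[p])
    table[flow_id] = target
    loads[target] += 1
    return target

# Two new flows, one per group, scheduled independently of each other.
a = schedule_in_group("group1", "flowA")
b = schedule_in_group("group2", "flowB")
```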

FIG. 7 is a flowchart illustrating another example of a multi-scheduling method in a multi-processing environment according to the present invention. FIG. 7 illustrates a case where the plurality of processors have been grouped by the multi-scheduling unit.

Referring to FIGS. 4 and 7, when the packet identification unit 410 receives a packet (S700), it analyzes the packet to identify its flow (S710). If information about the flow is present in the packet assignment table 420 (S730), the packet identification unit 410 determines the group to which the flow belongs and stores the packet and its identification information in the memory 440, and the multi-scheduling unit 430 transmits an interrupt request signal to the corresponding scheduler so that the packet is processed.

If there is no information on the flow in the packet assignment table 420 (S730), the multi-scheduling unit 430 refers to the classification policy to identify the group to which the flow belongs (S740). For example, when the processor groups are divided on a flow basis as in FIG. 3, the multi-scheduling unit 430 determines, based on the higher-level attribute that hierarchically classifies flows, which group the newly recognized flow belongs to. As another example, when the processor groups are divided according to load distribution, the multi-scheduling unit 430 may assign the newly recognized flow to the group with the relatively lower load.

After determining the group to which the new flow belongs, the multi-scheduling unit 430 designates a scheduler to process the packet within that group, using the load level or a preset scheduler determination algorithm, and transmits an interrupt request signal to the designated scheduler.

The processor receiving the interrupt signal operates as a scheduler, selects a processor to process a new flow, and stores related information in the packet assignment table 420 (S750, S760).

For example, referring again to FIG. 4, if a new flow is assigned to the group of processors 1 to 4, the multi-scheduling unit 430 sends an interrupt signal to processor 4 (466), which is designated as the scheduler. Processor 4 (466) selects processor 1 (460) as the processor to process the packet according to a predetermined processor determination algorithm and stores this information in the packet assignment table 420. Thereafter, when a packet of the same flow is received, the packet transfer unit 480 transfers the packet directly to processor 1 (460).
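The FIG. 4 walk-through can be condensed into a sketch: the first packet of a new flow triggers the group's scheduler (processor 4), which records processor 1 in the table; later packets of the same flow bypass scheduling. Selecting processor 1 is hard-coded here purely to mirror the example, not because the patent prescribes it:

```python
from collections import deque

# End-to-end sketch of the FIG. 4 example for the group of processors 1-4.
group = ["p1", "p2", "p3", "p4"]
scheduler = "p4"                       # designated scheduler of this group
table = {}
queues = {p: deque() for p in group}

def handle_packet(flow_id, packet):
    if flow_id not in table:
        # New flow: interrupt the group's scheduler, which picks a
        # processor (processor 1 in the text's example) and stores it.
        table[flow_id] = "p1"
    queues[table[flow_id]].append(packet)
    return table[flow_id]

first = handle_packet("flowN", "pkt1")   # scheduled by p4, enqueued on p1
second = handle_packet("flowN", "pkt2")  # table hit: direct transfer to p1
```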

The present invention can also be embodied as computer-readable code on a computer-readable recording medium. A computer-readable recording medium includes all kinds of recording devices in which data readable by a computer system is stored. Examples of the computer-readable recording medium include ROM, RAM, CD-ROM, magnetic tape, floppy disks, optical data storage, and the like. The computer-readable recording medium can also be distributed over networked computer systems so that the computer-readable code is stored and executed in a distributed manner.

The present invention has been described with reference to the preferred embodiments. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the disclosed embodiments should be considered in an illustrative rather than a restrictive sense. The scope of the present invention is defined by the appended claims rather than by the foregoing description, and all differences within the scope of equivalents thereof should be construed as being included in the present invention.

Claims (5)

A method for multiple scheduling in a multiprocessing device,
Dividing the plurality of processors into at least two groups;
Assigning the flow to a group to which the pre-designated processor belongs if a previously designated group or processor exists for the flow of the received packet;
Selecting a group to which to allocate the flow among the pre-divided groups if there is no pre-designated group or processor for the flow of the received packet, and designating one of the in-group processors as a scheduler; And
Wherein, when a flow is assigned to the group to which the scheduler belongs, the scheduler performs a scheduling operation of suspending a previously performed task and assigning the flow to one of the processors in the group to which it belongs, and then resuming the suspended task,
Determining whether to perform multi-scheduling based on a load state or a processing capacity of the plurality of processors, or an attribute of a received packet, before dividing the group into the groups,
Wherein the step of dividing and designating to the scheduler are performed when the multi-scheduling is determined to be performed.
delete
The method of claim 1, wherein the step of designating as the scheduler comprises:
Designating, as the scheduler for each group, the processor having the smallest load in the group or a processor determined by a predetermined scheduler determination algorithm.
A method for multiple scheduling in a multiprocessing device,
Identifying a flow of a received packet;
Allocating the flow to any one of the at least two groups according to a predetermined classification policy in a situation where there are a plurality of processors divided into at least two groups; And
And assigning one of the processors in the group to which the flow is assigned to a scheduler,
Wherein the assigning comprises:
Assigning the flow to a group to which the pre-designated processor belongs if a pre-designated group or processor exists for the flow of the received packet;
Selecting a group to which to allocate the flow among the pre-divided groups if there is no pre-designated group or processor for the flow of the received packet, and designating one of the in-group processors as a scheduler;
Wherein, when a flow is assigned to the group to which the scheduler belongs, the scheduler performs a scheduling operation of suspending a previously performed task and assigning the flow to one of the processors in the group to which it belongs, and then resuming the suspended task.
A packet assignment table including information on an assigned processor per flow;
A packet identification unit for identifying a flow of the received packet and allocating the packet to the processor with reference to the packet allocation table;
A group division unit for dividing the plurality of processors into at least two groups; And
And a scheduler designating unit for designating, for each group, a scheduler for selecting a processor to process a packet for each flow,
Wherein, if no previously stored information on a group or processor exists for the flow of the received packet, the scheduler designation unit selects a group to which to allocate the flow and designates one of the processors in the selected group as a scheduler,
Wherein, when a flow is assigned to the group to which the scheduler belongs, the scheduler performs a scheduling operation of suspending a previously performed task and assigning the flow to one of the processors in the group to which it belongs, and then resumes the suspended task.
KR1020140079129A 2014-04-02 2014-06-26 Muti-scheduling method and apparatus in multi-processing environment KR101764811B1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201580003775.7A CN105900063B (en) 2014-06-26 2015-06-12 Scheduling method and device in multiprocessing environment
JP2017520838A JP2017521806A (en) 2014-06-26 2015-06-12 Scheduling method and apparatus in a multiprocessing environment
US14/760,374 US10530846B2 (en) 2014-06-26 2015-06-12 Scheduling packets to destination virtual machines based on identified deep flow
EP15811135.1A EP3163442A4 (en) 2014-06-26 2015-06-12 Method for scheduling in multiprocessing environment and device therefor
PCT/KR2015/005914 WO2015199366A1 (en) 2014-06-26 2015-06-12 Method for scheduling in multiprocessing environment and device therefor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020140039085 2014-04-02
KR20140039085 2014-04-02

Publications (2)

Publication Number Publication Date
KR20150114869A KR20150114869A (en) 2015-10-13
KR101764811B1 true KR101764811B1 (en) 2017-08-04

Family

ID=54348275

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020140079129A KR101764811B1 (en) 2014-04-02 2014-06-26 Muti-scheduling method and apparatus in multi-processing environment

Country Status (1)

Country Link
KR (1) KR101764811B1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080059555A1 (en) * 2006-08-31 2008-03-06 Archer Charles J Parallel application load balancing and distributed work management

Also Published As

Publication number Publication date
KR20150114869A (en) 2015-10-13

Similar Documents

Publication Publication Date Title
CN105900063B (en) Scheduling method and device in multiprocessing environment
JP5905921B2 (en) Dynamic queue management that recognizes traffic and load
JP5954074B2 (en) Information processing method, information processing apparatus, and program.
KR101644800B1 (en) Computing system and method
US8695009B2 (en) Allocating tasks to machines in computing clusters
KR101651871B1 (en) Job Allocation Method on Multi-core System and Apparatus thereof
WO2013029487A1 (en) Resource allocation method and resource management platform
WO2017000657A1 (en) Cache management method and device, and computer storage medium
CN112783659B (en) Resource allocation method and device, computer equipment and storage medium
KR101859188B1 (en) Apparatus and method for partition scheduling for manycore system
JP2015194923A (en) Parallel computer system, control program of job management apparatus and control method of parallel computer system
US20140223053A1 (en) Access controller, router, access controlling method, and computer program
KR20110128023A (en) Multi-core processor, apparatus and method for task scheduling of multi-core processor
KR20170023280A (en) Multi-core system and Method for managing a shared cache in the same system
CN112925616A (en) Task allocation method and device, storage medium and electronic equipment
CN113010309B (en) Cluster resource scheduling method, device, storage medium, equipment and program product
KR102045125B1 (en) Resource assignment method using Continuous Double Auction protocol in distributed processing environment, recording medium and distributed processing device applying the same
JP2016004328A (en) Task assignment program, task assignment method, and task assignment device
CN112817726B (en) Priority-based virtual machine grouping resource scheduling method in cloud environment
KR101595967B1 (en) System and Method for MapReduce Scheduling to Improve the Distributed Processing Performance of Deadline Constraint Jobs
KR101764811B1 (en) Muti-scheduling method and apparatus in multi-processing environment
CN116483538A (en) Data center task scheduling method with low consistency and delay
KR20150114911A (en) Scheduling method and apparatus in multi-processing environment
Papazachos et al. Gang scheduling in a two-cluster system implementing migrations and periodic feedback
KR20150089665A (en) Appratus for workflow job scheduling

Legal Events

Date Code Title Description
A201 Request for examination
A302 Request for accelerated examination
E902 Notification of reason for refusal
AMND Amendment
N231 Notification of change of applicant
E601 Decision to refuse application
AMND Amendment
J201 Request for trial against refusal decision
J301 Trial decision

Free format text: TRIAL NUMBER: 2015101005301; TRIAL DECISION FOR APPEAL AGAINST DECISION TO DECLINE REFUSAL REQUESTED 20150910

Effective date: 20170109

S901 Examination by remand of revocation
GRNO Decision to grant (after opposition)