KR101764811B1 - Multi-scheduling method and apparatus in multi-processing environment - Google Patents
- Publication number
- KR101764811B1 (granted patent); application KR1020140079129A
- Authority
- KR
- South Korea
- Prior art keywords
- group
- flow
- scheduler
- processor
- processors
- Prior art date
Abstract
A method and apparatus for multiple scheduling in a multiprocessing environment are disclosed. A multi-processing apparatus divides a plurality of processors into at least two groups and designates one of the processors as a scheduler for each group.
Description
The present invention relates to a multi-scheduling method and apparatus, and more particularly, to a method and apparatus for multiple scheduling using a plurality of schedulers in a multi-processing environment for parallel processing of packets.
A multiprocessing system processes a plurality of processes in parallel using a plurality of CPU (central processing unit) cores. Multiprocessing, however, raises the problems of evenly distributing the load among the CPU cores, of contention for resources shared between the cores, and of degraded cache efficiency.
In particular, in a multiprocessing system that processes packets in parallel, it is desirable to process the packets belonging to the same flow on the same CPU, maintaining the flow's affinity for that CPU, in order to increase packet processing efficiency. In this case, however, if flows are concentrated on a specific CPU, the excessive load on that CPU may cause a load imbalance across the CPUs as a whole, which may lower the overall processing efficiency of the multiprocessing system.
To solve this problem, load balancing between the CPUs must be performed periodically. In that case, however, the CPU processing a flow changes during the load balancing process, so flow affinity is lowered, a packet re-ordering process becomes necessary, and the packet processing efficiency of the multiprocessing system decreases.
As described above, increasing the processing efficiency of a multiprocessing system requires both improving flow affinity and distributing the load appropriately.
SUMMARY OF THE INVENTION The present invention is directed to providing a multi-scheduling method, and an apparatus therefor, that mitigates the trade-off between flow affinity and load balancing and increases the utilization efficiency of all processors in a multiprocessing environment that performs parallel packet processing.
According to an aspect of the present invention, there is provided a method for multiple scheduling in a multi-processing apparatus, including: dividing a plurality of processors into at least two groups; and designating one of the processors as a scheduler for each group.

According to another aspect of the present invention, there is provided a method of scheduling multiple processes in a multi-processing apparatus, comprising: identifying the flow of a received packet; allocating the flow to one of at least two groups according to a predetermined classification policy, given a plurality of processors divided into those groups; and designating one of the processors in the group to which the flow is assigned as a scheduler.

According to still another aspect of the present invention, there is provided a multi-processing apparatus including: a packet allocation table containing information on the processors assigned to each flow; a packet identification unit for identifying the flow of a received packet and allocating the packet to a processor with reference to the packet allocation table; a group division unit for dividing the plurality of processors into at least two groups; and a scheduler designation unit for designating, for each group, a scheduler that selects the processor to process the packets of each flow.
According to the present invention, it is possible to improve the performance of parallel processing by alleviating the trade-off between flow affinity and load distribution. Also, by using a plurality of dynamically designated schedulers, latency due to packet scheduling and queuing can be reduced. Also, various scheduling algorithms according to traffic attributes can be easily applied through a plurality of schedulers.
FIG. 1 illustrates an example of a multiprocessing apparatus according to the present invention;
FIG. 2 illustrates an example of a multiprocessing method according to the present invention;
FIG. 3 illustrates an example of a classification policy for multi-scheduling according to the present invention;
FIG. 4 is a diagram illustrating the configuration of an example of a multiprocessing apparatus to which a multiple scheduling method according to the present invention is applied;
FIG. 5 is a diagram illustrating an example of a detailed configuration of a multi-scheduling unit according to the present invention;
FIG. 6 is a flowchart illustrating an example of a multiple scheduling method in a multiprocessing environment according to the present invention; and
FIG. 7 is a flowchart illustrating another example of a multi-scheduling method in a multiprocessing environment according to the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings.
FIG. 1 is a diagram illustrating an example of a multiprocessing apparatus according to the present invention.
Referring to FIG. 1, a multiprocessing apparatus includes a packet identification unit, a packet assignment table 130, and a plurality of processors, one of which is designated as a scheduler.

The packet assignment table 130 includes information on the processor allocated to each packet flow. For example, the packet assignment table 130 may indicate that a first processor is assigned to process the first and second flows and a second processor is assigned to process a third flow. The information stored in the packet assignment table 130 is generated and updated by the scheduler.

The packet identification unit identifies the flow of each received packet and, referring to the packet assignment table 130, transmits the packet to the processor assigned to that flow.

If there is no information on the packet's flow in the packet assignment table 130, that is, if the packet belongs to a new flow, the packet identification unit requests scheduling of the new flow from the scheduler.

The plurality of processors process the packets of the flows allocated to them in parallel.
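The role of the packet assignment table can be illustrated with a minimal sketch. All names here are illustrative assumptions; the patent does not prescribe a concrete data structure.

```python
# Minimal sketch of the packet assignment table: a mapping from a flow key
# (e.g. a 5-tuple) to the processor assigned to that flow. Names are
# illustrative; the patent does not specify an implementation.

class PacketAssignmentTable:
    def __init__(self):
        self._flow_to_processor = {}

    def lookup(self, flow_key):
        """Return the assigned processor id, or None for a new flow."""
        return self._flow_to_processor.get(flow_key)

    def assign(self, flow_key, processor_id):
        """Record an assignment; in the patent this is done by the scheduler."""
        self._flow_to_processor[flow_key] = processor_id

table = PacketAssignmentTable()
table.assign("flow-1", 0)  # e.g. a first processor handles the first flow
table.assign("flow-2", 0)
table.assign("flow-3", 1)
print(table.lookup("flow-1"))  # 0
print(table.lookup("flow-4"))  # None -> new flow, must be scheduled
```

A table miss is exactly the event that triggers the scheduling request described above.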
FIG. 2 is a diagram illustrating an example of a multiprocessing method according to the present invention.
Referring to FIG. 1 and FIG. 2 together, the packet identification unit identifies the flow of a received packet and looks the flow up in the packet assignment table 130.

If there is information on the processor to process the flow in the packet assignment table 130 (S230), the packet identification unit transmits the packet to that processor.

On the other hand, if there is no information on the processor to process the flow in the packet assignment table 130 (i.e., if the flow is a new flow) (S230), the packet identification unit sends an interrupt request signal to the scheduler.

Upon receiving the interrupt request signal, the scheduler suspends the task it is performing, selects a processor to process the new flow, and stores the related information in the packet assignment table 130.

In addition, the scheduler resumes the suspended task once the scheduling operation is completed.

In the case of FIGS. 1 and 2, one of the plurality of processors operates as a single scheduler. Hereinafter, multiple scheduling, in which the processors are divided into groups and a scheduler is designated for each group, is described.
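The dispatch path just described — a table hit preserves flow affinity, a table miss raises a scheduling request — can be sketched as follows. Function and variable names are illustrative only, not from the patent.

```python
# Sketch of the dispatch path of FIGS. 1 and 2: a table hit keeps the flow
# on its assigned processor; a miss (new flow) interrupts the scheduler.

def dispatch(packet, flow_of, table, send_to_processor, interrupt_scheduler):
    flow = flow_of(packet)
    processor = table.get(flow)
    if processor is not None:
        send_to_processor(processor, packet)   # existing flow: same processor
    else:
        interrupt_scheduler(flow, packet)      # new flow: request scheduling

# Tiny demonstration with stand-in callbacks.
log = []
table = {"flow-1": 0}
for pkt in ({"flow": "flow-1"}, {"flow": "flow-2"}):
    dispatch(pkt, lambda p: p["flow"], table,
             lambda proc, p: log.append(("sent", proc)),
             lambda flow, p: log.append(("interrupt", flow)))
print(log)  # [('sent', 0), ('interrupt', 'flow-2')]
```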
FIG. 3 is a diagram illustrating an example of a classification policy for multi-scheduling according to the present invention.
Referring to FIG. 3, the classification policy includes a policy for dividing a plurality of processors into groups. As shown in FIG. 4, in the case of multiple scheduling, a plurality of processors are divided into at least two groups, and scheduling is performed for each group. This requires a policy for dividing a plurality of processors into groups.
As an example of the classification policy, there is a packet-flow-based policy as shown in FIG. 3. Flows can be divided into two groups, A and B, based on attributes by which packet flows can be hierarchically classified. In this case, the plurality of processors may be divided into two groups depending on which group the flows they are currently processing belong to.
As another example, there is a classification policy based on the load of each processor. The processors can be divided into a predetermined number of groups so that the load is distributed evenly across the groups.
There may be a plurality of classification policies for dividing the processors into groups. For example, a first policy may divide the processors into two groups on a flow basis, a second policy may divide them into three groups on a flow basis, and a third policy may divide them into at least two groups depending on their load.
The present invention is not limited to the embodiment of FIG. 3. Various classification policies for dividing the processors may be applied. The classification policy is set in advance, and the user can update the classification policy through a separate input / output interface. In FIG. 4, it is assumed that a criterion for dividing a plurality of processors, that is, a classification policy, is set in advance for multi-scheduling.
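As a rough illustration of the two kinds of policy described above, a flow-based policy might map a hierarchically divisible flow attribute to a group index, while a load-based policy partitions processors so that group loads stay even. The criteria and names below are assumptions for illustration, not prescribed by the patent.

```python
# Illustrative sketches of a flow-based and a load-based classification policy.

def flow_based_group(flow_key, num_groups=2):
    """Map a flow to a group using a hierarchically divisible attribute
    (here simply a stable hash of the flow key)."""
    return hash(flow_key) % num_groups

def load_based_groups(processors, loads, num_groups=2):
    """Partition processors so group loads stay even: greedily add each
    processor (heaviest first) to the currently lightest group."""
    groups = [[] for _ in range(num_groups)]
    group_load = [0] * num_groups
    for p in sorted(processors, key=lambda p: -loads[p]):
        i = group_load.index(min(group_load))
        groups[i].append(p)
        group_load[i] += loads[p]
    return groups

procs = [0, 1, 2, 3]
loads = {0: 40, 1: 10, 2: 30, 3: 20}
print(load_based_groups(procs, loads))  # [[0, 1], [2, 3]]
```

The greedy heuristic is one simple way to realize "even load distribution per group"; the patent leaves the exact division algorithm open.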
FIG. 4 is a block diagram illustrating an example of a multi-processing apparatus to which the multi-scheduling method according to the present invention is applied.
Referring to FIG. 4, the multiprocessing apparatus includes a packet identification unit, a packet assignment table 420, a multi-scheduling unit, and a plurality of processors.

The packet identification unit identifies the flow of each received packet and, referring to the packet assignment table 420, transmits the packet to the processor assigned to that flow.

The multi-scheduling unit determines whether to perform multiple scheduling based on the load state or processing capacity of the plurality of processors, or on an attribute of the received packets.

If the multi-scheduling unit determines to perform multiple scheduling, it divides the plurality of processors into at least two groups according to a predetermined classification policy and designates a scheduler for each group.

For example, as shown in FIG. 4, the plurality of processors may be divided into two groups, and one processor in each group may be designated as the scheduler of that group.

For example, in the case of grouping processors on a flow basis as shown in FIG. 3, the flows belonging to group A may be allocated to the first group of processors and the flows belonging to group B to the second group of processors.
FIG. 5 is a diagram illustrating an example of a detailed configuration of a multiple scheduling unit according to the present invention.
Referring to FIG. 5, the multi-scheduling unit includes a scheduling determination unit, a group division unit, and a scheduler designation unit.

The scheduling determination unit determines whether to perform multiple scheduling based on the load state or processing capacity of the plurality of processors, or on an attribute of the received packets.

For example, the scheduling determination unit may apply multiple scheduling when the load on the processors exceeds a predetermined level.

The group division unit divides the plurality of processors into at least two groups according to the classification policy.

The scheduler designation unit designates, for each group, one of the processors in the group as the scheduler that selects the processor to process each flow.
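Putting the three units together, the decide → divide → designate sequence can be sketched as below. The total-load threshold, the round-robin division, and the least-loaded scheduler choice are assumptions for illustration (least-loaded designation is one option the claims mention; the determination criterion and division policy are otherwise left open).

```python
def multi_schedule(processors, loads, load_threshold, num_groups):
    """Sketch of the multi-scheduling unit: (1) decide whether to
    multi-schedule, (2) divide processors into groups, (3) designate the
    least-loaded processor of each group as that group's scheduler."""
    # (1) determination: here, based on total load (one possible criterion)
    if sum(loads[p] for p in processors) < load_threshold:
        return None  # single-scheduler operation suffices
    # (2) group division: round-robin here (policy-dependent in the patent)
    groups = [processors[i::num_groups] for i in range(num_groups)]
    # (3) scheduler designation: least-loaded processor per group
    schedulers = [min(g, key=lambda p: loads[p]) for g in groups]
    return groups, schedulers

procs = [1, 2, 3, 4, 5, 6]
loads = {1: 5, 2: 9, 3: 1, 4: 7, 5: 2, 6: 8}
print(multi_schedule(procs, loads, 20, 2))
# ([[1, 3, 5], [2, 4, 6]], [3, 4])
```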
FIG. 6 is a flowchart illustrating an example of a multi-scheduling method in a multi-processing environment according to the present invention.
Referring to FIGS. 4 and 6, the multi-scheduling unit divides the plurality of processors into at least two groups according to a predetermined classification policy and designates one of the processors in each group as a scheduler.
FIG. 7 is a flowchart illustrating another example of a multi-scheduling method in a multi-processing environment according to the present invention. FIG. 7 illustrates a case where a plurality of processors are grouped by the scheduling unit.
Referring to FIGS. 4 and 7, when a packet is received, the packet identification unit identifies the flow of the packet and looks the flow up in the packet assignment table 420.

If there is no information on the flow in the packet assignment table 420 (S730), the packet identification unit regards the flow as a new flow.

The multi-scheduling unit then selects, according to the classification policy, the group to which the new flow is to be allocated and sends an interrupt signal to one of the processors in that group.

The processor receiving the interrupt signal operates as a scheduler, selects a processor to process the new flow, and stores the related information in the packet assignment table 420 (S750, S760).

For example, referring again to FIG. 4, if a new flow is assigned to the group consisting of processors 1 to 4, one of processors 1 to 4 is designated as the scheduler; that scheduler selects, within its group, the processor to process the flow and records the assignment in the packet assignment table 420.
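The scheduler's handling of a new flow in steps S750 and S760 can be sketched as follows, using least-loaded selection as one possible policy. Names are illustrative assumptions; the suspend/resume of the scheduler's own task is noted but not modeled.

```python
def handle_new_flow(flow, group_processors, loads, assignment_table):
    """Sketch of S750/S760: the designated scheduler (which has suspended its
    current task) selects a processor in its group - least-loaded here, as one
    possible policy - records the assignment, then resumes its task."""
    target = min(group_processors, key=lambda p: loads[p])
    assignment_table[flow] = target
    loads[target] += 1  # account for the newly assigned flow
    return target

table = {}
loads = {1: 3, 2: 1, 3: 2, 4: 5}
print(handle_new_flow("flow-9", [1, 2, 3, 4], loads, table))  # 2
print(table)  # {'flow-9': 2}
```

Because each group has its own scheduler and its own table entries, new flows in different groups can be scheduled concurrently, which is the latency benefit the description attributes to multiple scheduling.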
The present invention can also be embodied as computer-readable code on a computer-readable recording medium. A computer-readable recording medium includes every kind of recording device in which data readable by a computer system is stored, for example ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage. The computer-readable recording medium may also be distributed over networked computer systems so that the computer-readable code is stored and executed in a distributed manner.
The present invention has been described with reference to the preferred embodiments. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the disclosed embodiments should be considered in an illustrative rather than a restrictive sense. The scope of the present invention is defined by the appended claims rather than by the foregoing description, and all differences within the scope of equivalents thereof should be construed as being included in the present invention.
Claims (5)
Dividing the plurality of processors into at least two groups;
Assigning the flow of a received packet to the group to which a previously designated processor belongs, if a previously designated group or processor exists for the flow;

Selecting, if no previously designated group or processor exists for the flow of the received packet, the group to which to allocate the flow from among the pre-divided groups, and designating one of the processors in that group as a scheduler; and

Wherein, when a flow is assigned to the group to which the scheduler belongs, the scheduler suspends the task it is performing, performs a scheduling operation of assigning the flow to one of the processors in its group, and resumes the suspended task,
Determining, before the dividing into groups, whether to perform multi-scheduling based on a load state or processing capacity of the plurality of processors, or on an attribute of a received packet,

Wherein the dividing and the designating of the scheduler are performed when it is determined that multi-scheduling is to be performed.
And designating, as the scheduler for each group, the processor having the smallest load in the group or a processor determined by a predetermined scheduler determination algorithm.
Identifying a flow of a received packet;
Allocating the flow to any one of the at least two groups according to a predetermined classification policy in a situation where there are a plurality of processors divided into at least two groups; And
And designating one of the processors in the group to which the flow is assigned as a scheduler,

Wherein the allocating comprises:

Assigning the flow to the group to which a previously designated processor belongs, if a previously designated group or processor exists for the flow of the received packet; and

Selecting, if no previously designated group or processor exists for the flow of the received packet, the group to which to allocate the flow from among the pre-divided groups, and designating one of the processors in that group as a scheduler,

Wherein, when a flow is assigned to the group to which the scheduler belongs, the scheduler suspends the task it is performing, performs a scheduling operation of assigning the flow to one of the processors in its group, and resumes the suspended task.
A packet identification unit for identifying a flow of the received packet and allocating the packet to the processor with reference to the packet allocation table;
A group division unit for dividing the plurality of processors into at least two groups; And
And a scheduler designating unit for designating, for each group, a scheduler for selecting a processor to process a packet for each flow,
Wherein the scheduler designation unit, if no previously stored information on a group or processor exists for the flow of the received packet, selects the group to which the flow is to be allocated and designates one of the processors in the selected group as a scheduler,

And wherein, when a flow is assigned to the group to which the scheduler belongs, the scheduler suspends the task it is performing, performs a scheduling operation of assigning the flow to one of the processors in its group, and resumes the suspended task.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201580003775.7A CN105900063B (en) | 2014-06-26 | 2015-06-12 | Scheduling method and device in multiprocessing environment |
JP2017520838A JP2017521806A (en) | 2014-06-26 | 2015-06-12 | Scheduling method and apparatus in a multiprocessing environment |
US14/760,374 US10530846B2 (en) | 2014-06-26 | 2015-06-12 | Scheduling packets to destination virtual machines based on identified deep flow |
EP15811135.1A EP3163442A4 (en) | 2014-06-26 | 2015-06-12 | Method for scheduling in multiprocessing environment and device therefor |
PCT/KR2015/005914 WO2015199366A1 (en) | 2014-06-26 | 2015-06-12 | Method for scheduling in multiprocessing environment and device therefor |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020140039085 | 2014-04-02 | ||
KR20140039085 | 2014-04-02 |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20150114869A KR20150114869A (en) | 2015-10-13 |
KR101764811B1 true KR101764811B1 (en) | 2017-08-04 |
Family
ID=54348275
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020140079129A KR101764811B1 (en) | 2014-04-02 | 2014-06-26 | Muti-scheduling method and apparatus in multi-processing environment |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101764811B1 (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080059555A1 (en) * | 2006-08-31 | 2008-03-06 | Archer Charles J | Parallel application load balancing and distributed work management |
- 2014-06-26: Application KR1020140079129A filed; patent KR101764811B1 active (IP Right Grant)
Also Published As
Publication number | Publication date |
---|---|
KR20150114869A (en) | 2015-10-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105900063B (en) | Scheduling method and device in multiprocessing environment | |
JP5905921B2 (en) | Dynamic queue management that recognizes traffic and load | |
JP5954074B2 (en) | Information processing method, information processing apparatus, and program. | |
KR101644800B1 (en) | Computing system and method | |
US8695009B2 (en) | Allocating tasks to machines in computing clusters | |
KR101651871B1 (en) | Job Allocation Method on Multi-core System and Apparatus thereof | |
WO2013029487A1 (en) | Resource allocation method and resource management platform | |
WO2017000657A1 (en) | Cache management method and device, and computer storage medium | |
CN112783659B (en) | Resource allocation method and device, computer equipment and storage medium | |
KR101859188B1 (en) | Apparatus and method for partition scheduling for manycore system | |
JP2015194923A (en) | Parallel computer system, control program of job management apparatus and control method of parallel computer system | |
US20140223053A1 (en) | Access controller, router, access controlling method, and computer program | |
KR20110128023A (en) | Multi-core processor, apparatus and method for task scheduling of multi-core processor | |
KR20170023280A (en) | Multi-core system and Method for managing a shared cache in the same system | |
CN112925616A (en) | Task allocation method and device, storage medium and electronic equipment | |
CN113010309B (en) | Cluster resource scheduling method, device, storage medium, equipment and program product | |
KR102045125B1 (en) | Resource assignment method using Continuous Double Auction protocol in distributed processing environment, recording medium and distributed processing device applying the same | |
JP2016004328A (en) | Task assignment program, task assignment method, and task assignment device | |
CN112817726B (en) | Priority-based virtual machine grouping resource scheduling method in cloud environment | |
KR101595967B1 (en) | System and Method for MapReduce Scheduling to Improve the Distributed Processing Performance of Deadline Constraint Jobs | |
KR101764811B1 (en) | Muti-scheduling method and apparatus in multi-processing environment | |
CN116483538A (en) | Data center task scheduling method with low consistency and delay | |
KR20150114911A (en) | Scheduling method and apparatus in multi-processing environment | |
Papazachos et al. | Gang scheduling in a two-cluster system implementing migrations and periodic feedback | |
KR20150089665A (en) | Appratus for workflow job scheduling |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
A302 | Request for accelerated examination | ||
E902 | Notification of reason for refusal | ||
AMND | Amendment | ||
N231 | Notification of change of applicant | ||
E601 | Decision to refuse application | ||
AMND | Amendment | ||
J201 | Request for trial against refusal decision | ||
J301 | Trial decision |
Free format text: TRIAL NUMBER: 2015101005301; TRIAL DECISION FOR APPEAL AGAINST DECISION TO DECLINE REFUSAL REQUESTED 20150910; Effective date: 20170109
S901 | Examination by remand of revocation | ||
GRNO | Decision to grant (after opposition) |