WO2007017932A1 - Schedule control program and schedule control method - Google Patents
Schedule control program and schedule control method
- Publication number
- WO2007017932A1 (PCT/JP2005/014590)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- thread
- time
- cache
- cpu
- executed
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5033—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering data affinity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0811—Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0842—Multiuser, multiprocessor or multiprocessing cache systems for multiprocessing or multitasking
Definitions
- The present invention relates to a schedule control program and a schedule control method, and in particular to a schedule control program that causes a computer to assign threads to be executed to a plurality of processor devices, and to a schedule control method for assigning threads to be executed to a plurality of processor devices.
- In a computer in which an operating system (OS) manages a plurality of CPUs (Central Processing Units), threads to be executed are assigned to each CPU by the OS scheduler. As a result, a single program is processed in parallel by multiple CPUs, shortening program processing time and distributing the load on the CPUs.
- The scheduler selects the thread with the highest priority from the queue to which threads to be executed are connected (hereinafter referred to as the run queue), and causes the CPU corresponding to the selected run queue to execute it. In addition, the execution start time is recorded for each thread.
- Fig. 7 is a diagram schematically showing how threads are allocated by a conventional scheduler.
- A shows the data in the cache memory during execution of a thread, and B shows the state of the data in the cache memory when the thread is reassigned after a fixed time has elapsed.
- In FIG. 7, primary cache memories (hereinafter referred to as primary caches) 501a, 501b, 501c, and 501d are connected to the CPUs 500a, 500b, 500c, and 500d, respectively.
- The CPUs 500a and 500b share a secondary cache memory (hereinafter referred to as a secondary cache) 502a, and the CPUs 500c and 500d share the secondary cache 502b.
- the hatched portion indicates data.
- In A of FIG. 7, the CPU 500a is executing the thread 510.
- The data used by the thread 510 is read from, for example, a main memory (not shown) and stored in the primary cache 501a and the secondary cache 502a.
- the threads 511, 512, and 513 are connected to the run queue and are waiting to be executed.
- A thread is not always executed by the same CPU; for example, it may be interrupted at a certain time so that the CPU can execute another thread, which is how multitasking is realized. Then, for example, when the suspended thread 510 is resumed, the scheduler selects a CPU and connects the thread 510 to that CPU's run queue, as shown in B of FIG. 7.
- Patent Document 1 JP-A-8-30562 (paragraph numbers [0020] to [0039], FIGS. 1 to 5)
- Patent Document 2 JP-A-10-143382
- the present invention has been made in view of these points, and an object thereof is to provide a schedule control program that assigns threads to a CPU so that a cache can be used efficiently.
- Another object of the present invention is to provide a schedule control method for assigning threads to CPUs so that a cache can be used efficiently.
- In order to solve the above problems, there is provided a schedule control program for causing a computer to perform a process of assigning threads to be executed to a plurality of processor devices, as shown in FIG. 1. The program causes the computer to function as: thread information storage means 1 for storing, when a thread 20 is executed, the execution start time and the identification information (CPU number) of the processor device that executes it (for example, CPU 10a in FIG. 1); elapsed time calculation means 2 for calculating the elapsed time t from the execution start time when the suspended thread 20 is assigned to the processor device (CPU 10a, 10b, 10c, 10d) that will execute it next; and thread allocation means 3 which sets a larger time parameter for a higher-order cache memory used by the processor devices (primary caches 11a, 11b, 11c, 11d, secondary caches 12a, 12b, tertiary cache 13) and, when the elapsed time t is less than the time parameter set for the n-th order cache memory (n is a natural number of 2 or more) and greater than or equal to the time parameter set for the (n-1)-th order cache memory, assigns the thread 20 to the processor device with the least load among the processor device that executed it last time and the processor devices sharing the n-th order cache with that processor device.
- According to such a schedule control program, the thread information storage means 1 stores, when the thread 20 is executed, the execution start time and the identification information (CPU number) of the processor device that executes it (for example, CPU 10a in FIG. 1). The elapsed time calculation means 2 calculates the elapsed time t from that execution start time when the suspended thread 20 is assigned to the processor device (CPU 10a, 10b, 10c, 10d) that will execute it next, and the thread allocation means 3 assigns the thread 20 to a processor device by comparing the elapsed time t with the time parameters set for the cache memories used by the processor devices (primary caches 11a, 11b, 11c, 11d, secondary caches 12a, 12b, tertiary cache 13), a larger parameter being set for a higher-order cache.
- Further, in order to solve the above problems, the present invention provides a schedule control method in which the execution start time and the identification information of the executing processor device are stored when a thread is executed, and the elapsed time from that execution start time is calculated when the suspended thread is assigned to the processor device that will execute it next.
- A larger time parameter is set for a higher-order cache used by the processor devices, and if the elapsed time is less than the time parameter set for the n-th order cache (where n is a natural number of 2 or more) and greater than or equal to the time parameter set for the (n-1)-th order cache, the thread is allocated to the processor device with the least load among the processor device that executed it last time and the processor devices sharing the n-th order cache with that processor device.
- a processor device that uses a cache that is highly likely to retain the data stored during the previous execution is selected as the allocation target for the suspended thread. This increases the cache hit rate and allows the cache to be used efficiently.
- In addition, since the processor device with the least load is selected, load distribution is performed at the same time as the suspended thread is allocated. This prevents the cache hit rate from being lowered by load distribution performed after thread allocation evicting the thread's data from the cache.
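As a concrete illustration of this allocation rule, the following minimal C sketch (not taken from the patent; the function name pick_cache_level and the example parameter values are assumptions) maps the elapsed time t onto a cache order: it returns the lowest order n whose time parameter still exceeds t, meaning the n-th order cache may still hold the thread's data, or 0 when even the highest-order parameter has been exceeded and a conventional least-load choice over all processor devices is made.

```c
#include <stdio.h>

/*
 * Hypothetical helper (not from the patent text): given the elapsed time t
 * since the thread's last execution start and the time parameters
 * T[0] < T[1] < ... < T[nlevels-1] set for the 1st, 2nd, ..., n-th order
 * caches, return the lowest cache order whose parameter still exceeds t.
 * A return value of n means "the n-th order cache may still hold the
 * thread's data, so choose among the processor devices sharing that cache";
 * 0 means every parameter has been exceeded, so fall back to the
 * conventional least-loaded choice over all processor devices.
 */
static int pick_cache_level(double t, const double T[], int nlevels)
{
    for (int n = 0; n < nlevels; n++) {
        if (t < T[n])
            return n + 1;   /* cache orders are 1-based */
    }
    return 0;
}

int main(void)
{
    const double T[] = { 1.0e-3, 4.0e-3, 16.0e-3 };  /* T1 < T2 < T3 (example values) */

    printf("%d\n", pick_cache_level(0.5e-3, T, 3));  /* 1: reuse the previous device   */
    printf("%d\n", pick_cache_level(2.0e-3, T, 3));  /* 2: devices sharing the L2      */
    printf("%d\n", pick_cache_level(8.0e-3, T, 3));  /* 3: devices sharing the L3      */
    printf("%d\n", pick_cache_level(50.0e-3, T, 3)); /* 0: any device, least loaded    */
    return 0;
}
```

The suspended thread would then be assigned to the least-loaded processor device among those sharing the returned cache order with the previously executing device.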
- FIG. 1 is a diagram for explaining an outline of schedule control for assigning threads to be executed by a plurality of CPUs.
- FIG. 2 is a hardware configuration diagram of an example of a computer that executes schedule control according to the present embodiment.
- FIG. 3 is a diagram showing hardware configuration information.
- FIG. 4 is a diagram schematically showing OS functions related to schedule control according to the present embodiment.
- FIG. 5 is a flowchart for explaining the flow of processing during thread execution.
- FIG. 6 is a diagram illustrating a process flow for connecting a suspended thread to a run queue of any CPU.
- FIG. 7 is a diagram schematically showing how threads are allocated by a conventional scheduler, where A shows the data in the cache memory during execution of a thread and B shows the state of the data in the cache memory when the thread is reassigned after a certain period of time.
- FIG. 1 is a diagram for explaining an outline of schedule control for assigning threads to be executed by a plurality of CPUs.
- In this embodiment, a description will be given taking as an example schedule control in a computer in which primary caches 11a, 11b, 11c, and 11d are connected to the CPUs 10a, 10b, 10c, and 10d, respectively, the secondary cache 12a is shared by the CPUs 10a and 10b, the secondary cache 12b is shared by the CPUs 10c and 10d, and the tertiary cache 13 is shared by the CPUs 10a to 10d.
- In FIG. 1, threads 20, 21, 22, 23, 24, 25, and 26 executed by the CPUs 10a to 10d are schematically shown.
- the schedule control for assigning threads to be executed by the plurality of CPUs 10a to 10d is performed by the thread information storage means 1, the elapsed time calculation means 2, and the thread assignment means 3.
- the function of each processing means will be described.
- The thread information storage means 1 stores, when each of the threads 20 to 26 is executed, the execution start time and the identification information (hereinafter referred to as the CPU number) of the CPU 10a to 10d that executes it.
- The elapsed time calculation means 2 calculates the elapsed time t from the execution start time stored in the thread information storage means 1 when allocating a suspended thread (for example, the thread 20) to the CPU 10a to 10d that will execute it next.
- the thread assignment means 3 assigns the threads 20 to 26 to the CPUs 10a to 10d.
- For the cache memories used by the CPUs 10a to 10d (primary caches 11a, 11b, 11c, 11d, secondary caches 12a, 12b, tertiary cache 13), time parameters T1, T2, and T3 are set so that the higher the order of the cache, the larger the parameter (T1 < T2 < T3).
- This time parameter is set based on the time that the data used by the suspended thread during the previous execution may remain in the cache of each order. Since the higher-order cache has a larger capacity, there is a high possibility that data will remain even if the elapsed time t is long. Therefore, the higher the order, the larger the time parameter is set. Specifically, the time parameter is determined by the cache capacity and benchmark.
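The patent fixes only the ordering T1 < T2 < T3 and says the concrete values come from the cache capacities and from benchmarking. Purely as an illustration of that idea (the capacities, the linear scaling, and the base_per_byte constant below are assumptions, not the patent's method), the parameters could be derived like this:

```c
#include <stdio.h>

int main(void)
{
    /* Example capacities for the 1st-, 2nd- and 3rd-order caches (bytes).
     * The values and the linear scaling below are assumptions made for
     * illustration; the patent only states that the parameters are decided
     * from cache capacity and benchmarks, with T1 < T2 < T3. */
    const double capacity[3] = { 64e3, 1e6, 8e6 };
    const double base_per_byte = 1.0e-9;   /* benchmark-derived seconds per byte (assumed) */

    double T[3];
    for (int i = 0; i < 3; i++) {
        T[i] = capacity[i] * base_per_byte;   /* larger cache -> larger time parameter */
        printf("T%d = %.6f s\n", i + 1, T[i]);
    }
    return 0;
}
```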
- The thread allocation means 3 is characterized in that, when the elapsed time t of a suspended thread is less than the time parameter set for the n-th order cache (n is a natural number of 2 or more) and greater than or equal to the time parameter set for the (n-1)-th order cache, it assigns the suspended thread to the CPU with the least load among the CPU that executed it last time and the CPUs that share the n-th order cache with that CPU.
- For example, when the suspended thread 20 is reassigned, the elapsed time calculation means 2 refers to the execution start time of the previous execution of the thread 20 stored in the thread information storage means 1 and calculates the elapsed time t up to the current time. Based on the calculated elapsed time t, the thread allocation means 3 allocates the thread 20 as follows.
- The thread 20 is assigned to whichever of the CPUs 10a, 10b, 10c, and 10d has the least load, as in the conventional case. Here, for example, the CPU 10c is selected and the thread 20 is assigned to it.
- In this way, depending on the elapsed time, a CPU among the CPUs 10a to 10d that uses a cache which is highly likely to still hold the data stored during the previous execution is selected as the allocation target of the suspended thread 20. As a result, the cache hit rate is increased and the cache can be used efficiently.
- In addition, since the CPU with the least load is selected, load distribution is performed at the same time as the thread 20 is allocated. This prevents the cache hit rate from being lowered by load distribution performed after thread allocation evicting the thread's data from the cache.
- FIG. 2 is a hardware configuration diagram of an example of a computer that executes schedule control according to the present embodiment.
- the computer 30 shown here is, for example, a UNIX (registered trademark) server computer.
- The computer 30 is composed of eight CPUs 31a, 31b, 31c, 31d, 31e, 31f, 31g, and 31h, each with a built-in primary cache, secondary caches 32a, 32b, 32c, and 32d, tertiary caches 33a and 33b, a memory 35 connected to the system bus 34, and an I/O 36.
- the secondary cache 32a is shared by the CPUs 31a and 31b
- the secondary cache 32b is shared by the CPUs 31c and 31d
- the secondary cache 32c is shared by the CPUs 31e and 31f
- the secondary cache 32d is shared by the CPUs 31g and 31h.
- the tertiary cache 33a is shared by the CPUs 31a, 31b, 31c and 31d
- the tertiary cache 33b is shared by the CPUs 31e, 31f, 31g and 31h.
- FIG. 3 is a diagram illustrating hardware configuration information.
- For each CPU, identification information "id: ×" is described, and in addition the identification information of the cache used by that CPU is described for each cache order.
- The configuration information also includes descriptions of components such as the memory 35 and the I/O 36 (not shown). Such hardware configuration information is passed to the OS.
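In substance, the hardware configuration information of FIG. 3 amounts to a per-CPU table of cache identifiers. A minimal sketch of such a table for the machine of FIG. 2 is shown below; the struct and field names are hypothetical, and only the sharing relationships (private primary cache, secondary caches shared by pairs of CPUs, tertiary caches shared by groups of four) come from the description above.

```c
#include <stdio.h>

/* One entry of the hardware configuration information of FIG. 3:
 * the CPU id plus the identifier of the cache it uses at each order.
 * Field names are illustrative, not the patent's. */
struct cpu_config {
    int id;   /* CPU: 31a..31h mapped to 0..7      */
    int l2;   /* secondary cache: 32a..32d -> 0..3 */
    int l3;   /* tertiary cache: 33a..33b -> 0..1  */
};

/* Mirrors the sharing described for FIG. 2: L2 shared by CPU pairs,
 * L3 shared by groups of four (the primary cache is private to each CPU). */
static const struct cpu_config topology[8] = {
    {0, 0, 0}, {1, 0, 0},   /* 31a, 31b share 32a; 31a-31d share 33a */
    {2, 1, 0}, {3, 1, 0},   /* 31c, 31d share 32b                    */
    {4, 2, 1}, {5, 2, 1},   /* 31e, 31f share 32c; 31e-31h share 33b */
    {6, 3, 1}, {7, 3, 1},   /* 31g, 31h share 32d                    */
};

int main(void)
{
    for (int i = 0; i < 8; i++)
        printf("cpu %d: l2=%d l3=%d\n",
               topology[i].id, topology[i].l2, topology[i].l3);
    return 0;
}
```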
- FIG. 4 is a diagram schematically showing functions of the OS related to schedule control according to the present embodiment.
- In FIG. 4, CPU management structures 40-1, 40-2, 40-3, ..., 40-8, which hold the information of the CPUs 31a to 31h shown in FIG. 2, are shown, and threads 41 to be executed are connected to the run queue of each structure.
- Each thread 41 is represented by a thread management structure, in which the execution start time "disp_time" of the thread 41 and the CPU number "cpu" of the CPU that executed it are stored.
- the thread 41 is assigned to one of the CPUs 31a to 31h by the scheduler 42.
- The OS scheduler 42 refers to the thread management structure and selects the CPU to which a suspended thread is allocated according to the elapsed time. As described above, the time parameters T1, T2, and T3 are set according to the order of the cache.
- In this way, each means shown in FIG. 1 is realized on the hardware of the computer 30 shown in FIG. 2.
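A minimal C rendering of the thread management structure and the CPU management structure just described might look as follows. The field names disp_time and cpu are the ones named in the description; every other type and field (the timespec representation, the linked run queue, the run-queue length used as the load) is an assumption made for illustration.

```c
#include <time.h>

/* Thread management structure: holds the execution start time "disp_time"
 * and the number "cpu" of the CPU that executed the thread; the link field
 * for the run queue is an assumed detail. */
struct thread {
    struct timespec disp_time;   /* execution start time of the last dispatch */
    int             cpu;         /* CPU number that executed the thread       */
    struct thread  *next;        /* link in a run queue                       */
};

/* CPU management structure (40-1 ... 40-8): one per CPU, holding the run
 * queue of threads waiting to be executed on that CPU. */
struct cpu_mgmt {
    int            cpu_id;       /* CPU number                             */
    struct thread *run_queue;    /* head of the queue of runnable threads  */
    int            nr_runnable;  /* queue length, used here as the load    */
};
```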
- FIG. 5 is a flowchart for explaining the flow of processing during thread execution.
- First, the scheduler 42 checks the run queue of each of the CPUs 31a to 31h and confirms whether there is a waiting thread 41. If there is a thread 41, the process proceeds to step S2; if there is no thread 41, the process of step S1 is repeated until a thread 41 appears in a run queue (step S1).
- Next, the scheduler 42 removes the thread 41 from the run queue and puts it into the execution state.
- For example, as indicated by the arrow A in FIG. 4, the thread 41a connected to the run queue of the CPU 31b represented by the CPU management structure 40-2 is pulled out of that run queue, and the thread 41a obtains the execution right on the CPU 31b (step S2).
- Such a thread 41a disappears when its processing is completed, but it may also be suspended before its processing is finished (arrow B in FIG. 4), for example while waiting for a response from an external device via the I/O 36, while waiting for an exclusive resource acquired by another thread, or because the processing of the thread 41a has run for more than a certain time. In that case, the scheduler 42 puts the thread 41a into the suspended state and gives the execution right to another thread in the run queue of the CPU 31b. When the execution of the thread 41a is later resumed, for example when a response arrives from the external device via the I/O 36, when the exclusive resource acquired by the other thread is released, or after the execution right has been handed over to another thread because the processing of the thread 41a exceeded a certain time, the thread 41a is again assigned to one of the CPUs 31a to 31h.
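The dispatch of steps S1 and S2, together with the bookkeeping that the thread information storage means performs at that moment (recording disp_time and the CPU number), could be sketched as below. The structures repeat the assumed layout from the earlier sketch, priority ordering within the run queue is ignored for brevity, and clock_gettime stands in for whatever time source the OS actually uses.

```c
#include <stddef.h>
#include <stdio.h>
#include <time.h>

struct thread {                      /* same assumed layout as the earlier sketch */
    struct timespec disp_time;
    int             cpu;
    struct thread  *next;
};

struct cpu_mgmt {
    int            cpu_id;
    struct thread *run_queue;
    int            nr_runnable;
};

/* Steps S1-S2: if this CPU's run queue holds a waiting thread, pull it out,
 * record the execution start time and the executing CPU number in its
 * thread management structure, and hand it the execution right.
 * Returns NULL when the run queue is empty, so step S1 is simply retried. */
struct thread *dispatch(struct cpu_mgmt *cpu)
{
    struct thread *t = cpu->run_queue;             /* step S1: look at the run queue     */
    if (t == NULL)
        return NULL;

    cpu->run_queue = t->next;                      /* step S2: remove from the run queue */
    cpu->nr_runnable--;
    t->next = NULL;

    clock_gettime(CLOCK_MONOTONIC, &t->disp_time); /* record execution start time        */
    t->cpu = cpu->cpu_id;                          /* record the executing CPU number    */
    return t;                                      /* the thread now runs on this CPU    */
}

int main(void)
{
    struct thread t41a = { .cpu = -1, .next = NULL };
    struct cpu_mgmt cpu31b = { .cpu_id = 1, .run_queue = &t41a, .nr_runnable = 1 };

    struct thread *running = dispatch(&cpu31b);
    if (running != NULL)
        printf("thread dispatched on CPU %d\n", running->cpu);
    return 0;
}
```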
- FIG. 6 is a diagram for explaining the flow of processing for connecting a suspended thread to the run queue of one of the CPUs.
- First, the scheduler 42 calculates the elapsed time t as the difference between the current time and the execution start time "disp_time" recorded in the thread management structure of the suspended thread 41a (step S5).
- If the elapsed time t satisfies t < T1, the same CPU 31b as the previous time is selected (step S6).
- If the elapsed time t is greater than or equal to T1 and less than T2, the CPU with the least load is selected from among the CPU 31b that operated last time and the CPU 31a that shares the secondary cache 32a with it.
- In FIG. 4, the number of threads 41 connected to the run queue of the CPU 31a represented by the CPU management structure 40-1 is larger than the number of threads 41 connected to the run queue of the CPU 31b represented by the CPU management structure 40-2, so the CPU 31b, whose load is smaller, is selected (step S7).
- If the elapsed time t is greater than or equal to T2 and less than T3, the CPU with the least load is selected from among the CPUs 31a, 31b, 31c, and 31d that share the tertiary cache 33a with the CPU 31b that operated last time.
- In FIG. 4, the CPU 31c represented by the CPU management structure 40-3 is selected (step S8).
- the scheduler 42 connects the thread 41a to the run queue of the selected CPU (step S10).
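Putting steps S5 to S10 together, the CPU selection could be sketched as follows. Only the decision structure follows the flow just described (t < T1: same CPU; T1 <= t < T2: least-loaded CPU sharing the secondary cache; T2 <= t < T3: least-loaded CPU sharing the tertiary cache; otherwise the least-loaded CPU overall, as in the conventional case); the sharing tables, the run-queue length used as the load, the example parameter values, and all names are assumptions for illustration.

```c
#include <stdio.h>

#define NCPU 8

/* Cache-sharing ids per CPU, mirroring FIG. 2 (assumed encoding: 31a..31h -> 0..7). */
static const int l2_of[NCPU] = { 0, 0, 1, 1, 2, 2, 3, 3 };
static const int l3_of[NCPU] = { 0, 0, 0, 0, 1, 1, 1, 1 };

/* Run-queue length of each CPU, used as its load (assumed measure). */
static int load_of[NCPU];

/* Least-loaded CPU among those for which shares(candidate, prev) holds. */
static int least_loaded(int prev, int (*shares)(int, int))
{
    int best = prev;
    for (int c = 0; c < NCPU; c++)
        if (shares(c, prev) && load_of[c] < load_of[best])
            best = c;
    return best;
}

static int shares_l2(int a, int b) { return l2_of[a] == l2_of[b]; }
static int shares_l3(int a, int b) { return l3_of[a] == l3_of[b]; }
static int any_cpu(int a, int b)   { (void)a; (void)b; return 1; }

/* Steps S5-S10: choose the CPU whose run queue the suspended thread is
 * connected to, given the elapsed time t since its last dispatch. */
static int select_cpu(double t, int prev, double T1, double T2, double T3)
{
    if (t < T1)  return prev;                          /* step S6: same CPU as before    */
    if (t < T2)  return least_loaded(prev, shares_l2); /* step S7: CPUs sharing the L2   */
    if (t < T3)  return least_loaded(prev, shares_l3); /* step S8: CPUs sharing the L3   */
    return least_loaded(prev, any_cpu);                /* fallback: least loaded overall */
}

int main(void)
{
    /* Example: thread 41a last ran on CPU 1 (31b); CPU 0's run queue is longer. */
    load_of[0] = 3; load_of[1] = 1; load_of[2] = 0;

    printf("%d\n", select_cpu(0.5e-3, 1, 1e-3, 4e-3, 16e-3)); /* 1: reuse CPU 31b          */
    printf("%d\n", select_cpu(2.0e-3, 1, 1e-3, 4e-3, 16e-3)); /* 1: lighter L2 sharer      */
    printf("%d\n", select_cpu(8.0e-3, 1, 1e-3, 4e-3, 16e-3)); /* 2: lightest L3 sharer 31c */
    return 0;
}
```

The scheduler would then connect the suspended thread to the run queue of the returned CPU, which corresponds to step S10 above.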
- In this way, depending on the elapsed time, a CPU that uses a cache which is highly likely to still hold the data stored during the previous execution is selected as the allocation target of the suspended thread, so the cache hit rate increases and the cache can be used efficiently.
- In addition, load distribution is performed at the same time as the suspended thread is allocated, which prevents the cache hit rate from being lowered by load distribution performed after thread allocation evicting the thread's data from the cache.
- the above processing content can be realized by a computer.
- a program describing the processing contents of the functions that the computer should have is provided.
- the program describing the processing contents can be recorded on a computer-readable recording medium.
- Examples of the computer-readable recording medium include a magnetic recording device, an optical disk, a magneto-optical recording medium, and a semiconductor memory.
- Magnetic recording devices include hard disk drives (HDD), flexible disks (FD), and magnetic tape.
- Optical discs include DVD (Digital Versatile Disc), DVD-RAM, CD-ROM, and CD-R (Recordable)/RW (ReWritable).
- Magneto-optical recording media include MO (Magneto-Optical disk).
- The computer that executes the program stores in its own storage device, for example, a program recorded on a portable recording medium or a program transferred from a server computer. The computer then reads the program from its own storage device and executes processing according to the program. The computer can also read the program directly from the portable recording medium and execute processing according to it. Furthermore, each time a program is transferred from the server computer, the computer can sequentially execute processing according to the received program.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2007529429A JP4651671B2 (ja) | 2005-08-09 | 2005-08-09 | スケジュール制御プログラム及びスケジュール制御方法 |
EP05770421A EP1914632B1 (en) | 2005-08-09 | 2005-08-09 | Schedule control program and schedule control method |
PCT/JP2005/014590 WO2007017932A1 (ja) | 2005-08-09 | 2005-08-09 | スケジュール制御プログラム及びスケジュール制御方法 |
CNB200580051287XA CN100573458C (zh) | 2005-08-09 | 2005-08-09 | 进程控制装置以及进程控制方法 |
KR1020087002269A KR100942740B1 (ko) | 2005-08-09 | 2005-08-09 | 스케줄 제어 프로그램을 기록한 컴퓨터 판독 가능한 기록 매체 및 스케줄 제어 방법 |
US12/006,028 US8479205B2 (en) | 2005-08-09 | 2007-12-28 | Schedule control program and schedule control method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2005/014590 WO2007017932A1 (ja) | 2005-08-09 | 2005-08-09 | スケジュール制御プログラム及びスケジュール制御方法 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/006,028 Continuation US8479205B2 (en) | 2005-08-09 | 2007-12-28 | Schedule control program and schedule control method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2007017932A1 true WO2007017932A1 (ja) | 2007-02-15 |
Family
ID=37727128
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/014590 WO2007017932A1 (ja) | 2005-08-09 | 2005-08-09 | スケジュール制御プログラム及びスケジュール制御方法 |
Country Status (6)
Country | Link |
---|---|
US (1) | US8479205B2 (ja) |
EP (1) | EP1914632B1 (ja) |
JP (1) | JP4651671B2 (ja) |
KR (1) | KR100942740B1 (ja) |
CN (1) | CN100573458C (ja) |
WO (1) | WO2007017932A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008310555A (ja) * | 2007-06-14 | 2008-12-25 | Asyst Technologies Japan Inc | プロセス状態監視装置 |
WO2012095982A1 (ja) * | 2011-01-13 | 2012-07-19 | 富士通株式会社 | マルチコアプロセッサシステム、およびスケジューリング方法 |
WO2013021441A1 (ja) * | 2011-08-05 | 2013-02-14 | 富士通株式会社 | データ処理システム、およびデータ処理方法 |
US10127045B2 (en) | 2014-04-04 | 2018-11-13 | Fanuc Corporation | Machine tool controller including a multi-core processor for dividing a large-sized program into portions stored in different lockable instruction caches |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8356284B2 (en) * | 2006-12-28 | 2013-01-15 | International Business Machines Corporation | Threading model analysis system and method |
US8453146B2 (en) * | 2009-12-23 | 2013-05-28 | Intel Corporation | Apportioning a counted value to a task executed on a multi-core processor |
US8756585B2 (en) * | 2009-12-29 | 2014-06-17 | International Business Machines Corporation | Efficient monitoring in a software system |
CN102402220B (zh) * | 2011-01-21 | 2013-10-23 | 南京航空航天大学 | 基于负荷分担式的容错飞行控制系统的故障检测方法 |
WO2012098684A1 (ja) * | 2011-01-21 | 2012-07-26 | 富士通株式会社 | スケジューリング方法およびスケジューリングシステム |
US20120267423A1 (en) * | 2011-04-19 | 2012-10-25 | Taiwan Semiconductor Manufacturing Company, Ltd. | Methods and Apparatus for Thin Die Processing |
US9274854B2 (en) * | 2012-07-27 | 2016-03-01 | International Business Machines Corporation | Contamination based workload management |
US9571329B2 (en) * | 2013-03-11 | 2017-02-14 | International Business Machines Corporation | Collective operation management in a parallel computer |
US10140210B2 (en) * | 2013-09-24 | 2018-11-27 | Intel Corporation | Method and apparatus for cache occupancy determination and instruction scheduling |
CN106484537B (zh) | 2016-09-30 | 2019-07-19 | 网易(杭州)网络有限公司 | 一种cpu核资源的分配方法和设备 |
CN116521351B (zh) * | 2023-07-03 | 2023-09-05 | 建信金融科技有限责任公司 | 多线程任务调度方法、装置、存储介质及处理器 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04283849A (ja) * | 1991-03-13 | 1992-10-08 | Toshiba Corp | マルチプロセッサシステム |
GB2258933A (en) | 1991-08-19 | 1993-02-24 | Sequent Computer Systems Inc | Cache affinity scheduler |
JPH05151064A (ja) * | 1991-11-29 | 1993-06-18 | Yokogawa Electric Corp | 密結合マルチプロセツサシステム |
JPH0830562A (ja) | 1994-07-19 | 1996-02-02 | Nec Corp | マルチプロセッサシステム |
JPH10143382A (ja) | 1996-11-08 | 1998-05-29 | Hitachi Ltd | 共有メモリ型マルチプロセッサシステムの資源管理方法 |
JP2815095B2 (ja) * | 1989-07-12 | 1998-10-27 | 日本電信電話株式会社 | マルチプロセッサにおけるタスク割り当て制御方法 |
JPH11259318A (ja) * | 1998-03-13 | 1999-09-24 | Hitachi Ltd | ディスパッチ方式 |
US5974438A (en) | 1996-12-31 | 1999-10-26 | Compaq Computer Corporation | Scoreboard for cached multi-thread processes |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5287508A (en) * | 1992-04-07 | 1994-02-15 | Sun Microsystems, Inc. | Method and apparatus for efficient scheduling in a multiprocessor system |
US5784614A (en) * | 1995-07-27 | 1998-07-21 | Ncr Corporation | Cache affinity scheduling method for multi-processor nodes in a split transaction bus architecture |
US5872972A (en) * | 1996-07-05 | 1999-02-16 | Ncr Corporation | Method for load balancing a per processor affinity scheduler wherein processes are strictly affinitized to processors and the migration of a process from an affinitized processor to another available processor is limited |
US5875469A (en) * | 1996-08-26 | 1999-02-23 | International Business Machines Corporation | Apparatus and method of snooping processors and look-aside caches |
US6665699B1 (en) * | 1999-09-23 | 2003-12-16 | Bull Hn Information Systems Inc. | Method and data processing system providing processor affinity dispatching |
JP3535795B2 (ja) * | 2000-02-21 | 2004-06-07 | 博 和泉 | コンピュータ、並びに、コンピュータ読み取り可能な記録媒体 |
-
2005
- 2005-08-09 KR KR1020087002269A patent/KR100942740B1/ko not_active IP Right Cessation
- 2005-08-09 JP JP2007529429A patent/JP4651671B2/ja not_active Expired - Fee Related
- 2005-08-09 CN CNB200580051287XA patent/CN100573458C/zh not_active Expired - Fee Related
- 2005-08-09 EP EP05770421A patent/EP1914632B1/en not_active Ceased
- 2005-08-09 WO PCT/JP2005/014590 patent/WO2007017932A1/ja active Application Filing
-
2007
- 2007-12-28 US US12/006,028 patent/US8479205B2/en active Active
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2815095B2 (ja) * | 1989-07-12 | 1998-10-27 | 日本電信電話株式会社 | マルチプロセッサにおけるタスク割り当て制御方法 |
JPH04283849A (ja) * | 1991-03-13 | 1992-10-08 | Toshiba Corp | マルチプロセッサシステム |
GB2258933A (en) | 1991-08-19 | 1993-02-24 | Sequent Computer Systems Inc | Cache affinity scheduler |
JPH05151064A (ja) * | 1991-11-29 | 1993-06-18 | Yokogawa Electric Corp | 密結合マルチプロセツサシステム |
JPH0830562A (ja) | 1994-07-19 | 1996-02-02 | Nec Corp | マルチプロセッサシステム |
JPH10143382A (ja) | 1996-11-08 | 1998-05-29 | Hitachi Ltd | 共有メモリ型マルチプロセッサシステムの資源管理方法 |
US5974438A (en) | 1996-12-31 | 1999-10-26 | Compaq Computer Corporation | Scoreboard for cached multi-thread processes |
JPH11259318A (ja) * | 1998-03-13 | 1999-09-24 | Hitachi Ltd | ディスパッチ方式 |
Non-Patent Citations (2)
Title |
---|
See also references of EP1914632A4 * |
STANLEY L. ET AL: "Two-Level Cache Performance for Multiprocessors", SIMULATION, vol. 60, no. 4, April 1993 (1993-04-01), pages 222 - 231, XP002993365 *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008310555A (ja) * | 2007-06-14 | 2008-12-25 | Asyst Technologies Japan Inc | プロセス状態監視装置 |
WO2012095982A1 (ja) * | 2011-01-13 | 2012-07-19 | 富士通株式会社 | マルチコアプロセッサシステム、およびスケジューリング方法 |
WO2013021441A1 (ja) * | 2011-08-05 | 2013-02-14 | 富士通株式会社 | データ処理システム、およびデータ処理方法 |
JPWO2013021441A1 (ja) * | 2011-08-05 | 2015-03-05 | 富士通株式会社 | データ処理システム、およびデータ処理方法 |
US9405470B2 (en) | 2011-08-05 | 2016-08-02 | Fujitsu Limited | Data processing system and data processing method |
US10127045B2 (en) | 2014-04-04 | 2018-11-13 | Fanuc Corporation | Machine tool controller including a multi-core processor for dividing a large-sized program into portions stored in different lockable instruction caches |
Also Published As
Publication number | Publication date |
---|---|
JP4651671B2 (ja) | 2011-03-16 |
KR100942740B1 (ko) | 2010-02-17 |
CN101238442A (zh) | 2008-08-06 |
US20080109817A1 (en) | 2008-05-08 |
EP1914632B1 (en) | 2012-06-27 |
CN100573458C (zh) | 2009-12-23 |
KR20080023358A (ko) | 2008-03-13 |
JPWO2007017932A1 (ja) | 2009-02-19 |
EP1914632A1 (en) | 2008-04-23 |
US8479205B2 (en) | 2013-07-02 |
EP1914632A4 (en) | 2009-07-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2007017932A1 (ja) | スケジュール制御プログラム及びスケジュール制御方法 | |
JP5831324B2 (ja) | 制御装置,制御方法,プログラム及び分散処理システム | |
US8893145B2 (en) | Method to reduce queue synchronization of multiple work items in a system with high memory latency between processing nodes | |
JP5615361B2 (ja) | スレッドシフト:コアへのスレッド割振り | |
KR101640848B1 (ko) | 멀티코어 시스템 상에서 단위 작업을 할당하는 방법 및 그 장치 | |
US10019283B2 (en) | Predicting a context portion to move between a context buffer and registers based on context portions previously used by at least one other thread | |
JP2008084009A (ja) | マルチプロセッサシステム | |
US20200285510A1 (en) | High precision load distribution among processors | |
KR102493859B1 (ko) | 듀얼 모드 로컬 데이터 저장 | |
TWI503750B (zh) | 運算任務狀態的封裝 | |
JP2008090546A (ja) | マルチプロセッサシステム | |
JP6885193B2 (ja) | 並列処理装置、ジョブ管理方法、およびジョブ管理プログラム | |
JP2009217818A (ja) | 情報処理装置,スケジュール管理装置,スケジュール管理方法およびスケジュール管理プログラム | |
JPWO2008126202A1 (ja) | ストレージシステムの負荷分散プログラム、ストレージシステムの負荷分散方法、及びストレージ管理装置 | |
TW201435576A (zh) | 陷阱處理期間的協作執行緒陣列粒化內文切換 | |
US20070079020A1 (en) | Dynamically migrating channels | |
US10698813B2 (en) | Memory allocation system for multi-tier memory | |
US9170962B2 (en) | Dynamic designation of retirement order in out-of-order store queue | |
JP2021022282A (ja) | 情報処理装置,ストレージシステム及びスケジューリングプログラム | |
JP7011156B2 (ja) | ストレージ制御装置およびプログラム | |
US8930680B2 (en) | Sync-ID for multiple concurrent sync dependencies in an out-of-order store queue | |
JP6135392B2 (ja) | キャッシュメモリ制御プログラム,キャッシュメモリを内蔵するプロセッサ及びキャッシュメモリ制御方法 | |
CN101847128A (zh) | 管理tlb的方法和装置 | |
JP6900687B2 (ja) | ディスク制御装置、ディスク制御方法、および、ディスク制御プログラム | |
WO2015004570A1 (en) | Method and system for implementing a dynamic array data structure in a cache line |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2007529429 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 12006028 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020087002269 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2005770421 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 200580051287.X Country of ref document: CN |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWP | Wipo information: published in national office |
Ref document number: 2005770421 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 12006028 Country of ref document: US |