WO2013001613A1 - Scheduling method and system - Google Patents

Scheduling method and system

Info

Publication number
WO2013001613A1
Authority
WO
WIPO (PCT)
Prior art keywords
task
storage
memory
access
cpu
Prior art date
Application number
PCT/JP2011/064841
Other languages
English (en)
Japanese (ja)
Inventor
康志 栗原
浩一郎 山下
鈴木 貴久
尚記 大舘
俊也 大友
Original Assignee
富士通株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 富士通株式会社 filed Critical 富士通株式会社
Priority to JP2013522397A priority Critical patent/JP5861706B2/ja
Priority to PCT/JP2011/064841 priority patent/WO2013001613A1/fr
Publication of WO2013001613A1 publication Critical patent/WO2013001613A1/fr
Priority to US14/134,643 priority patent/US9507633B2/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 - Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14 - Handling requests for interconnection or transfer
    • G06F 13/16 - Handling requests for interconnection or transfer for access to memory bus
    • G06F 13/1605 - Handling requests for interconnection or transfer for access to memory bus based on arbitration
    • G06F 13/1652 - Handling requests for interconnection or transfer for access to memory bus based on arbitration in a multiprocessor architecture
    • G06F 13/1663 - Access to shared memory
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching
    • G06F 9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 - Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/52 - Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0628 - Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0629 - Configuration or reconfiguration of storage systems
    • G06F 3/0631 - Configuration or reconfiguration of storage systems by allocating resources to storage systems

Definitions

  • the present invention relates to a scheduling method and system for avoiding access contention with respect to memory shared by a plurality of CPUs.
  • In a multi-core system including a plurality of storages and a plurality of CPUs, storage accesses from the plurality of CPUs may occur simultaneously for one storage.
  • As prior art, there is a technique that prevents seek contention on a disk by grouping access tasks for the disk drive and executing threads serially according to a task list (see, for example, Patent Document 1 below).
  • There is also a technology that targets storage access requests with a time limit: if writing to the target storage cannot finish within the time limit, the data is first written to another storage and later moved to the original storage (see, for example, Patent Document 2 below).
  • In another technique, a collective optical disc device is connected to a single-disc optical disc device; image data to be registered is temporarily registered in the single-disc optical disc device and then transferred to the collective optical disc device (see, for example, Patent Document 3 below).
  • Further, there is a technique that provides arbitration logic means for determining the type of access request to an HDD, setting a different address space for each type, and controlling the access requests (see, for example, Patent Document 4 below).
  • The disclosed scheduling method and system solve the above-described problems and aim to avoid access contention for a memory shared by a plurality of CPUs.
  • The disclosed technology is characterized in that a first CPU determines whether a task belongs to a first task type; when the task does, the first CPU determines whether the first access area to be accessed by the task is in a first memory or a second memory, and sets the memory to be accessed by the task to the first memory or the second memory based on the determination result.
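  • As a rough illustration of the characterizing step above, the following Python sketch shows how a first CPU might classify a task and pin its access destination to the first or second memory. This is a minimal sketch: the class names, the address-range model, and the "read" label standing in for the first task type are illustrative assumptions, not taken from the patent.

      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class Memory:
          name: str
          lo: int          # start of the address range held by this memory
          hi: int          # end of the address range (exclusive)
          def contains(self, addr: int) -> bool:
              return self.lo <= addr < self.hi

      @dataclass
      class Task:
          task_type: str   # "read" stands in for the first task type
          access_area: int # first address the task will access
          target: Optional[Memory] = None

      def set_access_memory(task: Task, first: Memory, second: Memory) -> Optional[Memory]:
          # Only tasks of the first task type are pinned here; other types
          # are handled by the separate rules described later.
          if task.task_type != "read":
              return None
          # Determine whether the first access area lies in the first or the
          # second memory, and set the access destination accordingly.
          task.target = first if first.contains(task.access_area) else second
          return task.target

      main = Memory("main storage 131", 0x0000, 0x8000)
      sub = Memory("sub-storage 132", 0x8000, 0x10000)
      print(set_access_memory(Task("read", 0x9000), main, sub).name)  # sub-storage 132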
  • FIG. 1 is a diagram illustrating a configuration example of a system that executes a scheduling method according to an embodiment.
  • FIG. 2 is a block diagram showing the internal configuration of the system.
  • FIG. 3-1 is a chart showing database information (part 1).
  • FIG. 3-2 is a chart showing database information (part 2).
  • FIG. 3-3 is a chart showing database information (part 3).
  • FIG. 4-1 is a diagram showing an outline of processing for avoiding access conflict (part 1).
  • FIG. 4-2 is a diagram showing an outline of processing for avoiding access conflict (part 2).
  • FIG. 5-1 is a diagram showing an example of access scheduling to the storage (part 1).
  • FIG. 5-2 is a diagram showing an example of access scheduling to the storage (part 2).
  • FIG. 5-3 is a diagram showing an example of access scheduling to the storage (part 3).
  • FIG. 5-4 is a diagram showing an example of access scheduling to the storage (part 4).
  • FIG. 5-5 is a diagram showing an example of access scheduling to the storage (part 5).
  • FIG. 6-1 is a diagram showing an example of scheduling in which storage access and task access are linked (part 1).
  • FIG. 6-2 is a diagram showing an example of scheduling in which storage access and task access are linked (part 2).
  • FIG. 6-3 is a diagram showing an example of scheduling in which storage access and task access are linked (part 3).
  • FIG. 6-4 is a diagram showing an example of scheduling in which storage access and task access are linked (part 4).
  • FIG. 6-5 is a diagram showing an example of scheduling in which storage access and task access are linked (part 5).
  • FIG. 7 is a diagram showing a state of access to data before writing back.
  • FIG. 8-1 is a diagram showing processing relating to data writing back (part 1).
  • FIG. 8-2 is a diagram showing processing relating to data writing back (part 2).
  • FIG. 8-3 is a diagram showing processing relating to data write-back (part 3).
  • FIG. 9 is a flowchart showing a processing procedure performed by the storage scheduler.
  • FIG. 10 is a flowchart showing a processing procedure performed by the task scheduler.
  • FIG. 11 is a flowchart showing a processing procedure performed by the master scheduler and the slave scheduler.
  • FIG. 12 is a flowchart illustrating a processing procedure performed by the access monitoring unit.
  • FIG. 13 is a diagram illustrating contention avoidance and all access processing times according to the embodiment.
  • FIG. 14 is a diagram illustrating an application example of a system using the computer illustrated in FIG. 1.
  • The disclosed technology is applied to a system that includes a plurality of CPUs and a plurality of memories and processes tasks in parallel. By coordinating memory access scheduling with task scheduling, access contention to a memory shared among the plurality of memories is avoided when tasks execute, and access processing to the memory is made efficient.
  • FIG. 1 is a diagram illustrating a configuration example of a system that executes a scheduling method according to an embodiment.
  • the system 100 includes a plurality of CPUs (first CPU 101 and second CPU 102), and the CPUs 101 and 102 include OSs 111 and 112, respectively.
  • The CPUs 101 and 102 access, via the bus 103, the memory 121 and storages shared by the plurality of CPUs.
  • In the following, a description will be given using the multi-core configuration example shown in FIG. 1, in which the plurality of CPUs 101 and 102 are mounted on one computer.
  • the plurality of memories include a main storage (A) 131 as a first memory and a sub-storage (B) 132 as a second memory.
  • Hard disk devices, for example, can be used as the main storage (A) 131 and the sub-storage (B) 132.
  • Both the CPUs 101 and 102 mainly access the main storage 131 and share it.
  • a description will be given assuming that the sub-storage 132 is temporarily used in order to avoid access conflict with the main storage 131.
  • FIG. 2 is a block diagram showing the internal configuration of the system.
  • the master-side OS 111 includes a master scheduler 201, an access monitoring unit 202, a wait queue 203, and a task queue 204.
  • the master scheduler 201 includes a storage scheduler 205, a task scheduler 206, and a task dispatch unit 207.
  • the slave-side OS 112 includes a slave scheduler 211, an access monitoring unit 212, and a task queue 214.
  • the slave scheduler 211 includes a task dispatch unit 217.
  • the memory 121 includes a database (DB) 221.
  • the storage scheduler 205 of the master OS 111 extracts a task from the wait queue 203 and determines an access destination storage (main storage 131 or sub storage 132) of the extracted task. At this time, the access destination storage is determined according to the characteristics of each task (Read or Write, bandwidth usage, processing time, etc.) and notified to the task scheduler 206.
  • the task scheduler 206 of the master OS 111 determines a task assignment destination based on the storage access scheduling information from the storage scheduler 205 and the task scheduling information, and inserts the task into the task queue of the assigned CPU (task queue 204 or 214).
  • the task scheduler 206 assigns tasks having the same storage access destination to the same CPU 101 or 102.
  • the task dispatch unit 207 controls task dispatch and task switching of tasks inserted in the task queue 204 based on the dispatch status of the CPU 101.
  • the access monitoring unit 202 of the master OS 111 specifies the storage (main storage 131 or sub-storage 132) accessed by the task based on the information of the task being executed when a read or write to the storage occurs.
  • the slave OS 112 executes the control of the CPU 102 under the overall control of the master OS 111.
  • the slave scheduler 211 of the slave OS 112 includes a task dispatch unit 217.
  • the task dispatch unit 217 controls task dispatch and task switching of tasks inserted in the task queue 214 based on the dispatch status of the CPU 102.
  • the access monitoring unit 212 of the slave OS 112 specifies the storage (main storage 131 or sub-storage 132) accessed by the task based on the information of the task being executed when a read or write to the storage occurs.
  • the database 221 in the memory 121 holds a task table, storage access scheduling information, and task scheduling information.
  • FIG. 3-1 is a diagram showing a task table 301 stored in the database 221.
  • the task table 301 includes a write task table 311, a read task table 312, and an F task table 313.
  • the information in the F task table 313 is referred to when the write task once writes to another storage in order to avoid access conflict.
  • the F task (corresponding to a write-back task) is a task for performing data write-back from a storage in which data has been temporarily written to an actual access target storage. This F task is dispatched at a timing that does not affect other tasks that access the storage.
  • The write task table 311 includes the following information: (1) write task ID, (2) designated write area, (3) write designated area, (4) write-back determination flag, (5) temporary-storage-destination access read task, and (6) data size.
  • The write task ID is an identifier (ID) of a write task that writes to the storage.
  • the designated write area is an address of a storage designated for writing.
  • the write designation area is an address of a storage to which data is temporarily written.
  • The write-back determination flag is a flag value indicating whether the data temporarily written to the (3) write designated area has been written back to the (2) designated write area.
  • The temporary-storage-destination access read task is the ID of a read task accessing the address of the (3) write designated area, that is, the temporarily stored data.
  • the data size is the size of data to be written.
  • the read task table 312 includes information on (1) read task ID, (2) designated access area, (3) read designated area, and (4) data size.
  • The read task ID is an identifier (ID) of a read task that reads from the storage.
  • the designated access area is an address of a storage designated for reading.
  • the read designation area is an address of a storage from which data is actually read.
  • the data size is the size of data to be read.
  • the F task table 313 includes (1) F task ID and (2) write-back write (Write) task ID.
  • the F task ID is an ID of the F task.
  • The write-back write task ID identifies the write task whose temporarily written data is to be written back to the actually designated storage.
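  • A compact way to picture the three tables is as record types. The following Python sketch mirrors the fields (1) to (6) listed above; the field names are paraphrases of the table entries, not identifiers from the patent.

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class WriteTaskEntry:                 # write task table 311
          write_task_id: int                # (1)
          designated_write_area: int        # (2) address writing was designated to
          write_designated_area: int        # (3) address data was actually (temporarily) written to
          write_back_done: bool = False     # (4) write-back determination flag
          temp_readers: List[int] = field(default_factory=list)  # (5) read tasks on the temporary copy
          data_size: int = 0                # (6)

      @dataclass
      class ReadTaskEntry:                  # read task table 312
          read_task_id: int                 # (1)
          designated_access_area: int       # (2) address reading was designated from
          read_designated_area: int         # (3) address data is actually read from
          data_size: int = 0                # (4)

      @dataclass
      class FTaskEntry:                     # F task table 313
          f_task_id: int                    # (1)
          write_back_write_task_id: int     # (2) write task whose data is written back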
  • FIG. 3-2 is a diagram showing a storage access scheduling table 321 and a task scheduling table 322 stored in the database 221.
  • the storage access scheduling table 321 includes a storage ID, an assigned task ID for each storage ID, and an estimated end time.
  • the storage ID is an ID for identifying the main storage 131 and the sub storage 132.
  • the assigned task ID is a task ID for accessing each storage.
  • the predicted end time is a predicted end time of the task obtained by an access end time prediction formula described later.
  • the task scheduling table 322 includes a CPU ID and an assigned task ID for each CPU ID.
  • The CPU ID is an identifier (ID) of each of the plurality of CPUs 101 and 102.
  • the assigned task ID is an identifier (ID) of a task assigned to each of the plurality of CPUs 101 and 102.
  • FIG. 3-3 is a chart showing the classification of tasks that perform storage access, stored in the database 221. As shown in the illustrated classification table 331, tasks are classified into a plurality of classifications (A to F in the illustrated example) according to the access type to the storage, bandwidth usage, processing time, and the like (a code summary follows this list).
  • Tasks of classification A have an access type of read and use a fixed bandwidth; they carry data for application processing such as video.
  • Tasks of classification B have an access type of read and use the entire bandwidth; they carry data to be uploaded, and the like.
  • Tasks of classification C have an access type of write and use the entire bandwidth; they carry downloaded data, and the like.
  • Tasks of classification D have an access type of write, top-priority bandwidth use, and instantaneous processing time; they carry swap-out data, and the like.
  • Tasks of classification E have an access type of read, top-priority bandwidth use, and instantaneous processing time; they carry swap-in data, and the like.
  • The task of classification F (F task) is a task for the process of writing data back from the sub-storage 132 to the main storage 131; its access type is read/write, and it uses the entire bandwidth.
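  • The classification table 331 can be summarized in code as follows. This is a sketch: the dictionary values paraphrase the table above, and the parallelism predicate restates the rule, described later, that only classification A may share a storage with another task.

      CLASSIFICATIONS = {
          "A": dict(access="read",       band="fixed band",   example="video application data"),
          "B": dict(access="read",       band="entire band",  example="data to be uploaded"),
          "C": dict(access="write",      band="entire band",  example="downloaded data"),
          "D": dict(access="write",      band="top priority", example="swap-out data (instantaneous)"),
          "E": dict(access="read",       band="top priority", example="swap-in data (instantaneous)"),
          "F": dict(access="read/write", band="entire band",  example="write-back from sub to main"),
      }

      def may_share_storage(c1: str, c2: str) -> bool:
          # Classifications B to F are serialized among themselves; only a
          # classification A task may run in parallel with another task.
          return "A" in (c1, c2)

      print(may_share_storage("A", "C"))  # True: A uses a fixed band only
      print(may_share_storage("B", "C"))  # False: both want the entire band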
  • In FIG. 4-1, it is assumed that the CPU 101 is executing a task that reads from the storage.
  • For a read task, the storage from which data is read is determined in advance (in the illustrated example, the main storage 131).
  • For a write task, the storage to which data is written is determined at scheduling time. This determination uses information such as the storage bandwidths M and S of the respective storages to predict and calculate the task access end times, and decides whether to write to the storage 131 or 132.
  • The illustrated example shows a state in which the CPU 102 writes the write task's data to the sub-storage 132 in order to avoid the access contention that would occur if the write task and the read task accessed the same main storage 131.
  • Thereafter (FIG. 4-2), the data written in the sub-storage 132 is written back into the main storage 131 (processing of the F task). This avoids access contention from a plurality of CPUs for the same storage.
  • The OSs 111 and 112 of the CPUs 101 and 102 refer to the database 221 stored in the memory 121 and assign tasks whose access destination is the same storage to the same CPU, thereby avoiding access contention for the same storage.
  • This prediction process is executed by the storage scheduler 205.
  • the storage scheduler 205 performs a predictive calculation in order to determine in which storage the tasks C and D, which are write tasks, are written.
  • The prediction assumes the following rules: 1. A read task of classification A accesses the storage using a fixed bandwidth; tasks of classification A can access the storage in parallel with tasks of the other classifications B, C, D, E, and F. 2. Tasks of classifications B, C, D, E, and F access the storage using all available bandwidth, that is, the entire bandwidth not used by classification A. 3. Tasks of classifications B, C, D, E, and F are processed sequentially; parallel access is allowed only with tasks of classification A.
  • The storage bandwidth of the main storage 131 is M, and the storage bandwidth of the sub-storage 132 is S.
  • The data amount of a task Bx of classification B assigned to the main storage 131 is Bmx, and the data amount of a task Cx of classification C assigned to the main storage 131 is Cmx.
  • The used bandwidth of a task Ax of classification A assigned to the main storage 131 is Amx.
  • The data amount of a task Bx of classification B assigned to the sub-storage 132 is Bsx, and the data amount of a task Cx of classification C assigned to the sub-storage 132 is Csx.
  • The used bandwidth of a task Ax of classification A assigned to the sub-storage 132 is Asx.
  • Let Tm and Ts be the predicted access end times for the main storage 131 and the sub-storage 132, respectively. If Tm > Ts, the write tasks of classifications C and D are written to the sub-storage 132; if Tm ≤ Ts, they are written to the main storage 131.
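  • The text quoted here does not reproduce the prediction formula itself. A plausible reconstruction, under the assumption (consistent with rules 1 to 3 above) that pending classification B and C data on a storage drains at the storage bandwidth minus the bandwidth reserved by classification A tasks, is sketched below; the roles of M, S, Am, As, Tm, and Ts follow the definitions above, while the formula itself is an assumption.

      def predicted_end_time(bandwidth, a_used_bw, pending_bc_data, new_data):
          # Assumed model: B/C traffic drains at the band left free by class A.
          free = bandwidth - a_used_bw
          if free <= 0:
              return float("inf")          # classification A saturates this storage
          return (pending_bc_data + new_data) / free

      def choose_storage(M, S, Am, As, bc_main, bc_sub, new_data):
          Tm = predicted_end_time(M, Am, bc_main, new_data)   # main storage 131
          Ts = predicted_end_time(S, As, bc_sub, new_data)    # sub-storage 132
          return "sub-storage 132" if Tm > Ts else "main storage 131"

      # Example: class A uses 10 of main's bandwidth 50 and 50 units of B/C
      # data are pending there; the sub-storage is idle. A new 30-unit write
      # is then directed to the sub-storage (Tm = 2.0 > Ts = 0.6).
      print(choose_storage(M=50, S=50, Am=10, As=0, bc_main=50, bc_sub=0, new_data=30))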
  • FIGS. 5-1 to 5-5 are diagrams illustrating an example of access scheduling to the storage.
  • An example of scheduling in the order of classifications A, C (C1, C2), B, and D will be described.
  • It is assumed that the target data of the classification A and B tasks exist in the main storage 131, that the usage bandwidth of the classification A task is 10, and that the data amounts of B, C1, and C2 are 30, 20, and 30, respectively.
  • the storage bandwidth is 50 for both the main storage 131 and the sub storage 132.
  • the storage scheduler 205 takes out a read task of category A from the wait queue 203. Since the target data of this classification A is in the main storage 131, this task is assigned to the main storage 131.
  • the task of classification D is assigned to the sub storage.
  • the storage scheduler 205 stores the assigned task ID and the estimated end time in the storage access scheduling table 321 for each scheduling described above.
  • the task scheduler 206 reads the scheduling information of the storage scheduler 205 from the storage access scheduling table 321 and queues the task in the task queue of each CPU.
  • the task dispatch units 207 and 217 perform task dispatch of tasks inserted in the task queues 204 and 214 based on the dispatch status of the CPUs 101 and 102.
  • the access monitoring units 202 and 212 specify the main storage 131 or the sub-storage 132 as the storage to be accessed by the task based on the information of the task being executed when a read or write to the storage occurs.
  • The task scheduler 206 queues tasks into the task queues 204 and 214 of the CPUs 101 and 102 so that tasks with the same storage access destination go to the same CPU. This avoids access contention for the same storage.
  • The task dispatch units 207 and 217 dispatch only one task of classification B or C at a time in each of the CPUs 101 and 102. A task of classification B or C is kept waiting in the task queue 204 or 214 while another task of classification B or C is already dispatched.
  • The task dispatch units 207 and 217 dispatch tasks of classifications A, D, and E immediately when they are queued, because such tasks are less affected by access contention: a classification A task does not use the entire bandwidth, and classification D and E tasks have very short processing times.
  • A task of classification F is dispatched after being storage-access-scheduled, and only when no task of classification B or C is present in the task queues 204 and 214 of the CPUs 101 and 102 or currently dispatched.
  • When a task of classification B or C subsequently arrives, the task dispatch units 207 and 217 immediately return the running classification F task to the task queue 204 or 214. This prevents the write-back from degrading the performance of other storage accesses.
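  • The dispatch rules in the preceding items can be condensed into a single predicate, as in the Python sketch below. This is illustrative only; the queue and flag bookkeeping of the actual dispatch units is omitted.

      def can_dispatch(task_class, running, queued):
          # running: classifications currently dispatched on this CPU;
          # queued: classifications waiting in this CPU's task queue.
          if task_class in ("A", "D", "E"):
              return True                      # dispatched immediately
          if task_class in ("B", "C"):
              # at most one B/C task dispatched per CPU at a time
              return not any(c in ("B", "C") for c in running)
          if task_class == "F":
              # write-back runs only while no B/C task is queued or running
              return not any(c in ("B", "C") for c in running + queued)
          return True

      print(can_dispatch("F", running=["A"], queued=[]))     # True
      print(can_dispatch("F", running=["A"], queued=["C"]))  # False: F goes back to the queue
      print(can_dispatch("B", running=["C"], queued=[]))     # False: B stays queued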
  • FIGS. 6-1 to 6-5 are diagrams illustrating an example of scheduling in which storage access and task access are linked. As in FIGS. 5-1 to 5-5, it is assumed that the target data of the classification A and B tasks exist in the main storage 131, that the usage bandwidth of A is 10, and that the data amounts of B, C1, and C2 are 30, 20, and 30, respectively. The storage bandwidth is 50 for both the main storage 131 and the sub-storage 132.
  • The task generation order is classification A → C1 → C2 → B → D, and the storage scheduling of each task uses the results described with reference to FIGS. 5-1 to 5-5.
  • First, the generated classification A task is scheduled by the storage scheduler 205 to access the main storage 131 (see FIG. 5-1). A classification A task can be assigned to either of the CPUs 101 and 102; in the illustrated example, it is assumed that the task scheduler 206 has assigned it to the CPU 101.
  • The next generated task, of classification C1, is scheduled by the storage scheduler 205 to access the sub-storage 132 (see FIG. 5-2).
  • The task scheduler 206 assigns the classification C1 task to the other CPU 102, because its access storage differs from that of the classification A task.
  • The next generated task, of classification C2, is scheduled by the storage scheduler 205 to access the main storage 131 (see FIG. 5-3). The task scheduler 206 then assigns the classification C2 task to the same CPU 101 as the classification A task, which has the same access destination. As described above, since a classification A task and a classification C task (C2) can access the storage in parallel, the CPU 101 multitasks the classification A and C2 tasks.
  • the next class B task generated is scheduled to access the main storage 131 by the storage scheduler 205 (see FIG. 5-4).
  • the task scheduler 206 assigns the task of classification B to the same CPU 101 as the tasks A and C2 having the same access destination.
  • the CPU 101 queues the task of category B in the task queue 204.
  • the next class D task is scheduled to access the sub-storage 132 by the storage scheduler 205 (see FIG. 5-5). Then, the task scheduler 206 assigns the task of category D to the same CPU 102 as the task C1 having the same access destination. However, since the task of category D has a short access time, the task dispatch unit 217 switches the dispatch order of the task of category C1 and the task of category D to process the task of category D first.
  • FIG. 7 is a diagram showing a state of access to data before write-back. For example, assume that the data of a classification B task has been temporarily stored in the sub-storage 132 instead of the main storage 131 that is its target storage. When access to the data 701 occurs before such write-back has completed, control is required so that the access goes to the sub-storage 132, where the data 701 is temporarily stored, and not to the main storage 131 that is the data 701's original storage destination.
  • For this purpose, task information unique to each task is added in advance, and the task scheduler 206 and the storage scheduler 205 are linked. The information used for this cooperation is the task table 301 shown in FIG. 3-1.
  • the storage scheduler 205 refers to the task table 301 and identifies the access destination.
  • At the temporary storage destination (the sub-storage 132 in the above example), the temporarily stored data 701 is protected.
  • the storage scheduler 205 determines scheduling for the F task with reference to the task table 301 at the time of scheduling.
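  • In code, the redirection of a read whose target data may still sit in the temporary storage could look like the sketch below, which mirrors steps S905 to S910 of FIG. 9 (described later) using plain dictionaries in place of the tables; the key names are paraphrases.

      def resolve_read_area(read_task, write_tasks):
          # S905: find a write task whose designated write area matches the
          # area the reader asked for.
          for w in write_tasks:
              if w["designated_write_area"] == read_task["designated_access_area"]:
                  if w["write_back_done"]:
                      # S908: data already written back -> read the real area.
                      read_task["read_designated_area"] = w["designated_write_area"]
                  else:
                      # S909/S910: data still in temporary storage -> read it
                      # there and register this reader so the copy stays protected.
                      read_task["read_designated_area"] = w["write_designated_area"]
                      w["temp_readers"].append(read_task["read_task_id"])
                  return read_task["read_designated_area"]
          # No pending write: read from where the data normally lives.
          read_task["read_designated_area"] = read_task["designated_access_area"]
          return read_task["read_designated_area"]

      w = dict(designated_write_area=0x100, write_designated_area=0x8100,
               write_back_done=False, temp_readers=[])
      r = dict(read_task_id=7, designated_access_area=0x100, read_designated_area=None)
      print(hex(resolve_read_area(r, [w])))  # 0x8100: the temporary copy in the sub-storage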
  • FIGS. 8-1 to 8-3 are diagrams showing processing related to data write-back.
  • A snoop controller 803 is provided between the caches (L1 caches) 801 and 802 of the CPUs 101 and 102. Update data in the caches 801 and 802 is exchanged between the CPUs 101 and 102 via the snoop controller 803 to maintain coherency of the caches 801 and 802.
  • the cache 801 is provided with operation flags (referred to as C flags) 811 and 812 of the CPUs 101 and 102, and the cache 802 is provided with C flags 821 and 822 of the CPUs 101 and 102.
  • The C flags 811 and 821 relating to the CPU 101 take the value 1 while the CPU 101 executes a task of classification B, C, D, or E, and the value 2 while it executes the F task.
  • Likewise, the C flags 812 and 822 relating to the CPU 102 take the value 1 while the CPU 102 executes a task of classification B, C, D, or E, and the value 2 while it executes the F task.
  • Otherwise the value is 0 (OFF).
  • It is assumed that the CPU 101 multitasks the task A1 of classification A and a task of classification C, and that a task of classification F is queued in the task queue 204. It is further assumed that the CPU 102 processes a task of classification D, and that the task A2 of classification A is queued in the task queue 214.
  • The C flags 811 and 821 of the CPU 101 are 1 while the classification C task is executed, and 0 while the classification A task is executed.
  • The C flags 812 and 822 of the CPU 102 are set to 1 while the classification D task is executed.
  • Thereafter, the CPU 101 executes the F task of classification F, and the C flags 811 and 821 of the CPU 101 take the value 2 while the F task is being executed.
  • The F task is executed by only one CPU among the plurality of CPUs 101 and 102.
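  • The C-flag convention (0 = no B-F task running, 1 = a classification B-E task running, 2 = the F task running) and the resulting yield rule can be sketched as follows; this is an illustration, not the patent's implementation.

      def flag_for(task_class):
          # Per-CPU flag value while a task of the given classification runs.
          if task_class in ("B", "C", "D", "E"):
              return 1
          if task_class == "F":
              return 2
          return 0   # classification A, or no B-F task running

      def f_task_must_yield(flags):
          # The write-back F task is returned to the task queue as soon as any
          # CPU's flag shows a classification B-E task (value 1).
          return 1 in flags

      print(f_task_must_yield([2, 0]))  # False: the F task may keep running
      print(f_task_must_yield([2, 1]))  # True: return the F task to the queue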
  • the storage scheduler 205 and the task scheduler 206 operate in conjunction with each other. Based on the access scheduling result of the storage scheduler 205, the task scheduler 206 distributes tasks to the CPUs 101 and 102.
  • FIG. 9 is a flowchart showing a processing procedure performed by the storage scheduler.
  • the storage scheduler 205 provided in the master scheduler 201 waits for generation of a task (step S901: No and step S902: No loop).
  • When a task is generated (step S901: Yes), the type of the task is determined (step S903).
  • When termination is requested (step S902: Yes), the process is terminated.
  • If the task is of the predetermined classifications A to F (step S903: Yes), it is then determined whether it is a read task of classification A, B, or E (step S904). If it is not a task of classifications A to F (step S903: No), storage scheduling is not performed, and the process proceeds to the task scheduler 206 (FIG. 10, described later).
  • If it is a read task of classification A, B, or E (step S904: Yes), the storage in which the access data exists is checked, and the task is scheduled to that storage.
  • The (2) designated write area of the write task table 311 is compared with the (2) designated access area of the read task table 312, and a matching write task is searched for (step S905).
  • The (4) write-back determination flag of the corresponding write task is checked (step S906), and it is determined whether the write-back is completed (step S907). If the write-back has been completed (step S907: Yes), the (3) read designated area of the read task table 312 is updated to the (2) designated write area of the write task table 311 (step S908), and the process proceeds to step S911.
  • If the write-back is not completed (step S907: No), the (3) read designated area of the read task table 312 is updated to the (3) write designated area of the write task table 311 (step S909).
  • Then, the (1) read task ID of the read task table 312 is added to the (5) temporary-storage-destination access read task of the write task table 311 (step S910), and the process proceeds to step S911.
  • In step S911, the data size to be read is written in the (6) data size of the write task table 311, and the task is allocated to the storage holding the target data (step S912). Thereafter, the storage access scheduling table 321 is updated (step S913), and the process proceeds to the processing of the task scheduler 206 (FIG. 10).
  • When the task is not a read task of classification A, B, or E (step S904: No), it is determined whether the task is of classification F (step S914). If it is not (step S914: No), the task is one of the remaining write tasks of classifications C and D, so the storage access end times are predicted, and the storage to be accessed is determined and assigned (step S915). Thereafter, it is determined whether the (2) designated write area of the write task table 311 is equal to the (3) write designated area, that is, whether write-back is unnecessary (step S916).
  • If the (2) designated write area of the write task table 311 equals the (3) write designated area (step S916: Yes), the write task table 311 is updated (step S917); at this time, the fields from (2) designated write area through (4) write-back determination flag, and (6) data size, are updated. If they are not equal (step S916: No), write-back is necessary, so an F task is inserted into the task queue 204 (step S918) before proceeding to step S917. After step S917, the storage access scheduling table 321 is updated (step S913), and the process proceeds to the processing of the task scheduler 206 (FIG. 10).
  • If the task is of classification F (step S914: Yes), the F task is assigned to the storage in which the target data (the data to be written back) exists (step S919), the storage access scheduling table 321 is updated (step S913), and the process proceeds to the processing of the task scheduler 206 (FIG. 10).
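  • The decision skeleton of FIG. 9 can be summarized in a few lines of Python. This is a sketch: the table updates of steps S905 to S913 and S917 are abstracted away, and the parameters are simplified stand-ins.

      def schedule_storage_access(cls, data_in_main, Tm, Ts):
          """Return (assigned storage, whether an F task must be queued).

          data_in_main: whether the task's target/designated area is in the
          main storage 131. Tm/Ts: predicted access end times (see above).
          """
          if cls not in "ABCDEF":
              return None, False                                 # S903: No
          if cls in "ABE":                                       # read tasks
              return ("main" if data_in_main else "sub"), False  # S912
          if cls == "F":                                         # write-back task
              return ("main" if data_in_main else "sub"), False  # S919
          # Write tasks C and D (S915): pick the storage predicted to end sooner.
          storage = "sub" if Tm > Ts else "main"
          designated = "main" if data_in_main else "sub"
          return storage, storage != designated                  # S916/S918

      print(schedule_storage_access("C", data_in_main=True, Tm=2.0, Ts=0.6))
      # ('sub', True): write temporarily to the sub-storage and queue an F task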
  • FIG. 10 is a flowchart showing a processing procedure performed by the task scheduler.
  • The task scheduler 206 provided in the master scheduler 201 is executed after the storage access scheduling processing by the storage scheduler 205 shown in FIG. 9.
  • First, it is determined whether the corresponding task is a task of classifications A to F (step S1001). If so (step S1001: Yes), it is determined whether a task assigned to the same storage exists in the task queue 204 or 214 of any of the CPUs 101 and 102 (step S1002). If such a task exists (step S1002: Yes), this task is assigned to the same task queue 204 or 214 (step S1003), the process ends, and control returns to the processing of the storage scheduler 205 (FIG. 9).
  • If the task is not of classifications A to F (step S1001: No), or if no task assigned to the same storage exists in the task queue 204 or 214 of any CPU (step S1002: No), normal scheduling is performed (step S1004), the process ends, and control returns to the processing of the storage scheduler 205 (FIG. 9).
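  • The rule of FIG. 10 is compact enough to state directly in code (a sketch with simplified bookkeeping; the fallback "normal scheduling" of step S1004 is reduced here to shortest-queue assignment):

      def assign_task(task_id, storage, cpu_queues, storage_of):
          # cpu_queues: CPU id -> list of queued task ids
          # storage_of: task id -> storage the task is scheduled to access
          for cpu, queue in cpu_queues.items():
              if any(storage_of.get(t) == storage for t in queue):  # S1002: Yes
                  queue.append(task_id)                             # S1003
                  storage_of[task_id] = storage
                  return cpu
          # S1002: No -> normal scheduling (simplified: shortest queue, S1004)
          cpu = min(cpu_queues, key=lambda c: len(cpu_queues[c]))
          cpu_queues[cpu].append(task_id)
          storage_of[task_id] = storage
          return cpu

      queues = {101: ["A"], 102: ["C1"]}
      st = {"A": "main", "C1": "sub"}
      print(assign_task("C2", "main", queues, st))  # 101: same storage as task A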
  • FIG. 11 is a flowchart showing a processing procedure performed by the master scheduler and the slave scheduler.
  • The master scheduler 201 and the slave scheduler 211 determine whether a task exists in the task queues 204 and 214, or whether any CPU flag value has changed (step S1101).
  • If a task exists in the task queue (step S1101: result (1)), it is determined whether it is a task of classifications B to F (step S1102). If the flag value of any CPU has changed (step S1101: result (2)), it is determined whether the F task is being executed (step S1111); if so (step S1111: Yes), the F task is returned to the task queue (step S1112), and the process returns to step S1101.
  • If the F task is not being executed (step S1111: No), the process ends and returns to step S1101. If neither condition of step S1101 holds (step S1101: No), the process ends, and the scheduler waits for a task to be generated or for a CPU flag value to change.
  • If the task is of classifications B to F (step S1102: Yes), it is determined whether it is a task of classification F (step S1103); if it is not of classifications B to F (step S1102: No), control goes to step S1110. If it is a task of classification F (step S1103: Yes), it is determined whether a task of classifications B to F is threaded on any of the CPUs 101 and 102 (step S1104).
  • If no task of classifications B to F is threaded (step S1104: No), the C flag is set to 2 (step S1105), the F task is threaded and its processing is started (step S1106), and the series of processing ends when the task processing ends.
  • If the task is not a classification F task in step S1103 (step S1103: No), it is determined whether (1) a task of classifications B to E is threaded on the same CPU 101 or 102, or (2) the F task is threaded (step S1107). If a task of classifications B to E is threaded (step S1107: result (1)), the process ends without threading this task. If the F task is threaded (step S1107: result (2)), the F task is returned to the task queue 214 (step S1108), the C flag is set to 1 (step S1109), the task is threaded and its processing is started (step S1110), and the series of processing ends when the processing ends.
  • If neither a task of classifications B to E nor the F task is threaded (step S1107: No), the process proceeds to step S1110: the task is threaded and its processing is started, and the series of processing ends when the processing ends.
  • FIG. 12 is a flowchart illustrating a processing procedure performed by the access monitoring unit.
  • the access monitoring units 202 and 212 specify the storage (main storage 131 or sub-storage 132) to be accessed by the task based on the information of the task being executed at the time of occurrence of reading or writing to the storage.
  • In step S1201, it is determined whether (1) a task dispatch has occurred or (2) a task being dispatched has finished its processing.
  • When a task dispatch occurs (step S1201: result (1)), it is determined whether this task is a task of classifications A to F (step S1202). If it is (step S1202: Yes), it is next determined whether it is a read task of classification A, B, or E (step S1203). If neither condition holds (step S1201: No and step S1202: No), the process ends without performing special access control for the current task.
  • If it is a read task of classification A, B, or E (step S1203: Yes), the (3) read designated area is acquired from the read task table 312 (step S1204), and reading of data from this read designated area is started (step S1205). Thereafter, it is determined whether (1) this task ends or (2) a task switch occurs (step S1206). When the task ends (step S1206: result (1)), the process proceeds to step S1207; when a task switch occurs (step S1206: result (2)), the process is terminated. Until either occurs, the unit waits for a task end or a task switch (step S1206: No loop).
  • In step S1207, it is determined whether the (2) designated access area and the (3) read designated area in the read task table 312 match. If they match (step S1207: Yes), the process ends. If they do not match (step S1207: No), the data was read from the temporary storage destination; therefore, the write task whose (2) designated write area in the write task table 311 matches the (2) designated access area of the read task table 312 is searched for, the ID of the completed read task is removed from that write task's (5) temporary-storage-destination access read task entry (step S1208), and the process is terminated.
  • When a task being dispatched finishes its processing (step S1201: result (2)), it is determined whether the task is of classifications B to F (step S1209). If it is (step S1209: Yes), the C flag is set to its initial value (0) (step S1210) and the process ends; otherwise, including for a classification A task (step S1209: No), the process ends without further action.
  • If the task is not a read task of classification A, B, or E (step S1203: No), it is determined whether it is a write task or an F task (step S1211). If it is an F task (step S1211: Yes), the write task corresponding to the (2) write-back write task ID of the F task table 313 is searched for in the write task table 311, that task's (2) designated write area and (3) write designated area are acquired (step S1212), and write-back of the target area is started (step S1213).
  • Thereafter, it is determined whether (1) this task ends or (2) a task switch occurs (step S1214).
  • When the task ends (step S1214: result (1)), the (4) write-back determination flag of the write task is updated to completed (step S1215), the protection of the (2) designated write area in the write task table 311 is released (step S1216), and the process ends.
  • When a task switch occurs (step S1214: result (2)), the process is terminated without further action.
  • If the task is not an F task (step S1211: No), it is a write task of classification C or D, so the (2) designated write area and (3) write designated area are acquired from the write task table 311 (step S1217). Thereafter, it is determined whether the (2) designated write area coincides with the (3) write designated area (step S1218). If they do not match (step S1218: No), the storage area of the write destination is protected (step S1219), data writing to the designated area is started (step S1220), and the series of processing is completed when the writing ends.
  • If the (2) designated write area and the (3) write designated area match (step S1218: Yes), step S1219 is skipped, data writing to the designated area is started (step S1220), and the series of processing is completed when the writing ends.
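  • The bookkeeping at task end in FIG. 12 can be sketched as follows (dictionaries stand in for the tables, as in the earlier sketch; the key names are paraphrases):

      def on_read_task_end(read_task, write_tasks):            # S1206 -> S1207
          if read_task["designated_access_area"] == read_task["read_designated_area"]:
              return                                           # read the real area: nothing to do
          # Data was read from the temporary copy: deregister this reader (S1208).
          for w in write_tasks:
              if w["designated_write_area"] == read_task["designated_access_area"]:
                  w["temp_readers"].remove(read_task["read_task_id"])

      def on_f_task_end(write_task):                           # S1215 and S1216
          write_task["write_back_done"] = True                 # update write-back flag
          write_task["protected"] = False                      # release area protection

      w = dict(designated_write_area=0x100, temp_readers=[7],
               write_back_done=False, protected=True)
      r = dict(read_task_id=7, designated_access_area=0x100, read_designated_area=0x8100)
      on_read_task_end(r, [w]); on_f_task_end(w)
      print(w)  # reader removed, write_back_done True, protection released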
  • FIG. 13 is a diagram illustrating contention avoidance and the total access processing time according to the embodiment.
  • write tasks C1 and C3 of class C are allocated to the main storage 131, and write tasks C2 and C4 of class C are allocated to the sub storage 132.
  • the write data amount of each of the tasks C1 to C4 is 50 and the bandwidth of each storage is 50.
  • The basic configuration of the system is the same as in the embodiment, but for comparison a conventional case without the countermeasure, in which only storage access scheduling is executed and storage access scheduling and task scheduling are not linked, is shown at the lower left of the figure.
  • tasks C1 and C2 may be assigned to the CPU 101
  • tasks C3 and C4 may be assigned to the CPU 102.
  • In that case, the CPUs 101 and 102 simultaneously access the main storage 131, and access contention occurs.
  • Due to the access contention during the execution of tasks C1 and C3, and again during the execution of tasks C2 and C4, each pair of tasks shares the bandwidth of one storage, so the processing time for all accesses is 50/25 + 50/25 = 4. This is twice as long as in the embodiment, where each storage processes its two tasks sequentially at full bandwidth (50/50 + 50/50 = 2) with the two storages operating in parallel. It can be seen from this example that high access performance can be achieved by the processing of the embodiment.
  • FIG. 14 is a diagram illustrating an application example of a system using the computer illustrated in FIG. 1.
  • a network NW is a network in which servers 1401 and 1402 and clients 1431 to 1434 can communicate with each other.
  • The network NW includes, for example, a LAN (Local Area Network), a WAN (Wide Area Network), the Internet, and a mobile phone network.
  • the server 1402 is a management server of a server group (servers 1421 to 1425) constituting the cloud 1420.
  • the client 1431 is a notebook personal computer
  • the client 1432 is a desktop personal computer
  • the client 1433 is a mobile phone (may be a smartphone or a PHS (Personal Handyphone System))
  • the client 1434 is a tablet terminal.
  • Servers 1401, 1402, 1421 to 1425, and clients 1431 to 1434 in FIG. 14 are realized by, for example, the computer shown in FIG. 1.
  • The present invention can also be applied to a configuration that performs distributed parallel processing, in which the CPUs 101 and 102 and the storages 131 and 132 shown in FIG. 1 are mounted on different computers (for example, the mobile phone and the server in FIG. 14) connected via the network NW.
  • As described above, the task type is determined in order to avoid access contention caused by another task targeting the storage that a task is accessing.
  • For the task types, it suffices to classify tasks into multiple types based on, for example, the processing time determined by the data size; when access contention is only instantaneous and does not affect the overall access time, it is even determined to allow the contention. For example, for a read task that uses only a fixed bandwidth instead of the entire bandwidth, other write tasks may be permitted to access the same storage in parallel.
  • Since the storage to which a write task writes data is determined based on the predicted times at which the tasks' accesses to each storage will end, the access time can be shortened.
  • Because this determination uses multiple values, such as the bandwidth of each storage and the data amounts of the classified tasks, a suitable storage can be selected.
  • When contention would otherwise occur, the write task's data is temporarily written to another storage and moved to the original storage after the accesses to that storage are completed. As a result, the storage access processing of the entire system can be made efficient.
  • the storage described in the above embodiment is a disk device, for example.
  • However, the present invention is not limited to this; it can be applied in the same way to any other data storage device that is subject to access contention when shared by a plurality of CPUs.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System (AREA)
  • Multi Processors (AREA)
  • Bus Control (AREA)

Abstract

A system scheduler uses a first CPU (101) to determine whether a task belongs to a first task classification. When the task belongs to the first task classification, that of a memory read task, it is determined whether a first access area to be accessed by the task exists in a main storage (131) or a sub-storage (132). Based on the result of this determination, the storage location to be accessed by the task is set to either the main storage (131) or the sub-storage (132). When the task does not belong to the first task classification, it is determined whether the task belongs to another task classification, and access contention on the main storage (131) shared by multiple CPUs (101 and 102) is avoided.
PCT/JP2011/064841 2011-06-28 2011-06-28 Scheduling method and system WO2013001613A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2013522397A JP5861706B2 (ja) 2011-06-28 2011-06-28 Scheduling method and system
PCT/JP2011/064841 WO2013001613A1 (fr) 2011-06-28 2011-06-28 Scheduling method and system
US14/134,643 US9507633B2 (en) 2011-06-28 2013-12-19 Scheduling method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2011/064841 WO2013001613A1 (fr) 2011-06-28 2011-06-28 Scheduling method and system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/134,643 Continuation US9507633B2 (en) 2011-06-28 2013-12-19 Scheduling method and system

Publications (1)

Publication Number Publication Date
WO2013001613A1 true WO2013001613A1 (fr) 2013-01-03

Family

ID=47423556

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/064841 WO2013001613A1 (fr) 2011-06-28 2011-06-28 Scheduling method and system

Country Status (3)

Country Link
US (1) US9507633B2 (fr)
JP (1) JP5861706B2 (fr)
WO (1) WO2013001613A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017072827A1 * 2015-10-26 2017-05-04 株式会社日立製作所 Computer system and access control method
JP2019091492A (ja) * 2015-01-19 2019-06-13 東芝メモリ株式会社 Memory device and control method for nonvolatile memory
JP2021509745A (ja) * 2017-12-28 2021-04-01 Advanced Micro Devices Incorporated Support of responses for memory types with non-uniform latencies on the same channel
US11042331B2 (en) 2015-01-19 2021-06-22 Toshiba Memory Corporation Memory device managing data in accordance with command and non-transitory computer readable recording medium

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10073714B2 (en) 2015-03-11 2018-09-11 Western Digital Technologies, Inc. Task queues
US9606792B1 (en) * 2015-11-13 2017-03-28 International Business Machines Corporation Monitoring communication quality utilizing task transfers
CN109656690A (zh) * 2017-10-11 2019-04-19 阿里巴巴集团控股有限公司 Scheduling system, method, and storage medium
US20240095541A1 (en) * 2022-09-16 2024-03-21 Apple Inc. Compiling of tasks for streaming operations at neural processor

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007249729A (ja) * 2006-03-17 2007-09-27 Hitachi Ltd Storage system with microprocessor load-balancing function

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0318976A (ja) 1989-06-15 1991-01-28 Nec Corp Image data retrieval system
US6256704B1 (en) 1993-09-16 2001-07-03 International Business Machines Corporation Task management for data accesses to multiple logical partitions on physical disk drives in computer systems
JP3563541B2 (ja) 1996-09-13 2004-09-08 株式会社東芝 Data storage device and data storage method
JP2002055966A (ja) * 2000-08-04 2002-02-20 Internatl Business Mach Corp <Ibm> Multiprocessor system, processor module used in multiprocessor system, and task allocation method in multiprocessing
US7159216B2 (en) * 2001-11-07 2007-01-02 International Business Machines Corporation Method and apparatus for dispatching tasks in a non-uniform memory access (NUMA) computer system
US6986013B2 (en) * 2002-12-05 2006-01-10 International Business Machines Corporation Imprecise cache line protection mechanism during a memory clone operation
US7330930B1 (en) * 2004-03-09 2008-02-12 Adaptec, Inc. Method and apparatus for balanced disk access load distribution
US8150946B2 (en) * 2006-04-21 2012-04-03 Oracle America, Inc. Proximity-based memory allocation in a distributed memory system
US8132172B2 (en) * 2007-03-26 2012-03-06 Intel Corporation Thread scheduling on multiprocessor systems
JP2009080690A (ja) * 2007-09-26 2009-04-16 Nec Corp Information recording/reproducing system, information recording/reproducing method, and program
JP4983632B2 (ja) 2008-02-06 2012-07-25 日本電気株式会社 Information communication system, access arbitration method thereof, and control program therefor
US8250328B2 (en) * 2009-03-24 2012-08-21 Micron Technology, Inc. Apparatus and method for buffered write commands in a memory
JP2011059777A (ja) * 2009-09-07 2011-03-24 Toshiba Corp Task scheduling method and multi-core system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007249729A (ja) * 2006-03-17 2007-09-27 Hitachi Ltd Storage system with microprocessor load-balancing function

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019091492A (ja) * 2015-01-19 2019-06-13 東芝メモリ株式会社 Memory device and control method for nonvolatile memory
US11042331B2 2015-01-19 2021-06-22 Toshiba Memory Corporation Memory device managing data in accordance with command and non-transitory computer readable recording medium
WO2017072827A1 (fr) * 2015-10-26 2017-05-04 株式会社日立製作所 Computer system and access control method
JPWO2017072827A1 (ja) * 2015-10-26 2018-08-16 株式会社日立製作所 Computer system and access control method
US10592274B2 2015-10-26 2020-03-17 Hitachi, Ltd. Computer system and access control method
JP2021509745A (ja) * 2017-12-28 2021-04-01 Advanced Micro Devices Incorporated Support of responses for memory types with non-uniform latencies on the same channel

Also Published As

Publication number Publication date
US20140109100A1 (en) 2014-04-17
JP5861706B2 (ja) 2016-02-16
JPWO2013001613A1 (ja) 2015-02-23
US9507633B2 (en) 2016-11-29

Similar Documents

Publication Publication Date Title
JP5861706B2 (ja) Scheduling method and system
JP5516744B2 (ja) Scheduler, multi-core processor system, and scheduling method
US7451276B2 (en) Prefetch command control method, prefetch command control apparatus and cache memory control apparatus
JP5626690B2 (ja) Physical manager of barriers between multiple processes
US20080098180A1 (en) Processor acquisition of ownership of access coordinator for shared resource
KR20130063003A (ko) Context switching
JP5498505B2 (ja) Resolution of contention between data bursts
US20140351519A1 (en) System and method for providing cache-aware lightweight producer consumer queues
JP2004326754A (ja) Management of virtual machines for using shared resources
JP6753999B2 (ja) Distributed database system and resource management method for distributed database system
JP2015504541A (ja) Method, program, and computing system for dynamically optimizing memory access in a multiprocessor computing system
Ulusoy Research issues in real-time database systems: survey paper
CN107528871B (zh) Data analysis in storage systems
US20120185672A1 (en) Local-only synchronizing operations
JP5158576B2 (ja) Input/output control system, input/output control method, and input/output control program
KR20140037749A (ko) Execution control method and multiprocessor system
US20080276045A1 (en) Apparatus and Method for Dynamic Cache Management
JP5776813B2 (ja) Multi-core processor system, and control method and control program for multi-core processor system
Chen et al. Data prefetching and eviction mechanisms of in-memory storage systems based on scheduling for big data processing
Shrivastava et al. Supporting transaction predictability in replicated DRTDBS
JP2005339299A (ja) Cache control method for storage apparatus
WO2017201693A1 (fr) Scheduling method and device for memory access instruction, and computer system
JP2018049394A (ja) Database management device, database management method, and database management program
JP7346649B2 (ja) Synchronization control system and synchronization control method
CN108733409B (zh) Method for executing speculative threads and on-chip multi-core processor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11868477

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2013522397

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11868477

Country of ref document: EP

Kind code of ref document: A1