GB2393282A - A parallel processing arrangement in the form of a loop of processors in which calculations are made to determine clockwise and anticlockwise transfer of load - Google Patents

A parallel processing arrangement in the form of a loop of processors in which calculations are made to determine clockwise and anticlockwise transfer of load

Info

Publication number
GB2393282A
Authority
GB
Grant status
Application
Patent type
Prior art keywords
processing
loop
clockwise
elements
tasks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB0309202A
Other versions
GB0309202D0 (en )
GB2393282B (en )
Inventor
Mark Beaumont
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Micron Europe Ltd
Original Assignee
Micron Europe Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00 - Digital computers in general; Data processing equipment in general
    • G06F15/76 - Architectures of general purpose stored program computers
    • G06F15/80 - Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
    • G06F15/8007 - Single instruction multiple data [SIMD] multiprocessors
    • G06F15/8023 - Two dimensional arrays, e.g. mesh, torus
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00 - Digital computers in general; Data processing equipment in general
    • G06F15/76 - Architectures of general purpose stored program computers
    • G06F15/78 - Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F15/7807 - System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
    • G06F15/7821 - Tightly coupled to memory, e.g. computational memory, smart memory, processor in memory

Abstract

One aspect of the present invention relates to a method for balancing the load of a parallel processing system having a plurality of parallel processing elements arranged in a loop, wherein each processing element has a local number of tasks associated therewith. The method comprises determining within each processing element a total number of tasks present within the loop, calculating a local mean number of tasks within each processing element and a local deviation from the mean, assigning a weight to each of said plurality of processing elements, and calculating a local weighted deviation within each processing element. The method also comprises determining the sum weighted deviations within each processing element for one-half the loop in an anti-clockwise direction and in a clockwise direction, determining clockwise and anti-clockwise transfer parameters within each processing element, and redistributing tasks among the processing elements in response to the clockwise and anti-clockwise transfer parameters.

Description

METHOD FOR USING FILTERING TO LOAD BALANCE A

LOOP OF PARALLEL PROCESSING ELEMENTS

BACKGROUND OF THE INVENTION

[0002] The present invention relates generally to parallel processing and more particularly to balancing the work loads of the processing elements within a parallel processing system.

[0003] Conventional central processing units ("CPUs"), such as those found in most personal computers, execute a single program (or instruction stream) and operate on a single stream of data. For example, the CPU fetches its program and data from a random access memory ("RAM"), manipulates the data in accordance with the program instructions, and writes the results back sequentially. There is a single stream of instructions and a single stream of data (note: a single operation may operate on more than one data item, as in X = Y + A; however, only a single stream of results is produced). Although the CPU may determine the sequence of instructions executed in the program itself, only one operation can be computed at a time. Because conventional CPUs execute a single program (or instruction stream) and operate on a single stream of data, conventional CPUs may be referred to as a single-instruction, single-data CPU or an SISD CPU.

[0004] The speed of conventional CPUs has dramatically increased in recent years. Additionally, the use of cache memories enables conventional CPUs faster access to the desired instruction and data streams. However, because conventional CPUs can only complete one operation at a time, conventional CPUs are not suitable for extremely demanding applications having large data sets (such as moving image processing, high quality speech recognition, and analytical modeling applications, among others).

[0005] Improved performance over conventional SISD CPUs may be achieved by building systems which exhibit parallel processing capability. Typically, parallel processing systems use multiple processing units or processing elements to simultaneously perform one or more tasks on one or more data streams. For example, in one class of parallel processing system, the results of an operation from a first CPU are passed to a second CPU for additional processing, and from the second CPU to another CPU, and so on. Such a system, commonly known as a "pipeline", is referred to as a multiple-instruction, single-data or MISD system because each CPU receives a different instruction stream while operating on a single data stream. Improved performance may also be obtained by using a system which contains many autonomous processors, each running its own program (even if the program running on the processors is the same code) and producing multiple data streams. Systems in this class are referred to as multiple-instruction, multiple-data or MIMD systems.

[0006] Additionally, improved performance may be obtained using a system which has multiple identical processing units each performing the same operations at once on different data streams. The processing units may be under the control of a single sequencer running a single program. Systems in this class are referred to as a single-instruction, multiple-data or SIMD system. When the number of processing units in this type of system is very large (e.g., hundreds or thousands), the system may be referred to as a massively parallel SIMD system.

[0007] Nearly all computer systems now exhibit some aspect of one or more of these types of parallelism. For example, MMX extensions are SIMD; multiple processors (graphics processors, etc.) are MIMD; pipelining (especially in graphics accelerators) is MISD.

Furthermore, techniques such as out of order execution and multiple execution units have been used to introduce parallelism within conventional CPUs as well.

[0008] Parallel processing is also used in active memory applications. An active memory refers to a memory device having a processing resource distributed throughout the memory structure. The processing resource is most often partitioned into many similar processing elements (PEs) and is typically a highly parallel computer system. By distributing the processing resource throughout the memory system, an active memory is able to exploit the very high data bandwidths available inside a memory system. Another advantage of active memory is that data can be processed "on-chip" without the need to transmit the data across a system bus to the CPU or other system resource. Thus, the work load of the CPU may be reduced to operating system tasks, such as scheduling processes and allocating system resources.

[0009] A typical active memory includes a number of interconnected PEs which are capable of simultaneously executing instructions sent from a central sequencer or control unit. The PEs may be connected in a variety of different arrangements depending on the design requirements for the active memory. For example, PEs may be arranged in hypercubes, butterfly networks, one-dimensional strings/loops, and two-dimensional meshes, among others.

[0010] In typical active memories, load imbalances often occur such that some PEs are idle (i.e., without assigned tasks) while other PEs have multiple tasks assigned. To maximize the effectiveness of the active memory, it is desirable to balance the work load across all of the PEs. For example, in an active memory having a multitude of identical PEs, it is desirable that each PE be assigned the same number of instructions by the central sequencer, thus maximizing the resources of the active memory. Additionally, in an active memory having non-identical PEs, it may be desirable to assign more tasks to the PEs with greater processing capabilities. By balancing the load, the amount of time that one or more PEs is idle while waiting for one or more other PEs to complete their assigned tasks is minimized.

[0011] Thus, there exists a need for a method for balancing the load of a parallel processing system such that the resources of the parallel processing system are maximized. More specifically, there exists a need for a method for balancing the load of an active memory such that the resources of the active memory are maximized.

SUMMARY OF THE INVENTION

[0012] One aspect of the present invention relates to a method for balancing the load of a parallel processing system having a plurality of parallel processing elements arranged in a loop, wherein each processing element has a local number of tasks (vr) associated therewith, wherein r represents the number for a selected processing element PEr, and wherein each of the processing elements is operable to communicate with a clockwise adjacent processing element and with an anti-clockwise adjacent processing element. The method comprises determining within each processing element (PEr) a total number of tasks (V) present within the loop, calculating a local mean number of tasks (Mr) within each of the plurality of processing elements (PEr), and calculating a local deviation (Dr) within each of the plurality of processing elements (PEr). The method also comprises determining a sum weighted deviation within each of the processing elements (PEr) for one-half of the loop in an anti-clockwise direction (A), the one-half of the loop being relative to each of the selected processing elements (PEr); determining a sum weighted deviation within each of the processing elements (PEr) for one-half of the loop in a clockwise direction (C), the one-half of the loop being relative to each of the selected processing elements (PEr); determining a clockwise transfer parameter (Tc) and an anti-clockwise transfer parameter (Ta) within each of the processing elements (PEr); and redistributing tasks among the plurality of processing elements in response to the clockwise transfer parameters (Tc) and the anti-clockwise transfer parameters (Ta) within each of the plurality of processing elements (PEr).

[0013] Another aspect of the present invention relates to a method for assigning tasks among a plurality of processing elements within a parallel processing system, the processing elements being connected in a loop and having a local number of tasks (vr) associated therewith. The method comprises determining the total number of tasks on the loop, computing a local mean value for each of the processing elements, assigning a weight to each of said plurality of processing elements, and computing a local weighted deviation for each of the processing elements, the local deviation being representative of the difference between the local number of tasks for a processing element and the local mean value for the processing element.

The method also includes summing the weighted deviation of the processing elements located within one-half of the loop in an anti-clockwise direction, summing the weighted deviation of the processing elements located within one-half of the loop in a clockwise direction, computing a number of tasks to transfer in a clockwise direction, computing a number of tasks to transfer in an anti-clockwise direction, and redistributing tasks relative to the number of tasks to transfer in a clockwise direction and the number of tasks to transfer in an anti-clockwise direction.

[0014] The present invention enables tasks to be distributed along a group of serially connected PEs so that each PE typically has X tasks or (X+1) tasks to perform in the next phase. The present invention may be performed using the hardware and software (i.e., the local processing capability) of each PE within the array. Those advantages and benefits, and others, will become apparent from the description of the invention below.

BRIEF DESCRIPTION OF THE DRAWINGS

[0015] To enable the present invention to be easily understood and readily practiced, the present invention will now be described, for purposes of illustration and not limitation, in connection with the following figures, wherein:

[0016] FIG. 1 is a block diagram illustrating an active memory according to an embodiment of the present invention.

[0017] FIG. 2 is a block diagram of a processing element for the active memory as illustrated in FIG. 1 according to an embodiment of the present invention.

[0018] FIG. 3 illustrates an array of the processing elements as illustrated in FIG. 2 arranged in a loop according to an embodiment of the present invention.

[0019] FIG. 4 illustrates an operational process for balancing the load within a loop of processing elements according to various embodiments of the present invention.

[0020] FIG. 5 illustrates the determination of the sum weighted deviation in the anti-clockwise half of the loop (A) and the determination of the sum weighted deviation in the clockwise half of the loop (C) for a local PE according to an embodiment of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

[0021] As discussed above, parallel processing systems may be placed within one or more classifications (e.g., MISD, MIMD, SIMD, etc.). For simplicity, the present invention is discussed in the context of a SIMD parallel processing system. More specifically, the present invention is discussed in the context of a SIMD active memory. It should be noted that such discussion is for clarity only and is not intended to limit the scope of the present invention in any way. The present invention may be used for other types and classifications of parallel processing systems.

[0022] FIG. 1 is a block diagram illustrating an active memory 10 according to an embodiment of the present invention. It should be noted that the active memory 10 is only one example of a device on which the methods of the present invention may be practiced, and those of ordinary skill in the art will recognize that the block diagram of FIG. 1 is an overview of an active memory device 10 with a number of components known in the art being omitted for purposes of clarity.

[0023] Active memory 10 is intended to be one component in a computer system.

Processing within active memory 10 is initiated when the active memory 10 receives commands from a host processor (not shown), such as the computer system's CPU. A complete processing operation (i.e., data movement and processing) in the active memory 10 may consist of a sequence of many commands from the host to the active memory device 10.

[0024] Active memory 10 is comprised of a host memory interface ("HMI") 12, a bus interface 14, a clock generator 16, a task dispatch unit ("TDU") 18, a DRAM control unit ("DCU") 20, a DRAM module 22, a programmable SRAM 24, an array control sequencer 26, and a processing element array 28, among others.

[0025] The HMI 12 provides an input/output channel between the host (such as a CPU, not shown) and the DRAM module 22. In the current embodiment, the HMI 12 receives command (cmd), address (addr), and data signals (among others) from, and sends data and ready (rdy) signals (among others) to, the host. The HMI 12 approximates the operation of a standard non-active memory so that the host, without modifications, is compatible with the active memory 10.

[0026] The HMI 12 may be similar in its operation to the interface of a synchronous DRAM as is known in the art. Accordingly, the host must first activate a page of data to access data within a DRAM module 22. In the current embodiment, each page may contain 1024 bytes of data and there may be 16,384 pages in all. Once a page has been activated, it can be written and read through the HMI 12. The data in the DRAM module 22 may be updated when the page is deactivated. The HMI 12 also sends control signals (among others) to the DCU 20 and to the processing element array 28 via the task dispatch unit 18.

[0027] The HMI 12 may operate at a frequency different than that of the frequency of the master clock. For example, a 2x internal clock signal from clock generator 16 may be used.

Unlike a traditional DRAM, the HMI 12 uses a variable number of cycles to complete an internal operation, such as an activate or deactivate. Thus, the ready signal (rdy) is provided to allow the host to detect when a specific command has been completed.

[0028] The bus interface 14 provides an input/output channel between the host and the TDU 18. For example, the bus interface 14 receives column select (cs), write command (w), read command (r), address (addr), and data signals (among others) from, and places interrupt (intr), flag, and data signals (among others) onto, the system bus (not shown). The bus interface 14 also receives signals from and sends signals to TDU 18.

[0029] The clock generator 16 is operable to receive an external master clock signal (x1) and operable to provide the master clock signal (x1) and one or more internal clock signals (x2, x4, x8) to the components of the active memory. It should be apparent to one skilled in the art that other internal clock signals may be produced by the clock generator 16.

[0030] The TDU 18 communicates with the bus interface 14, the HMI 12, the programmable SRAM 24, the array control sequencer 26, and the DCU 20. In the current embodiment, the TDU 18 functions as an interface to allow the host to issue a sequence of commands to the array control sequencer 26 and the DCU 20. Task commands from the host may be buffered in the TDU's FIFO buffers to allow a burst command to be issued. Commands may contain information on how the tasks in the array control sequencer 26 and the DCU 20 should be synchronized with one another, among others.

[0031] The DCU 20 arbitrates between the TDU 18 and the HMI 12 and sends commands to the DRAM modules 22 and the processing element array 28. The DCU 20 also schedules refreshes within the DRAM modules 22. In one embodiment, the DRAM modules 22 of the active memory 10 may be comprised of sixteen 64k x 128 eDRAM (or embedded DRAM) cores. Each eDRAM core may be connected to an array of sixteen PEs, thus providing 256 (16 x 16) PEs in all.

[0032] The programmable SRAM 24 functions as a program memory by storing commands issued by the TDU 18. For example, the TDU 18 may transmit a "write program memory address" command which sets up a start address for a write operation and a "write program memory data" command which writes a memory location and increments the program memory write address, among others. The programmable SRAM 24, in the current embodiment, has both an address register and a data output register.

[0033] The array control sequencer 26 may be comprised of a simple 16 bit minimal instruction set computer (16-MISC). The array control sequencer 26 communicates with the TDU 18, the programmable SRAM 24, and the DCU 20, and is operable to generate register file addresses for the processing element array 28 and operable to sequence the array commands, among others.

[0034] The processing element array 28 is comprised of a multitude of processing elements ("PEs") 30 (see FIG. 2) connected in a variety of different arrangements depending on the design requirements for the processing system. For example, processing units may be arranged in hypercubes, butterfly networks, one-dimensional strings/loops, and two-dimensional meshes, among others. In the current embodiment, the processing elements 30 are arranged in a loop (for example, see FIG. 3). The processing element array 28 communicates with the DRAM module 22 and executes commands received from the programmable SRAM 24, the array control sequencer 26, the DCU 20, and the HMI 12. Each PE in the processing element array 28 includes dedicated H-registers for communication with the HMI 12. Control of the H-registers is shared by the HMI 12 and the DCU 20.

[0035] Referring now to FIG. 2, a block diagram of a PE 30 according to one embodiment of the present invention is illustrated. PE 30 includes an arithmetic logic unit ("ALU") 32, Q-registers 34, M-registers 36, a shift control and condition register 38 (also called "condition logic" 38), a result register pipeline 40, and a register file 42. The PE 30 may also contain other components such as multiplexers 46 and logic gates (not shown), among others.

[0036] In the current embodiment, the Q-registers 34 are operable to merge data into a floating point format and the M-registers 36 are operable to de-merge data from a floating point format into a single magnitude plus an exponent format. The ALU 32 is a multiplier-adder operable (among others) to receive information from the Q-registers 34 and M-registers 36, execute tasks assigned by the TDU 18 (see FIG. 1), and transmit results to the shift control and condition logic 38 and to the result register pipeline 40. The result register pipeline 40 is operable to communicate with the register file 42, which holds data for transfer into or out of the DRAM modules 22 via a DRAM interface 44. Data is transferred between the PE and the DRAM module 22 via a pair of registers, one register being responsive to the DCU 20 and the other register being responsive to the PE 30. The DRAM interface 44 receives command information from the DCU 20. The DRAM interface 44 also permits the PE 30 to communicate with the host through the host memory access port 46.

[0037] In the current embodiment, the H-registers 42 are comprised of synchronous SRAM and each processing element within the processing element array 28 contains eight H-registers 42 so that two pages can be stored from different DRAM locations, thus allowing the interleaving of short i/o bursts to be more efficient. Result register pipeline 40 is also connected to one or more neighborhood connection registers ("X-register") (not shown). The X-register links one PE 30 to its neighboring PEs 30 in the processing element array 28.

[0038] The reader desiring more information about the hardware shown in FIGs. 1 and 2 is directed to UK Patent Application (serial no. not yet assigned) entitled "Control of Processing Elements in Parallel Processors" filed 17 September 2002, (Micron no. 02-1604), which is hereby incorporated by reference. Details about the PEs may also be found in UK Patent Application No. 0221562.2 entitled "Host Memory Interface for a Parallel Processor" filed 17 September 2002, (Micron no. 02-0703), which is hereby incorporated by reference.

[0039] FIG. 3 is a simplified diagram showing the interconnections of an array of the processing elements 30 (as illustrated in FIG. 2) arranged in a loop 50 according to an embodiment of the present invention. In the current embodiment, loop 50 is comprised of eight (8) PEs 30 (i.e., PE0, PE1, ..., PE7) which are interconnected via their associated X-register links. It should be noted that the number of PEs 30 included in loop 50 may be altered while remaining within the scope of the present invention. As illustrated in FIG. 3, each PE is operable to communicate with its clockwise and anti-clockwise neighbor. For example, PE1 is operable to communicate with its clockwise neighbor, PE2, and with its anti-clockwise neighbor, PE0. In the current embodiment, every PE 30 on the loop 50 receives instructions from a single TDU 18 as discussed in conjunction with FIG. 1. Furthermore, each PE has a local number of tasks (vr) associated therewith. For example, PE0 has three (3) tasks associated therewith (i.e., v0 = 3), PE1 has six (6) tasks associated therewith (i.e., v1 = 6), PE2 has two (2) tasks associated therewith (i.e., v2 = 2), etc.

[0040] FIG. 4 illustrates an operational process 60 for balancing the work loads between the PEs 30 on loop 50 according to an embodiment of the present invention. Operational process 60 begins by determining the total number of tasks (V) present on the loop in operation 61.

As discussed above in conjunction with FIG. 3, each PEr (where r = 0 to 7, e.g., PE0, PE1, ..., PE7) in the loop has a local number of tasks (vr) associated therewith. In the current embodiment, each PEr passes its own value vr onto its clockwise neighbor and simultaneously receives a value from its anti-clockwise neighbor. Each PEr keeps a running partial sum (i.e., adds each value received to its own value vr). This process continues until each value vr has moved clockwise around the loop and visited each PEr; in this case seven transfers are needed. At the end of the rotation process, the sum represents the total number of tasks (V) on the loop. As illustrated in FIG. 3, loop 50 has forty-three (43) total tasks associated therewith.
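The rotation in operation 61 can be pictured with a short sketch. This is an illustrative model only (plain Python rather than the PE hardware), using the task counts of FIG. 3 (3, 6, 2, 7, 8, 5, 5, 7 for PE0 through PE7):

```python
# Illustrative sketch of the rotation in operation 61 (not the PE hardware itself):
# every PE forwards the value it currently holds to its clockwise neighbour, receives
# a value from its anti-clockwise neighbour, and adds it to a running partial sum.
v = [3, 6, 2, 7, 8, 5, 5, 7]    # local task counts v_r for PE0..PE7 from FIG. 3
N = len(v)

totals = list(v)                # each PE starts its partial sum with its own v_r
held = list(v)                  # value each PE is currently holding for forwarding

for _ in range(N - 1):          # seven transfers for an eight-element loop
    held = [held[(r - 1) % N] for r in range(N)]        # shift one place clockwise
    totals = [totals[r] + held[r] for r in range(N)]    # accumulate what was received

print(totals)                   # every PE ends up with the same total, V = 43
```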

[0041] The sum (V) can be expressed by the equation V = Σ_{i=0}^{N-1} v_i, where N represents the number of PEs 30 in the loop 50 (here N = 8), and v_i represents the local number of tasks associated with the i-th processing element in the loop. For example, for i = 3, the number of tasks associated with PE3 (i.e., v3) is added to the sum V. It should be noted that after a rotation is completed, each PEr will have calculated the same value for (V). It should also be noted that in the current discussion, "local" refers to the values or functions associated with a single PE within the loop, whereas "global" refers to the values or functions associated with the entire loop of PEs.

[0042] After the total number of tasks (V) present on the loop is determined in operation 61, the local mean number (Mr) of tasks for each PEr is computed in operation 62. In the current embodiment, operation 62 employs a rounding function to ensure that no tasks are lost or "gained" during the rounding process (i.e., to ensure that V = Σ_{i=0}^{N-1} M_i).

[0043] For example, assume that 13 tasks (i.e., V = 13) are to be shared by the eight PEs (i.e., PE0 through PE7). Without the rounding function, the local mean for each PE would be 1.625 before rounding (i.e., 13 / 8 = 1.625). If the fraction thirteen-eighths is set to round down for each PE (i.e., to 1), then the sum of the means for all of the individual PEs (i.e., PE0 through PE7) is equal to eight (8) and five (13 - 8 = 5) tasks are lost. In contrast, if the fraction thirteen-eighths is set to round up for each PE (i.e., to 2), then the sum of the means for all of the individual PEs (i.e., PE0 through PE7) is equal to sixteen (16) and three (16 - 13 = 3) extra tasks are gained. The rounding function is discussed in more detail in U.S. Patent Application Serial No. entitled "Method for Rounding Values for a Plurality of Parallel Processing Elements" filed (DI3001064-000, Micron no. 02-1269) and incorporated in its entirety by reference herein.

[0044] The rounding function Mr = Trunc((V + Er) / N) prevents tasks from being lost or gained (where Mr represents the local mean for PEr, N represents the total number of PEs 30 in the loop 50, and Er represents a number in the range of 0 to (N-1)). In the current embodiment, each PE is assigned a different Er value for controlling the rounding. The simplest form for the function E is the case in which Er = r, where r represents the PE's position in the loop. For example, for PE0, E0 = 0; for PE1, E1 = 1; for PE2, E2 = 2; etc. By assigning each PE in the loop a different Er value, the rounding function can be controlled such that some of the local means are rounded up and some of the local means are rounded down, thus ensuring that V = Σ_{i=0}^{N-1} M_i. It should be noted that in the current embodiment, the local mean for each PE 30 in the loop is computed in parallel with the local means of the other PEs in the loop.
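As a quick check of the rounding function, the following sketch (an illustration only, with Er = r as in the text) confirms that the truncated local means always sum back to V:

```python
import math

# Sketch of the rounding function of paragraph [0044]: M_r = Trunc((V + E_r) / N),
# with E_r = r so that every PE rounds slightly differently and sum(M_r) == V.
def local_means(V, N):
    return [math.trunc((V + r) / N) for r in range(N)]

print(local_means(13, 8))            # [1, 1, 1, 2, 2, 2, 2, 2] -> sums to 13, nothing lost
print(local_means(43, 8))            # [5, 5, 5, 5, 5, 6, 6, 6] -> matches Table #1 below
assert sum(local_means(43, 8)) == 43
```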

[0045] Table #1 illustrates the local mean calculation for the loop 50 as illustrated in FIG. 3, in which the total number of tasks on the loop is equal to forty-three (43). Referring to Table #1, it is apparent that the rounding function controls the rounding such that M0 through M4 are all rounded to five (5), whereas M5 through M7 are all rounded to six (6). The sum of the values of M0 through M7 is equal to forty-three (43), which equals the total number of tasks (V) on the loop. Thus, tasks are neither lost nor gained due to rounding.

PEr    vr   Er   (V + Er)/N   Mr = Trunc((V + Er)/N)   Dr
PE0    3    0    5.375        5                        -2
PE1    6    1    5.5          5                         1
PE2    2    2    5.625        5                        -3
PE3    7    3    5.75         5                         2
PE4    8    4    5.875        5                         3
PE5    5    5    6            6                        -1
PE6    5    6    6.125        6                        -1
PE7    7    7    6.25         6                         1

Table #1 -- Local Mean Calculation for the Loop 50 (V = 43, N = 8).

[0046] After the local means are computed in operation 62, the local deviation Dr is calculated for each PE in operation 63. In the current embodiment, the local deviation is simply the difference between the local number of tasks and the local mean (i.e., Dr = vr - Mr). The local deviations for PE0 through PE7 are illustrated in Table #1.

[0047] After the local deviations are computed in operation 63, the sum weighted deviation in the anti-clockwise half of the loop (A) is determined for each PE in operation 64. The anti-clockwise sum (A) is then formed in a similar manner as that used to form the partial value sum (V) in operation 61. In operation 64, however, a weighting factor (wr) is assigned to each PE and the local weighted deviations (wrDr) are then rotated halfway around the loop in a clockwise direction and summed. In the current embodiment, greater weight is given to those PEs that are located closer to the selected PE (i.e., PEs that are closer to the selected PE have a greater weighting factor (wr)). For example, if PE2 is the selected element, then weighting factors are assigned to PE1, PE0, and PE7 such that w1 > w0 > w7. The sum weighted deviation in the anti-clockwise half of the loop can be represented by the equation: A = Σ_{i=1}^{(N/2)-1} w_i D_i.

[0048] After the sum weighted deviation in the anti-clockwise half of the loop (A) is determined in operation 64, the sum weighted deviation in the clockwise half of the loop (C) is determined for each PE in operation 65. The clockwise sum (C) is formed in a similar manner as that used to determine the anti-clockwise sum (A) in operation 64. In operation 65, however, the local weighted deviations (wrDr) are rotated halfway around the loop in an anti-clockwise direction and summed. As discussed in conjunction with operation 64, greater weight is given to those PEs that are located closer to the selected PE (i.e., PEs that are closer to the selected PE have a greater weighting factor (wr)). Again, if PE2 is the selected element, then weighting factors are assigned to PE3, PE4, and PE5 such that w3 > w4 > w5. The sum weighted deviation in the clockwise half of the loop can be represented by the equation: C = Σ_{i=(N/2)+1}^{N-1} w_i D_i.

[0049] FIG. 5 illustrates how the sum weighted deviation in the anti-clockwise half of the loop (A) and the sum weighted deviation in the clockwise half of the loop (C) are determined for PE2. As seen in FIG. 5, the sum weighted deviation in the clockwise half of the loop (C) is determined by combining PE3, PE4, and PE5 into a "super PE". The sum weighted deviation of this super PE is C = sum(w3D3 + w4D4 + w5D5). Likewise, the sum weighted deviation in the anti-clockwise half of the loop (A) is determined by combining PE1, PE0, and PE7 into another "super PE". The sum weighted deviation of this super PE is A = sum(w1D1 + w0D0 + w7D7). It should be noted that in the current embodiment no weight is given to PE6.

[0050] Referring to Table #1, the sum weighted deviation in the clockwise half of the loop (C) using this super PE is w3D3 + w4D4 + w5D5 = w3(2) + w4(3) + w5(-1). If weighting factors are assigned to PE3, PE4, and PE5 as discussed above, for example in the current embodiment w3 = 3, w4 = 2, and w5 = 1, then C = 3(2) + 2(3) + 1(-1) = 11. Likewise, the sum weighted deviation in the anti-clockwise half of the loop (A) using the other super PE is w1D1 + w0D0 + w7D7 = w1(1) + w0(-2) + w7(1). Again, if weighting factors are assigned to PE1, PE0, and PE7 as discussed above, for example in the current embodiment w1 = 3, w0 = 2, and w7 = 1, then A = 3(1) + 2(-2) + 1(1) = 0.
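The super-PE sums for PE2 can be reproduced with a small sketch. This is illustrative only; the weights 3, 2, 1 and the deviations are the ones from the example above:

```python
# Sketch of the half-loop "super PE" sums of paragraphs [0047]-[0050] for selected PE2.
D = [-2, 1, -3, 2, 3, -1, -1, 1]   # local deviations D_0..D_7 from Table #1
N, sel = len(D), 2
weights = [3, 2, 1]                 # nearest neighbour weighted most heavily

# clockwise half relative to PE2: PE3, PE4, PE5
C = sum(w * D[(sel + k) % N] for k, w in enumerate(weights, start=1))
# anti-clockwise half relative to PE2: PE1, PE0, PE7
A = sum(w * D[(sel - k) % N] for k, w in enumerate(weights, start=1))

print(C, A)   # 11 and 0, matching paragraph [0050]; PE6 receives no weight
```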

[0051] After the sum weighted deviation in the anti-clockwise half of the loop (A) is determined in operation 64 and the sum weighted deviation in the clockwise half of the loop (C) is determined in operation 65, the clockwise and anti-clockwise transfer parameters (Tc and Ta, respectively) are determined in operation 66. Referring again to FIG. 5 from the perspective of PE2, the loop has four values C, A, O, and S, where C represents both the sum weighted deviation in the clockwise half of the loop and the deviation of the first "super PE", A represents both the sum weighted deviation in the anti-clockwise half of the loop and the deviation of the second "super PE", S represents the deviation of the selected PE (e.g., here PE2), and O represents the deviation of the PE diametrically opposite the selected PE (here, PE6). The selected PE can deduce the deviation value of its opposite PE (O) because all deviations in the loop must sum to zero (i.e., A + C + S + O = 0). It should be noted that A and C are calculated for each PE in parallel.

[0052] It should be noted that in the current embodiment, the weights assigned to each PE are selected such that a linear relationship exists between the weights and each PE's location around the loop. Thus, for example, through the use of an intermediate sum (K), the weighted sum in the anti-clockwise direction (A) can be calculated without using multiplication. Initially, K0 = A0 = D0 and, as each value Di (i = 1 to N - 1) is rotated through the local PE, a calculation for Ki (e.g., Ki = Ki-1 + Di) and a calculation for Ai (e.g., Ai = Ai-1 + Ki) is performed. After 'r' deviations have been rotated, the values of Kr and Ar are given by the following equations: Kr = Σ_{i=0}^{r} D_i and Ar = Σ_{i=0}^{r} K_i.

[0053] It should be noted that the same strategy can be used for evaluating the weighted clockwise sum C. Using the above system of weighting, Tc is determined from the equation Tc = (S/4) + Δ and Ta is determined from the equation Ta = (S/4) - Δ, where Δ = (C - A)/4N. In the current embodiment, Δ = (11 - 0)/32 = 0.34375 and thus, Tc = (-3/4) + 0.34375 = -0.40625, and Ta = (-3/4) - 0.34375 = -1.09375.
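The multiplication-free accumulation and the transfer parameters can be sketched as follows. This is an illustration only; the order in which deviations are fed in (nearest neighbour first) and the operand order inside delta are assumptions chosen so that the figures of the worked example above are reproduced:

```python
# Sketch of paragraphs [0052]-[0053] for selected PE2 (S = D2 = -3, N = 8).
D = [-2, 1, -3, 2, 3, -1, -1, 1]
N, sel = 8, 2
S = D[sel]

def weighted_half_sum(received):
    # Adding the running sum K at every step weights the first-received deviation
    # most heavily (weights 3, 2, 1 for a half-loop of three PEs), with no multiplies.
    K = acc = 0
    for d in received:
        K += d
        acc += K
    return acc

A = weighted_half_sum([D[(sel - k) % N] for k in range(1, N // 2)])  # D1, D0, D7 -> 0
C = weighted_half_sum([D[(sel + k) % N] for k in range(1, N // 2)])  # D3, D4, D5 -> 11

delta = (C - A) / (4 * N)   # 0.34375
Tc = S / 4 + delta          # -0.40625, clockwise transfer parameter before rounding
Ta = S / 4 - delta          # -1.09375, anti-clockwise transfer parameter before rounding
print(A, C, Tc, Ta)
```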

[0054] It should be noted that the values obtained for Tc and Ta may need to be rounded in such a manner that R(Tc) + R(Ta) = Dr. In the current embodiment, tasks are transmitted in only one direction at a time around the loop (i.e., either in the clockwise or anti-clockwise direction). A direction is selected for the 'first' transmission around the loop and the values for Tc and Ta are rounded up in this direction. It should be noted that by ensuring 'excess traffic' is sent in the 'first' direction, the chance of the process finishing one step earlier is increased. In the current embodiment, tasks are transmitted in the anti-clockwise direction first, such that R(Ta) = Ceil(Ta), where the 'Ceil' function returns the closest integer greater than or equal to the supplied input. To ensure that extra tasks are not created or lost by the rounding of R(Ta), R(Tc) is set equal to Dr - R(Ta).

[0055] Accordingly, in the example above, tasks are transmitted anti-clockwise first such that Ta = -1.09375 is rounded up to -1. To ensure that extra tasks are not created or lost by the rounding of R(Ta), R(Tc) is set equal to Dr - R(Ta). Thus, Tc is equal to -2 [i.e., -3 - (-1) = -2]. It should be noted that other rounding mechanisms may be used while remaining within the scope of the present invention. For example, Tc may be rounded up on odd numbered PEs and Ta rounded up on even numbered PEs such that pairs of odd and even PEs exchange their 'excess traffic'.
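A minimal sketch of this rounding rule (illustrative only, using Python's math.ceil for the 'Ceil' function):

```python
import math

# Paragraphs [0054]-[0055]: round the anti-clockwise parameter up and let the
# clockwise parameter absorb the remainder, so R(Tc) + R(Ta) equals the local deviation.
def round_transfers(Ta, Dr):
    R_Ta = math.ceil(Ta)    # closest integer greater than or equal to Ta
    R_Tc = Dr - R_Ta        # no tasks created or lost by the rounding
    return R_Tc, R_Ta

print(round_transfers(-1.09375, -3))   # (-2, -1), as in the PE2 example
```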

[0056] In the case where the loop 50 is comprised of an odd number of PEs 30, an extra "phantom" PE may be used. The phantom PE is assigned a deviation of zero and is located diametrically opposite from the perspective of the selected PE (i.e., the PE for which the local deviation is being determined). For example, assume that loop 50 only has seven PEs (i.e., PE0 to PE6). To calculate the local deviation of PE0, the phantom PE would be placed between PE3 and PE4; for PE1, between PE4 and PE5; for PE2, between PE5 and PE6; etc. Thus, the number of PEs between the selected PE and the phantom PE in the clockwise direction is equal to the number of PEs between the selected PE and the phantom PE in the anti-clockwise direction.
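A small sketch of the phantom-PE placement follows; the index arithmetic and the deviation values used here are illustrative assumptions, not taken from the patent:

```python
# Paragraph [0056]: for an odd-sized loop, insert a zero-deviation phantom element
# diametrically opposite the selected PE so that both half-loops have equal length.
def insert_phantom(D, sel):
    N = len(D)                        # odd number of real PEs
    pos = (sel + (N + 1) // 2) % N    # phantom sits just before this loop index
    return D[:pos] + [0] + D[pos:]

deviations = [2, -1, 0, 3, -2, 1, -3]    # hypothetical deviations for PE0..PE6
print(insert_phantom(deviations, 0))     # phantom lands between PE3 and PE4
```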

[0057] In some instances, it may be desirable to clamp the transfer rates to reduce the number of iterations needed to balance the loop. In one embodiment, a non-linear clamping operation is utilized. For example, the equations for Tc and Ta may be re-written as Tc = Trunc[(2S + Δ)/4] and Ta = Trunc[(2S - Δ)/4], respectively, where Δ = (A - C) represents the number of 'thru' tasks (i.e., the number of tasks passing through the current PE). If Tc and Ta are of opposite sign, then the number of 'thru' values may be reduced by clamping either Tc or Ta to zero. The remaining value (i.e., Ta or Tc, respectively) may then be found using the identity S = Ta + Tc. This ensures that any rounding error introduced by the Trunc function is correctly compensated for such that S is finally equal to zero.

[0058] In the current embodiment, for example, the transfer parameter with the smallest absolute magnitude may be selected. The desired result can be achieved by applying the following non-linear modifications to Δ, where Mag = abs(2S): if Δ > Mag, then set Δ equal to Mag, and if Δ < -Mag, then set Δ equal to -Mag. The revised value for Δ (i.e., Δ = Mag or Δ = -Mag) is then substituted into the equations Tc = Trunc[(2S + Δ)/4] and Ta = Trunc[(2S - Δ)/4]. It should be noted that other clamping operations may be used while remaining within the scope of the present invention.

[0059] After the clockwise and anti-clockwise transfer parameters are determined in operation 66, the tasks are redistributed among the PEs in response to the clockwise and anti-clockwise transfer parameters (i.e., Tc and Ta, respectively) in operation 67. In the current embodiment, a positive Tc parameter represents the number of values that are to be transmitted clockwise out of the local PE. A negative Tc parameter represents the number of values that are to be transmitted from the clockwise PE into the local PE. Similarly, a positive Ta parameter represents the number of values that are to be transmitted anti-clockwise out of the local PE. A negative Ta parameter represents the number of values that are to be transmitted from the anti-clockwise PE into the local PE.

[0060] If the local deviation (Dr) is negative, one or more of the received values will be "absorbed" by the local PE to make up the local deficit. The other values will be transmitted onward, either from the clockwise PE to the anti-clockwise PE, or from the anti-clockwise PE to the clockwise PE. On occasion, some PEs may start off with no values at all; these PEs may have to "mark time" until they receive a value. It should be noted that after each successful transmission or receipt, the local parameters Tc and Ta need to be updated. The redistribution stage only terminates when Tc = Ta = 0 for all PEs.

[0061] As discussed above, the clockwise transfer parameter for PE2 in the current embodiment is Tc = -2. Because Tc is negative, 2 tasks are to be transmitted from PE3 into PE2. Likewise, the anti-clockwise transfer parameter for PE2 in the current embodiment is Ta = -1. Because Ta is negative, one task is to be transmitted from PE1 into PE2. It should be apparent that PE2 had a deviation of -3 (i.e., D2 = -3). Thus, three tasks were transferred into PE2 in operation 67.

[0062] It should be recognized that the above-described embodiments of the invention are intended to be illustrative only. Numerous alternative embodiments may be devised by those skilled in the art without departing from the scope of the following claims.


Claims (21)

What is claimed is:
1. A method for balancing the load of a parallel processing system having a plurality of parallel processing elements arranged in a loop, wherein each processing element has a local number of tasks associated therewith, wherein r represents the number for a selected processing element PEr, and wherein each of said processing elements is operable to communicate with a clockwise adjacent processing element and with an anti-clockwise adjacent processing element, the method comprising: determining within each of said processing elements a total number of tasks present within said loop; calculating a local mean number of tasks within each of said plurality of processing elements; calculating a local deviation within each of said plurality of processing elements; determining a sum weighted deviation within each of said processing elements for one-half of said loop in an anti-clockwise direction, said one-half of said loop being relative to each of said selected processing elements; determining a sum weighted deviation within each of said processing elements in one-half of said loop in a clockwise direction, said one-half of said loop being relative to each of said selected processing elements; determining a clockwise transfer parameter and an anti-clockwise transfer parameter within each of said processing elements; and redistributing tasks among said plurality of processing elements in response to said clockwise transfer parameters and said anti-clockwise transfer parameters within each of said plurality of processing elements.
2. The method of claim 1 wherein said determining within each of said processing elements a total number of tasks present within said loop comprises: transmitting said local number of tasks associated with each of said processing elements to each other of said plurality of processing elements within said loop; receiving within each of said processing elements said number of local tasks associated with said each other of said plurality of processing elements; and summing said number of local tasks associated with each of said processing elements with said number of local tasks associated with each other of said plurality of processing elements.
3. The method of claim 1 wherein said determining said total number of tasks present within said loop includes solving the equation V = Σ_{i=-N}^{N-1} v_i, where V represents said total number of tasks, 2N represents the number of processing elements in said loop, and v_i represents said local number of tasks associated with an i-th processing element in said loop.
4. The method of claim 1 wherein said calculating a local mean number of tasks within each of said plurality of processing elements (PEr) includes solving the equation Mr = Trunc((V + Er) / 2N), where Mr is said local mean for PEr, where 2N is the total number of processing elements in said loop, and where Er is a number in the range of 0 to (2N-1), and wherein each processing element has a different Er value.
5. The method of claim 4 wherein Er controls said Trunc function such that said total number of tasks (V) for said loop is equal to the sum of the local mean number of tasks (Mr) for each of said plurality of processing elements in said loop (i.e., V = Σ_{i=-N}^{N-1} M_i).
6. The method of claim 4 wherein said local mean Mr = Trunc((V + Er) / N) for each local PEr within said loop is equal to one of X and (X+1).
7. The method of claim 1 wherein said calculating a local deviation within each of said plurality of processing elements comprises finding the difference between said local number of tasks and said local mean number for each of said plurality of processing elements.
8. The method of claim 1 wherein said determining a sum weighted deviation within each of said processing elements for one-half of said loop in an anti-clockwise direction comprises: assigning a weight to each other of said plurality of processing elements within said loop; transmitting said local deviation and said weight associated with each of said processing elements half way around said loop in an anti-clockwise direction, said one-half of said loop being relative to each of said selected processing elements;
receiving said local deviation and said weight associated with each other of said processing elements half way around said loop in a clockwise direction, said one-half of said loop being relative to each of said selected processing elements; and summing the product of said local deviation and said weight associated with each other of said processing elements half way around said loop in a clockwise direction.
9. The method of claim 1 wherein said determining a sum weighted deviation within each of said processing elements in one-half of said loop in a clockwise direction comprises: assigning a weight to each other of said plurality of processing elements within said loop; transmitting said local deviation and said weight associated with each of said processing elements half way around said loop in a clockwise direction, said one-half of said loop being relative to each of said selected processing elements; receiving said local deviation and said weight associated with each other of said processing elements half way around said loop in an anti-clockwise direction, said one-half of said loop being relative to each of said selected processing elements; and summing the product of said local deviation and said weight associated with each other of said processing elements half way around said loop in an anti-clockwise direction.
10. The method of claim 1 wherein said determining a clockwise transfer parameter and an anti-clockwise transfer parameter within each of said processing elements comprises: setting Ta = (S/4) - Δ; and setting Tc = (S/4) + Δ, where Tc represents said clockwise transfer parameter, Ta represents said anti-clockwise transfer parameter, Δ = (A - C)/4N, A represents the sum weighted deviation within each of said processing elements in one-half of said loop in an anti-clockwise direction, C represents the sum weighted deviation within each of said processing elements in one-half of said loop in a clockwise direction, and N represents the number of PEs on the loop.
11. The method of claim 1 wherein said determining a clockwise transfer parameter and an anti-clockwise transfer parameter within each of said processing elements comprises at least one of: setting Tc = Trunc[(2S + Δ)/4] and Ta = S - Tc; and setting Ta = Trunc[(2S - Δ)/4] and Tc = S - Ta; where Tc represents said clockwise transfer parameter, where Ta represents said anti-clockwise transfer parameter, where Δ = Mag if Δ > Mag, where Δ = -Mag if Δ < -Mag, where Mag = abs(2S), and where S represents the local deviation of a selected processing element.
12. A method for reassigning tasks among an odd numbered plurality of processing elements within a parallel processing system, said processing elements being connected in a loop and each having a local number of tasks associated therewith, the method comprising: determining a total number of tasks on said loop; computing a local mean value for a selected processing element; computing a local deviation for said selected processing element, said local deviation representative of the difference between said local number of tasks for said selected processing element and said local mean value for said selected processing element; inserting a phantom processing element within said loop; assigning a weight to each of said plurality of processing elements; summing a weighted deviation of said processing elements located within one-half of the loop in an anti-clockwise direction relative to said selected processing element; summing said weighted deviation of said processing elements located within one-half of the loop in a clockwise direction relative to said selected processing element; computing a number of tasks to transfer in a clockwise direction for said selected processing element; computing a number of tasks to transfer in an anti-clockwise direction for said selected processing element; and reassigning tasks relative to said number of tasks to transfer in a clockwise direction and said number of tasks to transfer in an anti-clockwise direction.
13. The method of claim 12 wherein said determining the total number of tasks on said loop comprises: transmitting said local number of tasks associated with each of said processing elements to each other of said plurality of processing elements within said loop; receiving within each of said processing elements said number of local tasks associated with said each other of said plurality of processing elements; and
summing said number of local tasks associated with each of said processing elements with said number of local tasks associated with each other of said plurality of processing elements.
14. The method of claim 12 wherein computing a local mean value for a selected processing element includes solving the equation Mr = Trunc((V + Er) / 2N), where Mr is said local mean for a selected PEr, 2N is the total number of processing elements in said loop, and Er is a number in the range of 0 to (2N-1), and wherein each processing element has a different Er value.
15. The method of claim 14 wherein Er controls said Trunc function such that said total number of tasks (V) for said loop is equal to the sum of the local mean number of tasks (Mr) for each of said plurality of processing elements in said loop (i.e., V = Σ_{i=0}^{2N-1} M_i).
16. The method of claim 12 wherein said inserting a phantom processing element within said loop further comprises: locating said phantom processing element in a position within said loop that is diametrically opposed to said selected processing element; and assigning a zero deviation value to said phantom processing element.
17. The method of claim 12 wherein said assigning a weight to each of said plurality of processing elements includes assigning a weight dependent upon each of said processing elements' location relative to said selected processing element.
18. The method of claim 12 wherein said computing a local mean value for a selected processing element, said computing a local deviation for said selected processing element, said inserting a phantom processing element within said loop, said assigning a weight to each of said plurality of processing elements, said summing said weighted deviation of said processing elements located within one-half of the loop in an anti-clockwise direction, summing said weighted deviation of said processing elements located within one-half of the loop in a clockwise direction, computing a number of tasks to transfer in a clockwise direction for said selected processing element, computing a number of tasks to transfer in an anti-clockwise direction for said selected processing element, and reassigning tasks relative to said number of tasks to transfer in a clockwise direction and said number of tasks to transfer in an anti-clockwise direction are completed simultaneously for each of said plurality of processing elements within said loop.
19. The method of claim 12 wherein said summing said weighted deviation of said processing elements located within one-half of the loop in an anti-clockwise direction relative to said selected processing element comprises: transmitting said local weighted deviation associated with each of said processing elements half way around said loop in an anti-clockwise direction, said one-half of said loop being relative to each of said selected processing elements; receiving said local weighted deviation associated with each other of said processing elements half way around said loop in a clockwise direction, said one-half of said loop being relative to each of said selected processing elements; and summing said local weighted deviations associated with each other of said processing elements half way around said loop in a clockwise direction.
20. The method of claim 12 wherein summing said weighted deviation of said processing elements located within one-half of the loop in a clockwise direction relative to said selected processing element comprises: transmitting said local weighted deviation associated with each of said processing elements half way around said loop in a clockwise direction, said one-half of said loop being relative to each of said selected processing elements; receiving said local weighted deviation associated with each other of said processing elements half way around said loop in an anti-clockwise direction, said one-half of said loop being relative to each of said selected processing elements; and summing said local weighted deviations associated with each other of said processing elements half way around said loop in an anti-clockwise direction.
21. A memory device carrying a set of instructions which, when executed, perform a method comprising: determining within each of said processing elements a total number of tasks present within said loop; calculating a local mean number of tasks within each of said plurality of processing elements; calculating a local deviation within each of said plurality of processing elements;
determining a sum weighted deviation within each of said processing elements for one-half of said loop in an anti-clockwise direction, said one-half of said loop being relative to each of said selected processing elements; determining a sum weighted deviation within each of said processing elements in one-half of said loop in a clockwise direction, said one-half of said loop being relative to each of said selected processing elements; determining a clockwise transfer parameter and an anti-clockwise transfer parameter within each of said processing elements; and redistributing tasks among said plurality of processing elements in response to said clockwise transfer parameters and said anti-clockwise transfer parameters within each of said plurality of processing elements.
GB0309202A 2002-09-17 2003-04-23 Method for using filtering to load balance a loop of parallel processing elements Expired - Fee Related GB2393282B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB0221562A GB0221562D0 (en) 2002-09-17 2002-09-17 Host memory interface for a parallel processor
GB0221563A GB2395299B (en) 2002-09-17 2002-09-17 Control of processing elements in parallel processors

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10689355 US7448038B2 (en) 2003-04-23 2003-10-20 Method for using filtering to load balance a loop of parallel processing elements

Publications (3)

Publication Number Publication Date
GB0309202D0 GB0309202D0 (en) 2003-05-28
GB2393282A true true GB2393282A (en) 2004-03-24
GB2393282B GB2393282B (en) 2005-09-14

Family

ID=26247117

Family Applications (12)

Application Number Title Priority Date Filing Date
GB0309198A Expired - Fee Related GB2393279B (en) 2002-09-17 2003-04-23 Method for manipulating data in a group of processing elements
GB0309200A Expired - Fee Related GB2393281B (en) 2002-09-17 2003-04-23 Method for rounding values for a plurality of parallel processing elements
GB0309214A Expired - Fee Related GB2393290B (en) 2002-09-17 2003-04-23 Method for load balancing a loop of parallel processing elements
GB0309209A Expired - Fee Related GB2393287B (en) 2002-09-17 2003-04-23 Method for using extrema to load balance a loop of parallel processing elements
GB0309212A Expired - Fee Related GB2393289C (en) 2002-09-17 2003-04-23 Method for load balancing a line of parallel processing elements
GB0309204A Expired - Fee Related GB2393283B (en) 2002-09-17 2003-04-23 Method for load balancing an N-dimensional array of parallel processing elements
GB0309207A Expired - Fee Related GB2393286B (en) 2002-09-17 2003-04-23 Method for finding local extrema of a set of values for a parallel processing element
GB0309205A Expired - Fee Related GB2393284B (en) 2002-09-17 2003-04-23 Method for finding global extrema of a set of shorts distributed across an array of parallel processing elements
GB0309211A Expired - Fee Related GB2393288B (en) 2002-09-17 2003-04-23 Method of obtaining interleave interval for two data values
GB0309202A Expired - Fee Related GB2393282B (en) 2002-09-17 2003-04-23 Method for using filtering to load balance a loop of parallel processing elements
GB0309206A Expired - Fee Related GB2393285B (en) 2002-09-17 2003-04-23 Method for finding global extrema of a set of bytes distributed across an array of parallel processing elements
GB0309199A Expired - Fee Related GB2393280B (en) 2002-09-17 2003-04-23 Method for manipulating data in a group of processing elements to transpose the data using a memory stack

Family Applications Before (9)

Application Number Title Priority Date Filing Date
GB0309198A Expired - Fee Related GB2393279B (en) 2002-09-17 2003-04-23 Method for manipulating data in a group of processing elements
GB0309200A Expired - Fee Related GB2393281B (en) 2002-09-17 2003-04-23 Method for rounding values for a plurality of parallel processing elements
GB0309214A Expired - Fee Related GB2393290B (en) 2002-09-17 2003-04-23 Method for load balancing a loop of parallel processing elements
GB0309209A Expired - Fee Related GB2393287B (en) 2002-09-17 2003-04-23 Method for using extrema to load balance a loop of parallel processing elements
GB0309212A Expired - Fee Related GB2393289C (en) 2002-09-17 2003-04-23 Method for load balancing a line of parallel processing elements
GB0309204A Expired - Fee Related GB2393283B (en) 2002-09-17 2003-04-23 Method for load balancing an N-dimensional array of parallel processing elements
GB0309207A Expired - Fee Related GB2393286B (en) 2002-09-17 2003-04-23 Method for finding local extrema of a set of values for a parallel processing element
GB0309205A Expired - Fee Related GB2393284B (en) 2002-09-17 2003-04-23 Method for finding global extrema of a set of shorts distributed across an array of parallel processing elements
GB0309211A Expired - Fee Related GB2393288B (en) 2002-09-17 2003-04-23 Method of obtaining interleave interval for two data values

Family Applications After (2)

Application Number Title Priority Date Filing Date
GB0309206A Expired - Fee Related GB2393285B (en) 2002-09-17 2003-04-23 Method for finding global extrema of a set of bytes distributed across an array of parallel processing elements
GB0309199A Expired - Fee Related GB2393280B (en) 2002-09-17 2003-04-23 Method for manipulating data in a group of processing elements to transpose the data using a memory stack

Country Status (1)

Country Link
GB (12) GB2393279B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6078945A (en) * 1995-06-21 2000-06-20 Tao Group Limited Operating system for use with computer networks incorporating two or more data processors linked together for parallel processing and incorporating improved dynamic load-sharing techniques
WO2001088696A2 (en) * 2000-05-19 2001-11-22 Neale Bremner Smith Processor with load balancing

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4215401A (en) * 1978-09-28 1980-07-29 Environmental Research Institute Of Michigan Cellular digital array processor
JPS6365987B2 (en) * 1983-07-26 1988-12-19 Fujitsu Ltd
US4816993A (en) * 1984-12-24 1989-03-28 Hitachi, Ltd. Parallel processing computer including interconnected operation units
JPH0833810B2 (en) * 1989-06-19 1996-03-29 甲府日本電気株式会社 Vector data retrieval apparatus
JPH05501460A (en) * 1990-05-30 1993-03-18
JP2637862B2 (en) * 1991-05-29 1997-08-06 甲府日本電気株式会社 Element number calculating device
CA2148719A1 (en) * 1992-11-05 1994-05-11 Warren Marwood Scalable dimensionless array
JPH0764766A (en) * 1993-08-24 1995-03-10 Fujitsu Ltd Maximum and minimum value calculating method for parallel computer
US5546336A (en) * 1995-01-19 1996-08-13 International Business Machine Corporation Processor using folded array structures for transposition memory and fast cosine transform computation
US6029244A (en) * 1997-10-10 2000-02-22 Advanced Micro Devices, Inc. Microprocessor including an efficient implementation of extreme value instructions
DE69835159D1 (en) * 1997-10-10 2006-08-17 Advanced Micro Devices Inc Microprocessor with extreme value instructions and compare instructions
US5991785A (en) * 1997-11-13 1999-11-23 Lucent Technologies Inc. Determining an extremum value and its index in an array using a dual-accumulation processor
US6892295B2 (en) * 2000-03-08 2005-05-10 Sun Microsystems, Inc. Processing architecture having an array bounds check capability

Also Published As

Publication number Publication date Type
GB2393285A (en) 2004-03-24 application
GB0309206D0 (en) 2003-05-28 grant
GB0309212D0 (en) 2003-05-28 grant
GB2393289A (en) 2004-03-24 application
GB2393283B (en) 2005-09-14 grant
GB0309209D0 (en) 2003-05-28 grant
GB0309202D0 (en) 2003-05-28 grant
GB2393287A (en) 2004-03-24 application
GB2393286A (en) 2004-03-24 application
GB0309199D0 (en) 2003-05-28 grant
GB2393286B (en) 2006-10-04 grant
GB2393279A (en) 2004-03-24 application
GB0309204D0 (en) 2003-05-28 grant
GB2393281B (en) 2005-09-14 grant
GB0309211D0 (en) 2003-05-28 grant
GB0309205D0 (en) 2003-05-28 grant
GB2393289B (en) 2005-11-30 grant
GB2393281A (en) 2004-03-24 application
GB2393288B (en) 2005-11-09 grant
GB2393290B (en) 2005-09-14 grant
GB2393284A (en) 2004-03-24 application
GB2393289C (en) 2008-02-28 grant
GB2393279B (en) 2006-08-09 grant
GB2393288A (en) 2004-03-24 application
GB2393285B (en) 2007-01-03 grant
GB2393280B (en) 2006-01-18 grant
GB2393290A (en) 2004-03-24 application
GB0309207D0 (en) 2003-05-28 grant
GB0309200D0 (en) 2003-05-28 grant
GB2393282B (en) 2005-09-14 grant
GB2393283A (en) 2004-03-24 application
GB0309198D0 (en) 2003-05-28 grant
GB2393287B (en) 2005-09-14 grant
GB2393280A (en) 2004-03-24 application
GB2393284B (en) 2007-01-03 grant

Similar Documents

Publication Publication Date Title
US4881168A (en) Vector processor with vector data compression/expansion capability
US5218709A (en) Special purpose parallel computer architecture for real-time control and simulation in robotic applications
US4101960A (en) Scientific processor
US5828894A (en) Array processor having grouping of SIMD pickets
US5404562A (en) Massively parallel processor including queue-based message delivery system
Fridman et al. The tigersharc DSP architecture
US4837676A (en) MIMD instruction flow computer architecture
US5752071A (en) Function coprocessor
US4541048A (en) Modular programmable signal processor
US6038582A (en) Data processor and data processing system
US9015390B2 (en) Active memory data compression system and method
US5717943A (en) Advanced parallel array processor (APAP)
US5606520A (en) Address generator with controllable modulo power of two addressing capability
US5588152A (en) Advanced parallel processor including advanced support hardware
US5469549A (en) Computer system having multiple asynchronous processors interconnected by shared memories and providing fully asynchronous communication therebetween
US6088783A (en) DPS having a plurality of like processors controlled in parallel by an instruction word, and a control processor also controlled by the instruction word
US5765011A (en) Parallel processing system having a synchronous SIMD processing with processing elements emulating SIMD operation using individual instruction streams
US5625836A (en) SIMD/MIMD processing memory element (PME)
US5963745A (en) APAP I/O programmable router
Kurzak et al. Solving systems of linear equations on the CELL processor using Cholesky factorization
US6426746B2 (en) Optimization for 3-D graphic transformation using SIMD computations
US3787673A (en) Pipelined high speed arithmetic unit
US20070022428A1 (en) Context switching method, device, program, recording medium, and central processing unit
EP0570729A2 (en) Apap I/O programmable router
US5561784A (en) Interleaved memory access system having variable-sized segments logical address spaces and means for dividing/mapping physical address into higher and lower order addresses

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20140423