CA2464506A1 - Method and apparatus for the data-driven synchronous parallel processing of digital data - Google Patents

Method and apparatus for the data-driven synchronous parallel processing of digital data

Info

Publication number
CA2464506A1
Authority
CA
Canada
Prior art keywords
data
cache
processor
buffer
instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002464506A
Other languages
French (fr)
Inventor
Daniel Gudmunson
Alexei Krouglov
Robert Coleman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Leitch Technology International Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CA002360712A external-priority patent/CA2360712A1/en
Application filed by Leitch Technology International Inc filed Critical Leitch Technology International Inc
Priority to CA002464506A priority Critical patent/CA2464506A1/en
Publication of CA2464506A1 publication Critical patent/CA2464506A1/en
Abandoned legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/448 Execution paradigms, e.g. implementations of programming paradigms
    • G06F 9/4494 Execution paradigms, e.g. implementations of programming paradigms, data driven

Abstract

A method and apparatus for the data-driven synchronous parallel processing of digital data, which temporally separates the processes of instruction distribution and data requests from the process of actual data processing.
The method includes the steps of: dividing the stream of digital data into data packets, distributing instructions to the data processing units before their execution, consecutively and synchronously processing the data packets by multiple data processing units working in parallel, and synchronizing the parallel data processing units by means of data tokens attached to the data packets. In the preferred embodiment the method comprises one or more of the steps of:
storing instructions inside the data processing units, requesting data before the start of data processing, storing records for requested data packets, associating received data with the records of data requests, attaching to each data packet a validity signal (data token) indicating the validity or non-validity of the received data for processing, and extending data buffers coupled to the data processing units into elastic data buffers capable of absorbing variations in the data rate. In the preferred embodiment a data buffer is provided between adjacent data handling units, and the invention manipulates the timing of the buffer's emptiness and fullness signals, processing each data packet coming into the buffer in accordance with its validity signal (data token), and associating a validity signal (data token) with each data packet sent out from the buffer. In one embodiment the invention provides a method and apparatus for the data-driven processing of digital data using a non-blocking cache, which temporally separates the processes of instruction distribution and data requests from the processes of memory accesses for cache misses and actual data processing, in which the method further includes the steps of checking the requested data against the data previously stored in a data cache, and requesting the cache-missed data before the start of data processing. This embodiment of the invention optionally provides a method and apparatus to modify data previously stored in the data cache with data received from the data processing units.

Description

METHOD AND APPARATUS FOR THE DATA-DRIVEN
SYNCHRONOUS PARALLEL PROCESSING OF DIGITAL DATA
FIELD OF THE INVENTION
The present invention relates to the field of data processor system organization, and in particular to data processors containing a plurality of interconnected modules combined in a multi-processing system, and to the multiple level cache memory organization of data processing systems employed to increase the speed and efficiency of memory accessing.
BACKGROUND OF THE INVENTION
A multiprocessing system is one approach employed to improve the performance and reliability of a single processor system. Various types of such systems have thus far been proposed. Great advances in semiconductor technology have provided cheap, high-performance large-scale integration processors, making the hardware design of multi-processor systems easier.
It is well known that a multi-processor system combining n processors cannot deliver an n-fold improvement in performance over a single processor.
The major causes of this shortfall in performance improvement are, for example, conflicts in access to the main storage used in common among the processors, the conflict control associated with common use of the resource, and an increase of the overhead arising from the communication among the processors. Another important factor is that, conventionally, all execution steps of the operating system (OS) are processed sequentially by a single processor.
As used herein the term parallel processing refers to a single program that runs on multiple processors simultaneously. In relation to the level of parallelism among instructions and data there are four categories of processors: SISD (single instruction stream, single data stream), SIMD (single instruction stream, multiple data streams), MISD (multiple instruction streams, single data stream), and MIMD (multiple instruction streams, multiple data streams). Another important concept is the granularity of parallel tasks. A large grain system is one in which the operations running in parallel are fairly large, on the order of entire programs. Small grain parallel systems divide programs into small pieces, down to a few instructions.
To take advantage of multiprocessing, conventional multiprocessor systems utilize deep pipelining, in which processing tasks are broken into smaller subtasks, each subtask is executed by a distinct processing unit, and all or some processing units work in parallel. Another technique used in conventional multiprocessor systems is to replicate the internal components of a processor so that it can start multiple data processing tasks at the same time. This technique is called superscalar execution. The third technique deployed in conventional multiprocessor systems is dynamic scheduling, wherein data processing tasks are allowed to be scheduled out of order in order to avoid stalling the processor due to memory fetching and computational delays. In practice these techniques may be combined together, as well as with other techniques such as, for example, branch prediction.
Parallel multiprocessor systems are also distinguished by their memory organization. In one type of system, known as a shared memory system, there is one large virtual memory, and all processors have equal access to data and instructions in this memory. The other type is a distributed memory system, in which each processor has a local memory that is not accessible to any other processor.
The processors also can be connected by a single bus or via networks.
Among the architectures of multiprocessor systems are vector processors, which operate on vector values rather than scalar values. Such processors are closely related to the SIMD category of processors, and contain a control unit responsible for fetching and interpreting instructions and several data processing units.
Another class of multiprocessor systems is superscalar processors, which operate on scalar values but are capable of executing more than one instruction at a time. This is possible because superscalar processors contain an instruction-fetching unit capable of fetching more than one instruction at the same time, an instruction-decoding logic capable of distinguishing independent instructions, and multiple data processing units able to process several instructions simultaneously.

An important aspect of microprocessor architecture is the asynchronous or synchronous character of the processor. An asynchronous processor can start and finish data handling at any moment in time. A synchronous processor synchronizes its operation with an internal clock. The present invention relates to a synchronous microprocessor architecture.
In multiprocessor systems, a primary consideration is how the work of multiple processors will be synchronized. There are two distinct approaches to solving this problem. The first is to build a self-timed system, which has some internal mechanism telling each processor when it has to start processing. The other approach utilizes external scheduling, where each processor receives a signal indicating the start of processing from an external device such as, for example, an operating system. The present invention relates to a self-timed multiprocessor system.
One of the known mechanisms for providing the function of a self-timed asynchronous multiprocessor system is the so-called "data-driven processor," where data packets moving between multiple processors are accompanied by data tokens. This non-conventional, non-von Neumann architecture was designed for clockless (asynchronous) multiprocessor systems, wherein the arrival of data tokens serves as a trigger starting the work of each data processor.
Another problem in the prior art has been organizing multiple level cache memory in data processing systems in a way that provides fast and efficient memory accessing.
A cache memory is a small, high-speed buffer memory inserted between the data processor and main memory of a data processing system and as close to the data processor as possible. The cache memory duplicates and temporarily holds portions of the contents of main memory, which are currently in use or expected to be in use by the data processor.
The advantage of cache memory lies in its access time, which is generally much less than that of main memory. A cache memory thus permits a data processor to spend significantly less time waiting for instructions and data to be fetched and/or stored, which results in an overall increase in efficiency.

Cache memory comprises one or more levels of dedicated high-speed memory holding recently accessed data, designed to speed up subsequent access to the same data. Cache technology is based on the premise that programs frequently re-execute the same instructions and data. When data is read from main system memory, a copy is also saved in the cache memory, along with an index to the associated main memory. The cache then monitors subsequent requests for data to see if the information needed has already been stored in the cache. If the data has indeed been stored in the cache, it is delivered immediately to the processor while the attempt to fetch the information from main memory is aborted (or not started).
If, on the other hand, the data had not been previously stored in cache then it is fetched directly from main memory and also saved in cache memory for future access.
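By way of illustration only (not part of the original disclosure), the read behaviour described above can be sketched as follows; the class and method names are hypothetical and greatly simplified.

```python
# Minimal sketch of the read path described above: a hit returns the cached
# copy immediately; a miss fetches the data from main memory and also saves
# a copy in the cache for future access. All names are illustrative only.

class SimpleCache:
    def __init__(self, main_memory):
        self.main_memory = main_memory   # backing store: address -> data
        self.lines = {}                  # cached copies: address -> data

    def read(self, address):
        if address in self.lines:        # read cache hit: serve from the cache
            return self.lines[address]
        data = self.main_memory[address] # read cache miss: fetch from main memory
        self.lines[address] = data       # ...and keep a copy for subsequent reads
        return data
```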
Modern processors support multiple cache levels, most often two or three levels of cache. A level 1 cache (L1 cache) is usually an internal cache built onto the same monolithic integrated circuit (IC) as the processor itself. Level 1 or "on-chip"
cache is the fastest (i.e., lowest latency) because it is accessed by the internal components of the processor. On the other hand, off-chip cache is an external cache of static random access memory (SRAM) chips. Off-chip cache has much higher latency, although typically a much shorter latency than accesses to main memory.
Data in cache memory is arranged in the form of a plurality of cache lines. A
"cache line" is a contiguous block of data, which is the smallest unit for which a cache allocates and deallocates storage. The optimal size of a cache line depends largely on cache size and access time parameters. When it becomes necessary to update a cache with data from main memory, data within a cache line or a plurality of cache lines of the cache is replaced with data from the main memory.
During read cycles, data and instructions are fetched from the cache memory if they are currently stored in the cache memory ("read cache hits"). If the data and instructions are not currently stored in the cache memory ("read cache misses"), they are retrieved from the main memory and stored in the cache memory as well as provided to the data processor.

A read request affects a data processor's performance more directly than a write request. This is because a data processor must usually stall (wait) until the read data it has requested is returned before continuing execution.
Similarly, during write cycles, data is written into the cache memory if the data is currently stored in the cache memory ("write cache hits"). If the data is not currently stored in the cache memory ("write cache misses"), the data is either not written into the cache memory (no write allocate) or is written into the cache memory after forcing a cache line update (write allocate). Furthermore, data is written into the main memory either immediately (write through) or when a cache line is reallocated (write back). When a data processor makes a write request, the address and data can typically be written into temporary buffers while the data processor continues execution.
Write requests can be serviced using techniques such as write through and write back. Using the write through technique, the data line in main memory is always updated with the write data, and the copy in the cache is updated only if it is present in the cache. Using the write back technique, the copy in the cache is updated only if the data is present in the cache. If the data is not present in the cache, then the data must first be read in, and then updated. Using this technique, some lines in main memory will be incorrect. To track the lines in main memory which hold incorrect data because the data in main memory has not been updated, "dirty bits"
associated with each cache line are used.
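A minimal sketch of the two write policies and the dirty-bit bookkeeping described above is given below; the names are illustrative only and the write-allocate behaviour is simplified.

```python
# Sketch of the write policies described above. Write-through updates main
# memory on every write; write-back defers the memory update and marks the
# cached line "dirty" until it is written back on eviction.

class WritePolicyCache:
    def __init__(self, main_memory, write_back=True):
        self.main_memory = main_memory
        self.lines = {}       # cached copies: address -> data
        self.dirty = set()    # addresses whose cached copy is newer than main memory
        self.write_back = write_back

    def write(self, address, data):
        self.lines[address] = data
        if self.write_back:
            self.dirty.add(address)            # defer the memory update; line is dirty
        else:
            self.main_memory[address] = data   # write through: update memory at once

    def evict(self, address):
        if address in self.dirty:              # write back the dirty line on eviction
            self.main_memory[address] = self.lines[address]
            self.dirty.discard(address)
        self.lines.pop(address, None)
```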
The effectiveness of the cache is measured primarily by the hit ratio, or its complement the miss ratio, as well as the mean time required to access the data from the cache if a hit occurs. The design of a data processing system having a cache involves minimization of the miss ratio as well as minimization of the mean access time associated with a hit. Since the data processor goes idle in the event of a cache miss, the size and operating characteristics of the cache memory are typically optimized to provide a high cache hit rate, thereby reducing data processor idle time and improving system performance.
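By way of illustration (a standard textbook relation, not taken from the original text), these quantities combine into an average memory access time t_avg = t_hit + m × t_penalty, where m is the miss ratio. A cache with a 2 ns hit time, a 5% miss ratio and a 50 ns miss penalty has an average access time of 2 + 0.05 × 50 = 4.5 ns, while halving the miss ratio reduces this to 3.25 ns, which is why both the miss ratio and the hit access time must be minimized.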
Numerous tradeoffs are encountered in an attempt to optimize the above-mentioned considerations. For example, cache line size, cache size, the degree of associativity, real versus virtual addressing of the cache, when to update main memory, the number of caches and the type of priority scheme among caches must all be determined. Other well known examples are virtual addressing to make cache hits faster, early restart and out-of-order fetching to reduce the read miss penalty, use of a write buffer to reduce the write miss penalty, and use of two level caches to reduce the read/write miss penalty.
Another distinction in cache organization is whether cache management operates by demand fetching, in which case all data must be requested by the application program before being brought into cache memory, or whether it implements some policy for prefetching data. The most common prefetch policy is to cluster data objects that are often used together near each other on secondary storage, and then fetch an entire cluster of data objects into cache when any data object in the cluster is requested. The term "prefetch," as used herein, refers to transferring data (e.g. a cache line) into a cache prior to a request for the data being received by the cache. Generally, prefetch algorithms are based upon the pattern of accesses which have been performed by the data processor. If the prefetched data is later accessed by the processor, then the "cache hit" rate may be increased due to transferring the prefetched data into the cache before the data is requested.
Unfortunately, cache hit rates may be decreased (or alternatively cache miss rates increased) by performing prefetching if the data being prefetched is not later accessed by the processor. A cache is a finite storage resource, and therefore the prefetched cache lines generally displace cache lines stored in the cache.
When a prefetched cache line displaces a particular cache line in the cache, the prefetched cache line is not later accessed by the processor, and the displaced cache line is later accessed by the processor, then a miss is detected for the cache line that was displaced by the prefetched cache line. The miss is effectively caused by the prefetch operation. The process of displacing a later-accessed cache line with a non-referenced prefetched cache line is referred to herein as cache pollution.
Cache systems are built either processor blocking or non-blocking. In a blocking system, each time there is a cache miss every subsequent instruction must be suspended until the missed instruction is completely executed (i.e. until external memory is accessed). This is done by stalling or blocking execution of the data processor and must be done to prevent data inconsistencies. Obviously, this implementation provides for data consistency, but at a reduced operational speed due to the number of stalls generated for each cache miss. Such cache systems do not examine the character of the subsequent operations to determine whether they are dependent on or independent of the result of the missed instruction, or whether the subsequent instructions may be allowed to execute out of order without causing data inconsistencies.
The instruction execution units in the execution pipeline cannot predict how long it will take to fetch the data into the operand registers specified by a particular load operation. Processors typically handle this uncertainty by delaying execution and stalling the execution pipeline until the fetched data is returned. This stalling is inconsistent with high speed, multiple instructions per cycle processing.
In a pipelined hierarchical cache system that generates multiple cache accesses per clock cycle, coordinating data traffic is problematic. A cache line fill operation, for example, needs to be synchronized with the return data, but the lower level cache executing the line fill operation cannot predict when the required data will be returned. One method of handling this uncertainty in prior designs is by using "blocking" cache that prohibits or blocks cache activity until a miss has been serviced by a higher cache level or main memory and the line fill operation completed.
Blocking cache stalls the memory pipeline, slowing memory access and reducing overall processor performance.
Other cache systems are non-blocking in that they never block subsequent instructions. From a performance standpoint these systems operate very rapidly and efficiently. However, these systems employ extremely complex and advanced circuitry to ensure data consistency during the operation of subsequent instructions, since some instructions will be executed out of order.
On the other hand, where one or more levels is non-blocking, each cache level is unaware of the results of the accesses (i.e., hit or miss) or the resources available at the next higher level of the hierarchy. In a non-blocking cache, a cache miss launches a line fill operation that will eventually be serviced; however, the cache continues to allow load/store requests from lower cache levels or registers. To complete cache operations such as a line fill after a miss in a non-blocking cache, each cache level must compete for the adjacent levels' attention. This requires that data operations arbitrate with each other for the resources necessary to complete an operation.
Arbitration slows cache and hence processor performance. Prior non-blocking cache designs include circuitry to track resources in the next higher cache level.
This resource tracking is used to prevent the cache from accessing the higher level when it does not have sufficient resources to track and service the access.
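As an illustrative sketch only (the names are hypothetical and the bookkeeping is greatly simplified relative to real miss-status holding registers), the outstanding-miss and resource tracking described above can be expressed as follows.

```python
# Sketch of a non-blocking read: a miss registers an outstanding request and
# returns without stalling, up to a fixed number of tracked misses; the fill
# is applied when the next level responds. "next_level.request" stands in for
# the line-fill request to the higher cache level and is an assumed interface.

class NonBlockingCache:
    def __init__(self, next_level, max_outstanding=4):
        self.next_level = next_level     # hypothetical handle to the higher cache level
        self.lines = {}                  # address -> data
        self.outstanding = {}            # address -> callbacks waiting for the fill
        self.max_outstanding = max_outstanding

    def read(self, address, deliver):
        if address in self.lines:                      # hit: serve immediately
            deliver(self.lines[address])
            return "hit"
        if len(self.outstanding) >= self.max_outstanding:
            return "retry"                             # not enough resources to track the miss
        first_miss = address not in self.outstanding
        self.outstanding.setdefault(address, []).append(deliver)
        if first_miss:
            self.next_level.request(address)           # launch the line fill, do not stall
        return "miss issued"

    def fill(self, address, data):                     # called when the higher level responds
        self.lines[address] = data
        for deliver in self.outstanding.pop(address, []):
            deliver(data)
```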
The same advantage of reduced memory access time that prompts the use of cache memories in a single data processor system is also available in multi-processor systems. However, in such systems the use of different data streams and conventional cache line replacement algorithms almost inevitably creates a situation in which the contents of the cache memories of the different data processors are different.
In such circumstances, even if the miss ratio at each cache remains within normal limits, the demands made on main memory and its output communication channel to the cache memories can be severe. As a result, average memory access time can be degraded or extraordinary measures must be taken to enhance the throughput (or bandwidth) of the main memory and its output communication channel.
Advanced data processing systems may include a plurality of data processors, which are capable of reading and/or writing to memory. This complicates the cache consistency requirement. There may also be a plurality of data processors in a single data processing system. Or, the data processing system may also include other types of devices such as direct memory access (DMA) controllers or the like.
In such a system, the various caches may be coupled to various combinations of buses. It is desirable that the various devices access the caches over these buses in a non-blocking manner, to enhance system performance.
Unlike previous systems, the present invention uses a data-driven multiprocessor architecture to effect the parallel operation of a synchronous multiprocessor system and to organize multiple level cache memory in data processing systems in a way that provides fast and efficient memory accessing.

SUMMARY OF THE INVENTION
The present invention provides a method and apparatus for performing non-stalling synchronous parallel processing of digital data, and a non-stalling synchronous data flow through a digital data processor.
Unlike prior art methods, the method of the invention allows for non-stalling synchronous digital data processing by separating the process of distributing instructions and delivering data to data processing units from the actual process of data processing. To accomplish this, the invention provides data tokens for the self-synchronization of the parallel operation of a multiprocessor system and its subsystems.
The proposed invention conceptually utilizes a deep pipelining technique combined with vector processing, whereby processing tasks are divided among parallel data processing units with coarse granularity. The method and apparatus of the invention separate the task of instruction processing from the task of data processing, which decreases the number of stalls in the pipelined stages of data processing, and thus achieves an increase in the multiprocessor system's performance.
According to a preferred embodiment of the method of the invention, digital data is divided into distinct pieces (data packets), and each piece of data is consecutively processed by multiple data processing units, while all data processing units work in parallel. The instructions for data processing are sent to the data processing units before the data processing units are actually available to start processing the data. In the preferred embodiment the data processing units internally store instruction records for future reference, and send out data requests with the address of the requesting data processing unit while internally storing the data request records as well. The returning pieces of data previously requested each comprise a validity signal, preferably comprising a data token, and are put into an internal buffer from which they are retrieved for further processing. The corresponding record in the list of outstanding data requests is cancelled after the requested piece of data arrives, and the instruction record obtains a special indication that this piece of data is inside the data processing unit and available for processing.
The data packet is taken from the buffer, the corresponding instruction record is retrieved, the data is processed according to the instruction, and the result is sent out.

An apparatus implementing the proposed method comprises a digital data processor including, in the preferred embodiment, the following components: a module for receiving instructions and/or digital data from one or more external devices and sending instructions and/or digital data to one or more external devices, an instruction path inside the processor, a data path inside the processor, a row of data processing units organized for parallel processing of digital data, and a module for consecutively processing and distributing instructions to the data processing units.
In further aspects the data processing unit in the device of the invention also preferably has the following components: a storage for storing a list of instruction records, a storage for storing outstanding data request records, a storage for receiving incoming data, and a computation module for data processing. The data processing unit also has control logic, which provides the appropriate instruction and data flows through the data processing unit.
In the preferred embodiment the digital data processing device according to the invention includes the support required for a non-stalling synchronous data-driven flow of digital data through the digital data processor. The invention provides a method and apparatus for such data flow which preferably comprises the steps of:
providing a data buffer between adjacent data handling units, processing the incoming data according to a data validity signal (data token), providing a data validity signal (data token) for the outgoing data, providing a signal indicating the data buffer's fullness from the data buffer to the previous adjacent unit, providing a signal indicating the data buffer's emptiness from the data buffer to the next adjacent unit, asserting the data buffer's fullness signal in advance of filling the data buffer, asserting the data buffer's emptiness signal in advance of depleting the data buffer, and programming the timing of asserting the buffer's fullness and emptiness signals to allow for digital data flow management according to the system's configuration.
The invention is applicable to a processor suitable for a single processor system or a multiprocessor system.
In one aspect the invention provides a method and an apparatus to combine data-driven processing of digital data with a non-blocking cache technique.
According to a preferred embodiment of this method of the invention, data processing units internally keep instruction records for future reference and check the availability of requested data in the internal data storage (Level 1 data cache). The cache misses are then sent out while the data processing unit internally keeps the record of the cache hits and outstanding data requests (cache misses). The returned data packets, which were previously requested, contain a validity signal (data token) attached to them, and are put into the Level 1 cache, from which they are taken for further processing.
The corresponding record in the list of outstanding data requests is cancelled after the previously requested data packet arrives, and the instruction record obtains a special indication that this data packet is inside the Level 1 cache and available for processing.
The data packet is taken from the cache, put into a small internal buffer, the instruction record is retrieved from the list, the data packet is processed according to the instruction, and the result is sent out. Resultant data is either stored in the Level 1 cache, modifying previously stored data (Read-Write Cache), or sent out without storage in the Level 1 cache (Read-Only Cache).
In a further embodiment of this aspect of the invention, the digital data processor keeps the data request records from all data processing units for future reference and checks the availability of requested data in its internal data storage (Level 2 data cache). The data requests receive an indication of whether a data packet is available in the data cache (cache hit) or not (cache miss). The requests for cache misses are then sent out to the external data storage. The data packets coming from the external data storage, which were previously requested, contain a validity signal (data token) attached to them, and are put into the Level 2 data cache. The data requests for cache misses receive an indication that the corresponding data packets are now available. The cache controller then distributes the data packets from the Level 2 data cache to the data processing units according to the data request records.
Moreover, the data processor may store the data packets modified by the individual data processing units in the Level 2 data cache (Read-Write Cache) or distribute them without storage in the Level 2 data cache (Read-Only Cache).
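A simplified sketch of this Level 2 flow is given below for illustration only; the `fetch` and `deliver` calls stand in for the request to external data storage and the delivery to a data processing unit, neither of which the original text defines in code form.

```python
# Sketch of the Level 2 flow described above: data requests from all data
# processing units are recorded, checked against the Level 2 cache, misses are
# forwarded to external storage, and packets are handed to the requesting unit
# once they are marked available by an incoming data token.

class Level2Controller:
    def __init__(self, external_storage):
        self.external_storage = external_storage
        self.l2_cache = {}     # address -> data packet
        self.requests = []     # data request records from all data processing units

    def request(self, address, unit):
        hit = address in self.l2_cache
        self.requests.append({"address": address, "unit": unit, "ready": hit})
        if not hit:
            self.external_storage.fetch(address)       # forward the cache miss

    def receive(self, address, packet, token_valid):
        if not token_valid:                            # only token-marked data is accepted
            return
        self.l2_cache[address] = packet
        for record in self.requests:
            if record["address"] == address:
                record["ready"] = True                 # outstanding request can now be served

    def distribute(self):
        remaining = []
        for record in self.requests:
            if record["ready"]:
                record["unit"].deliver(self.l2_cache[record["address"]])
            else:
                remaining.append(record)
        self.requests = remaining
```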
An apparatus implementing this aspect of the method of the invention comprises a digital data processor including, in the preferred embodiment, the following parts: a module receiving instructions and/or digital data from external devices and sending instructions and/or digital data to external devices, an instruction path inside a processor, a data path inside a processor, a row of data processing units organized for parallel processing of digital data, a module consecutively processing instructions and distributing instructions to the data processing units, and a local storage for the digital data (Level 2 data cache). In further aspects the device of the invention provides a data processing unit, which comprises the following parts: a storage for the list of instruction records and data requests, a local storage to accommodate digital data packets (Level 1 data cache), a small buffer to smooth the data flow, and a computation module for actual data processing. The unit also has control logic indicating cache hits and misses in the data request records and providing an appropriate instruction and data flow and control over cache operations.
In further embodiments the apparatus implementing this aspect of the invention provides a Level 2 data cache, which comprises the following parts:
a storage for data requests, a local storage for data packets (data cache), a small buffer to smooth the data flow, and control logic indicating cache hits and misses in the data request records. The cache also has control logic providing an appropriate data flow and control over cache operations.
The present invention thus provides a method for data-driven synchronous parallel processing of a stream of data packets by multiple data processing units working in parallel, comprising the steps of: a. distributing at least one instruction for data processing to one data processing unit of the multiple data processing units, before the data processing unit is available to process the instruction; b. storing the instruction in an execution instructions memory; c. sending from the one data processing unit a data request for at least one data packet corresponding to the instruction, required to execute the instruction; d. storing a record of the at least one data packet requested; e. associating with the at least one data packet an address of the one data processing unit; f. associating with each data packet sent out a data token showing the readiness of the packet for further processing; g. when the at least one data packet is received by the processing unit, associating the data packet with the corresponding instruction and distributing the data packet to the one data processing unit; and h. processing the data according to the corresponding instruction.
In further aspects of the method for data-driven synchronous parallel processing: instructions are distributed to the multiple data processing units consecutively; instructions are distributed to the multiple data processing units concurrently; the method includes, after step f., the step of putting the requested data packets into an internal data buffer in a data processing unit; the method includes, after step g., the step of erasing the record of the data request corresponding to the data packet; the method includes, during step g., the step of sending to the corresponding instruction in the execution instructions memory an indication that the at least one data packet has been received by the processing unit and is available for processing; the method includes, during step e., the step of associating with the data packet an address of its sender and, during step g., associating the data packet with the corresponding instruction according to the address of the data packet sender;
the method includes, during step g., associating the data packet with the corresponding instruction according to the order in which the data packet is received;
the method includes the step of retrieving each data packet from the internal data buffer to be processed according to the corresponding instruction; an output of the processing step is sent to another data processing unit or out of the processor, or both;
and/or processing occurs in real-time.
The present invention further provides a method of providing a substantially non-stalling sequential flow of data packets through a digital data-driven processor, the digital processor storing at least one instruction for processing data packets in accordance with the instruction, comprising the steps of: a. providing a buffer between adjacent units processing, distributing or otherwise handling the data; b. providing a fullness signal indicating a fullness state of the buffer from the data buffer to a previous adjacent unit, before the buffer is full; c. providing an emptiness signal indicating an emptiness state of the buffer from the data buffer to a next adjacent unit, before the buffer is empty; d. providing an incoming data validity signal for synchronization of data handling by the buffer with the arrival of a data packet to the buffer; and e. providing an outgoing data validity signal for synchronization of data handling by a unit next after the buffer with an outgoing data packet from the buffer, wherein assertion of the fullness signal in advance of filling the buffer allows the buffer to absorb data packets in transit between a previous unit sending the data and the buffer receiving the data, and assertion of the emptiness signal in advance of depleting the buffer allows the processor to request new data packets before the buffer becomes empty.

In further aspects of the method of providing a substantially non-stalling sequential flow of data packets through a digital data-driven processor: the validity signal comprises a data token; and/or the method is performed in a processor having a plurality of processing units, and includes the step of programming a timing of assertion of the fullness signal and of the emptiness signal to allow for management of synchronous data flow to the processing units.
The present invention further provides an apparatus for substantially non-stalling synchronous data packet flow through a digital data-driven processor, each data packet being associated with an address of a processing unit containing an instruction for processing the data packet, comprising a data buffer for temporary storage of the data packets, the buffer comprising an input port for receiving incoming data packets and their associated addresses; an output port for sending out outgoing data and their associated addresses; an input port for receiving an incoming validity signal; an output port for sending an outgoing validity signal; an outgoing fullness signal indicating a fullness of the buffer, adapted to be asserted in advance of the filling of the buffer; an outgoing emptiness signal indicating an emptiness of the buffer, adapted to be asserted in advance of the depletion of the buffer; and control logic for regulating a timing of assertion of the fullness and the emptiness signals in a multi-processing system.
The present invention further provides an apparatus for substantially non-stalling data-driven synchronous parallel processing of data packets including a digital data processor, further comprising: an interface for receiving instructions and digital data from at least one external device and sending instructions or digital data or both to at least one external device; an instruction path contained inside the processor; a data path contained inside the processor; a plurality of data processing units organized for parallel processing of the data; and a distributing unit organized for distributing one or more instructions at a time to the data processing units.
In further aspects of the apparatus of the invention: the validity signal comprises a data token; the buffer comprises a FIFO buffer; instructions are distributed to the plurality of data processing units consecutively;
instructions are distributed to the plurality of data processing units concurrently; each data processing unit comprises a storage for instructions, a storage for records of outstanding data requests, a storage for receiving requested data packets, and a computation module for processing the requested data packets in accordance with at least one associated instruction; the apparatus comprises control logic for controlling instruction and data flows through the processor; the digital data processor comprises a general-purpose microprocessor;
the digital data processor comprises a graphics processor; the digital data processor comprises a digital signal processor; the computational module operates using vector values; and/or the computational module operates using scalar values.
The present invention further provides a method for data-driven synchronous parallel processing of a stream of data packets by multiple data processing units working in parallel, comprising the steps of: a. distributing at least one instruction for data processing to one data processing unit of the multiple data processing units, before the data processing unit is available to process the instruction; b. storing the instruction in an execution instructions memory; c. sending from the one data processing unit a data request for at least one data packet corresponding to the instruction, required to execute the instruction; d. storing a record of the at least one data packet requested; e. associating with the at least one data packet an address of the one data processing unit; f. associating with the at least one data packet a data token indicating a readiness of the data packet for further processing and comprising data for associating the at least one data packet with the corresponding instruction at the one data processing unit; g. when the at least one data packet is received by the processing unit, associating the data packet with the corresponding instruction and distributing the data packet to the one data processing unit; and h.
processing the data according to the corresponding instruction.
The present invention further provides a method for data-driven synchronous processing of a stream of data packets by multiple data processing units working in parallel and using at least one data cache, comprising the steps of: a.
distributing at least one instruction for data processing to one data processing unit of the multiple data processing units, before the one data processing unit is available to process the instruction; b. storing the instruction in an execution instructions memory;
c. checking a data request against data stored in a data cache of the one data processing unit; d. sending a data request from the one data processing unit for at least one data packet corresponding to the instruction and required to execute the instruction, but missing from the data cache; e. storing inside the one data processing unit a record of the at least one outstanding request for the data packet; f. associating with the data packet an address of its target data processing unit; g. associating with each data packet sent in response to a data request, a data token or signal indicating the readiness of the data packet for further processing; h. when the at least one data packet is received by the one data processing unit, putting the received data packet into the data cache in the one data processing unit; i. associating the data packet with the corresponding instruction and retrieving the data packet from the data cache, to be processed according to the corresponding instruction; and j. processing the data according to the corresponding instruction.
In further aspects of the method of the invention for data-driven synchronous processing of a stream of data packets by multiple data processing units working in parallel and using at least one data cache: instructions are distributed to the multiple data processing units consecutively; instructions are distributed to the multiple data processing units concurrently; the method includes, after step d., the step of checking a data request against data stored in a next-higher level data cache of the digital data processor; the method includes, after step h., the step of erasing the outstanding record of the data request corresponding to the received data packet; the method includes, during step i., the step of sending to the corresponding instruction in the execution instructions memory an indication that the at least one data packet has been received by the processing unit and is available for processing; the method includes, during step f., the step of associating with the data packet an address of its sender and, during step i., associating the data packet with the corresponding instruction according to the address of the sender of the data packet; the method includes, during step i., associating the data packet with the corresponding instruction according to a sequential order in which the data packet is received; the method includes, after step i., the step of temporarily putting the data packet into a data buffer to facilitate a smooth flow of data through the data processing unit; an output of the processing step is sent to another data processing unit or out of the data processor, or both;
an output of the processing step is stored in a local data storage in a data processing unit, or sent to another data processing unit, or sent out of the processor, or any combination thereof; processing occurs in real-time; the method further includes the steps of: k.
storing data requests from all data processing units in a data requests memory; l.

checking a data request against data stored in the next-higher level data cache of the digital data processor; m. sending to an external data storage a data request from the data processor for at least one data packet missing in the next-higher level data cache;
n. storing inside the digital data processor a record of the request for the missed data packet; o. associating with the data packet an address of its target data processing unit; p. associating with each data packet sent in response to a data request a data token or signal showing the readiness of the data packet for further processing; q.
when the at least one data packet is received by the digital data processor from the external data storage, putting the received data packets into a data cache in the digital data processor; r. associating the data packet with the corresponding data request from a data processing unit in a data requests memory and sending to the corresponding data request in the data requests memory an indication that the at least one data packet has been received by the digital data processor and is available for distribution; and s. retrieving the data packet from the data cache according to the corresponding data request in the data requests memory and sending the data to the corresponding data processing unit; the method includes, after step q., the step of erasing the record of the data request corresponding to the data packet received from the external data storage; the method includes, during step o., the step of associating with the data packet an address of its sender; the method includes, during step r., associating the data packet with the data request in the data requests memory according to the address of the sender of the data packet; the method includes, during step r., associating the data packet with the data request in a data requests memory according to a sequential order in which the data packet is received; and/or an output of at least one data processing unit is stored in the data cache in the data processor.
The invention further provides an apparatus for the data-driven synchronous processing of a stream of data packets by multiple data processing units working in parallel, comprising at least one data cache, and further comprising: an interface for receiving instructions and digital data from at least one external device and sending instructions or digital data, or both, to at least one external device; an instruction path contained inside the processor; a data path contained inside the processor; a plurality of data processing units organized for parallel processing of the data; a distributing unit organized for distributing one or more instructions at a time to the data processing units; and at least one data cache for storing the data packets, associated with a cache controller.
In further aspects of the apparatus comprising at least one data cache:
instructions are distributed to the plurality of data processing units consecutively;
instructions are distributed to the plurality of data processing units concurrently; each data processing unit comprises at least one storage for storing instructions and data requests; a logic unit for indicating cache hits and misses in the data requests records;
and a computation module for processing requested data packets in accordance with at least one associated instruction; the apparatus comprises control logic for controlling instruction and data flows through the processor; the apparatus comprises a data buffer located between the data cache and the computation module; the apparatus comprises a means to store a processed data packet into the data cache; the data cache further comprises a storage for data requests and a logic indicating cache hits and misses in the data requests storage; the apparatus comprises a data buffer downstream of a data cache; the digital data processor comprises a general-purpose microprocessor; the digital data processor comprises a graphics processor; the digital data processor comprises a digital signal processor; and/or the computational module operates using vector values.
BRIEF DESCRIPTION OF THE DRAWINGS
In drawings which illustrate by way of example only a preferred embodiment of the invention, Fig. 1 is a schematic diagram showing a comparison of conventional consecutive and parallel multiprocessing techniques.
Fig. 2 is a schematic diagram showing an example of consecutive multiprocessing by parallel Data Processing Units according to the invention.
Fig. 3 is a schematic diagram showing an example of the system organization of a processor with multiple Data Processing Units according to the invention.
Fig. 4 is a top-level block diagram of a Data Processing Unit in Figure 3.
Fig. 5 is a schematic illustration of an Elastic Buffer according to the invention.

Fig. 6 is a schematic diagram showing a Processor with Level 2 Data Cache and Multiple Data Processing Units.
Fig. 7 is a schematic diagram of the Data Processing Unit with Read-Only Level 1 Data Cache.
Fig. 8 is a schematic diagram of the Data Processing Unit with Read-Write Level 1 Data Cache.
Fig. 9 is a schematic diagram of the Data Processor with Non-blocking Level 2 Data Cache.
DETAILED DESCRIPTION OF THE INVENTION
The invention is applicable to the organization of digital data processing units, and in particular to the organization of multiple data processing units connected together.
The interconnection of data processing units may be organized such that the data processing units process digital data either consecutively or in parallel, or in a mixed consecutive-parallel manner. Examples of the consecutive and parallel organization of multiple data processing units are shown schematically in Figure 1.
To take advantage of the data processing capabilities of multiple data processing units during consecutive processing, it is beneficial to have the processors working in parallel. This may be accomplished by dividing the digital data stream into a sequence of distinct segments or pieces, for example data packets, having each data processing unit process a piece of data, and sending the outcome of one data processing unit out, and/or to another data processing unit for consecutive processing.
Figure 2 shows an example of multiple data processing units organized to carry out consecutive processing of digital data by working in parallel.
Figure 3 illustrates an example of the system organization of a processor 10 according to the invention, containing multiple data processing units 12, where the processor 10 may for example be a general-purpose microprocessor, graphics processor, digital signal processor, or other processor suitable for the intended application. Each processor 10 may be connected to other processors 10, storage memory (not shown), or other external devices such as a monitor, keyboard etc.
(not shown). Processor 10 also comprises Instructions and Data Interface 14, through which the processor 10 receives data to be processed and instructions as to how to process the data. Processor 10 may also receive various control signals (not shown).
Instructions are transmitted through Instructions Path 16 to the Instructions Distributing Unit 18 where they are processed and sent to the individual Data Processing Units 12. Data is transmitted through the Data Path 20 from the Instructions and Data Interface 14 to Data Processing Units 12. After data is processed it can be sent via Data Path 20 to other Data Processing Units 12 or to the Instructions and Data Interface 14 to be sent out of the processor 10.
Processor 10 may also send out various control signals (not shown).
It will be noted that the system organization shown in Figure 3 reveals a discrepancy between the consecutive operation of the Instructions Distributing Unit 18 and the parallel operation of the multiple Data Processing Units 12. This is compensated for by the use of data tokens, which allow instructions to be processed according to the timing in which valid data packets are received, rather than the order in which the instructions are received, as will be described in more detail following the description of the organization of the Data Processing Units 12.
To process the data, each Data Processing Unit 12 must receive instructions describing where to retrieve the data from, what the Data Processing Unit 12 is required to do with the data, and where to send the result. The need to receive data to be processed can significantly delay the start of actual data processing by a Data Processing Unit 12, especially when data has to be fetched, for example, from an external storage memory. In a conventional parallel processing system this considerably delays processing, as instructions for retrieving data for the next processing operation cannot be issued by the processor until the data for the current processing operation has been received and processed.
In order to avoid decreasing the data processing performance of the processor because of delays in data retrieval, it is desirable for the Data Processing Units 12 to send the data requests (together with the target address for the returning data packets) far in advance of the actual moment when the data has to be available to each Data Processing Unit 12 for processing.

Since the number of outstanding data requests may vary, each Data Processing Unit 12 must maintain a record of such data requests in storage memory. Once the piece of earlier requested data is received by a particular Data Processing Unit 12 in accordance with the address of the data packet, the corresponding data request record can be erased.
To synchronize the start of data processing by a Data Processing Unit 12 with the arrival of the requested data, a special signal (data token) can be attached to the data indicating its validity or non-validity. The arrival of a data token serves as a trigger, which activates the start of data processing by the Data Processing Unit 12 according to the instructions stored inside the Data Processing Unit 12. Thus the work of the Data Processing Unit 12 is data-driven, or more specifically, data token-driven.
To achieve a non-stalling data-driven synchronous data stream through the Data Path 20, in the preferred embodiment the invention comprises an elastic buffer 30, illustrated in Figures 4 and 5, interposed between consecutive units, such as Data Processing Units 12 and the Instructions and Data Interface 14, which process, distribute, or otherwise handle the data. The elasticity of the buffer 30 is achieved by manipulating the timing of the assertion of buffer status signals indicating the buffer's emptiness and fullness. For example, when the buffer's fullness signal is asserted in advance of the buffer's actual filling, it allows data packets which are in transit to the buffer 30 from the previous unit to be absorbed by the buffer 30. Similarly, the buffer's emptiness signal can be asserted in advance of the buffer's actual depletion, which allows the Data Processing Unit 12 to request the next required data packets before the buffer 30 is empty. The number of data packets the buffer 30 can accommodate after the fullness signal is asserted and the number of data packets the buffer 30 can send out after the emptiness signal is asserted can be programmed, to manage the data behavior for the multiprocessor system. The management of data behavior can be used for, among other purposes, the management of power consumption inside the processor.
Thus, in the preferred embodiment, to absorb variations between the rate at which previously requested data packets are delivered to a Data Processing Unit 12 and the rate at which that Data Processing Unit 12 processes data packets, the Data Processing Units 12 are each provided with an Elastic Data Buffer 30 to accommodate the incoming data packets. In addition to the data packets and addresses of the respective target Data Processing Units 12, the buffer 30 receives the validity signal (data token) corresponding to each data packet coming in and sends out the validity signal (data token) corresponding to each data packet going out.
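A minimal sketch of such an elastic buffer is given below for illustration only; the programmable margins stand in for the early assertion of the fullness and emptiness signals, and the names are not taken from the original text.

```python
# Sketch of the elastic buffer behaviour described above: the "full" and
# "empty" status signals are asserted before the buffer is actually full or
# empty, so packets already in transit can still be absorbed and new requests
# can be issued before the buffer runs dry. The margins are programmable.

from collections import deque

class ElasticBuffer:
    def __init__(self, capacity, full_margin, empty_margin):
        self.capacity = capacity
        self.full_margin = full_margin     # packets still acceptable after "full" is asserted
        self.empty_margin = empty_margin   # packets still deliverable after "empty" is asserted
        self.fifo = deque()

    def fullness_signal(self):
        # asserted early: reports "full" while full_margin slots remain free
        return len(self.fifo) >= self.capacity - self.full_margin

    def emptiness_signal(self):
        # asserted early: reports "empty" while empty_margin packets remain queued
        return len(self.fifo) <= self.empty_margin

    def put(self, packet, token_valid):
        if token_valid and len(self.fifo) < self.capacity:
            self.fifo.append(packet)       # only valid (token-marked) packets are stored

    def get(self):
        if self.fifo:
            return self.fifo.popleft(), True   # outgoing packet with an asserted validity token
        return None, False                     # nothing to send: outgoing token marks data invalid
```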
Fig. 4 shows the data flow through a preferred Data Processing Unit 12. The Data Processing Unit 12 receives an execution instruction, which describes an operation that the Data Processing Unit 12 is to perform and contains information about the data that the Data Processing Unit 12 has to process. The Data Processing Unit 12 keeps the record of the instruction in the Execution Instructions Records storage 34 for future reference, requests data to perform the data processing operation on, and keeps records of all outstanding data requests in the Data Requests Records storage 32.
Therefore instructions are received by the Instructions and Data Interface 14 of Processor 10 via the processor's external connections, passed to the Instructions Distributing Unit 18 via Instructions Path 16 and distributed to Data Processing Units 12 where each instruction is temporarily stored in the Execution Instructions Records storage 34. The instructions so stored cause the processor 10 to send a request for one or more data packets, with the address of the requesting Data Processing Unit 12, to an internal or external storage device (not shown), in which the requested data resides.
A record of the requested data packet is written to the Data Request Records storage 32. The aforementioned process is repeated as further instructions continue to be received by the processor 10.
Previously requested data, with the address of each particular data packet, is received by the Data Processing Unit 12 via the Data Path 20, together with an attached validity signal (data token) indicating the validity or non-validity of the incoming data; this allows the pieces of data (data packets) to be associated with the instructions that caused the data requests. Each incoming data packet is put into the Elastic Data Buffer 30 and the corresponding record of outstanding data requests is erased from the Data Request Records storage 32. The previously stored instruction inside the Execution Instructions Records storage 34 receives an indication that the corresponding data packet is now available for processing by the Data Processing Unit 12. As soon as the Computation Module 36 within the Data Processing Unit 12 becomes vacant, it takes one or more data packets from the Elastic Data Buffer 30 and the corresponding instructions from the Execution Instruction Records storage 34, processes the data packet or packets according to the corresponding instructions, and sends the result out.
The association of data packets from the Elastic Data Buffer 30 with instructions from the Execution Instruction Records storage 34 can be done either in the order the instructions are stored (if data packets arrive in the same order as the data requests were previously sent) or according to the address of the unit sending the data packet, such as a Data Processing Unit 12, the Instructions and Data Interface 14, or an external storage device (not shown). (Each unit sends data packets in the same order as data requests are received, although the order of data packets from different units may not be preserved.) In the preferred embodiment, after the Computation Module 36 starts processing the data, the corresponding instruction is erased from the Execution Instruction Records storage 34. Then the Computation Module 36 takes the next one or more data packets and the corresponding instruction from the Execution Instruction Records storage 34, processes the data, sends the result out, and so on.
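The sketch below models the two association modes just described, matching an incoming packet to an outstanding request either in issue order or by the address of the sending unit. It is a simplified illustration; the class and method names (RequestRecords, request, match) are assumptions for the example rather than terms from the specification.

from collections import deque

class RequestRecords:
    # Toy model of the Data Requests Records: outstanding requests are matched
    # to incoming packets either in the order they were issued or by sender address.
    def __init__(self, match_by_sender=False):
        self.match_by_sender = match_by_sender
        self.outstanding = deque()                 # entries of (sender_addr, instruction)

    def request(self, sender_addr, instruction):
        self.outstanding.append((sender_addr, instruction))

    def match(self, sender_addr):
        if self.match_by_sender:
            for i, (addr, instruction) in enumerate(self.outstanding):
                if addr == sender_addr:
                    del self.outstanding[i]        # erase the outstanding-request record
                    return instruction
            raise LookupError("no outstanding request for this sender")
        _, instruction = self.outstanding.popleft()   # in-order matching
        return instruction

records = RequestRecords(match_by_sender=True)
records.request(sender_addr=2, instruction="multiply")
records.request(sender_addr=5, instruction="add")
print(records.match(sender_addr=5))                  # -> add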
Each data packet has an associated data token attached to it or associated with it, which establishes the validity of the data packet and serves as a synchronization signal to trigger the processing of the data packet. Thus, synchronization of the parallel operation of the multiprocessor system is driven by the data tokens attached to or associated with the data packets.
Having the instructions distributed to Data Processing Units 12, and data requests sent out, in advance of the actual availability of a Data Processing Unit 12 to process the data, can help to balance the consecutive operation of the Instructions Distributing Unit 18 with parallel work of Data Processing Units 12.
Improvement in performance is obtained when the rate at which instructions are distributed by the Instructions Distributing Unit 18 exceeds the rate of data processing by the Data Processing Units 12. This result is achieved by taking advantage of the difference between the rate of instruction distribution and the rate of data processing, and by utilizing the time delays arising from the need to deliver data to the Data Processing Units 12.
When the rate of distribution of the instructions does not exceed the rate of data processing, the same improvement in performance may nevertheless be achieved by distributing more than one instruction at a time. By doing so, the effective rate of instruction distribution, measured per instruction, exceeds the rate at which the data for a single instruction is processed.
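As an illustrative example (the figures here are assumptions chosen for the example, not values from the specification), if the Instructions Distributing Unit 18 can issue one instruction per cycle while each Data Processing Unit 12 needs four cycles to process the data for one instruction, issuing instructions singly is enough to keep four units busy; if each unit instead needed only one cycle per instruction, issuing four instructions at a time would restore the same four-to-one ratio and keep the units from idling.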
Figure 5 illustrates the operation of an Elastic Data Buffer 30. The buffer 30, for example a FIFO buffer, has an input port 30a for receiving data and an output port 30b for sending data out, and several control signals: an incoming signal indicating the validity of incoming data (data token), an outgoing signal indicating the buffer's fullness, an outgoing signal indicating the validity of outgoing data (data token), and an outgoing signal indicating the buffer's emptiness. Asserting the buffer's fullness signal in advance of its actual filling allows data packets which are in transit between the previous unit sending the data (such as a Data Processing Unit 12, the Instructions and Data Interface 14, or an external storage device (not shown)) and the buffer 30 receiving it to be absorbed by the receiving buffer 30. Asserting the buffer's emptiness signal in advance of its actual depletion allows the buffer 30 to ask for subsequent data packets, required for the execution of instructions stored in the Execution Instruction Records storage 34, before the buffer 30 becomes empty.
The incoming data validity signal (data token) thus provides data-driven synchronization by the elastic buffer 30 for an incoming data packet, while the outgoing data validity signal (data token) provides data-driven synchronization for each data packet transmitted out of the elastic buffer 30 to another module.
The timing of the buffer's fullness and emptiness signals can be programmed, which facilitates the management of data behavior inside a particular multiprocessor system according to the specific target application.
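The following sketch models such an elastic FIFO with programmable margins: the fullness and emptiness signals are asserted a configurable number of entries before the buffer is physically full or empty. It is a behavioural illustration under assumed names (ElasticBuffer, full_margin, empty_margin), not an implementation from the specification.

from collections import deque

class ElasticBuffer:
    def __init__(self, capacity, full_margin, empty_margin):
        self.capacity = capacity
        self.full_margin = full_margin      # packets still absorbable after 'full' asserts
        self.empty_margin = empty_margin    # packets still sendable after 'empty' asserts
        self.fifo = deque()

    @property
    def full(self):
        # asserted before the buffer is physically full
        return len(self.fifo) >= self.capacity - self.full_margin

    @property
    def empty(self):
        # asserted before the buffer is physically empty
        return len(self.fifo) <= self.empty_margin

    def push(self, packet, token_valid=True):
        if not token_valid:
            return                          # packets with an invalid token are not stored
        if len(self.fifo) >= self.capacity:
            raise OverflowError("in-transit packet could not be absorbed")
        self.fifo.append(packet)

    def pop(self):
        return self.fifo.popleft(), True    # outgoing packet plus its outgoing data token

buf = ElasticBuffer(capacity=8, full_margin=2, empty_margin=1)
buf.push("packet-0")
print(buf.full, buf.empty)                  # early status signals: False True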
In one embodiment, shown in Figures 6 to 9, the invention is implemented in the organization of local data storage (data cache) for improving the effectiveness of both data processing systems and data processing units.
Figure 6 illustrates an example of a data processing system (or Processor) 10 of the invention for use in the organization of local data storage, containing multiple Data Processing Units 12 and a local data storage 40 (Level 2 Data Cache).
Each such processor 10 may be connected to other processors 10, external storage memory (not shown), or other devices such as a monitor, keyboard, etc. (not shown). The Processor 10 has an Instructions and Data Interface 14, through which it receives data to be processed and instructions on how to process the data. Processor 10 may also receive several control signals (not shown). Instructions are sent through the Instructions Path 16 to the Instructions Distributing Unit 18, where they are processed and sent to the particular Data Processing Units 12. Data packets may go through a distinct Data Path 20 from the Instructions and Data Interface 14 to the Data Cache 40 and/or the Data Processing Units 12. After the data is processed it can be sent to other Data Processing Units 12, to the Data Cache 40, and/or to the Instructions and Data Interface 14 to be sent out.
The Processor 10 may also send out a plurality of control signals (not shown).
The system organization shown in Figure 6 reveals a discrepancy between the consecutive work of the Instructions Distributing Unit 18 and the parallel work of the multiple Data Processing Units 12. This discrepancy will be addressed after the description of the organization of the Data Processing Units 12.
To process data, each Data Processing Unit 12 has to receive at least one instruction describing where it must find and retrieve the data from, what it is required to do with the data, and where to send the result. The need to deliver data to be processed can significantly delay the start of actual data processing by the Data Processing Unit 12, especially when data has to be fetched, for example, from an external data storage (not shown).
In order to avoid such delays decreasing the data processing performance, in this embodiment of the invention a local data storage (data cache) is provided from which data can be fetched (in the case of a cache hit). Further, the data request (in the case of a cache miss) is sent by the Data Processing Units 12 far in advance of the actual moment when data has to be available for the Data Processing Unit 12 to start processing.
Since the number of outstanding data requests may vary, each Data Processing Unit 12 has to keep a record of such data requests in a local storage memory.
When a piece of earlier requested data is received by the particular Data Processing Unit 12 which requested it, the particular Data Processing Unit 12 erases the corresponding record of the outstanding data request.
To synchronize the start of data processing by the Data Processing Unit 12 with the arrival of the requested data, the data can have a special data token or signal attached to it indicating its validity or non-validity. This data token or signal serves as a trigger, which activates the start of data processing.
Figures 7 and 8 illustrate top-level block diagrams of a Data Processing Unit 12, showing the data flow through the Data Processing Unit 12. The Data Processing Unit 12 receives an execution instruction, which in particular describes an operation the Data Processing Unit 12 has to perform and contains information about the data that the Data Processing Unit 12 has to process. The Data Processing Unit 12 keeps a record of the instruction and a record of the data request in the Execution Instructions Records 34 and the Data Requests Records 32 for future reference. The Data Processing Unit 12 checks data requests against its internal data storage (Level 1 Data Cache) 42 and indicates whether a data packet is already available in the data cache 42.
Requests for cache misses are sent out. The Data Processing Unit 12 keeps a record of all outstanding data requests. When a previously requested data packet is received by the Data Processing Unit 12 it has an attached data token or signal showing the validity of the incoming data. The data packet is put into the internal Data Cache 42 and the corresponding record of the outstanding data request is erased, indicating that the data packet is available for processing. Also, the instruction inside the Execution Instructions Records 34 receives an indication that the corresponding data packets are available for processing by the Data Processing Unit 12. Later, when the Computation Module 36 located inside the Data Processing Unit 12 becomes vacant, it takes the data packet from the Data Cache 42 and the corresponding instruction from the Execution Instruction Records 34, processes the data packet, and sends the result out.
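A compact sketch of this non-blocking Level 1 cache behaviour inside a Data Processing Unit is given below: a miss is recorded and requested upstream, and the record is erased when the packet arrives with a valid data token. The class and method names (Level1Cache, request, fill) are assumptions for illustration, not terminology from the specification.

class Level1Cache:
    def __init__(self):
        self.lines = {}                 # address -> data packet
        self.outstanding = set()        # addresses of cache misses still in flight

    def request(self, address):
        if address in self.lines:
            return "hit", self.lines[address]
        self.outstanding.add(address)   # record the miss; the request is sent upstream
        return "miss", None

    def fill(self, address, packet, token_valid):
        if not token_valid:
            return False
        self.lines[address] = packet            # data now available for processing
        self.outstanding.discard(address)       # erase the outstanding-request record
        return True

cache = Level1Cache()
print(cache.request(0x40))              # ('miss', None): request goes out in advance
cache.fill(0x40, packet="pixels", token_valid=True)
print(cache.request(0x40))              # ('hit', 'pixels'): data ready when needed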
Then the Computation Module 36 takes the next piece of data and the corresponding instruction, processes the data, sends the result out, and so on. To facilitate a smooth flow of data through the Data Processing Unit 12, a small buffer 44 may be placed between the Data Cache 42 and the Computation Module 36.
Data packets produced by the Computation Module 36 are either stored in the local data storage (Read-Write Level 1 Data Cache) 42 inside the Data Processing Unit 12, modifying previously stored data (as shown in Figure 8), or sent out without being stored inside the Read-Only Level 1 Data Cache 42 of the Data Processing Unit 12 (as shown in Figure 7).

Figure 9 illustrates a non-blocking Level 2 Data Cache 40 in a Digital Data Processor 10. The Data Cache 40 receives data requests from individual Data Processing Units 12. The Processor 10 keeps a record of incoming requests in the Data Requests Records 32 for future reference and checks the data requests against its internal data storage (Level 2 Data Cache) 40, sending out to the external data storage (not shown) requests for data cache misses. The Processor 10 indicates in the Data Requests Records 32 which data packets are already available in the internal data storage 40 (cache hits) and which data packets are missing (cache misses).
When a previously requested data packet is received by the Data Processor 10 it has an attached data token or signal showing the validity of the incoming data. The data packet is put into the Level 2 Data Cache 40 and the corresponding record of the outstanding data request is erased. Hence, the previously stored incoming data request in the Data Requests Records 32 receives an indication that the particular data packet is available for distribution to the corresponding Data Processing Units 12. Later, when the Data Processing Unit 12 intended to receive the data becomes available, the Cache Controller 46 sends the data packet, with its attached data token or signal, from the Level 2 Data Cache 40 to that Data Processing Unit 12. Then the Cache Controller 46 of the Level 2 Data Cache distributes the next data packet to the corresponding Data Processing Unit 12, and so on. To facilitate the smooth flow of data from the Level 2 Data Cache 40 to the Data Processing Units 12, a small buffer 50 may be placed after the Level 2 Data Cache 40 inside the Digital Data Processor 10. Data packets produced by each Data Processing Unit 12 are either stored in the local data storage (Read-Write Level 2 Data Cache) 40 inside the Data Processor 10, modifying previously stored data, or distributed without being stored inside the Data Processor's Read-Only Level 2 Data Cache 40.
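The sketch below models this Level 2 cache controller behaviour: requests from the processing units are recorded, misses are forwarded to external storage, and a packet is distributed (with its data token) only once it is in the cache and the target unit is ready. Names such as Level2CacheController, external_fill and distribute are assumptions made for the example, not from the specification.

from collections import deque

class Level2CacheController:
    def __init__(self):
        self.cache = {}                 # address -> data packet
        self.pending = deque()          # recorded requests: (requesting_unit, address)
        self.miss_requests = []         # addresses forwarded early to external storage

    def request(self, unit_id, address):
        self.pending.append((unit_id, address))
        if address not in self.cache:
            self.miss_requests.append(address)   # cache miss requested in advance

    def external_fill(self, address, packet, token_valid):
        if token_valid:
            self.cache[address] = packet         # missed data arrives with a valid token

    def distribute(self, unit_ready):
        # Send one pending packet to its target unit if the unit is ready
        # and the data is present in the cache.
        for i, (unit_id, address) in enumerate(self.pending):
            if unit_ready(unit_id) and address in self.cache:
                del self.pending[i]
                return unit_id, self.cache[address], True   # packet plus data token
        return None

ctrl = Level2CacheController()
ctrl.request(unit_id=3, address=0x100)               # miss recorded and forwarded
ctrl.external_fill(0x100, packet="block", token_valid=True)
print(ctrl.distribute(unit_ready=lambda u: True))    # -> (3, 'block', True)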
Having the instructions distributed to the Data Processing Units 12, and data requests (cache misses) sent out, in advance of the actual availability of the Data Processing Units 12 to process the data, can help to balance the consecutive work of the Instructions Distributing Unit 18 with the parallel work of the Data Processing Units 12. Improvement in performance is obtained when the rate at which instructions are distributed by the Instructions Distributing Unit 18 exceeds the rate of data processing by the Data Processing Units 12. This result is achieved by taking advantage of the difference between the rate of instruction distribution and the rate of data processing, and by utilizing the time delays arising from the need to deliver data to the Data Processing Units 12. When the rate of distribution of the instructions does not exceed the rate of data processing, the same improvement in performance may nevertheless be achieved by distributing more than one instruction at a time.
By doing so, the effective rate of instruction distribution, measured per instruction, exceeds the rate at which the data for a single instruction is processed.
The use of a non-blocking data cache makes these improvements in performance even greater, since the Data Processor 10 can (a) use the time periods during which the Data Processing Units 12 process the data packets corresponding to multiple consecutive cache hits to deliver to the cache the data packets corresponding to cache misses that were requested in advance, and (b) reduce the average time required to deliver a data packet to the cache, by combining data packets into batches to better utilize bus capacities.
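As an illustrative example (the cycle counts are assumptions for the example only), if an external memory access for a cache miss takes 20 cycles while each cache hit is consumed in 2 cycles, a miss requested at least 10 hits in advance arrives before the Data Processing Unit 12 needs it, and combining several outstanding miss requests into a single burst amortizes the bus overhead over all of the returned packets.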
Similarly, having the data requests from the Data Processing Units 12 checked against the internal data storage (Level 2 Data Cache) 40 of the Data Processor 10, and the requests for missing data (cache misses) sent out to the external data storage in advance of the actual need for the data packets by the Data Processing Units 12, can help to balance the consecutive work of the Level 2 Cache Controller 46 with the parallel work of the multiple Data Processing Units 12. Improvement in performance here is obtained by taking advantage of the time periods when the Data Processing Units 12 process the data packets corresponding to consecutive Level 1 cache 42 hits, for delivering to the multiple Data Processing Units 12 the data packets corresponding to Level 1 cache 42 misses.
The use of a next-level non-blocking Data Cache also advances the performance of the Data Processor 10, since (a) it extends the time periods corresponding to consecutive cache hits by matching most cache misses in the Level 1 Data Cache 42 with cache hits in the Level 2 Data Cache 40, and (b) it reduces the average time required to deliver a data packet to the Level 1 Data Cache 42 and eventually to the Data Processing Unit 12, by better utilizing the capacities of the buses between the external data storage (not shown) and a Data Processor 10, and between the Level 2 Data Cache 40 and the multiple Data Processing Units 12.

A preferred embodiment of the present invention has been shown and described by way of example only. It will be apparent to those skilled in the art that changes and modifications may be made without departing from the scope of the invention, as set out in the appended claims.

Claims (59)

WE CLAIM:
1. A method for data-driven synchronous parallel processing of a stream of data packets by multiple data processing units working in parallel, comprising the steps of:
a. distributing at least one instruction for data processing to one data processing unit of the multiple data processing units, before the data processing unit is available to process the instruction;
b. storing the instruction in an execution instructions memory;
c. sending from the one data processing unit a data request for at least one data packet corresponding to the instruction, required to execute the instruction;
d. storing a record of the at least one data packet requested;
e. associating with the at least one data packet an address of the one data processing unit;
f. associating with each data packet sent out a data token showing the readiness of the packet for further processing;
g. when the at least one data packet is received by the processing unit, associating the data packet with the corresponding instruction and distributing the data packet to the one data processing unit; and h. processing the data according to the corresponding instruction.
2. The method of claim 1 wherein instructions are distributed to the multiple data processing units consecutively.
3. The method of claim 1 wherein instructions are distributed to the multiple data processing units concurrently.
4. The method of claim 1 including, after step f., the step of putting the requested data packets into an internal data buffer in a data processing unit.
5. The method of claim 1 including, after step g., the step of erasing the record of the data request corresponding to the data packet.
6. The method of claim 1 including, during step g., the step of sending to the corresponding instruction in the execution instructions memory an indication that the at least one data packet has been received by the processing unit and is available for processing.
7. The method of claim 1 including, during step e., the step of associating with the data packet an address of its sender and, during step g., associating the data packet with the corresponding instruction according to the address of the data packet sender.
8. The method of claim 1 including, during step g., associating the data packet with the corresponding instruction according to the order in which the data packet is received.
9. The method of claim 4 including the step of retrieving each data packet from the internal data buffer to be processed according to the corresponding instruction.
10. The method of claim 1 wherein an output of the processing step is sent to another data processing unit or out of the processor, or both.
11. The method of claim 1 wherein processing occurs in real-time.
12. A method of providing a substantially non-stalling sequential flow of data packets through a digital data-driven processor, the digital processor storing at least one instruction for processing data packets in accordance with the instruction, comprising the steps of:
a. providing a buffer between adjacent units processing, distributing or otherwise handling the data;
b. providing a fullness signal indicating a fullness state of the buffer from the data buffer to a previous adjacent unit, before the buffer is full;
c. providing an emptiness signal indicating an emptiness state of the buffer from the data buffer to a next adjacent unit, before the buffer is empty;
d. providing an incoming data validity signal for synchronization of data handling by the buffer with the arrival of a data packet to the buffer; and e. providing an outgoing data validity signal for synchronization of data handling by a unit next after the buffer with an outgoing data packet from the buffer, wherein assertion of the fullness signal in advance of filling the buffer allows the buffer to absorb data packets in transit between a previous unit sending the data and the buffer receiving the data, and assertion of the emptiness signal in advance of depleting the buffer allows for the processor to request new data packets before the buffer becomes empty.
13. The method of claim 12 wherein the validity signal comprises a data token.
14. The method of claim 13 in a processor having a plurality of processing units, including the step of programming a timing of assertion of the fullness signal and of the emptiness signal to allow for management of synchronous data flow to the processing units.
15. An apparatus for substantially non-stalling synchronous data packet flow through a digital data-driven processor, each data packet being associated with an address of a processing unit containing an instruction for processing the data packet, comprising a data buffer for temporary storage of the data packets, the buffer comprising an input port for receiving incoming data packets and their associated addresses;
an output port for sending out outgoing data and their associated addresses;
an input port for receiving an incoming validity signal;
an output port for sending an outgoing validity signal;
an outgoing fullness signal indicating a fullness of the buffer, adapted to be asserted in advance of the filling of the buffer;
an outgoing emptiness signal indicating an emptiness of the buffer, adapted to be asserted in advance of the depletion of the buffer; and control logic for regulating a timing of assertion of the fullness and the emptiness signals in a multi-processing system.
16. The apparatus of claim 15 wherein the validity signal comprises a data token.
17. The apparatus of claim 16 wherein the buffer comprises a FIFO buffer.
18. An apparatus for substantially non-stalling data-driven synchronous parallel processing of data packets including a digital data processor, further comprising:
an interface for receiving instructions and digital data from at least one external device and sending instructions or digital data or both to at least one external device;
an instruction path contained inside the processor;
a data path contained inside the processor;
a plurality of data processing units organized for parallel processing of the data; and a distributing unit organized for distributing one or more instructions at a time to the data processing units.
19. The apparatus of claim 18 wherein instructions are distributed to the plurality of data processing units consecutively.
20. The apparatus of claim 18 wherein instructions are distributed to the plurality of data processing units concurrently.
21. The apparatus of claim 18 wherein each data processing unit comprises a storage for instructions;
a storage for records of outstanding data requests;
a storage for receiving requested data packets; and a computation module for processing the requested data packets in accordance with at least one associated instruction.
22. The apparatus of claim 21 comprising control logic for controlling instruction and data flows through the processor.
23. The apparatus of claim 18 wherein the digital data processor comprises a general-purpose microprocessor.
24. The apparatus of claim 18 wherein the digital data processor comprises a graphics processor.
25. The apparatus of claim 18 wherein the digital data processor comprises a digital signal processor.
26. The apparatus of claim 21 wherein the computational module operates using vector values.
27. The apparatus of claim 21 wherein the computational module operates using scalar values.
28. A method for data-driven synchronous parallel processing of a stream of data packets by multiple data processing units working in parallel, comprising the steps of:
a. distributing at least one instruction for data processing to one data processing unit of the multiple data processing units, before the data processing unit is available to process the instruction;
b. storing the instruction in an execution instructions memory;
c. sending from the one data processing unit a data request for at least one data packet corresponding to the instruction, required to execute the instruction;
d. storing a record of the at least one data packet requested;
e. associating with the at least one data packet an address of the one data processing unit;
f. associating with the at least one data packet a data token indicating a readiness of the data packet for further processing and comprising data for associating the at least one data packet with the corresponding instruction at the one data processing unit;

g. when the at least one data packet is received by the processing unit, associating the data packet with the corresponding instruction and distributing the data packet to the one data processing unit; and h. processing the data according to the corresponding instruction.
29. A method for data-driven synchronous processing of a stream of data packets by multiple data processing units working in parallel and using at least one data cache, comprising the steps of:
a. distributing at least one instruction for data processing to one data processing unit of the multiple data processing units, before the one data processing unit is available to process the instruction;
b. storing the instruction in an execution instructions memory;
c. checking a data request against data stored in a data cache of the one data processing unit;
d. sending a data request from the one data processing unit for at least one data packet corresponding to the instruction and required to execute the instruction, but missing from the data cache;
e. storing inside the one data processing unit a record of the at least one outstanding request for the data packet;
f. associating with the data packet an address of its target data processing unit;
g. associating with each data packet sent in response to a data request, a data token or signal indicating the readiness of the data packet for further processing;
h. when the at least one data packet is received by the one data processing unit, putting the received data packet into the data cache in the one data processing unit;
i. associating the data packet with the corresponding instruction and retrieving the data packet from the data cache, to be processed according to the corresponding instruction; and j. processing the data according to the corresponding instruction.
30. The method of claim 29 wherein instructions are distributed to the multiple data processing units consecutively.
31. The method of claim 29 wherein instructions are distributed to the multiple data processing units concurrently.
32. The method of claim 29 including, after step d., the step of checking a data request against data stored in a next-higher level data cache of the digital data processor.
33. The method of claim 29 including, after step h., the step of erasing the outstanding record of the data request corresponding to the received data packet.
34. The method of claim 29 including, during step i., the step of sending to the corresponding instruction in the execution instructions memory an indication that the at least one data packet has been received by the processing unit and is available for processing.
35. The method of claim 29 including, during step f., the step of associating with the data packet an address of its sender and, during step i, associating the data packet with the corresponding instruction according to the address of the sender of the data packet.
36. The method of claim 29 including, during step i, associating the data packet with the corresponding instruction according to a sequential order in which the data packet is received.
37. The method of claim 29 including, after step i., the step of temporarily putting the data packet into a data buffer to facilitate a smooth flow of data through the data processing unit.
38. The method of claim 29 wherein an output of the processing step is sent to another data processing unit or out of the data processor, or both.
39. The method of claim 29 wherein an output of the processing step is stored in a local data storage in a data processing unit, or sent to another data processing unit, or sent out of the processor, or any combination thereof.
40. The method of claim 29 wherein processing occurs in real-time.
41. The method of claim 32 comprising the steps of:
k. storing data requests from all data processing units in a data requests memory;
l. checking a data request against data stored in the next-higher level data cache of the digital data processor;
m. sending to an external data storage a data request from the data processor for at least one data packet missing in the next-higher level data cache;
n. storing inside the digital data processor a record of the request for the missed data packet;
o. associating with the data packet an address of its target data processing unit;
p. associating with each data packet sent in response to a data request a data token or signal showing the readiness of the data packet for further processing;
q. when the at least one data packet is received by the digital data processor from the external data storage, putting the received data packets into a data cache in a digital data processor;
r. associating the data packet with the corresponding data request from a data processing unit in a data requests memory and sending to the corresponding data request in the data requests memory an indication that the at least one data packet has been received by the digital data processor and is available for distribution;
and s. retrieving the data packet from the data cache according to the corresponding data request in the data requests memory and sending the data to the corresponding data processing unit.
42. The method of claim 41 including, after step q., the step of erasing the record of the data request corresponding to the data packet received from the external data storage.
43. The method of claim 41 including, during step o., the step of associating with the data packet an address of its sender and, during step r., associating the data packet with the data request in the data requests memory according to the address of the sender of data packet.
44. The method of claim 41 including, during the step r., associating the data packet with the data request in a data requests memory according to a sequential order in which the data packet is received.
45. The method of claim 41 wherein an output of at least one data processing unit is stored in the data cache in the data processor.
46. An apparatus for the data-driven synchronous processing of a stream of data packets by multiple data processing units working in parallel, comprising at least one data cache, and further comprising:
an interface for receiving instructions and digital data from at least one external device and sending instructions or digital data, or both, to at least one external device;
an instruction path contained inside the processor;
a data path contained inside the processor;
a plurality of data processing units organized for parallel processing of the data;
a distributing unit organized for distributing one or more instructions at a time to the data processing units; and at least one data cache for storing the data packets, associated with a cache controller.
47. The apparatus of claim 46 wherein instructions are distributed to the plurality of data processing units consecutively.
48. The apparatus of claim 46 wherein instructions are distributed to the plurality of data processing units concurrently.
49. The apparatus of claim 46 wherein each data processing unit comprises at least one storage for storing instructions and data requests;
a logic unit for indicating cache hits and misses in the data requests records; and a computation module for processing requested data packets in accordance with at least one associated instruction.
50. The apparatus of claim 49 comprising control logic for controlling instruction and data flows through the processor.
51. The apparatus of claim 49 comprising a data buffer located between the data cache and the computation module.
52. The apparatus of claim 49 comprising a means to store a processed data packet into the data cache.
53. The apparatus of claim 46 wherein the data cache further comprises a storage for data requests; and a logic indicating cache hits and misses in the data requests storage.
54. The apparatus of claim 53 comprising a data buffer downstream of a data cache.
55. The apparatus of claim 46 wherein the digital data processor comprises a general-purpose microprocessor.
56. The apparatus of claim 46 wherein the digital data processor comprises a graphics processor.
57. The apparatus of claim 46 wherein the digital data processor comprises a digital signal processor.
58. The apparatus of claim 49 wherein the computational module operates using vector values.
59. The apparatus of claim 49 wherein the computational module operates using scalar values.
CA002464506A 2001-10-31 2002-10-30 Method and apparatus for the data-driven synchronous parallel processing of digital data Abandoned CA2464506A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA002464506A CA2464506A1 (en) 2001-10-31 2002-10-30 Method and apparatus for the data-driven synchronous parallel processing of digital data

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CA002360712A CA2360712A1 (en) 2001-10-31 2001-10-31 Method and apparatus for the data-driven synchronous parallel processing of digital data
CA2,360,712 2001-10-31
PCT/CA2002/001636 WO2003038602A2 (en) 2001-10-31 2002-10-30 Method and apparatus for the data-driven synchronous parallel processing of digital data
CA002464506A CA2464506A1 (en) 2001-10-31 2002-10-30 Method and apparatus for the data-driven synchronous parallel processing of digital data

Publications (1)

Publication Number Publication Date
CA2464506A1 true CA2464506A1 (en) 2003-05-08

Family

ID=32597865

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002464506A Abandoned CA2464506A1 (en) 2001-10-31 2002-10-30 Method and apparatus for the data-driven synchronous parallel processing of digital data

Country Status (1)

Country Link
CA (1) CA2464506A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113830009A (en) * 2020-06-08 2021-12-24 北京新能源汽车股份有限公司 Signal transmission method and device and electric automobile
CN113830009B (en) * 2020-06-08 2024-03-08 北京新能源汽车股份有限公司 Signal transmission method and device and electric automobile


Legal Events

Date Code Title Description
FZDE Dead