EP1559005A2 - Computing machine having improved computing architecture and related system and method - Google Patents

Computing machine having improved computing architecture and related system and method

Info

Publication number
EP1559005A2
Authority
EP
European Patent Office
Prior art keywords
data
buffer
control
under
configuration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP03781554A
Other languages
English (en)
French (fr)
Inventor
Chandan Mathur
Scott Hellenbach
John W. Rapp
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lockheed Martin Corp
Original Assignee
Lockheed Corp
Lockheed Martin Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 10/683,929 (published as US20040136241A1)
Application filed by Lockheed Corp, Lockheed Martin Corp filed Critical Lockheed Corp
Publication of EP1559005A2
Current legal status: Withdrawn


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3877Concurrent instruction execution, e.g. pipeline or look ahead using a slave processor, e.g. coprocessor
    • G06F9/3879Concurrent instruction execution, e.g. pipeline or look ahead using a slave processor, e.g. coprocessor for non-native instruction execution, e.g. executing a command; for Java instruction set
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline or look ahead
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/76Architectures of general purpose stored program computers
    • G06F15/78Architectures of general purpose stored program computers comprising a single central processing unit

Definitions

  • a common computing architecture for processing relatively large amounts of data in a relatively short period of time includes multiple interconnected processors that share the processing burden. By sharing the processing burden, these multiple processors can often process the data more quickly than a single processor can for a given clock frequency. For example, each of the processors can process a respective portion of the data or execute a respective portion of a processing algorithm.
  • FIG. 1 is a schematic block diagram of a conventional computing machine 10 having a multi-processor architecture.
  • the machine 10 includes a master processor 12 and coprocessors 14 1 - 14 n, which communicate with each other and the master processor via a bus 16, an input port 18 for receiving raw data from a remote device (not shown in FIG. 1), and an output port 20 for providing processed data to the remote source.
  • the machine 10 also includes a memory 22 for the master processor 12, respective memories 24 1 - 24 n for the coprocessors 14 1 - 14 n, and a memory 26 that the master processor and coprocessors share via the bus 16.
  • the memory 22 serves as both a program and a working memory for the master processor 12, and each memory 24 1 - 24 n serves as both a program and a working memory for a respective coprocessor 14 1 - 14 n.
  • the shared memory 26 allows the master processor 12 and the coprocessors 14 to transfer data among themselves, and from/to the remote device via the ports 18 and 20, respectively.
  • the master processor 12 and the coprocessors 14 also receive a common clock signal that controls the speed at which the machine 10 processes the raw data.
  • the computing machine 10 effectively divides the processing of raw data among the master processor 12 and the coprocessors 14.
  • the remote source, such as a sonar array, loads the raw data via the port 18 into a section of the shared memory 26, which acts as a first-in-first-out (FIFO) buffer (not shown) for the raw data.
  • the master processor 12 retrieves the raw data from the memory 26 via the bus 16, and then the master processor and the coprocessors 14 process the raw data, transferring data among themselves as necessary via the bus 16.
  • the master processor 12 loads the processed data into another FIFO buffer (not shown) defined in the shared memory 26, and the remote source retrieves the processed data from this FIFO via the port 20.
  • the computing machine 10 processes the raw data by sequentially performing n + 1 respective operations on the raw data, where these operations together compose a processing algorithm such as a Fast Fourier Transform (FFT). More specifically, the machine 10 forms a data-processing pipeline from the master processor 12 and the coprocessors 14. For a given frequency of the clock signal, such a pipeline often allows the machine 10 to process the raw data faster than a machine having only a single processor.
  • After retrieving the raw data from the raw-data FIFO (not shown) in the memory 26, the master processor 12 performs a first operation, such as a trigonometric function, on the raw data. This operation yields a first result, which the processor 12 stores in a first-result FIFO (not shown) defined within the memory 26.
  • the processor 12 executes a program stored in the memory 22, and performs the above-described actions under the control of the program.
  • the processor 12 may also use the memory 22 as working memory to temporarily store data that the processor generates at intermediate intervals of the first operation.
  • the coprocessor 14 1 performs a second operation, such as a logarithmic function, on the first result. This second operation yields a second result, which the coprocessor 14 1 stores in a second-result FIFO (not shown) defined within the memory 26.
  • the coprocessor 14 1 executes a program stored in the memory 24 1, and performs the above-described actions under the control of the program.
  • the coprocessor 14 1 may also use the memory 24 1 as working memory to temporarily store data that the coprocessor generates at intermediate intervals of the second operation.
  • the coprocessors 14 2 - 14 n sequentially perform third through n th operations on the second through (n-1) th results in a manner similar to that discussed above for the coprocessor 14 1.
  • the n th operation, which is performed by the coprocessor 14 n, yields the final result, i.e., the processed data.
  • the coprocessor 14 n loads the processed data into a processed-data FIFO (not shown) defined within the memory 26, and the remote device (not shown in FIG. 1) retrieves the processed data from this FIFO.
  • the computing machine 10 is often able to process the raw data faster than a computing machine having a single processor that sequentially performs the different operations.
  • the single processor cannot retrieve a new set of the raw data until it performs all n + 1 operations on the previous set of raw data.
  • the master processor 12 can retrieve a new set of raw data after performing only the first operation. Consequently, for a given clock frequency, this pipeline technique can increase the speed at which the machine 10 processes the raw data by a factor of approximately n + 1 as compared to a single-processor machine (not shown in FIG. 1).
  • Alternatively, the computing machine 10 may process the raw data in parallel by simultaneously performing n + 1 instances of a processing algorithm, such as an FFT, on the raw data. That is, if the algorithm includes n + 1 sequential operations as described above in the previous example, then each of the master processor 12 and the coprocessors 14 sequentially performs all n + 1 operations on a respective set of the raw data. Consequently, for a given clock frequency, this parallel-processing technique, like the above-described pipeline technique, can increase the speed at which the machine 10 processes the raw data by a factor of approximately n + 1 as compared to a single-processor machine (not shown in FIG. 1).
  • Although the computing machine 10 can process data more quickly than a single-processor computing machine (not shown in FIG. 1), the data-processing speed of the machine 10 is often significantly less than the frequency of the processor clock. Specifically, the data-processing speed of the computing machine 10 is limited by the time that the master processor 12 and coprocessors 14 require to process data. For brevity, an example of this speed limitation is discussed in conjunction with the master processor 12, although it is understood that this discussion also applies to the coprocessors 14. As discussed above, the master processor 12 executes a program that controls the processor to manipulate data in a desired manner. This program includes a sequence of instructions that the processor 12 executes.
  • the processor 12 typically requires multiple clock cycles to execute a single instruction, and often must execute multiple instructions to process a single value of data. For example, suppose that the processor 12 is to multiply a first data value A (not shown) by a second data value B (not shown). During a first clock cycle, the processor 12 retrieves a multiply instruction from the memory 22. During second and third clock cycles, the processor 12 respectively retrieves A and B from the memory 26. During a fourth clock cycle, the processor 12 multiplies A and B, and, during a fifth clock cycle, stores the resulting product in the memory 22 or 26 or provides the resulting product to the remote device (not shown). This is a best-case scenario, because in many cases the processor 12 requires additional clock cycles for overhead tasks such as initializing and closing counters.
  • Gops Gigaoperations/second
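  • To make the cycle arithmetic above concrete, the following sketch tallies the five-cycle multiply sequence and derives the resulting throughput; the 1 GHz clock is an assumed figure for illustration only, not a value taken from the patent.

```cpp
#include <cstdio>

int main() {
    // Hypothetical best-case cycle budget for one multiply, per the example:
    const int fetchInstruction = 1;  // cycle 1: retrieve multiply instruction from memory 22
    const int fetchA           = 1;  // cycle 2: retrieve data value A from memory 26
    const int fetchB           = 1;  // cycle 3: retrieve data value B from memory 26
    const int multiply         = 1;  // cycle 4: compute A * B
    const int store            = 1;  // cycle 5: store the product (or send it out)

    const int cyclesPerResult =
        fetchInstruction + fetchA + fetchB + multiply + store;
    const double clockGHz = 1.0;     // assumed clock frequency

    // One result every 5 cycles -> 0.2 results per cycle -> 0.2 Gops at 1 GHz.
    std::printf("cycles per result: %d\n", cyclesPerResult);
    std::printf("throughput at %.1f GHz: %.1f Gops\n",
                clockGHz, clockGHz / cyclesPerResult);
    return 0;
}
```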
  • FIG. 2 is a block diagram of a hardwired data pipeline 30 that can typically process data faster than a processor can for a given clock frequency, and often at substantially the same rate at which the pipeline is clocked.
  • the pipeline 30 includes operator circuits 32 ⁇ - 32 n that each perform a respective operation on respective data without executing program instructions. That is, the desired operation is "burned in" to a circuit 32 such that it implements the operation automatically, without the need of program instructions.
  • the pipeline 30 can typically perform more operations per second than a processor can for a given clock frequency.
  • the pipeline 30 can often solve the following equation faster than a processor can for a given clock frequency:
  • Y(x k ) = (5x k + 3) · 2^(x k)
  • x k represents a sequence of raw data values.
  • the operator circuit 32 1 is a multiplier that calculates 5x k
  • the circuit 32 2 is an adder that calculates 5x k + 3, and the circuit 32 3 is a multiplier that calculates (5x k + 3) · 2^(x k)
  • the circuit 32 1 receives data value x 1 and multiplies it by 5 to generate 5x 1.
  • the pipeline 30 continues processing subsequent raw data values x k in this manner until all the raw data values are processed. Consequently, after a delay of two clock cycles from receiving a raw data value x 1 — this delay is often called the latency of the pipeline 30 — the pipeline generates the result (5x 1 + 3) · 2^(x 1), and thereafter generates one result — e.g., (5x 2 + 3) · 2^(x 2), (5x 3 + 3) · 2^(x 3), ..., (5x n + 3) · 2^(x n) — each clock cycle.
  • the pipeline 30 thus has a data-processing speed equal to the clock speed.
  • Thus, if the master processor 12 and coprocessors 14 (FIG. 1) have data-processing speeds that are 0.4 times the clock speed as in the above example, the pipeline 30 can process data 2.5 times faster than the computing machine 10 (FIG. 1) for a given clock speed.
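  • The following sketch simulates the three operator circuits on a short input sequence. The Token type and the register modeling are illustrative assumptions, but the behavior matches the text: the first result emerges two clocks after the first raw value enters, and one result emerges per clock thereafter.

```cpp
#include <cmath>
#include <cstdio>
#include <optional>
#include <vector>

// A token flowing down the pipe: the original input x and the value so far.
struct Token { double x; double value; };

int main() {
    const std::vector<double> raw = {1, 2, 3, 4, 5};  // raw data values x_k
    std::optional<Token> reg1, reg2;                  // pipeline registers
    std::size_t next = 0;

    for (int clock = 1; next < raw.size() || reg1 || reg2; ++clock) {
        // Stage 3 (circuit 32 3): multiply (5x + 3) by 2^x and emit a result.
        if (reg2) {
            double y = reg2->value * std::pow(2.0, reg2->x);
            std::printf("clock %d: Y(%g) = %g\n", clock, reg2->x, y);
        }
        // Stage 2 (circuit 32 2): add 3 to the 5x produced by stage 1.
        if (reg1) reg2 = Token{reg1->x, reg1->value + 3.0};
        else      reg2.reset();
        // Stage 1 (circuit 32 1): multiply the incoming raw value by 5.
        if (next < raw.size()) {
            reg1 = Token{raw[next], raw[next] * 5.0};
            ++next;
        } else {
            reg1.reset();
        }
    }
    return 0;  // first result at clock 3, i.e., a latency of two clocks
}
```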
  • a designer may choose to implement the pipeline 30 in a programmable logic IC (PLIC), such as a field-programmable gate array (FPGA), because a PLIC allows more design and modification flexibility than does an application specific IC (ASIC).
  • the designer merely sets interconnection-configuration registers disposed within the PLIC to predetermined binary states. The combination of all these binary states is often called “firmware.”
  • the designer loads this firmware into a nonvolatile memory (not shown in FIG. 2) that is coupled to the PLIC. When one "turns on” the PLIC, it downloads the firmware from the memory into the interconnection-configuration registers.
  • the designer merely modifies the firmware and allows the PLIC to download the modified firmware into the interconnection-configuration registers.
  • This ability to modify the PLIC by merely modifying the firmware is particularly useful during the prototyping stage and for upgrading the pipeline 30 "in the field".
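  • A minimal sketch of that power-up sequence, assuming the interconnection-configuration registers can be modeled as a simple array (no vendor's actual configuration layout is implied):

```cpp
#include <array>
#include <cstdint>
#include <cstdio>
#include <vector>

// The PLIC copies firmware (the saved binary states) from a nonvolatile
// memory into its interconnection-configuration registers at power-up.
// The register count and width are illustrative assumptions.
class Plic {
public:
    static constexpr std::size_t kNumRegisters = 8;

    void powerOn(const std::vector<std::uint32_t>& nonvolatileImage) {
        for (std::size_t i = 0; i < kNumRegisters && i < nonvolatileImage.size(); ++i)
            configRegisters_[i] = nonvolatileImage[i];  // download the firmware
    }
    std::uint32_t reg(std::size_t i) const { return configRegisters_.at(i); }

private:
    std::array<std::uint32_t, kNumRegisters> configRegisters_{};
};

int main() {
    // Upgrading the design "in the field" amounts to replacing this image.
    const std::vector<std::uint32_t> firmware = {0x0000000F, 0x00000001, 0x0000FF00};
    Plic plic;
    plic.powerOn(firmware);
    std::printf("configuration register 0 = 0x%08X\n",
                static_cast<unsigned>(plic.reg(0)));
    return 0;
}
```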
  • the hardwired pipeline 30 typically cannot execute all algorithms, particularly those that entail significant decision making.
  • a processor can typically execute a decision-making instruction (e.g., conditional instructions such as "if A, then go to B, else go to C") approximately as fast as it can execute an operational instruction (e.g., "A + B") of comparable length.
  • the pipeline 30 may be able to make a relatively simple decision (e.g., "A > B?"), it typically cannot execute a relatively complex decision (e.g., "if A, then go to B, else go to C”).
  • the size and complexity of the required circuitry often makes such a design impractical, particularly where an algorithm includes multiple different complex decisions.
  • processors are typically used in applications that require significant decision making, and hardwired pipelines are typically limited to "number crunching" applications that entail little or no decision making.
  • processors typically include industry-standard communication interfaces that facilitate the interconnection of the components to form a processor-based computing machine.
  • a standard communication interface typically includes two layers: a physical layer and a service layer.
  • the physical layer includes the circuitry and the corresponding circuit interconnections that form the interface and the operating parameters of this circuitry.
  • the physical layer includes the pins that connect the component to a bus, the buffers that latch data received from the pins, and the drivers that drive data onto the pins.
  • the operating parameters include the acceptable voltage range of the data signals that the pins receive, the signal timing for writing and reading data, and the supported modes of operation (e.g., burst mode, page mode).
  • Conventional physical layers include transistor-transistor logic (TTL) and RAMBUS.
  • the service layer includes the protocol by which a computing component transfers data. The protocol defines the format of the data and the manner in which the component sends and receives the formatted data.
  • Conventional communication protocols include the file-transfer protocol (FTP) and the transmission control protocol/Internet protocol (TCP/IP).
  • a computing machine includes a first buffer and a processor coupled to the buffer.
  • the processor is operable to execute an application, a first data-transfer object, and a second data-transfer object, publish data under the control of the application, load the published data into the buffer under the control of the first data-transfer object, and retrieve the published data from the buffer under the control of the second data-transfer object.
  • the processor is operable to retrieve data and load the retrieved data into the buffer under the control of the first data-transfer object, unload the data from the buffer under the control of the second data-transfer object, and process the unloaded data under the control of the application.
  • Where the computing machine is a peer-vector machine that includes a hardwired pipeline accelerator coupled to the processor, the buffer and data-transfer objects facilitate the transfer of data — whether unidirectional or bidirectional — between the application and the accelerator.
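  • Read as software, the claimed arrangement might be sketched as follows; the class and method names are hypothetical, chosen only to mirror the publish, load, and retrieve roles described above.

```cpp
#include <cstdio>
#include <deque>
#include <string>

// A buffer that a first data-transfer object loads and a second one drains.
class Buffer {
public:
    void load(const std::string& d) { fifo_.push_back(d); }
    bool empty() const { return fifo_.empty(); }
    std::string retrieve() {
        std::string d = fifo_.front();
        fifo_.pop_front();
        return d;
    }
private:
    std::deque<std::string> fifo_;  // FIFO, so it can hold several groupings
};

class FirstTransferObject {             // loads published data into the buffer
public:
    explicit FirstTransferObject(Buffer& b) : buf_(b) {}
    void onPublish(const std::string& data) { buf_.load(data); }
private:
    Buffer& buf_;
};

class SecondTransferObject {            // retrieves the published data
public:
    explicit SecondTransferObject(Buffer& b) : buf_(b) {}
    void drain() {
        while (!buf_.empty())
            std::printf("to accelerator: %s\n", buf_.retrieve().c_str());
    }
private:
    Buffer& buf_;
};

int main() {
    Buffer buffer;
    FirstTransferObject first(buffer);
    SecondTransferObject second(buffer);
    first.onPublish("published datum");  // the application publishes
    second.drain();                      // the second object retrieves
    return 0;
}
```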
  • FIG. 1 is a block diagram of a computing machine having a conventional multi-processor architecture.
  • FIG. 2 is a block diagram of a conventional hardwired pipeline.
  • FIG. 3 is a schematic block diagram of a computing machine having a peer-vector architecture according to an embodiment of the invention.
  • FIG. 4 is a functional block diagram of the host processor of FIG. 3 according to an embodiment of the invention.
  • FIG. 5 is a functional block diagram of the data-transfer paths between the data-processing application and the pipeline bus of FIG. 4 according to an embodiment of the invention.
  • FIG. 6 is a functional block diagram of the data-transfer paths between the accelerator exception manager and the pipeline bus of FIG. 4 according to an embodiment of the invention.
  • FIG. 7 is a functional block diagram of the data-transfer paths between the accelerator configuration manager and the pipeline bus of FIG. 4 according to an embodiment of the invention.
  • FIG. 3 is a schematic block diagram of a computing machine 40, which has a peer-vector architecture according to an embodiment of the invention.
  • the peer-vector machine 40 includes a pipeline accelerator 44, which performs at least a portion of the data processing, and which thus effectively replaces the bank of coprocessors 14 in the computing machine 10 of FIG. 1. Therefore, the host processor 42 and the accelerator 44 are "peers" that can transfer data vectors back and forth. Because the accelerator 44 does not execute program instructions, it typically performs mathematically intensive operations on data significantly faster than a bank of coprocessors can for a given clock frequency.
  • the machine 40 has the same abilities as, but can often process data faster than, a conventional computing machine such as the machine 10.
  • providing the accelerator 44 with the same communication interface as the host processor 42 facilitates the design and modification of the machine 40, particularly where the communications interface is an industry standard.
  • the accelerator 44 includes multiple components (e.g., PLICs), providing these components with this same communication interface facilitates the design and modification of the accelerator, particularly where the communication interface is an industry standard.
  • the machine 40 may also provide other advantages as described below and in the previously cited patent applications.
  • the peer-vector computing machine 40 includes a processor memory 46, an interface memory 48, a bus 50, a firmware memory 52, optional raw-data input ports 54 and 56, processed-data output ports 58 and 60, and an optional router 61.
  • the host processor 42 includes a processing unit 62 and a message handler 64
  • the processor memory 46 includes a processing-unit memory 66 and a handler memory 68, which respectively serve as both program and working memories for the processor unit and the message handler.
  • the processor memory 46 also includes an accelerator-configuration registry 70 and a message-configuration registry 72, which store respective configuration data that allow the host processor 42 to configure the functioning of the accelerator 44 and the structure of the messages that the message handler 64 sends and receives.
  • the pipeline accelerator 44 is disposed on at least one PLIC (not shown) and includes hardwired pipelines 74 1 - 74 n, which process respective data without executing program instructions.
  • the firmware memory 52 stores the configuration firmware for the accelerator 44. If the accelerator 44 is disposed on multiple PLICs, these PLICs and their respective firmware memories may be disposed on multiple circuit boards, i.e., daughter cards (not shown). The accelerator 44 and daughter cards are discussed further in previously cited U.S. Patent App. Serial Nos.
  • the accelerator 44 may be disposed on at least one ASIC, and thus may have internal interconnections that are unconfigurable. In this alternative, the machine 40 may omit the firmware memory 52. Furthermore, although the accelerator 44 is shown including multiple pipelines 74, it may include only a single pipeline. In addition, although not shown, the accelerator 44 may include one or more processors such as a digital-signal processor (DSP).
  • FIG. 4 is a functional block diagram of the host processor 42 and the pipeline bus 50 of FIG. 3 according to an embodiment of the invention.
  • the processing unit 62 executes one or more software applications
  • the message handler 64 executes one or more software objects that transfer data between the software application(s) and the pipeline accelerator 44 (FIG. 3).
  • splitting the data-processing, data-transferring, and other functions among different applications and objects allows for easier design and modification of the host-processor software.
  • a software application is described as performing a particular operation, it is understood that in actual operation, the processing unit 62 or message handler 64 executes the software application and performs this operation under the control of the application.
  • a software object is described as performing a particular operation, it is understood that in actual operation, the processing unit 62 or message handler 64 executes the software object and performs this operation under the control of the object.
  • the processing unit 62 executes a data-processing application 80, an accelerator exception manager application (hereinafter the exception manager) 82, and an accelerator configuration manager application (hereinafter the configuration manager) 84, which are collectively referred to as the processing-unit applications.
  • the data-processing application processes data in cooperation with the pipeline accelerator 44 (FIG. 3).
  • the data-processing application 80 may receive raw sonar data via the port 54 (FIG. 3), parse the data, and send the parsed data to the accelerator 44, and the accelerator may perform an FFT on the parsed data and return the processed data to the data-processing application for further processing.
  • the exception manager 82 handles exception messages from the accelerator 44, and the configuration manager 84 loads the accelerator's configuration firmware into the memory 52 during initialization of the peer-vector machine 40 (FIG. 3).
  • the configuration manager 84 may also reconfigure the accelerator 44 after initialization in response to, e.g., a malfunction of the accelerator.
  • the processing-unit applications may communicate with each other directly as indicated by the dashed lines 85, 87, and 89, or may communicate with each other via the data-transfer objects 86.
  • the message handler 64 executes the data-transfer objects 86, a communication object 88, and input and output read objects 90 and 92, and may execute input and output queue objects 94 and 96.
  • the data-transfer objects 86 transfer data between the communication object 88 and the processing-unit applications, and may use the interface memory 48 as a data buffer to allow the processing-unit applications and the accelerator 44 to operate independently.
  • the memory 48 allows the accelerator 44, which is often faster than the data-processing application 80, to operate without "waiting" for the data-processing application.
  • the communication object 88 transfers data between the data-transfer objects 86 and the pipeline bus 50.
  • the input and output read objects 90 and 92 control the data-transfer objects 86 as they transfer data between the communication object 88 and the processing-unit applications. And, when executed, the input and output queue objects 94 and 96 cause the input and output read objects 90 and 92 to synchronize this transfer of data according to a desired priority.
  • the message handler 64 instantiates and executes a conventional object factory 98, which instantiates the data-transfer objects 86 from configuration data stored in the message-configuration registry 72 (FIG. 3).
  • the message handler 64 also instantiates the communication object 88, the input and output reader objects 90 and 92, and the input and output queue objects 94 and 96 from the configuration data stored in the message-configuration registry 72. Consequently, one can design and modify these software objects, and thus their data-transfer parameters, by merely designing or modifying the configuration data stored in the registry 72. This is typically less time consuming than designing or modifying each software object individually.
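  • A minimal sketch of such a factory, assuming the registry's configuration data reduces to a list of channel descriptors; the descriptor fields are invented for illustration.

```cpp
#include <cstdio>
#include <memory>
#include <string>
#include <vector>

// Hypothetical descriptor standing in for one entry of the
// message-configuration registry 72.
struct ChannelConfig {
    std::string name;        // e.g., "channel-104-1"
    std::size_t bufferBytes; // size of the buffer in the interface memory 48
    bool toAccelerator;      // direction of the unidirectional channel
};

struct DataTransferObject {
    ChannelConfig cfg;
    void describe() const {
        std::printf("%s: %zu-byte buffer, %s\n", cfg.name.c_str(),
                    cfg.bufferBytes, cfg.toAccelerator ? "outbound" : "inbound");
    }
};

// The factory turns registry entries into live objects, so changing the
// data-transfer parameters means editing configuration data, not code.
class ObjectFactory {
public:
    std::vector<std::unique_ptr<DataTransferObject>>
    instantiate(const std::vector<ChannelConfig>& registry) {
        std::vector<std::unique_ptr<DataTransferObject>> objects;
        for (const auto& cfg : registry)
            objects.push_back(
                std::make_unique<DataTransferObject>(DataTransferObject{cfg}));
        return objects;
    }
};

int main() {
    const std::vector<ChannelConfig> registry = {
        {"channel-104-1", 4096, true},
        {"channel-104-2", 4096, false},
    };
    ObjectFactory factory;
    for (const auto& obj : factory.instantiate(registry)) obj->describe();
    return 0;
}
```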
  • FIG. 5 is a functional block diagram of the data-processing application 80 and the data-transfer paths of FIG. 4 according to an embodiment of the invention.
  • the data-processing application 80 includes a number of threads 100 1 - 100 n, each of which performs a respective data-processing operation.
  • For example, the thread 100 1 may perform an addition, the thread 100 2 may perform a subtraction, or both the threads 100 1 and 100 2 may perform an addition.
  • Each thread 100 generates, i.e., publishes, data destined for the pipeline accelerator 44 (FIG. 3), receives, i.e., subscribes to, data from the accelerator, or both publishes and subscribes to data. For example, each of the threads 100 1 - 100 4 both publishes and subscribes to data from the accelerator 44.
  • a thread 100 may also communicate directly with another thread 100. For example, as indicated by the dashed line 102, the threads 100 3 and 100 4 may directly communicate with each other.
  • a thread 100 may receive data from or send data to a component (not shown) other than the accelerator 44 (FIG. 3). But for brevity, discussion of data transfer between the threads 100 and such another component is omitted.
  • the interface memory 48 and the data-transfer objects 86 1a - 86 nb functionally form a number of unidirectional channels 104 1 - 104 n for transferring data between the respective threads 100 and the communication object 88.
  • the interface memory 48 includes a number of buffers 106 1 - 106 n, one buffer per channel 104.
  • the buffers 106 may each hold a single grouping (e.g., byte, word, block) of data, or at least some of the buffers may be FIFO buffers that can each store respective multiple groupings of data.
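  • For example, a buffer 106 that stores multiple groupings might be realized as a fixed-capacity ring buffer; this sketch is one plausible realization, not a structure the patent specifies.

```cpp
#include <array>
#include <cstdint>
#include <cstdio>
#include <optional>

// A fixed-capacity FIFO ring buffer holding multiple "groupings" of data.
template <typename T, std::size_t N>
class FifoBuffer {
public:
    bool push(const T& v) {            // false signals overflow
        if (count_ == N) return false;
        data_[(head_ + count_) % N] = v;
        ++count_;
        return true;
    }
    std::optional<T> pop() {           // empty when nothing is buffered
        if (count_ == 0) return std::nullopt;
        T v = data_[head_];
        head_ = (head_ + 1) % N;
        --count_;
        return v;
    }
private:
    std::array<T, N> data_{};
    std::size_t head_ = 0, count_ = 0;
};

int main() {
    FifoBuffer<std::uint32_t, 4> buf;  // a small buffer 106 holding four words
    for (std::uint32_t v : {10u, 20u, 30u}) buf.push(v);
    while (auto v = buf.pop())
        std::printf("grouping: %u\n", static_cast<unsigned>(*v));
    return 0;
}
```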
  • the channel 104 1 includes a buffer 106 1, a data-transfer object 86 1a for transferring published data from the thread 100 1 to the buffer 106 1, and a data-transfer object 86 1b for transferring the published data from the buffer 106 1 to the communication object 88.
  • Including a respective channel 104 for each allowable data transfer reduces the potential for data bottlenecks and also facilitates the design and modification of the host processor 42 (FIG. 4).
  • the object factory 98 instantiates the data-transfer objects 86 and defines the buffers 106. Specifically, the object factory 98 downloads the configuration data from the registry 72 and generates the software code for each data-transfer object 86 xb that the data-processing application 80 may need.
  • the identity of the data-transfer objects 86 xb that the application 80 may need is typically part of the configuration data — the application 80, however, need not use all of the data-transfer objects 86.
  • Similarly, the object factory 98 respectively instantiates the data-transfer objects 86 xa.
  • the object factory 98 instantiates data-transfer objects 86 xa and 86 xb that access the same buffer 106 as multiple instances of the same software code. This reduces the amount of code that the object factory 98 would otherwise generate by approximately one half.
  • the message handler 64 may determine which, if any, data-transfer objects 86 the application 80 does not need, and delete the instances of these unneeded data-transfer objects to save memory. Alternatively, the message handler 64 may make this determination before the object factory 98 generates the data-transfer objects 86, and cause the object factory to instantiate only the data-transfer objects that the application 80 needs. In addition, because the data-transfer objects 86 include the addresses of the interface memory 48 where the respective buffers 106 are located, the object factory 98 effectively defines the sizes and locations of the buffers when it instantiates the data-transfer objects.
  • the object factory 98 instantiates the data-transfer objects 86 1a and 86 1b in the following manner. First, the factory 98 downloads the configuration data from the registry 72 and generates the common software code for the data-transfer objects 86 1a and 86 1b. Next, the factory 98 instantiates the data-transfer objects 86 1a and 86 1b as respective instances of the common software code. That is, the message handler 64 effectively copies the common software code to two locations of the handler memory 68 or to other program memory (not shown), and executes one location as the object 86 1a and the other location as the object 86 1b. Still referring to FIGS. 3-5, after initialization of the host processor 42, the data-processing application 80 processes data and sends data to and receives data from the pipeline accelerator 44.
  • the thread 100 1 generates and publishes data to the data-transfer object 86 1a.
  • the thread 100 1 may generate the data by operating on raw data that it receives from the accelerator 44 (further discussed below) or from another source (not shown) such as a sonar array or a database via the port 54.
  • the data-transfer object 86 1a loads the published data into the buffer 106 1.
  • the data-transfer object 86 1b determines that the buffer 106 1 has been loaded with newly published data from the data-transfer object 86 1a.
  • the output reader object 92 may periodically instruct the data-transfer object 86 1b to check the buffer 106 1 for newly published data.
  • Alternatively, the output reader object 92 notifies the data-transfer object 86 1b when the buffer 106 1 has received newly published data.
  • the output queue object 96 generates and stores a unique identifier (not shown) in response to the data-transfer object 86 1a storing the published data in the buffer 106 1.
  • In response to this identifier, the output reader object 92 notifies the data-transfer object 86 1b that the buffer 106 1 contains newly published data. Where multiple buffers 106 contain respective newly published data, the output queue object 96 may record the order in which this data was published, and the output reader object 92 may notify the respective data-transfer objects 86 xb in the same order. Thus, the output reader object 92 and the output queue object 96 synchronize the data transfer by causing the first data published to be the first data that the respective data-transfer object 86 xb sends to the accelerator 44, the second data published to be the second data that the respective data-transfer object 86 xb sends to the accelerator, etc.
  • the output reader and output queue objects 92 and 96 may implement a priority scheme other than, or in addition to, this first-in-first-out scheme. For example, suppose the thread 100 1 publishes first data, and subsequently the thread 100 2 publishes second data but also publishes to the output queue object 96 a priority flag associated with the second data. Because the second data has priority over the first data, the output reader object 92 notifies the data-transfer object 86 2b of the published second data in the buffer 106 2 before notifying the data-transfer object 86 1b of the published first data in the buffer 106 1.
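  • The sketch below models the output queue object 96 with both behaviors: identifiers are recorded in publish order, and a priority flag lets later data jump ahead. The single-deque representation and the names are simplifying assumptions.

```cpp
#include <cstdio>
#include <deque>
#include <string>

// Records, per published datum, an identifier of the buffer that received it.
// The reader object then notifies transfer objects priority-first, FIFO otherwise.
class OutputQueue {
public:
    void recordPublish(const std::string& bufferId, bool priority = false) {
        if (priority) queue_.push_front(bufferId);  // jump ahead of earlier data
        else          queue_.push_back(bufferId);   // first published, first served
    }
    bool pending() const { return !queue_.empty(); }
    std::string next() {
        std::string id = queue_.front();
        queue_.pop_front();
        return id;
    }
private:
    std::deque<std::string> queue_;
};

int main() {
    OutputQueue q;
    q.recordPublish("buffer 106 1");        // thread 100 1 publishes first
    q.recordPublish("buffer 106 2", true);  // thread 100 2 publishes with priority
    while (q.pending())                     // notifies for 106 2 before 106 1
        std::printf("notify data-transfer object for %s\n", q.next().c_str());
    return 0;
}
```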
  • the data-transfer object 86 1b retrieves the published data from the buffer 106 1 and formats the data in a predetermined manner. For example, the object 86 1b generates a message that includes the published data (i.e., the payload) and a header that, e.g., identifies the destination of the data within the accelerator 44. This message may have an industry-standard format such as the Rapid IO (input/output) format.
  • After the data-transfer object 86 1b formats the published data, it sends the formatted data to the communication object 88.
  • the communication object 88 sends the formatted data to the pipeline accelerator 44 via the bus 50.
  • the communication object 88 is designed to implement the communication protocol (e.g., Rapid IO, TCP/IP) used to transfer data between the host processor 42 and the accelerator 44.
  • the communication object 88 implements the required hand shaking and other transfer parameters (e.g., arbitrating the sending and receiving of messages on the bus 50) that the protocol requires.
  • Alternatively, the data-transfer object 86 xb can implement the communication protocol, and the communication object 88 can be omitted.
  • the pipeline accelerator 44 then receives the formatted data, recovers the data from the message (e.g., separates the data from the header if there is a header), directs the data to the proper destination within the accelerator, and processes the data.
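  • One plausible framing of such a message is sketched below. The newline-delimited layout is invented; the patent requires only that a header identify the data's destination (an industry-standard format such as Rapid IO could be used instead).

```cpp
#include <cstdio>
#include <string>

// An illustrative message: a header naming the destination inside the
// accelerator, then the payload (the published data).
struct Message {
    std::string destination;  // e.g., a pipeline inside the accelerator 44
    std::string payload;
};

std::string format(const Message& m) {    // sending side, e.g., object 86 1b
    return m.destination + '\n' + m.payload;
}

Message parse(const std::string& wire) {  // receiving side: strip the header
    const std::size_t split = wire.find('\n');  // assumes a well-formed message
    return {wire.substr(0, split), wire.substr(split + 1)};
}

int main() {
    const std::string wire = format({"pipeline-74-1", "parsed sonar block"});
    const Message m = parse(wire);        // recover the data from the message
    std::printf("deliver '%s' to %s\n", m.payload.c_str(), m.destination.c_str());
    return 0;
}
```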
  • the pipeline accelerator 44 (FIG. 3) sending data to the host processor 42 (FIG. 3) is discussed in conjunction with the channel 104 2 .
  • the pipeline accelerator 44 generates and formats data. For example, the accelerator 44 generates a message that includes the data payload and a header that, e.g., identifies the destination threads 100 1 and 100 2, which are the threads that are to receive and process the data. As discussed above, this message may have an industry-standard format such as the Rapid IO (input/output) format.
  • the accelerator 44 drives the formatted data onto the bus 50 in a conventional manner.
  • the communication object 88 receives the formatted data from the bus 50 and provides the formatted data to the data-transfer object 86 2b .
  • Where the formatted data is in the form of a message, the communication object 88 analyzes the message header (which, as discussed above, identifies the destination threads 100 1 and 100 2) and provides the message to the data-transfer object 86 2b in response to the header.
  • Alternatively, the communication object 88 provides the message to all of the data-transfer objects 86 xb, each of which analyzes the message header and processes the message only if its function is to provide data to the destination threads 100 1 and 100 2. Consequently, in this example, only the data-transfer object 86 2b processes the message.
  • the data-transfer object 86 2b loads the data received from the communication object 88 into the buffer 106 2 .
  • the data-transfer object 86 2b recovers the data from the message (e.g., by stripping the header) and loads the recovered data into the buffer 106 2 .
  • the data-transfer object 86 2a determines that the buffer 106 2 has received new data from the data-transfer object 86 2b .
  • the input reader object 90 may periodically instruct the data-transfer object 86 2a to check the buffer 106 2 for newly received data. Alternatively, the input reader object 90 notifies the data-transfer object 86 2a when the buffer 106 2 has received newly published data.
  • the input queue object 94 generates and stores a unique identifier (not shown) in response to the data-transfer object 86 2b storing the published data in the buffer 106 2 .
  • the input reader object 90 notifies the data-transfer object 86 2a that the buffer 106 2 contains newly published data.
  • the input queue object 94 may record the order in which this data was published, and the input reader object 90 may notify the respective data-transfer objects 86 xa in the same order.
  • the input reader and input queue objects 90 and 94 may implement a priority scheme other than, or in addition to, this first-in-first-out scheme.
  • the data-transfer object 86 2a transfers the data from the buffer 106 2 to the subscriber threads 100 1 and 100 2, which perform respective operations on the data.
  • Still referring to FIG. 5, an example of one thread receiving and processing data from another thread is discussed in conjunction with the thread 100 4 receiving and processing data published by the thread 100 3.
  • the thread 100 3 publishes the data directly to the thread 100 4 via the optional connection (dashed line) 102.
  • Alternatively, the thread 100 3 publishes the data to the thread 100 4 via the channels 104 5 and 104 6. Specifically, the data-transfer object 86 5a loads the published data into the buffer 106 5.
  • Next, the data-transfer object 86 5b retrieves the data from the buffer 106 5 and transfers the data to the communication object 88, which publishes the data to the data-transfer object 86 6b. Then, the data-transfer object 86 6b loads the data into the buffer 106 6.
  • the data-transfer object 86 6a transfers the data from the buffer 106 6 to the thread 100 4.
  • Because the data is not being transferred via the bus 50, one may modify the data-transfer object 86 5b such that it loads the data directly into the buffer 106 6, thus bypassing the communication object 88 and the data-transfer object 86 6b. But modifying the data-transfer object 86 5b to be different from the other data-transfer objects 86 may increase the complexity of the message handler 64 and reduce its modularity.
  • a single thread may publish data to multiple locations within the pipeline accelerator 44 (FIG. 3) via respective multiple channels.
  • the accelerator 44 may receive data via a single channel 104 and provide it to multiple locations within the accelerator. Furthermore, multiple threads (e.g., threads 100 ⁇ and 100 2 ) may subscribe to data from the same channel (e.g., channel 104 2 ).
  • FIG. 6 is a functional block diagram of the exception manager 82, the data-transfer objects 86, and the interface memory 48 according to an embodiment of the invention.
  • the exception manager 82 receives and logs exceptions that may occur during the initialization or operation of the pipeline accelerator 44 (FIG. 3).
  • an exception is a designer-defined event where the accelerator 44 acts in an undesired manner.
  • a buffer (not shown) that overflows may be an exception, and thus cause the accelerator 44 to generate an exception message and send it to the exception manager 82.
  • Generation of an exception message is discussed in previously cited U.S. Patent App. Serial No. 10/683,929 entitled PIPELINE ACCELERATOR FOR IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD.
  • the exception manager 82 may also handle exceptions that occur during the initialization or operation of the pipeline accelerator 44 (FIG. 3). For example, if the accelerator 44 includes a buffer (not shown) that overflows, then the exception manager 82 may cause the accelerator to increase the size of the buffer to prevent future overflow. Or, if a section of the accelerator 44 malfunctions, the exception manager 82 may cause another section of the accelerator or the data-processing application 80 to perform the operation that the malfunctioning section was intended to perform. Such exception handling is further discussed below and in previously cited U.S. Patent App. Serial No. 10/683,929 entitled PIPELINE ACCELERATOR FOR IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD.
  • the exception manager 82 subscribes to data from one or more subscriber threads 100 (FIG. 5) and determines from this data whether an exception has occurred.
  • the exception manager 82 subscribes to the same data as the subscriber threads 100 (FIG. 5) subscribe to. Specifically, the manager 82 receives this data via the same respective channels 104 s (which include, e.g., channel 104 2 of FIG. 5) from which the subscriber threads 100 (which include, e.g., threads 100 1 and 100 2 of FIG. 5) receive the data. Consequently, the channels 104 s provide this data to the exception manager 82 in the same manner that they provide this data to the subscriber threads 100.
  • Alternatively, the exception manager 82 subscribes to data from dedicated channels 106 (not shown), which may receive data from sections of the accelerator 44 (FIG. 3) that do not provide data to the threads 100 via the subscriber channels 104 s.
  • the object factory 98 (FIG. 4) generates the data-transfer objects 86 for these channels during initialization of the host processor 42 as discussed above in conjunction with FIG. 4.
  • the exception manager 82 may subscribe to the dedicated channels 106 exclusively or in addition to the subscriber channels 104 s.
  • the exception manager 82 analyzes the data to determine if an exception has occurred.
  • the data may represent the result of an operation performed by the accelerator 44.
  • the exception manager 82 determines whether the data contains an error, and, if so, determines that an exception has occurred and the identity of the exception.
  • the exception manager 82 logs, e.g., the corresponding exception code and the time of occurrence, for later use such as during a debug of the accelerator 44.
  • the exception manager 82 may also determine and convey the identity of the exception to, e.g., the system designer, in a conventional manner.
  • the exception manager 82 may implement an appropriate procedure for handling the exception.
  • the exception manager 82 may handle the exception by sending an exception-handling instruction to the accelerator 44, the data-processing application 80, or the configuration manager 84.
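  • The sketch below models this detect-log-handle loop. The exception codes, the error tests, and the action names are invented examples of designer-defined events, not the patent's actual exception set.

```cpp
#include <cmath>
#include <cstdio>
#include <ctime>
#include <vector>

// Possible exception-handling instructions (illustrative only).
enum class Action { None, GrowBuffer, TakeSectionOffline };

struct LogEntry { int code; std::time_t when; };

class ExceptionManager {
public:
    Action inspect(double result) {
        if (std::isnan(result)) {               // treat NaN as a malfunction
            log_.push_back({2, std::time(nullptr)});
            return Action::TakeSectionOffline;
        }
        if (result > 1e9) {                     // e.g., symptom of a buffer overflow
            log_.push_back({1, std::time(nullptr)});
            return Action::GrowBuffer;
        }
        return Action::None;
    }
    const std::vector<LogEntry>& log() const { return log_; }  // for later debug
private:
    std::vector<LogEntry> log_;
};

int main() {
    ExceptionManager mgr;
    for (double r : {1.0, 2e9}) {
        if (mgr.inspect(r) == Action::GrowBuffer)
            std::printf("send exception-handling instruction: enlarge buffer\n");
    }
    std::printf("%zu exception(s) logged\n", mgr.log().size());
    return 0;
}
```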
  • the exception manager 82 may send the exception-handling instruction to the accelerator 44 either via the same respective channels 104 p (e.g., channel 104 1 of FIG. 5) through which the publisher threads 100 (e.g., thread 100 1 of FIG. 5) publish data, or through dedicated exception-handling channels 104 (not shown) that operate as described above in conjunction with FIG. 5.
  • If the exception manager 82 sends instructions via other channels 104, then the object factory 98 (FIG. 4) generates the data-transfer objects 86 for these channels during the initialization of the host processor 42.
  • the exception manager 82 may publish exception-handling instructions to the data-processing application 80 and to the configuration manager 84 either directly (as indicated by the dashed lines 85 and 89 in FIG. 4) or via the channels 104 dpa1 and 104 dpa2 (application 80) and channels 104 cm1 and 104 cm2 (configuration manager 84), which the object factory 98 also generates during the initialization of the host processor 42.
  • the exception-handling instructions may cause the accelerator 44, data-processing application 80, or configuration manager 84 to handle the corresponding exception in a variety of ways.
  • the exception-handling instruction may change the soft configuration or the functioning of the accelerator. For example, as discussed above, if the exception is a buffer overflow, the instruction may change the accelerator's soft configuration (i.e., by changing the contents of a soft configuration register) to increase the size of the buffer.
  • the instruction may change the accelerator's functioning by causing the accelerator to take the disabled section "off line.”
  • the exception manager 82 may, via additional instructions, cause another section of the accelerator 44, or the data-processing application 80, to "take over" the operation from the disabled accelerator section as discussed below. Altering the soft configuration of the accelerator 44 is further discussed in previously cited U.S. Patent App. Serial No. 10/683,929 entitled PIPELINE ACCELERATOR FOR IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD (Attorney Docket No. 1934-13-3).
  • the exception-handling instructions may cause the data-processing application to "take over" the operation of a disabled section of the accelerator 44 that has been taken off line.
  • Although the processing unit 62 (FIG. 3) may perform this operation more slowly and less efficiently than the accelerator 44, this may be preferable to not performing the operation at all. This ability to shift the performance of an operation from the accelerator 44 to the processing unit 62 increases the flexibility, reliability, maintainability, and fault-tolerance of the peer-vector machine 40 (FIG. 3).
  • the exception-handling instruction may cause the configuration manager to change the hard configuration of the accelerator 44 so that the accelerator can continue to perform the operation of a malfunctioning section that has been taken off line. For example, if the accelerator 44 has an unused section, then the configuration manager 84 may configure this unused section to perform the operation that was to be performed by the malfunctioning section. If the accelerator 44 has no unused section, then the configuration manager 84 may reconfigure a section of the accelerator that currently performs a first operation to instead perform a second operation, i.e., to take over for the malfunctioning section.
  • This technique may be useful where the first operation can be omitted but the second operation cannot, or where the data-processing application 80 is more suited to perform the first operation than it is the second operation.
  • This ability to shift the performance of an operation from one section of the accelerator 44 to another section of the accelerator increases the flexibility, reliability, maintainability, and fault-tolerance of the peer-vector machine 40 (FIG. 3).
  • the configuration manager 84 loads the firmware that defines the hard configuration of the accelerator 44 during initialization of the peer-vector machine 40 (FIG. 3), and, as discussed above in conjunction with FIG. 6, may load firmware that redefines the hard configuration of the accelerator in response to an exception according to an embodiment of the invention.
  • the configuration manager 84 often reduces the complexity of designing and modifying the accelerator 44 and increases the fault-tolerance, reliability, maintainability, and flexibility of the peer-vector machine 40 (FIG. 3).
  • the configuration manager 84 receives configuration data from the accelerator configuration registry 70, and loads configuration firmware identified by the configuration data.
  • the configuration data are effectively instructions to the configuration manager 84 for loading the firmware. For example, if a section of the initialized accelerator 44 performs an FFT, then one designs the configuration data so that the firmware loaded by the manager 84 implements an FFT in this section of the accelerator. Consequently, one can modify the hard configuration of the accelerator 44 by merely generating or modifying the configuration data before initialization of the peer-vector machine 40.
  • the configuration manager 84 typically reduces the complexity of designing and modifying the accelerator 44.
  • the configuration manager 84 determines whether the accelerator 44 can support the configuration defined by the configuration data. For example, if the configuration data instructs the configuration manager 84 to load firmware for a particular PLIC (not shown) of the accelerator 44, then the configuration manager 84 confirms that the PLIC is present before loading the data. If the PLIC is not present, then the configuration manager 84 halts the initialization of the accelerator 44 and notifies an operator that the accelerator does not support the configuration. After the configuration manager 84 confirms that the accelerator supports the defined configuration, the configuration manager loads the firmware into the accelerator 44, which sets its hard configuration with the firmware, e.g., by loading the firmware into the firmware memory 52.
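  • A minimal sketch of that confirm-then-load sequence, with invented names for the configuration entries and the firmware store.

```cpp
#include <cstdio>
#include <map>
#include <set>
#include <string>
#include <vector>

// Each (invented) entry names a PLIC and the firmware image it should carry.
struct FirmwareEntry { std::string plic; std::string image; };

bool initializeAccelerator(const std::vector<FirmwareEntry>& configData,
                           const std::set<std::string>& presentPlics,
                           std::map<std::string, std::string>& firmwareMemory) {
    for (const auto& e : configData) {
        if (!presentPlics.count(e.plic)) {     // confirm the PLIC is present
            std::printf("halt: accelerator does not support this configuration "
                        "(missing %s)\n", e.plic.c_str());
            return false;                      // notify the operator, stop init
        }
    }
    for (const auto& e : configData)           // set the hard configuration,
        firmwareMemory[e.plic] = e.image;      // e.g., via the firmware memory 52
    return true;
}

int main() {
    const std::set<std::string> plics = {"plic-0", "plic-1"};
    std::map<std::string, std::string> fwMem;
    const std::vector<FirmwareEntry> cfg = {{"plic-0", "fft.bit"},
                                            {"plic-1", "filter.bit"}};
    if (initializeAccelerator(cfg, plics, fwMem))
        std::printf("hard configuration set: %zu image(s) loaded\n", fwMem.size());
    return 0;
}
```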
  • the configuration manager 84 sends the firmware to the accelerator 44 via one or more channels 104 t that are similar in generation, structure, and operation to the channels 104 of FIG. 5.
  • the configuration manager 84 may also receive data from the accelerator 44 via one or more channels 104 u .
  • the accelerator 44 may send confirmation of the successful setting of its hard configuration to the configuration manager 84.
  • the configuration manager 84 may set the accelerator's hard configuration in response to an exception-handling instruction from the exception manager 82 as discussed above in conjunction with FIG. 6.
  • the configuration manager 84 downloads the appropriate configuration data from the registry 70, loads reconfiguration firmware identified by the configuration data, and sends the firmware to the accelerator 44 via the channels 104 t.
  • the configuration manager 84 may receive confirmation of successful reconfiguration from the accelerator 44 via the channels 104 u .
  • the configuration manager 84 may receive the exception-handling instruction directly from the exception manager 82 via the line 89 (FIG. 4) or indirectly via the channels 104 cm1 and 104 cm2.
  • the configuration manager 84 may also reconfigure the data-processing application 80 in response to an exception-handling instruction from the exception manager 82 as discussed above in conjunction with FIG. 6.
  • the configuration manager 84 instructs the data-processing application 80 to reconfigure itself to perform an operation that, due to malfunction or other reason, the accelerator 44 cannot perform.
  • the configuration manager 84 may so instruct the data-processing application 80 directly via the line 87 (FIG. 4) or indirectly via channels 104 dp1 and 104 dp2, and may receive information from the data-processing application, such as confirmation of successful reconfiguration, directly or via another channel 104 (not shown).
  • Alternatively, the exception manager 82 may send an exception-handling instruction to the data-processing application 80, which reconfigures itself, thus bypassing the configuration manager 84.
  • the configuration manager 84 may reconfigure the accelerator 44 or the data-processing application 80 for reasons other than the occurrence of an accelerator malfunction.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Advance Control (AREA)
  • Multi Processors (AREA)
  • Stored Programmes (AREA)
  • Logic Circuits (AREA)
  • Microcomputers (AREA)
  • Programmable Controllers (AREA)
  • Complex Calculations (AREA)
  • Bus Control (AREA)
EP03781554A 2002-10-31 2003-10-31 Computing machine having improved computing architecture and related system and method. Withdrawn EP1559005A2 (de)

Applications Claiming Priority (13)

Application Number Priority Date Filing Date Title
US684053 2000-10-06
US42250302P 2002-10-31 2002-10-31
US422503P 2002-10-31
US10/683,929 US20040136241A1 (en) 2002-10-31 2003-10-09 Pipeline accelerator for improved computing architecture and related system and method
US10/684,053 US7987341B2 (en) 2002-10-31 2003-10-09 Computing machine using software objects for transferring data that includes no destination information
US684102 2003-10-09
US683929 2003-10-09
US10/684,102 US7418574B2 (en) 2002-10-31 2003-10-09 Configuring a portion of a pipeline accelerator to generate pipeline date without a program instruction
US684057 2003-10-09
US10/684,057 US7373432B2 (en) 2002-10-31 2003-10-09 Programmable circuit and related computing machine and method
US10/683,932 US7386704B2 (en) 2002-10-31 2003-10-09 Pipeline accelerator including pipeline circuits in communication via a bus, and related system and method
US683932 2003-10-09
PCT/US2003/034559 WO2004042574A2 (en) 2002-10-31 2003-10-31 Computing machine having improved computing architecture and related system and method

Publications (1)

Publication Number Publication Date
EP1559005A2 true EP1559005A2 (de) 2005-08-03

Family

ID=34280226

Family Applications (5)

Application Number Title Priority Date Filing Date
EP03781551A Ceased EP1576471A2 (de) 2002-10-31 2003-10-31 Programmable circuit and related computing machine and method
EP03781553A Withdrawn EP1573515A2 (de) 2002-10-31 2003-10-31 Pipeline accelerator and related system and method
EP03781552A Expired - Fee Related EP1570344B1 (de) 2002-10-31 2003-10-31 Pipeline coprocessor
EP03781550A Ceased EP1573514A2 (de) 2002-10-31 2003-10-31 Pipeline accelerator and related computer and method
EP03781554A Withdrawn EP1559005A2 (de) 2002-10-31 2003-10-31 Computing machine having improved computing architecture and related system and method

Family Applications Before (4)

Application Number Title Priority Date Filing Date
EP03781551A Ceased EP1576471A2 (de) 2002-10-31 2003-10-31 Programmable circuit and related computing machine and method
EP03781553A Withdrawn EP1573515A2 (de) 2002-10-31 2003-10-31 Pipeline accelerator and related system and method
EP03781552A Expired - Fee Related EP1570344B1 (de) 2002-10-31 2003-10-31 Pipeline coprocessor
EP03781550A Ceased EP1573514A2 (de) 2002-10-31 2003-10-31 Pipeline accelerator and related computer and method

Country Status (8)

Country Link
EP (5) EP1576471A2 (de)
JP (9) JP2006518058A (de)
KR (5) KR101062214B1 (de)
AU (5) AU2003287321B2 (de)
CA (5) CA2503622C (de)
DE (1) DE60318105T2 (de)
ES (1) ES2300633T3 (de)
WO (4) WO2004042561A2 (de)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7676649B2 (en) 2004-10-01 2010-03-09 Lockheed Martin Corporation Computing machine with redundancy and related systems and methods
US7987341B2 (en) 2002-10-31 2011-07-26 Lockheed Martin Corporation Computing machine using software objects for transferring data that includes no destination information

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8095508B2 (en) 2000-04-07 2012-01-10 Washington University Intelligent data storage and processing using FPGA devices
US7711844B2 (en) 2002-08-15 2010-05-04 Washington University Of St. Louis TCP-splitter: reliable packet monitoring methods and apparatus for high speed networks
KR101062214B1 (ko) * 2002-10-31 2011-09-05 록히드 마틴 코포레이션 향상된 컴퓨팅 아키텍쳐를 갖는 컴퓨팅 머신 및 관련시스템 및 방법
EP1627284B1 (de) 2003-05-23 2018-10-24 IP Reservoir, LLC Intelligente datenspeicherung und verarbeitung unter verwendung von fpga-einrichtungen
US10572824B2 (en) 2003-05-23 2020-02-25 Ip Reservoir, Llc System and method for low latency multi-functional pipeline with correlation logic and selectively activated/deactivated pipelined data processing engines
AU2006221023A1 (en) 2005-03-03 2006-09-14 Washington University Method and apparatus for performing biosequence similarity searching
JP4527571B2 (ja) * 2005-03-14 2010-08-18 Fujitsu Ltd Reconfigurable arithmetic processing device
WO2007011203A1 (en) * 2005-07-22 2007-01-25 Stichting Astron Scalable control interface for large-scale signal processing systems.
US7702629B2 (en) 2005-12-02 2010-04-20 Exegy Incorporated Method and device for high performance regular expression pattern matching
JP2007164472A (ja) * 2005-12-14 2007-06-28 Sonac Kk Arithmetic device having a synchronization mechanism
US7954114B2 (en) * 2006-01-26 2011-05-31 Exegy Incorporated Firmware socket module for FPGA-based pipeline processing
US7921046B2 (en) 2006-06-19 2011-04-05 Exegy Incorporated High speed processing of financial information using FPGA devices
US7840482B2 (en) 2006-06-19 2010-11-23 Exegy Incorporated Method and system for high speed options pricing
US7660793B2 (en) 2006-11-13 2010-02-09 Exegy Incorporated Method and system for high performance integration, processing and searching of structured and unstructured data using coprocessors
US8326819B2 (en) 2006-11-13 2012-12-04 Exegy Incorporated Method and system for high performance data metatagging and data indexing using coprocessors
US8374986B2 (en) 2008-05-15 2013-02-12 Exegy Incorporated Method and system for accelerated stream processing
US20110138158A1 (en) * 2008-07-30 2011-06-09 Masatomo Mitsuhashi Integrated circuit
EP2370946A4 (de) 2008-12-15 2012-05-30 Exegy Inc Method and apparatus for high-speed processing of financial market depth data
US8478965B2 (en) 2009-10-30 2013-07-02 International Business Machines Corporation Cascaded accelerator functions
CA2820898C (en) 2010-12-09 2020-03-10 Exegy Incorporated Method and apparatus for managing orders in financial markets
US11436672B2 (en) 2012-03-27 2022-09-06 Exegy Incorporated Intelligent switch for processing financial market data
US9990393B2 (en) 2012-03-27 2018-06-05 Ip Reservoir, Llc Intelligent feed switch
US10650452B2 (en) 2012-03-27 2020-05-12 Ip Reservoir, Llc Offload processing of data packets
US10121196B2 (en) 2012-03-27 2018-11-06 Ip Reservoir, Llc Offload processing of data packets containing financial market data
FR2996657B1 (fr) * 2012-10-09 2016-01-22 Sagem Defense Securite Organe electrique generique configurable
US9633093B2 (en) 2012-10-23 2017-04-25 Ip Reservoir, Llc Method and apparatus for accelerated format translation of data in a delimited data format
WO2014066416A2 (en) 2012-10-23 2014-05-01 Ip Reservoir, Llc Method and apparatus for accelerated format translation of data in a delimited data format
US10102260B2 (en) 2012-10-23 2018-10-16 Ip Reservoir, Llc Method and apparatus for accelerated data translation using record layout detection
US9792062B2 (en) 2013-05-10 2017-10-17 Empire Technology Development Llc Acceleration of memory access
GB2541577A (en) 2014-04-23 2017-02-22 Ip Reservoir Llc Method and apparatus for accelerated data translation
US9977422B2 (en) * 2014-07-28 2018-05-22 Computational Systems, Inc. Intelligent configuration of a user interface of a machinery health monitoring system
US10942943B2 (en) 2015-10-29 2021-03-09 Ip Reservoir, Llc Dynamic field data translation to support high performance stream data processing
JP2017135698A (ja) * 2015-12-29 2017-08-03 Semiconductor Energy Laboratory Co., Ltd. Semiconductor device, computer, and electronic device
JPWO2017149591A1 (ja) * 2016-02-29 2018-12-20 Olympus Corporation Image processing device
WO2018119035A1 (en) 2016-12-22 2018-06-28 Ip Reservoir, Llc Pipelines for hardware-accelerated machine learning
JP6781089B2 (ja) * 2017-03-28 2020-11-04 Hitachi Automotive Systems, Ltd. Electronic control device, electronic control system, and control method of an electronic control device
GB2570729B (en) * 2018-02-06 2022-04-06 Xmos Ltd Processing system
IT202100020033A1 (it) * 2021-07-27 2023-01-27 Carmelo Ferrante Interfacing system between two electronically controlled devices and electronic control unit comprising such an interfacing system

Family Cites Families (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4703475A (en) * 1985-12-04 1987-10-27 American Telephone And Telegraph Company At&T Bell Laboratories Data communication method and apparatus using multiple physical data links
US4811214A (en) * 1986-11-14 1989-03-07 Princeton University Multinode reconfigurable pipeline computer
US4914653A (en) * 1986-12-22 1990-04-03 American Telephone And Telegraph Company Inter-processor communication protocol
US4956771A (en) * 1988-05-24 1990-09-11 Prime Computer, Inc. Method for inter-processor data transfer
JP2522048B2 (ja) * 1989-05-15 1996-08-07 Mitsubishi Electric Corp. Microprocessor and data processing device using the same
JP2858602B2 (ja) * 1991-09-20 1999-02-17 Mitsubishi Heavy Industries, Ltd. Pipeline arithmetic circuit
US5283883A (en) * 1991-10-17 1994-02-01 Sun Microsystems, Inc. Method and direct memory access controller for asynchronously reading/writing data from/to a memory with improved throughput
US5268962A (en) * 1992-07-21 1993-12-07 Digital Equipment Corporation Computer network with modified host-to-host encryption keys
US5440687A (en) * 1993-01-29 1995-08-08 International Business Machines Corporation Communication protocol for handling arbitrarily varying data strides in a distributed processing environment
JPH06282432A (ja) * 1993-03-26 1994-10-07 Olympus Optical Co Ltd Arithmetic processing device
US5583964A (en) * 1994-05-02 1996-12-10 Motorola, Inc. Computer utilizing neural network and method of using same
US5568614A (en) * 1994-07-29 1996-10-22 International Business Machines Corporation Data streaming between peer subsystems of a computer system
US5692183A (en) * 1995-03-31 1997-11-25 Sun Microsystems, Inc. Methods and apparatus for providing transparent persistence in a distributed object operating environment
JP2987308B2 (ja) * 1995-04-28 1999-12-06 Matsushita Electric Industrial Co., Ltd. Information processing device
US5748912A (en) * 1995-06-13 1998-05-05 Advanced Micro Devices, Inc. User-removable central processing unit card for an electrical device
US5752071A (en) * 1995-07-17 1998-05-12 Intel Corporation Function coprocessor
JP3156562B2 (ja) * 1995-10-19 2001-04-16 Denso Corp. Vehicle communication device and traveling vehicle monitoring system
US5784636A (en) * 1996-05-28 1998-07-21 National Semiconductor Corporation Reconfigurable computer architecture for use in signal processing applications
JPH1084339A (ja) * 1996-09-06 1998-03-31 Nippon Telegr & Teleph Corp <Ntt> Communication method using stream cipher, and communication system
US5892962A (en) * 1996-11-12 1999-04-06 Lucent Technologies Inc. FPGA-based processor
JPH10304184A (ja) * 1997-05-02 1998-11-13 Fuji Xerox Co Ltd Image processing device and image processing method
DE19724072C2 (de) * 1997-06-07 1999-04-01 Deutsche Telekom Ag Device for carrying out a block cipher method
JP3489608B2 (ja) * 1997-06-20 2004-01-26 Fuji Xerox Co., Ltd. Programmable logic circuit system and method for reconfiguring a programmable logic circuit device
US6216191B1 (en) * 1997-10-15 2001-04-10 Lucent Technologies Inc. Field programmable gate array having a dedicated processor interface
JPH11120156A (ja) * 1997-10-17 1999-04-30 Nec Corp Data communication scheme in a multiprocessor system
US6076152A (en) * 1997-12-17 2000-06-13 Src Computers, Inc. Multiprocessor computer architecture incorporating a plurality of memory algorithm processors in the memory subsystem
US6049222A (en) * 1997-12-30 2000-04-11 Xilinx, Inc. Configuring an FPGA using embedded memory
KR100572945B1 (ko) * 1998-02-04 2006-04-24 Texas Instruments Incorporated Digital signal processor having an efficiently connectable hardware coprocessor
JPH11271404A (ja) * 1998-03-23 1999-10-08 Nippon Telegr & Teleph Corp <Ntt> Self-test method and self-test apparatus for a circuit reconfigurable by program
US6282627B1 (en) * 1998-06-29 2001-08-28 Chameleon Systems, Inc. Integrated processor and programmable data path chip for reconfigurable computing
JP2000090237A (ja) * 1998-09-10 2000-03-31 Fuji Xerox Co Ltd Drawing processing device
SE9902373D0 (sv) * 1998-11-16 1999-06-22 Ericsson Telefon Ab L M A processing system and method
JP2000278116A (ja) * 1999-03-19 2000-10-06 Matsushita Electric Ind Co Ltd Configuration interface for FPGA
JP2000295613A (ja) * 1999-04-09 2000-10-20 Nippon Telegr & Teleph Corp <Ntt> Image encoding method using reconfigurable hardware, image encoding device, and program recording medium for image encoding
JP2000311156A (ja) * 1999-04-27 2000-11-07 Mitsubishi Electric Corp Reconfigurable parallel computer
US6308311B1 (en) * 1999-05-14 2001-10-23 Xilinx, Inc. Method for reconfiguring a field programmable gate array from a host
EP1061438A1 (de) * 1999-06-15 2000-12-20 Hewlett-Packard Company Computer architecture with processor and coprocessor
US20030014627A1 (en) * 1999-07-08 2003-01-16 Broadcom Corporation Distributed processing in a cryptography acceleration chip
JP3442320B2 (ja) * 1999-08-11 2003-09-02 Nippon Telegraph and Telephone Corp. Communication-scheme switching wireless terminal and communication-scheme switching method
US6526430B1 (en) * 1999-10-04 2003-02-25 Texas Instruments Incorporated Reconfigurable SIMD coprocessor architecture for sum of absolute differences and symmetric filtering (scalable MAC engine for image processing)
US6326806B1 (en) * 2000-03-29 2001-12-04 Xilinx, Inc. FPGA-based communications access point and system for reconfiguration
JP3832557B2 (ja) * 2000-05-02 2006-10-11 Fuji Xerox Co., Ltd. Method for reconfiguring a circuit on a programmable logic circuit, and information processing system
US6982976B2 (en) * 2000-08-11 2006-01-03 Texas Instruments Incorporated Datapipe routing bridge
US7196710B1 (en) * 2000-08-23 2007-03-27 Nintendo Co., Ltd. Method and apparatus for buffering graphics data in a graphics system
JP2002207078A (ja) * 2001-01-10 2002-07-26 Ysd:Kk Radar signal processing device
WO2002057921A1 (en) * 2001-01-19 2002-07-25 Hitachi, Ltd. Electronic circuit device
US6657632B2 (en) * 2001-01-24 2003-12-02 Hewlett-Packard Development Company, L.P. Unified memory distributed across multiple nodes in a computer graphics system
JP2002269063A (ja) * 2001-03-07 2002-09-20 Toshiba Corp Messaging program, messaging method in a distributed system, and messaging system
JP3873639B2 (ja) * 2001-03-12 2007-01-24 Hitachi, Ltd. Network connection device
JP2002281079A (ja) * 2001-03-21 2002-09-27 Victor Co Of Japan Ltd Image data transmission device
KR101062214B1 (ko) * 2002-10-31 2011-09-05 Lockheed Martin Corporation Computing machine having improved computing architecture and related system and method
US7373528B2 (en) * 2004-11-24 2008-05-13 Cisco Technology, Inc. Increased power for power over Ethernet applications

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2004042574A2 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7987341B2 (en) 2002-10-31 2011-07-26 Lockheed Martin Corporation Computing machine using software objects for transferring data that includes no destination information
US8250341B2 (en) 2002-10-31 2012-08-21 Lockheed Martin Corporation Pipeline accelerator having multiple pipeline units and related computing machine and method
US7676649B2 (en) 2004-10-01 2010-03-09 Lockheed Martin Corporation Computing machine with redundancy and related systems and methods
US8073974B2 (en) 2004-10-01 2011-12-06 Lockheed Martin Corporation Object oriented mission framework and system and method

Also Published As

Publication number Publication date
WO2004042574A2 (en) 2004-05-21
AU2003287321A1 (en) 2004-06-07
JP2011181078A (ja) 2011-09-15
AU2003287321B2 (en) 2010-11-18
CA2503620A1 (en) 2004-05-21
JP2006518058A (ja) 2006-08-03
KR20050084628A (ko) 2005-08-26
AU2003287320B2 (en) 2010-12-02
KR20050086423A (ko) 2005-08-30
KR20050084629A (ko) 2005-08-26
EP1573514A2 (de) 2005-09-14
KR101062214B1 (ko) 2011-09-05
WO2004042574A3 (en) 2005-03-10
EP1573515A2 (de) 2005-09-14
AU2003287320A1 (en) 2004-06-07
AU2003287318B2 (en) 2010-11-25
JP2006518057A (ja) 2006-08-03
WO2004042561A2 (en) 2004-05-21
JP2011175655A (ja) 2011-09-08
JP5568502B2 (ja) 2014-08-06
AU2003287319A1 (en) 2004-06-07
CA2503613C (en) 2011-10-18
CA2503611C (en) 2013-06-18
DE60318105T2 (de) 2008-12-04
WO2004042569A3 (en) 2006-04-27
KR101035646B1 (ko) 2011-05-19
AU2003287319B2 (en) 2010-06-24
AU2003287318A1 (en) 2004-06-07
WO2004042560A3 (en) 2005-03-24
JP2006518056A (ja) 2006-08-03
CA2503611A1 (en) 2004-05-21
KR20050088995A (ko) 2005-09-07
KR101012744B1 (ko) 2011-02-09
ES2300633T3 (es) 2008-06-16
EP1570344B1 (de) 2007-12-12
KR101012745B1 (ko) 2011-02-09
EP1576471A2 (de) 2005-09-21
DE60318105D1 (de) 2008-01-24
KR20050086424A (ko) 2005-08-30
JP2006518495A (ja) 2006-08-10
KR100996917B1 (ko) 2010-11-29
WO2004042560A2 (en) 2004-05-21
AU2003287317B2 (en) 2010-03-11
CA2503622C (en) 2015-12-29
JP2011154711A (ja) 2011-08-11
CA2503613A1 (en) 2004-05-21
CA2503622A1 (en) 2004-05-21
AU2003287317A1 (en) 2004-06-07
EP1570344A2 (de) 2005-09-07
JP2006515941A (ja) 2006-06-08
CA2503617A1 (en) 2004-05-21
WO2004042569A2 (en) 2004-05-21
WO2004042561A3 (en) 2006-03-02
JP2011170868A (ja) 2011-09-01

Similar Documents

Publication Publication Date Title
US7987341B2 (en) Computing machine using software objects for transferring data that includes no destination information
CA2503622C (en) Computing machine having improved computing architecture and related system and method
US7676649B2 (en) Computing machine with redundancy and related systems and methods
WO2004042562A2 (en) Pipeline accelerator and related system and method
WO2006039713A2 (en) Configurable computing machine and related systems and methods

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20050530

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK

DAX Request for extension of the european patent (deleted)
RBV Designated contracting states (corrected)

Designated state(s): DE ES FR GB

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20161021

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20170301