EP1675015B1 - Reconfigurable multiprocessor system, particularly for the digital processing of radar images - Google Patents

Reconfigurable multiprocessor system, particularly for the digital processing of radar images

Info

Publication number
EP1675015B1
EP1675015B1 (application EP04425935A)
Authority
EP
European Patent Office
Prior art keywords
node
bus
data
input
nodes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP04425935A
Other languages
English (en)
French (fr)
Other versions
EP1675015A1 (de)
Inventor
Maurizio Piacentini
Michele Lombardi
Gregorio Vitale
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Selex Galileo SpA
Original Assignee
Galileo Avionica SpA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Galileo Avionica SpA filed Critical Galileo Avionica SpA
Priority to EP04425935A priority Critical patent/EP1675015B1/de
Priority to DE602004013458T priority patent/DE602004013458T2/de
Priority to AT04425935T priority patent/ATE393932T1/de
Publication of EP1675015A1 publication Critical patent/EP1675015A1/de
Application granted granted Critical
Publication of EP1675015B1 publication Critical patent/EP1675015B1/de
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/76Architectures of general purpose stored program computers
    • G06F15/80Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
    • G06F15/8007Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors single instruction multiple data [SIMD] multiprocessors

Definitions

  • the present invention relates to a reconfigurable multiprocessor architecture that is particularly suitable for, but not limited to, the digital processing of radar signals.
  • the invention is in fact particularly suitable for all applications in which it is necessary to process a large amount of data and in which the algorithms to be run for processing can be divided into a cascade of specific functions.
  • One known architecture directly connects N processors or CPUs (Central Processing Units) to one another by means of a switching matrix (the so-called fully-connected architecture).
  • the direct-connection architecture is disadvantageous, since the size of the switching matrix increases very rapidly as the number of processors increases, and the overall size and the management of the entire processing system become prohibitive, especially in an application environment such as for example avionics.
  • One known alternative solution consists in using a shared memory, in which a plurality of processors can read and write processing data.
  • the constructive complexity of this architecture is reduced with respect to the fully-connected solution, but the presence of shared memory is a critical bottleneck for memory access even with a low number of processors.
  • Another known type of architecture is based on a telephone-type connection, in which all the nodes can talk to each other via shared connections. All the computing elements are connected to a hierarchically organized switching matrix. Multiple computing elements are connected to a same connection element, and each connection element is connected to another connection element of a higher level, to which multiple connection elements are connected, so as to form a hierarchical structure of connection elements at the vertex of which the data input/output (I/O) devices are interfaced.
  • Document D1, US-A-5014189, discloses a processor array wherein the processors are connected selectively in series or in parallel.
  • The processor array further comprises an input bus and an output bus with at least one processor interposed.
  • Controlling means are provided to control the switching devices in order to connect the first through the N-th processors together selectively.
  • Document D2, US 2001/054124, discloses a parallel processor system comprising a pair of parallel buses, pipeline buses, and a plurality of processor nodes, which carry out operations in response to instructions and transfer data. The system further includes a switch controller, which controls the connection mode of cluster switches and couples the processor nodes in series and/or in parallel.
  • A hierarchical connection is effective if the number of communications in progress is very small with respect to the number of processors.
  • Otherwise, the connection elements at the highest hierarchical level become saturated and the system may fail. Since communication requests cannot be predicted precisely, as they depend on the data and on their processing, connection attempts are random and failures can occur erratically, with limited possibilities of control and prevention.
  • The aim of the present invention is to overcome the drawbacks cited above by devising a multiprocessor architecture that is flexible and allows large amounts of data to be processed efficiently.
  • a particular object of the invention is to be able to vary the computing power of the system depending on the particular application of interest, using the chosen number of computing elements and configuring their mutual connection without saturating the connection elements or the memories of the system.
  • Another object of the invention is to spread the computing power over multiple processors even while the system is in operation, by acting exclusively via the operating software.
  • Another object is to provide a reconfigurable architecture that is independent of the particular type of computing element or of the particular hardware implementation of the functional blocks and is independent of the components used to provide the communications system.
  • Another object is to relieve the processing elements of data traffic management and to analyze said data externally in a non-intrusive way.
  • Another object is to provide an architecture that is modular, economically advantageous and simple to implement.
  • the architecture 10 comprises a series of processing or computing nodes 11, which are mutually connected by means of a data communications bus 14, which in turn is composed of an input bus 24 and an output bus 25, which are mutually independent.
  • the architecture of Figure 1 is preferably implemented on a motherboard in which each node has a physical address (slot address) determined by appropriate pins on the board and has a subaddress that identifies it within the board if said board clusters multiple nodes. It is optionally possible to associate a logical address with each node via software.
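As a purely illustrative aside (not part of the patent text), the addressing scheme just described — a physical slot address fixed by the board pins, a sub-address within a board that clusters several nodes, and an optional software-assigned logical address — could be modelled roughly as follows in Python; all names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NodeAddress:
    """Rough model of the addressing described above (hypothetical names)."""
    slot: int                      # physical slot address fixed by pins on the motherboard
    sub: int = 0                   # sub-address within a board that clusters several nodes
    logical: Optional[int] = None  # optional logical address assigned via software

    def physical_id(self) -> tuple:
        return (self.slot, self.sub)

# Example: two nodes clustered on the board in slot 3; one later receives logical id 10.
a = NodeAddress(slot=3, sub=0)
b = NodeAddress(slot=3, sub=1, logical=10)
```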
  • the communications network is composed of independent channels provided with a standard interface.
  • Each node comprises a processing unit 12 (designated by CPU in the figures) for processing and exchanging data with other nodes via the communications bus 14 and a node controller 13, which is connected to the processing unit 12 and receives and sends commands from and toward the processing unit 12 in order to manage mainly communications with the other nodes and access to the communications buses.
  • the processing units 12 comprise a digital signal processor or DSP 27 and the data communicated via the communications buses 14 are video data.
  • The communications buses 14 are also referenced hereinafter as "video buses".
  • a processing unit may be any element capable of performing a certain function, for example an interface for transferring data out of the network or inside it, a video display unit that shows images or symbols to an operator, A/D or D/A converters (already shown in Figure 1 ), and so forth.
  • Data transfers are controlled by dedicated programmable components (FPGAs), by means of which the node controllers and the switches are physically implemented.
  • a particular node 17 optionally comprises analog-digital (A/D) converters 19 or digital-analog (D/A) converters 19 for accessing analog sources or receivers, such as for example a radar antenna, which is not shown.
  • another node 11a of the network can be provided with standard input/output (I/O) interfaces.
  • the architecture 10 comprises a particular resource allocation node 18 for overall network management, which is used mainly to update the operating software, to distribute commands to the various nodes and to configure the network.
  • The node 18 uses a particular communications bus, which is termed hereinafter "network bus" 15 and which connects in a loop all the processing nodes of the system.
  • The architecture further comprises an auxiliary bus 16, which connects all the processing nodes and is independent of the other communications buses.
  • The auxiliary bus 16 is used for the non-intrusive acquisition of data flowing on the video bus and acquired from any node of the network.
  • the hardware implementation of the buses 14-16 is based on serial connections.
  • FIGS 2a and 2b schematically illustrate the solution idea that characterizes the present invention.
  • The input bus 24 and the output bus 25 enter each processing node 11, respectively via the inputs 24a and 25a, and exit from the node respectively via the outputs 24b and 25b.
  • the DSP 27 of the node 11 is connected in input to the input bus 24 and, more particularly, to the portion 24a of the bus 24, so that the data present on the portion 24a constitute the data to be processed by the DSP. Moreover, the DSP is connected in output to the video bus 25, so as to send over said video bus the data generated by the processing performed by the DSP 27, adding them to any data already present on the bus 25a.
  • the node 11 is characterized in that it comprises switching means, which are associated with the input and output buses 24 and 25 for switching the output bus on the input bus depending on the particular function to be provided.
  • each node 11 preferably comprises a crossing line 21 and two switches 22 and 23, which are respectively connected to the bus 24 and to the bus 25 and are provided so as to switch from a "parallel" state, in which the input 24a is connected to the output 24b of the video bus 24 and in which the input 25a is connected to the output 25b of the video bus 25, to a series state, in which the output 24b is disconnected from the input 24a and is connected to the crossing line 21 and in which the input 25a is disconnected from the output 25b and is connected to the same crossing line 21.
  • By acting simultaneously on the switches 22 and 23, it is possible to switch the output bus 25 onto the input bus 24, so that the data present on the output bus of the node 11 and the data processed by the DSP 27 are transmitted on the output 24b and are therefore presented in input to the successive node.
  • When the switches are set to the "series" state, the DSP core of the node that follows the node 11 being considered (also referenced as the "local node") receives in input the results generated by the DSP core 27 of the local node 11 and by the other nodes upstream of the node 11 that belong to the same function.
  • When the switches are set to the "parallel" state, the data packets received respectively on the inputs 24a and 25a are repeated on the two outputs 24b and 25b, while the data that correspond to the results of the local processing performed by the DSP 27 are also transmitted on the output 25b.
  • The DSP core of the node that follows the node 11 therefore receives in input the same data received by the local DSP core, and the results of the successive node can also be routed on the same channel of the local node 11. Accordingly, the two nodes are connected in parallel.
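To make the series/parallel switching easier to follow, the following Python sketch models each node behaviourally: buses are plain lists of samples, the DSP 27 is replaced by a trivial transformation, and the `series` flag stands for the joint state of the switches 22 and 23. This is an illustrative, assumption-laden model, not the patented hardware.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Node:
    """Behavioral sketch of one processing node (hypothetical, not the patented RTL)."""
    name: str
    series: bool = False   # False: switches 22 and 23 in the "parallel" state

    def process(self, data: List[int]) -> List[int]:
        # stand-in for the processing performed by the DSP 27
        return [d * 2 for d in data]

    def forward(self, in_bus: List[int], out_bus: List[int]) -> Tuple[List[int], List[int]]:
        """Return the (input bus, output bus) segments seen by the next node downstream."""
        results = self.process(in_bus)
        if self.series:
            # series state: the output bus, with the local results added, is switched
            # onto the input bus of the next node via the crossing line 21
            return out_bus + results, []
        # parallel state: both buses are repeated; the local results join the output bus
        return list(in_bus), out_bus + results
```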
  • Figure 3b illustrates the circuit implementation of the function of Figure 3a in the architecture according to the invention.
  • the nodes 31 and 32 of the function A are connected in parallel by arranging the switches of the node 31 in the parallel state.
  • the cascade connection between the function A and the function B and between the function B and the function C is instead provided by selecting the series state for the switches of the nodes 32 and 33 respectively.
  • the nodes of the function C comprise their respective switches in the parallel state.
  • the results of the operation performed by the function shown in Figure 3a are presented on the output channel of the last node of the function C.
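Using the Node sketch above, the configuration of Figures 3a and 3b could be reproduced as follows, assuming for illustration that function B consists of the single node 33 and that function C comprises nodes 34 and 35 (the figure is not reproduced here, so these compositions are assumptions).

```python
# Hypothetical chain reproducing the configuration of Figure 3b.
chain = [
    Node("31", series=False),  # function A, in parallel with node 32
    Node("32", series=True),   # last node of A: series link toward function B
    Node("33", series=True),   # function B: series link toward function C
    Node("34", series=False),  # function C
    Node("35", series=False),  # last node of C: results appear on its output bus
]

in_bus, out_bus = [10, 20, 30], []   # illustrative raw input samples
for node in chain:
    in_bus, out_bus = node.forward(in_bus, out_bus)
print(out_bus)  # results of function C, presented on the output channel of its last node
```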
  • Figure 4 illustrates an example of an embodiment of a node of the architecture according to the present invention.
  • The video buses 24a and 25a and an auxiliary bus 26a, which are provided in input to the node 11, are connected to respective series/parallel (S/P) converters 36 in order to convert the serial information contained in the channels 24a, 25a and 26a into 32-bit parallel information on respective channels 44a, 45a and 46a.
  • the processing unit (not shown in Figure 4 ) is connected to the parallel channel 44a via a local bus 50 and an input buffer memory 48.
  • the channel 44a is further connected to the node controller 13 and to a first multiplexer 42 with 64 inputs, which comprises a 32-bit output 44b which, by means of a parallel/series converter, converges in the portion 24b of the input video bus.
  • the parallel channel 45a is connected in input to a second multiplexer 43 with 64 inputs, which also has in input the parallel output 49a (of the 32-bit type) of a buffer memory 49, which is connected to the processing unit and can also be accessed by the node controller 13.
  • the output 45b of the multiplexer 43 is connected both to the portion 25b of the output bus 25 by means of a respective parallel/series (P/S) converter 37 and to respective inputs of the multiplexer 42 and of an auxiliary multiplexer 46.
  • the auxiliary multiplexer 46 further comprises in input the auxiliary channel 46a and is connected in output to a respective parallel/series converter for converging the parallel information on the serial auxiliary channel 26b.
  • the multiplexers 42, 43 and 46 comprise respective selectors 130a, 130b, 130c, which are connected to the node controller 13 in order to select which of the two 32-bit inputs of each multiplexer must be carried integrally on the respective output.
  • the multiplexer 42 implements the function of the switches 22 and 23, since depending on the inputs selected by means of the selector 130a it is possible to switch onto the channel 24b the information received from the channels 44a and 45b.
  • the multiplexer 43 is instead controlled so as to repeat on the channel 45b the data received from the bus 25a and to transmit the processed data acquired from the output buffer 49.
  • The multiplexer 46 is controlled by the node controller 13 so as to repeat on the auxiliary output 26b the information that is present on the output of the multiplexer 43; it accordingly allows the processing data in output from the individual node to be acquired locally, routing them on an independent auxiliary channel (which corresponds to the auxiliary bus 16 shown in Figure 1).
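A hedged snapshot of the datapath of Figure 4, again in Python: each multiplexer carries one of its two 32-bit inputs integrally onto its output, under the control of the selectors 130a, 130b and 130c. The function below is a hypothetical single-word model; the real channels carry serialized streams.

```python
def node_datapath(sel_a: int, sel_b: int, sel_c: int,
                  ch_44a: int, ch_45a: int, buf_49: int, ch_46a: int):
    """Hypothetical single-word snapshot of the Figure 4 datapath.

    sel_a/sel_b/sel_c model the selectors 130a/130b/130c driven by the node
    controller 13; each multiplexer carries one of its two inputs onto its output.
    """
    out_45b = ch_45a if sel_b == 0 else buf_49      # multiplexer 43 -> output bus 25b
    out_44b = ch_44a if sel_a == 0 else out_45b     # multiplexer 42 -> input bus 24b
    out_26b = ch_46a if sel_c == 0 else out_45b     # multiplexer 46 -> auxiliary bus 26b
    return out_44b, out_45b, out_26b

# "Parallel" state: input bus repeated (sel_a=0); local results joined on 25b (sel_b=1).
# "Series" state: the output of multiplexer 43 is switched onto 24b (sel_a=1).
print(node_datapath(sel_a=1, sel_b=1, sel_c=1, ch_44a=0xA, ch_45a=0xB, buf_49=0xC, ch_46a=0xD))
```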
  • the architecture comprises a network or configuration bus 15, which connects in a loop all the nodes of the system and allows each node to exchange data with any other node.
  • the channel associated with the bus 15 can also be used to update the operating software of the system, but the bus is configured so that any malfunction of the software does not cause failure of the channel.
  • each node 11 of the architecture comprises the elements shown schematically in Figure 5 .
  • the input configuration bus 55a which arrives from the preceding node, is connected to a series/parallel converter in order to convert the serial data packet that travels on the channel 55a into parallel data packets (preferably 32-bit packets) on the respective channel 58a.
  • the data packet is forward-propagated (on 58b) and serialized (on 55b) and sent in input to the next node.
  • the node controller (designated by the reference numeral 53) is programmed to decode the data packet that arrives on the channel 58a and to check whether the packet is assigned to the computing node being considered. If so, the controller 53 itself transfers the data to the local CPU by means of a buffer 56 and a bidirectional FIFO memory 57.
  • When the local node must transmit a packet, the node controller 53 configures the multiplexer 54 so as to forward-propagate the data packet stored by the local CPU in the bidirectional FIFO memory 57 (in doing so, it disables the buffer 56).
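The loop behaviour of the configuration bus (Figure 5) could be sketched as follows; the packet representation, the unconditional forwarding of packets addressed to the local node, and the method names are assumptions made for illustration only.

```python
from collections import deque

class ConfigRingNode:
    """Hypothetical behavioral model of one node on the loop configuration bus.

    Packets are dicts with a 'dest' field standing in for the header; the real
    channel carries serialized 32-bit words (55a -> 58a in input, 58b -> 55b in output).
    """
    def __init__(self, address: int):
        self.address = address
        self.rx_fifo = deque()   # stands in for the buffer 56 + bidirectional FIFO 57
        self.tx_fifo = deque()   # packets the local CPU wants to send on the ring

    def step(self, incoming):
        """Handle one incoming packet and return what is forwarded to the next node."""
        if incoming is not None:
            if incoming["dest"] in (self.address, "broadcast"):
                self.rx_fifo.append(incoming)       # addressed to this node: keep a copy
            # assumption: the packet is forward-propagated on the ring in any case
            return incoming
        if self.tx_fifo:
            # nothing to repeat: the controller selects the local FIFO on multiplexer 54
            return self.tx_fifo.popleft()
        return None
```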
  • control signals 51 and 52 are used to synchronize on the control bus the operations between the transmitter and the receiver(s), as shown in the following Table 1. These control signals are bidirectional and therefore each node controller can read them in addition to being able to determine their logic level.
  • Communications on the network configuration channel can be of the point-to-point type (i.e. between any two nodes of the system), of the multicast type (i.e. between one node and a subset of nodes) or of the broadcast type (i.e. between one node and all the other nodes).
  • communications on the channel are managed by using a logic of the token-ring type.
  • This logic, typically used in LANs, is implemented by propagating a control signal used as a node-to-node token: the node that holds the token can send a data packet to one or more nodes present in the system.
  • Packet transmission begins with the sending of a header that contains information regarding the recipient of the packet.
  • the header is analyzed in hardware by all the nodes, as shown with reference to Figure 5 .
  • Once it has received the header, the recipient node declares that it is ready to receive the packet, and the sender node accordingly sends the data.
  • In the case of a multicast or broadcast communication, the sender node propagates the header. At the expiry of a preset time interval, the node controller of the sender node checks whether the NTWK_ERR control is inactive. If so, the sender node propagates the rest of the packet; otherwise it waits for a second time interval.
  • After this second interval, the node controller enables the propagation on the configuration bus 15 of the rest of the packet regardless of the state of the control signal NTWK_ERR.
  • If no error is signaled, the controller of the sender node declares to the processing unit (CPU) associated with it that it has performed the transmission successfully; otherwise, it declares a communication error.
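The multicast/broadcast procedure just described can be summarized by the following hypothetical sketch, in which send_header, send_body and ntwk_err_active are stand-ins for the actual bus operations and for reading the NTWK_ERR control; the timing values are arbitrary.

```python
import time

def broadcast_packet(send_header, send_body, ntwk_err_active,
                     t1: float = 0.001, t2: float = 0.001) -> bool:
    """Hypothetical sketch of the multicast/broadcast sequence described above.

    send_header/send_body are callables driving the configuration bus 15;
    ntwk_err_active is a callable returning the state of the NTWK_ERR control.
    Returns True if the controller would report a successful transmission.
    """
    send_header()
    time.sleep(t1)                       # preset time interval after the header
    if not ntwk_err_active():
        send_body()
        return True                      # success reported to the local CPU
    time.sleep(t2)                       # wait a second interval...
    send_body()                          # ...then send the rest regardless of NTWK_ERR
    return not ntwk_err_active()         # success or communication error
```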
  • The node controllers 13 are inserted in a hardware control network that carries signals for enabling the reception and/or transmission of data packets from or toward preset nodes via the output bus 25 and the input bus 24.
  • the node controllers are set to monitor and change the state of these enable signals.
  • Each node controller comprises in input the following controls:
  • the node controller 13 also comprises the following output controls:
  • A logic of the token-ring type is implemented on the video bus locally, between the nodes involved in the same function. Since a function can be configured dynamically using even a different number of nodes, in order to implement a token-ring logic the Bus-Token (BTI/BTO) and Bus-Token-Return (BTRI/BTRO) signals are used, as described hereinafter.
  • In each node controller 13 there are appropriate switches that are adapted to route said controls and that can be activated during the network configuration step.
  • A node can be configured as a "head" node, as an "intermediate" node, or as a "terminal" node of the function.
  • The signals that are present on V2_BTRI of a head node are recirculated on V2_BTI of the same node.
  • In a terminal node, the signals on V2_BTI are instead recirculated on V2_BTRO of the same node, while the signals on V1_RRI are switched onto V1_TPO.
  • In an intermediate node, all the input controls are transferred to the respective output controls.
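A simplified reading of the head/intermediate/terminal configurations could be expressed as the following routing table; the dictionary-based representation of the control lines is an assumption, and only the controls explicitly mentioned above are modelled.

```python
def route_controls(role: str, inputs: dict) -> dict:
    """Hypothetical routing of the bus-token controls according to the node role.

    inputs/outputs are dicts of control-line levels (V2_BTI, V2_BTRI, V1_RRI, ...);
    this mirrors, in simplified form, the configurations described above.
    """
    out = {}
    if role == "head":
        out["V2_BTI"] = inputs.get("V2_BTRI", 0)   # returning token recirculated internally
    elif role == "terminal":
        out["V2_BTRO"] = inputs.get("V2_BTI", 0)   # token received on V2_BTI returns on V2_BTRO
        out["V1_TPO"] = inputs.get("V1_RRI", 0)    # receive-ready switched onto V1_TPO
    else:  # intermediate: every input control is repeated on the corresponding output
        out["V2_BTO"] = inputs.get("V2_BTI", 0)
        out["V2_BTRO"] = inputs.get("V2_BTRI", 0)
        out["V1_TPO"] = inputs.get("V1_TPI", 0)
        out["V1_RRO"] = inputs.get("V1_RRI", 0)
    return out
```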
  • a particular configuration of the architecture comprises five nodes 61a, 61b, 62a, 62b, 62c, which are grouped into two groups 61 and 62 so as to form two functions with two and three nodes respectively.
  • the nodes 61a, 61b, 62a, 62b and 62c comprise respective processing units 66a, 66b, 68a, 68b, 68c and respective node controllers 63a, 63b, 67a, 67b and 67c.
  • the input bus 64 and the output bus 65 which can be accessed by the processing units 66a, 66b, 68a, 68b, 68c, are configured by virtue of the switching means that are present in the nodes according to the invention.
  • the nodes 61a and 61b are in parallel to each other and in series to the nodes 62a, 62b and 62c, which are in parallel to each other.
  • the head node and the terminal node of the function 61 are represented respectively by the nodes 61a and 61b, while the head node, the intermediate node and the terminal node of the function 62 correspond respectively to the nodes 62a, 62b and 62c.
  • each data packet comprises a label that defines it, known as header, so that only the node or nodes interested in the data contained in the packet can acquire said data from the input bus.
  • Each individual function has its own local token on the video bus, which is created, and initially held by the head node, during the function configuration step.
  • When the token reaches a node, the corresponding control V2_BTI is activated. If the data packet to be sent is not yet ready when the sender node 61a receives the token, the token is propagated forward by means of the control V2_BTO.
  • V2_BTRI and V2_BTRO are two controls used as the return path of the local bus-token associated with the group of nodes configured to implement a function. For example, with reference to the function 62 shown in Figure 6, if a node is a head node within the function (node 67a), then, in order to be able to talk on the output video bus, it uses as a token the signal that is present on V2_BTRI and then forward-propagates the token on V2_BTO.
  • If a node is an intermediate node within the function (node 67b), then it propagates on V2_BTRO the signal that is present on V2_BTRI; if a node is a terminal node within the function (node 67c), then it propagates the token that it receives on V2_BTI on the corresponding V2_BTRO. In this manner, it is possible to configure locally a logic of the token-ring type.
  • When the node 61a receives the token again, it checks the value of the signal V1_TPI, which corresponds to the signal that is present on V1_RRO in output from the successive function 62 and is propagated to all the nodes of the function 61. If the V1_TPI control is in the "not active" state, then all the nodes of the function 62 are ready to receive a header, and accordingly the node 61a transmits on the output bus 65 the header of the data packet, entering a state in which it waits for the activation of the signal V1_TPI.
  • When the node 62b has recognized that the header of the data packet is assigned to it, it declares that it is ready to receive the rest of the packet by activating the control V1_RRO; accordingly, the control V1_TPI of the sender node 61a becomes "active".
  • The control V1_RRO that the receiving node propagates is the result of a logical AND between the state of V1_RRO of said node and the state of V1_RRO of the successive node of the same function.
  • The node 62b then declares that it has received the entire packet by deactivating the control V1_RRO, and the node 61a releases the token.
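The sender-side handshake of the head node 61a can be outlined as a sequence of wait conditions; the generator below is a hypothetical sketch of that sequence (the packet and bus representations are assumptions), not the hardware protocol itself.

```python
def head_node_send(bus: list, header, payload):
    """Hypothetical sender-side sequence of the head node 61a, written as a
    generator: each yield names the control-network condition the node
    controller waits for before proceeding (a sketch, not the patented logic)."""
    yield "local bus-token received (V2_BTRI / V2_BTI active)"
    yield "V1_TPI inactive: all nodes of the next function can accept a header"
    bus.append(header)                  # transmit the header on the output bus 65
    yield "V1_TPI active: the recipient (e.g. node 62b) has raised V1_RRO"
    bus.append(payload)                 # transmit the rest of the data packet
    yield "V1_TPI inactive again: whole packet received, release the token"

video_bus = []
for condition in head_node_send(video_bus, {"dest": "62b"}, [1, 2, 3]):
    print("waiting for:", condition)    # the environment would assert these in turn
print(video_bus)                        # -> [{'dest': '62b'}, [1, 2, 3]]
```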
  • On the data channel, a node can be affected simultaneously and independently by a data transmission toward the successive function and by a data reception from the preceding function.
  • A token-based logic is thus used both on the network configuration channel and on the data channel (for example for video data).
  • The policy described above in fact provides for forward-propagation of the token if the node that receives it does not have to talk on the bus or if said node has just finished talking.
  • Alternative embodiments can manage the token differently, for example by retaining it in the sender node until the node has to send a data packet.
  • This logic can be useful for example in the data channel, when it is necessary to forward-propagate the processed data packets in a preset order.
  • In a further alternative embodiment, the token is retained by the sender node until permission to release it is given via software.
  • This management policy can be useful for example if it is necessary to send a message or data packet that is larger than the space available in the buffers of the sender node and it is therefore necessary to split it into multiple parts in order to transmit it in following steps.
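The three token-management policies mentioned above could be named, purely for illustration, as follows; the enum values and the decision helper are assumptions, not terminology from the patent.

```python
from enum import Enum, auto

class TokenPolicy(Enum):
    """Hypothetical names for the token-management policies discussed above."""
    FORWARD_IMMEDIATELY = auto()   # default: pass the token on when there is nothing to send
    HOLD_UNTIL_READY = auto()      # keep the token until a data packet is ready to be sent
    HOLD_UNTIL_RELEASED = auto()   # keep the token until software permission is given,
                                   # e.g. for messages split over several transmissions

def release_after_cycle(policy: TokenPolicy, has_packet: bool, sw_release: bool) -> bool:
    """Whether the node forwards the token at the end of the current cycle (a sketch)."""
    if policy is TokenPolicy.FORWARD_IMMEDIATELY:
        return True                     # token moves on once the node has finished talking
    if policy is TokenPolicy.HOLD_UNTIL_READY:
        return has_packet               # release only after the pending packet was sent
    return sw_release                   # HOLD_UNTIL_RELEASED: wait for software permission
```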
  • In practice it has been found that the present invention achieves the intended aim and objects.
  • The described architecture makes it possible to define, and to redefine functionally, the distribution of computing power.
  • The architecture allows a considerable increase in performance with respect to known architectures as the number of computing nodes used increases.
  • the components of each node can be replaced with new or more recent components without affecting the structure of the system.
  • the described architecture can interface with any existing system by inserting a node that implements a bridge toward any standard bus, such as for example PCI (Peripheral Component Interconnect), VME (Versa Module Europa), or LAN (Local Area Network).
  • The network control can be modified by using, instead of the described network bus 15, a standard bus such as VME, PCI or VITA4, while maintaining the architecture of the processing and data communications portion.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)
  • Small-Scale Networks (AREA)
  • Image Processing (AREA)
  • Radar Systems Or Details Thereof (AREA)

Claims (10)

  1. System (10) for allocating data processing resources, comprising a plurality of successive nodes (11, 61a, 61b, 62a, 62b, 62c), each of which is connected in input to an input data communications bus (14, 24, 64) and in output to an output data communications bus (15, 25, 65), each node (11, 61a, 61b, 62a, 62b, 62c) comprising:
    - a data processing unit (12, 66a, 66b, 68a, 68b, 68c) for processing the data present on the input bus (14, 24, 64) and for sending processed data on the output bus (15, 25, 65);
    - switching means (21, 22, 23, 42, 43, 46) for switching the state of the connection between the node (11, 61a, 61b, 62a, 62b, 62c) and a successive node of the plurality of nodes, from a parallel state to a series state and vice versa, via the input bus and the output bus, wherein:
    in the parallel state, the node (11, 61a, 61b, 62a, 62b, 62c) and the successive node share the data present on the input bus (14, 24, 64);
    in the series state, the output bus (15, 25, 65) is connected to the input bus (14, 24, 64) upstream of the successive node, so that the data present on the output bus (15, 25, 65) and the data processed by the node (11, 61a, 61b, 62a, 62b, 62c) are present in input to the successive node,
    the state of the switching means (21, 22, 23, 42, 43, 46) of the plurality of nodes defining successive functional groups, each group comprising one of the nodes (11, 61a, 61b, 62a, 62b, 62c) or a plurality of the nodes connected in parallel to one another via the corresponding switching means (21, 22, 23, 42, 43, 46) in the parallel state, and each group being connected to the successive group via one of the nodes (11, 61a, 61b, 62a, 62b, 62c) whose corresponding switching means (21, 22, 23, 42, 43, 46) are in the series state;
    characterized in that each node further comprises a node controller (13, 53, 63a, 63b, 67a, 67b, 67c), which is connected to the processing unit (12, 66a, 66b, 68a, 68b, 68c), the switching means being controlled by the node controller (13, 53, 63a, 63b, 67a, 67b, 67c), and
    in that the system further comprises a network that connects the node controllers (13, 53, 63a, 63b, 67a, 67b, 67c) for controlling the input (14, 24, 64) and output (15, 25, 65) buses, the control network carrying signals for enabling reception and/or transmission of data packets from or toward preset nodes via the output (15, 25, 65) and input (14, 24, 64) buses, the node controllers (13, 53, 63a, 63b, 67a, 67b, 67c) being set to monitor and change the state of the enable signals.
  2. System (10) according to claim 1, characterized in that it comprises a configuration network for connecting the node controllers (13, 53, 63a, 63b, 67a, 67b, 67c) of the plurality of nodes (11, 61a, 61b, 62a, 62b, 62c) according to a loop configuration, the configuration network being connected to a resource allocation unit (18) in order to define the state of the switching means of the plurality of nodes (11, 61a, 61b, 62a, 62b, 62c) via the configuration network and the node controllers (13, 53, 63a, 63b, 67a, 67b, 67c).
  3. System (10) according to claim 2, characterized in that the node controllers (13, 53, 63a, 63b, 67a, 67b, 67c) are set to communicate on the configuration network according to a token-ring logic.
  4. System (10) according to any one of the preceding claims, characterized in that the processing unit (12, 66a, 66b, 68a, 68b, 68c) is set to monitor the data present on the input bus (14, 24, 64) and to acquire and process said data if they have a header that is specific to the node (11, 61a, 61b, 62a, 62b, 62c) that comprises the processing unit (12, 66a, 66b, 68a, 68b, 68c).
  5. System (10) according to claims 1 and 4, characterized in that the node controller (13, 53, 63a, 63b, 67a, 67b, 67c) is set to activate a signal for enabling the reception of data packets and to send it on the control network when the corresponding processing unit (12, 66a, 66b, 68a, 68b, 68c) contained in the node (11, 61a, 61b, 62a, 62b, 62c) detects the header that is specific to the node (11, 61a, 61b, 62a, 62b, 62c).
  6. System (10) according to any one of claims 4 to 5, characterized in that the node controller (13, 53, 63a, 63b, 67a, 67b, 67c) comprises means for checking the activation state of the reception enable signal present on the control network, the checking means being set to allow the processing unit (12, 66a, 66b, 68a, 68b, 68c) to transmit the data packets on the output bus (15, 25, 65) when the reception enable signal is in the active state.
  7. System (10) according to any one of the preceding claims, characterized in that the processing unit (12, 66a, 66b, 68a, 68b, 68c) comprises a digital video signal processing unit or DSP (27).
  8. System (10) according to any one of the preceding claims, characterized in that the switching means comprise:
    a first multiplexer (42), which has two inputs, connected respectively to the input bus and to the output bus, and an output connected to the input bus, the first multiplexer (42) having a selection input (130a), which is connected to the node controller (13, 53, 63a, 63b, 67a, 67b, 67c) in order to select which input of the first multiplexer (42) is to be carried onto the output of the first multiplexer (42);
    a second multiplexer (43), which has two inputs, connected respectively to the output bus and to the processor, and an output connected to the output bus, the second multiplexer (43) having a selection input (130b), which is connected to the node controller (13, 53, 63a, 63b, 67a, 67b, 67c) in order to select which input of the second multiplexer is to be carried onto the output of the second multiplexer (43).
  9. System (10) according to any one of the preceding claims, characterized in that it comprises an auxiliary bus (16), which is connected to all the nodes (11, 61a, 61b, 62a, 62b, 62c) of the plurality of nodes in order to acquire the processed data present in output from a node (11, 61a, 61b, 62a, 62b, 62c).
  10. Use of the system (10) according to any one of the preceding claims in a circuit for processing signals detected by a radar antenna connected to the input bus (14, 24, 64).
EP04425935A 2004-12-22 2004-12-22 Rekonfigurierbares Mehrprozessorsystem besonders zur digitalen Verarbeitung von Radarbildern Active EP1675015B1 (de)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP04425935A EP1675015B1 (de) 2004-12-22 2004-12-22 Rekonfigurierbares Mehrprozessorsystem besonders zur digitalen Verarbeitung von Radarbildern
DE602004013458T DE602004013458T2 (de) 2004-12-22 2004-12-22 Rekonfigurierbares Mehrprozessorsystem besonders zur digitalen Verarbeitung von Radarbildern
AT04425935T ATE393932T1 (de) 2004-12-22 2004-12-22 Rekonfigurierbares mehrprozessorsystem besonders zur digitalen verarbeitung von radarbildern

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP04425935A EP1675015B1 (de) 2004-12-22 2004-12-22 Rekonfigurierbares Mehrprozessorsystem besonders zur digitalen Verarbeitung von Radarbildern

Publications (2)

Publication Number Publication Date
EP1675015A1 EP1675015A1 (de) 2006-06-28
EP1675015B1 true EP1675015B1 (de) 2008-04-30

Family

ID=34932946

Family Applications (1)

Application Number Title Priority Date Filing Date
EP04425935A Active EP1675015B1 (de) 2004-12-22 2004-12-22 Rekonfigurierbares Mehrprozessorsystem besonders zur digitalen Verarbeitung von Radarbildern

Country Status (3)

Country Link
EP (1) EP1675015B1 (de)
AT (1) ATE393932T1 (de)
DE (1) DE602004013458T2 (de)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010060283A1 (zh) * 2008-11-28 2010-06-03 上海芯豪微电子有限公司 一种数据处理的方法与装置
CN101799750B (zh) * 2009-02-11 2015-05-06 上海芯豪微电子有限公司 一种数据处理的方法与装置
CN104459669B (zh) * 2014-12-10 2016-09-21 珠海纳睿达科技有限公司 雷达反射信号处理装置及其处理方法

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA1286031C (en) * 1986-06-27 1991-07-09 Ichiro Tamitani Processor array comprising processors connected selectively in series or in parallel
US5296936A (en) * 1991-07-22 1994-03-22 International Business Machines Corporation Communication apparatus and method for transferring image data from a source to one or more receivers
WO2000028430A1 (fr) * 1998-11-10 2000-05-18 Fujitsu Limited Systeme de processeur parallele

Also Published As

Publication number Publication date
DE602004013458T2 (de) 2009-06-18
ATE393932T1 (de) 2008-05-15
DE602004013458D1 (de) 2008-06-12
EP1675015A1 (de) 2006-06-28

Similar Documents

Publication Publication Date Title
KR100812225B1 (ko) 멀티프로세서 SoC 플랫폼에 적합한 크로스바 스위치구조
US6138185A (en) High performance crossbar switch
KR900006791B1 (ko) 패킷 스위치식 다중포트 메모리 n×m 스위치 노드 및 처리 방법
US5430442A (en) Cross point switch with distributed control
US6405299B1 (en) Internal bus system for DFPS and units with two- or multi-dimensional programmable cell architectures, for managing large volumes of data with a high interconnection complexity
US5581767A (en) Bus structure for multiprocessor system having separated processor section and control/memory section
US8503466B2 (en) Network on chip input/output nodes
JP2558393B2 (ja) 多重クラスタ信号プロセッサ
EP1744497B1 (de) Verfahren zum Verwalten einer Vielzahl von virtuellen Verbindungen zur gemeinsamen Nutzung auf einer Verbindungsleitung und Netzwerk zur Implementierung dieses Verfahrens
KR100951856B1 (ko) 멀티미디어 시스템용 SoC 시스템
JP3206126B2 (ja) 分散クロスバー・スイッチ・アーキテクチャにおけるスイッチング・アレイ
JPH03116358A (ja) 多段通信ネットワーク
JPH05236525A (ja) 超大規模モジュラースイッチ
US20060090041A1 (en) Apparatus for controlling a multi-processor system, scalable node, scalable multi-processor system, and method of controlling a multi-processor system
JP3987784B2 (ja) アレイ型プロセッサ
JP2552784B2 (ja) 並列データ処理制御方式
EP1675015B1 (de) Rekonfigurierbares Mehrprozessorsystem besonders zur digitalen Verarbeitung von Radarbildern
US5264842A (en) Generalized usage of switch connections with wait chain
US7032061B2 (en) Multimaster bus system
JP3119130B2 (ja) ネットワーク構成
JP3317678B2 (ja) データの伝送および経路選択を制御する方法
RU2115162C1 (ru) Сеть для маршрутизации сообщений
JP2791764B2 (ja) 演算装置
Rekha et al. Analysis and Design of Novel Secured NoC for High Speed Communications
JPH06338911A (ja) データ通信装置

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR LV MK YU

17P Request for examination filed

Effective date: 20061205

17Q First examination report despatched

Effective date: 20070117

AKX Designation fees paid

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU MC NL PL PT RO SE SI SK TR

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU MC NL PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: FRENCH

REF Corresponds to:

Ref document number: 602004013458

Country of ref document: DE

Date of ref document: 20080612

Kind code of ref document: P

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080430

NLV1 Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act
PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080430

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080730

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080930

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080430

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080810

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080430

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080430

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080830

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080430

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080731

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080430

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080430

ET Fr: translation filed
PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080430

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080430

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080430

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20090202

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080430

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20081231

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20081231

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20081222

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20081231

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20081101

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080430

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20081222

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080430

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080731

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 12

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 13

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20151222

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20151222

PGRI Patent reinstated in contracting state [announced from national office to epo]

Ref country code: IT

Effective date: 20170710

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 14

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20231227

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IT

Payment date: 20231220

Year of fee payment: 20

Ref country code: FR

Payment date: 20231227

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20231229

Year of fee payment: 20