US20040114609A1 - Interconnection system - Google Patents

Interconnection system

Info

Publication number
US20040114609A1
Authority
US
Grant status
Application
Patent type
Prior art keywords
interface
bus
system
data
interconnection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10468167
Inventor
Ian Swarbrick
Paul Winser
Stuart Ryan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Clearspeed Solutions Ltd
Original Assignee
Clearspeed Tech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00: Routing or path finding of packets in data switching networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00: Details not covered by groups G06F3/00 – G06F13/00 and G06F21/00
    • G06F1/04: Generating or distributing clock signals or signals derived directly therefrom
    • G06F1/10: Distribution of clock signals, e.g. skew
    • G06F15/00: Digital computers in general; Data processing equipment in general
    • G06F15/76: Architectures of general purpose stored program computers
    • G06F15/80: Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
    • G06F15/8007: Single instruction multiple data [SIMD] multiprocessors
    • G06F17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/50: Computer-aided design
    • G06F17/5045: Circuit design
    • G06F17/505: Logic synthesis, e.g. technology mapping, optimisation
    • H04L12/00: Data switching networks
    • H04L12/54: Store-and-forward switching systems
    • H04L12/56: Packet switching systems
    • H04L45/74: Address processing for routing
    • H04L45/742: Route cache and its operation
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10: TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S: TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00: Data processing: database and file management or data structures
    • Y10S707/99931: Database or file accessing
    • Y10S707/99933: Query processing, i.e. searching
    • Y10S707/99936: Pattern matching access

Abstract

An interconnection system (110) interconnects a plurality of reusable functional units (105 a), (105 b), (105 c). The system (110) comprises a plurality of nodes (135), (140), (145), (150), (155), (160) each node communicating with a functional unit. A plurality of data packets are transported between the functional units. Each data packet has routing information associated therewith to enable a node to direct the data packet via the interconnection system.

Description

    TECHNICAL FIELD
  • [0001]
The present invention relates to an interconnection network. In particular, but not exclusively, it relates to an intra-chip interconnection network.
  • BACKGROUND OF THE INVENTION
  • [0002]
A typical bus system for interconnecting a plurality of functional units (or processing units) consists of either a set of wires with tri-state drivers, or two uni-directional data-paths incorporating multiplexers to get data onto the bus. Access to the bus is controlled by an arbitration unit, which accepts requests to use the bus and grants one functional unit access to the bus at any one time. The arbiter may be pipelined, and the bus itself may be pipelined in order to achieve a higher clock rate. In order to route data along the bus, the system may comprise a plurality of routers, each typically comprising a look-up table. The data is then compared with the entries within the routing look-up table in order to route the data onto the bus to its correct destination.
  • [0003]
However, such routing schemes cannot be realised on a chip, since the complexity and the size of their components make this infeasible. This has been overcome in existing on-chip bus systems by using a different scheme in which data is broadcast, that is, transferred from one functional unit to a plurality of other functional units simultaneously. This avoids the need for routing tables. However, broadcasting data to all functional units on the chip consumes considerable power and is, thus, inefficient. Also, it is becoming increasingly difficult to transfer data over relatively long distances in one clock cycle.
  • [0004]
Furthermore, in a typical bus system, since every requester of the bus (transactor) must connect to the central arbiter, the scalability of the system is limited. As bigger systems are built, the functional units are further from the arbiter, so latency increases, and the number of concurrent requests that may be handled by a single arbiter is limited. Therefore, in such central-arbiter-based bus systems, the length of the bus and the number of transactors are normally fixed at the outset, and it would not be possible to lengthen the bus at a later stage to meet varying system requirements.
  • [0005]
    Another form of interconnection is a direct interconnection network. These types of networks typically comprise a plurality of nodes, each of which is a discrete router chip. Each node (router) may connect to a processor and to a plurality of other nodes to form a network topology.
  • [0006]
In the past, it has been infeasible to use this network-based approach as a replacement for on-chip buses because the individual nodes are too big to be implemented on a chip.
  • [0007]
Many existing buses are created to work with a specific protocol, and many of their customised wires relate to specific features of that protocol. Conversely, many protocols are based around a specific bus implementation, for example having specific data fields to aid the arbiter in some way.
  • SUMMARY OF THE INVENTION
  • [0008]
The object of the present invention is to provide an interconnection network as an on-chip bus system. This is achieved by routing data on the bus, as opposed to broadcasting it. The routing of the data is achieved by a simple addressing scheme in which each transaction has routing information associated therewith, for example a geographical address, which enables the nodes within the interconnection network to route the transaction to its correct destination.
  • [0009]
In this way, the routing information contains information on the direction in which to send the data packet. This routing information is not merely an address of a destination but provides directional information, for example the x,y coordinates of a grid. Thus the nodes need neither routing table(s) nor global signals to determine the direction, since all the information a node needs is contained in the routing information of the data packets. This enables the circuitry of the node and the interconnection system to be simplified, making integration of the system onto a chip feasible.
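By way of illustration only (not part of the original disclosure), the table-free geographical routing described above can be sketched as follows, assuming a simple x,y grid; all function and direction names are hypothetical.

```python
def route_direction(node_xy, dest_xy):
    """Derive a routing direction purely from the coordinates carried in the
    packet, with no routing table or global signal (illustrative sketch)."""
    nx, ny = node_xy
    dx, dy = dest_xy
    if (nx, ny) == (dx, dy):
        return "consume"                     # arrived: route the packet off the bus
    if nx != dx:
        return "east" if dx > nx else "west"  # resolve the x coordinate first
    return "north" if dy > ny else "south"
```

A node at (0,0) forwarding towards (2,0) would thus send the packet "east", using only information in the packet itself.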
  • [0010]
If each functional unit is connected to a node, and all nodes are connected together, then a pipeline connection will exist between each pair of nodes in the system. The number of intervening nodes governs the number of pipeline stages. If there is a pair of joined nodes where the distance between them is too great to transmit data within a single clock cycle, a repeater block can be inserted between the nodes. This block registers the data, while maintaining the same protocol as the other bus blocks. The inclusion of repeater blocks allows interconnections of arbitrary length to be created.
  • [0011]
The interconnection system according to the present invention can be utilised in an intra-chip interconnection network. Data transfers are all packetised, and the packets may be of any length that is a multiple of the data-path width. The components of the bus used to create the interconnection network (nodes and T-switches) all have registers on the data-path(s).
  • [0012]
    The main advantage of the present invention is that it is inherently re-usable. The implementer need only instantiate enough functional blocks to form an interconnection of the correct length, with the right number of interfaces, and with enough repeater blocks to achieve the desired clock rate.
  • [0013]
    The interconnection system in accordance with the present invention employs distributed arbitration. The arbitration capability grows as more blocks are added. Therefore, if the bus needs to be lengthened, it is a simple matter of instantiating more nodes and possibly repeaters. Since each module manages its own arbitration within itself, the overall arbitration capability of the interconnect increases. This makes the bus system of the present invention more scalable (in length and overall bandwidth) than other conventional bus systems.
  • [0014]
The arbitration adopted by the system of the present invention is truly distributed and ‘localised’. It has been simplified such that there is no polling to see if the downstream route is free, as in conventional distributed systems; instead, this information is initiated by the ‘blocked’ node and pipelined back up the interconnection (bus) by upstream nodes.
  • [0015]
    The interconnection in accordance with the present invention is efficient in terms of power consumption. Since packets are routed, rather than broadcast, only the wires between the source and destination node are toggled. The remaining bus drivers are clock-gated. Hence the system of the present invention consumes less power.
  • [0016]
Furthermore, every node on the bus has a unique address associated with it: an interface address. A field in the packet is reserved to hold a destination interface address. Each node on the bus interrogates this field of an incoming packet; if it matches the node's interface address, the node routes the packet off the interconnection (or bus); if it does not match, the node routes the packet down the bus. The addressing scheme could be extended to support “wildcards” for broadcast messages: if a subset of the address matches the interface address, then the packet is both routed off the bus and passed on down the bus; otherwise it is just sent on down the bus.
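The address-matching decision above, including the wildcard extension, can be sketched as follows; this is an illustrative model only, and the bit-mask encoding of the wildcard is an assumption not taken from the text.

```python
def node_action(dest_addr, iface_addr, wildcard_mask=0):
    """Decide what a node does with an incoming packet, given the packet's
    destination interface address and the node's own interface address.
    wildcard_mask marks assumed don't-care bits for broadcast messages."""
    if dest_addr == iface_addr:
        return ["consume"]             # exact match: route the packet off the bus
    if wildcard_mask and (dest_addr | wildcard_mask) == (iface_addr | wildcard_mask):
        return ["consume", "forward"]  # subset match: take a copy and pass it on
    return ["forward"]                 # no match: send the packet on down the bus
```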
  • [0017]
For packets coming onto the bus, each interface unit interrogates the destination interface address of the packet. This is used to decide in which direction a packet arriving on the bus from an attached unit is to be routed. In the case of a linear bus, this could be a simple comparison: if the destination address is greater than the interface address of the source of the data, then the packet is routed “up” the bus; otherwise the packet is routed “down” the bus. This could be extended such that each interface unit maps destination addresses, or ranges of addresses, to directions on the bus.
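Both forms of the decision, the simple linear-bus comparison and the range-mapped generalisation, can be sketched as below; the lane numbering and range-map structure are illustrative assumptions.

```python
def lane_for_injection(dest, source_iface):
    """Linear-bus rule from the text: higher interface addresses lie 'up'
    the bus from lower ones (lane 0 = 'up', lane 1 = 'down', by assumption)."""
    return 0 if dest > source_iface else 1

def lane_from_ranges(dest, range_map):
    """Generalisation: map ranges of destination addresses to bus directions.
    range_map is a list of ((lo, hi), lane) pairs (an assumed encoding)."""
    for (lo, hi), lane in range_map:
        if lo <= dest <= hi:
            return lane
    raise ValueError("unroutable destination address")
```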
  • [0018]
    Preferably, the interface unit sets a binary lane signal based on the result of this comparison. In this way functionality is split between the node and interface unit. All “preparation” of the data to be transported (including protocol requirements) is carried out in the interface unit. This allows greater flexibility as the node is unchanging irrespective of the type of data to be transported, allowing the node to be re-used in different circuits. More preferably the node directs the packet off the interconnection system to a functional unit.
  • [0019]
More preferably, for data destined for the interconnection, the interface unit can carry out the following functions: take the packet from the functional unit; ensure a correct destination module ID and head and tail bits; compare the destination module ID with the local module ID and set a binary lane signal based on the result of this comparison; pack the module ID, data and any high-level (non-bus) control signals into a flit; implement any protocol change necessary; and pass the lane signal and flit to the node using the inject protocol.
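The listed steps can be sketched as a single packing routine; the flit field layout and function name here are assumptions for illustration, not the disclosed implementation.

```python
def build_flits(dest_module_id, local_module_id, payload_words, ctrl=0):
    """Pack a packet into flits: set head/tail bits, compare module IDs to
    derive the binary lane signal, and bundle ID, data and side-band control
    into each flit (illustrative sketch; field names are assumptions)."""
    lane = 0 if dest_module_id > local_module_id else 1  # binary lane signal
    flits = []
    for i, word in enumerate(payload_words):
        flits.append({
            "head": i == 0,                          # head bit on first flit
            "tail": i == len(payload_words) - 1,     # tail bit on last flit
            "module_id": dest_module_id,
            "data": word,
            "ctrl": ctrl,                            # high-level control signals
        })
    return lane, flits
```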
  • [0020]
    A T-junction or switch behaves in a similar way; the decision here is simply whether to route the packet down one branch or the other. This would typically be done for ranges of addresses; if the address is larger than some predefined value then the packets are routed left, otherwise they are routed right. However, more complex routing schemes could be implemented if required.
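The simple range-based T-switch decision described above amounts to a one-line comparison; the branch labels and threshold parameter below are illustrative assumptions.

```python
def t_switch_branch(dest_addr, split_point):
    """Route a packet at a T-junction: addresses above a predefined value go
    down one branch, the rest down the other (illustrative sketch)."""
    return "left" if dest_addr > split_point else "right"
```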
  • [0021]
    The addressing scheme can be extended to support inter-chip communication. In this case a field in the address is used to define a target chip address with, for example, 0 in this field representing a local address of the chip. When a packet arrives at the chip this field will be compared with the pre-programmed address of the chip. If they match then the field is set to zero and the local routing operates as above. If they do not match, then the packet is routed along the bus to the appropriate inter-chip interface in order to be routed towards its final destination. This scheme could be extended to allow a hierarchical addressing scheme to manage routing across systems, sub-systems, boards, groups of chips, as well as individual chips.
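The inter-chip behaviour described above, where a zero chip field (or a match with the local chip address) means local delivery, can be sketched as follows; the packet representation is an assumption.

```python
def chip_route(packet, local_chip_addr):
    """Inter-chip routing sketch: a chip field of 0 denotes a local address;
    a matching chip field is zeroed so local routing then operates as normal;
    otherwise the packet heads for an inter-chip interface (illustrative)."""
    if packet["chip"] == 0 or packet["chip"] == local_chip_addr:
        packet["chip"] = 0       # treat as local from now on
        return "local"
    return "inter_chip"
```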
  • [0022]
The system according to the present invention is not suitable for all bus-type applications. The source and destination transactors are decoupled, since there is no central arbitration point. The advantage of the approach of the present invention is that long buses (networks) can be constructed, with very high aggregate bandwidth.
  • [0023]
The system of the present invention is protocol agnostic: the interconnection merely transports data packets, while interface units in accordance with the present invention manage all protocol-specific features. This means that it is easy to migrate to a new protocol, since only the interface units need to be re-designed.
  • [0024]
    The present invention also provides flexible topology and length.
  • [0025]
The repeater blocks of the present invention allow very high clock rates, in that the overall modular structure of the interconnection prevents the clock rate being limited by long wires. This simplifies synthesis and layout. The repeater blocks not only pipeline the data as it travels downstream but also implement a flow control protocol: blockage information is pipelined up the interconnection (or bus), rather than a blocking signal being distributed globally. A flit (flow control digit) is the basic unit of data transfer over the interconnection of the present invention; it comprises n bytes of data, as well as some side-band control signals, and its size equals the width of the bus data-path. When blocked, a feature of this mechanism is that data compression (inherent buffering) is achieved on the bus at double the latency figure, i.e. if the latency through the repeater is one cycle then two flits will concatenate when it is blocked. This means that the scope of any blocking is minimised, thus reducing the queuing requirement in a functional block.
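The double-buffered blocking behaviour, where a second register absorbs one extra in-flight flit so that two flits concatenate per repeater when blocked, can be modelled behaviourally as below; the class and method names are assumptions, not the disclosed RTL.

```python
class RepeaterStore:
    """Behavioural sketch of a repeater's double-buffered store: one register
    is used in free flow; when blocked downstream, the second register absorbs
    one more in-flight flit, so two flits concatenate per repeater."""
    def __init__(self):
        self.regs = []                # up to two flit-wide registers

    def can_accept(self):
        return len(self.regs) < 2     # full: blockage propagates upstream

    def push(self, flit):
        assert self.can_accept()
        self.regs.append(flit)

    def pop(self, downstream_blocked):
        if downstream_blocked or not self.regs:
            return None               # hold data while blocked
        return self.regs.pop(0)       # otherwise forward with one cycle latency
```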
  • [0026]
The flow of flits (flow control digits) is managed by a flow control protocol, in conjunction with double-buffering in a store block (and repeater unit) as described previously.
  • [0027]
    The components of the interconnection of the present invention manage the transportation of packets. Customised interface units handle protocol specific features. These typically involve packing and unpacking of control information and data, and address translation. A customised interface unit can be created to match any specific concurrency.
  • [0028]
Many packets can be travelling along separate segments of the interconnection of the present invention simultaneously. This allows the achievable bandwidth to be much higher than the raw bandwidth of the wires (width of bus multiplied by clock rate). If there are, for example, four adjacent on-chip blocks A, B, C and D, then A and B can communicate at the same time that C and D communicate. In this case the achievable bandwidth is twice that of a broadcast-based bus.
  • [0029]
Packets are injected (gated) onto the interconnection at each node, so that each node is allocated a certain amount of the overall bandwidth (e.g. by being able to send, say, 10 flow control digits within every 100 cycles). This distributed scheme controls the overall bandwidth allocation.
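The per-node gating can be sketched as a simple windowed quota counter, using the text's example figures of 10 flits per 100 cycles as defaults; the class name and reset policy are illustrative assumptions.

```python
class InjectGate:
    """Sketch of per-node bandwidth allocation: allow up to `quota` flit
    injections within every `window` cycles (e.g. 10 flits per 100 cycles)."""
    def __init__(self, quota=10, window=100):
        self.quota, self.window = quota, window
        self.cycle = 0
        self.sent = 0

    def tick(self):
        self.cycle += 1
        if self.cycle % self.window == 0:
            self.sent = 0            # new window: refresh the allocation

    def try_inject(self):
        if self.sent < self.quota:
            self.sent += 1
            return True              # within quota: gate the flit onto the bus
        return False                 # over quota: hold the flit this window
```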
  • [0030]
    It is possible to keep forcing packets onto the interconnection of the present invention until it saturates. All packets will eventually be delivered. This means the interconnection can be used as a buffer with an in-built flow control mechanism.
  • BRIEF DESCRIPTION OF DRAWINGS
  • [0031]
FIG. 1 is a block schematic diagram of a system incorporating the interconnection system according to an embodiment of the present invention;
  • [0032]
    FIG. 2 is a block schematic diagram illustrating the initiator and target of a virtual component interface system of FIG. 1;
  • [0033]
    FIG. 3 is a block schematic diagram of a node of the interconnection system shown in FIG. 1;
  • [0034]
    FIG. 4 is a block schematic diagram of a connection over the interconnection according to the present invention between virtual components of the system shown in FIG. 1;
  • [0035]
    FIG. 5a is a diagram of the typical structure of the T-switch of FIG. 1;
  • [0036]
    FIG. 5b is a diagram showing the internal connections of the T-switch of FIG. 5a;
  • [0037]
    FIG. 6 illustrates the module ID (interface ID) encoding of the system of FIG. 1;
  • [0038]
    FIG. 7 illustrates handshaking signals in the interconnection system according to an embodiment of the present invention;
  • [0039]
    FIG. 8 illustrates the blocking behaviour of the interconnection system of an embodiment of the present invention when occup[1:0]=01;
  • [0040]
    FIG. 9 illustrates blocking for two cycles of the interconnection system according to an embodiment of the present invention;
  • [0041]
    FIG. 10 illustrates the virtual component interface handshake according to an embodiment of the present invention;
  • [0042]
    FIG. 11 illustrates a linear chip arrangement of the system according to an embodiment of the present invention;
  • [0043]
    FIG. 12 is a schematic block diagram of the interconnection system of the present invention illustrating an alternative topology;
  • [0044]
    FIG. 13 is a schematic block diagram of the interconnection system of the present invention illustrating a further alternative topology;
  • [0045]
    FIG. 14 illustrates an example of a traffic handling subsystem according to an embodiment of the present invention;
  • [0046]
    FIG. 15 illustrates a system for locating chips on a virtual grid according to a method of a preferred embodiment of the present invention; and
  • [0047]
    FIG. 16 illustrates routing a transaction according to the method of a preferred embodiment of the present invention.
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • [0048]
The basic mechanism for communicating data and control information between functional blocks is that blocks exchange messages using the interconnection system 110 according to the present invention. The bus system can be extended to connect blocks in a multi-chip system, and the same mechanism works for blocks within a chip or blocks on different chips.
  • [0049]
An example of a system 100 incorporating the interconnection system 110 according to an embodiment of the present invention, as shown in FIG. 1, comprises a plurality of reusable on-chip functional blocks or virtual component blocks 105 a, 105 b and 105 c. These functional units interface to the interconnection and can be fixed. They can be re-used at various levels of abstraction (e.g. RTL, gate level, GDSII layout data) in different circuit designs. The topology can be fixed once the size, aspect ratio and the location of the I/Os to the interconnection are known. Each on-chip functional unit 105 a, 105 b, 105 c is connected to the interconnection system 110 via its interface unit, which handles address decoding and protocol translation. The on-chip functional block 105 a, for example, is connected to the interconnection system 110 via an associated virtual component interface initiator 115 a and peripheral virtual component interface initiator 120 a.
  • [0050]
The on-chip functional block 105 b, for example, is connected to the interconnection system 110 via an associated virtual component interface target 125 b and peripheral virtual component interface target 130 b. The on-chip functional block 105 c, for example, is connected to the interconnection system 110 via an associated virtual component interface initiator 115 c and peripheral virtual component interface target 130 c. The associated initiators and targets for each on-chip functional block shown in FIG. 1 are purely illustrative and may vary depending on the associated block's requirements. A functional block may have a number of connections to the interconnection system. Each connection has either an advanced virtual component interface (extensions forming a superset of the basic virtual component interface; this is the protocol used for the main data interfaces in the system of the present invention) or a peripheral virtual component interface (a low-bandwidth interface allowing atomic operations, mainly used in the present invention for control register access).
  • [0051]
    One currently accepted protocol for connecting such on-chip functional units as shown in FIG. 1 to a system interconnection according to the embodiment of the present invention is virtual component interface. Virtual component interface is an OCB standard interface to communicate between a bus and/or virtual component, which is independent of any specific bus or virtual component protocol.
  • [0052]
There are three types of virtual component interface: peripheral virtual component interface 120 a, 130 b, 130 c, basic virtual component interface and advanced virtual component interface. The basic virtual component interface is a wider, higher-bandwidth interface than the peripheral virtual component interface and allows split transactions. Split transactions are those where the request for data and the response are decoupled, so that a request for data does not need to wait for the response to be returned before initiating further transactions. The advanced virtual component interface is a superset of the basic virtual component interface; the advanced virtual component interface and the peripheral virtual component interface have been adopted in the system according to the embodiment of the present invention.
  • [0053]
    The advanced virtual component interface unit comprises a target and initiator. The target and initiator are virtual components that send request packets and receive response packets. The initiator is the agent that initiates transactions, for example, DMA (or EPU on F150).
  • [0054]
    As shown in FIG. 2, an interface unit that initiates a read or write transaction is called an initiator 210 (issues a request 220), while an interface that receives the transaction is called the target 230 (responds to a request 240). This is the standard virtual component terminology.
  • [0055]
Communication between each on-chip functional block 105 a, 105 b and 105 c and its associated initiators and targets is made using the virtual component interface protocol. Each initiator 115 a, 120 a, 115 c and target 125 b, 130 b, 130 c is connected to a unique node 135, 140, 145, 150, 155 and 160. Communication between each initiator 115 a, 120 a, 115 c and target 125 b, 130 b, 130 c uses the protocol in accordance with the embodiment of the present invention, as described in more detail below.
  • [0056]
The interconnection system 110 according to an embodiment of the present invention comprises three separate buses 165, 170 and 175. The RTL components have parameterisable widths, so these may be three instances of different widths. An example might be a 64-bit-wide peripheral virtual component interface bus 170 (32 address bits + 32 data bits), a 128-bit advanced virtual component interface bus 165, and a 256-bit advanced virtual component interface bus 175. Although three separate buses are illustrated here, it is appreciated that the interconnection system of the present invention may incorporate any number of separate buses.
  • [0057]
At regular intervals along the bus length, a repeater unit 180 may be inserted for all the buses 165, 170 and 175. There is no restriction on the length of the buses 165, 170 and 175; variations in their length would merely require an increased number of repeater units 180. Repeater units would, of course, only be required when the timing constraints between two nodes cannot be met due to the length of wire of the interconnection.
  • [0058]
    For complex topologies, T-switches (3-way connectors or the like) 185 can be provided. The interconnection system of the present invention can be used in any topology but care should be taken when the topology contains loops as deadlock may result.
  • [0059]
    Data is transferred on the interconnection network of the present invention in packets. The packets may be of any length that is a multiple of the data-path width. The nodes 135, 140, 145, 150, 155 and 160 according to the present invention used to create the interconnection network (node and T-switch) all have registers on the data-path(s).
  • [0060]
Each interface unit is connected to a node within the interconnection system itself, and therefore to one particular lane of the bus. Connections may be of initiator or target type, but not both, following the conventions of the virtual component interface. In practice, every block is likely to have a peripheral virtual component interface target interface for configuration and control.
  • [0061]
    The bus components according to the embodiment of the present invention use distributed arbitration, where each block in the bus system manages access to its own resources.
  • [0062]
A node 135 according to the embodiment of the present invention is illustrated in FIG. 3. The nodes 135, 140, 145, 150, 155 and 160 are substantially similar. Node 135 is connected to the bus 175 of FIG. 1. Each node comprises a first and a second input store 315, 320. The first input store 315 has an input connected to a first bus lane 305. The second input store 320 has an input connected to a second bus lane 310. The output of the first input store 315 is connected to a third bus lane 306, and the output of the second input store 320 is connected to a fourth bus lane 311. Each node further comprises an inject control unit 335 and a consume control unit 325. The node may not require the consume arbitration; for example, the node may have an output for each uni-directional lane while the consume handshaking is retained. The input of the inject control unit 335 is connected to the output of an interface unit of the respective functional unit for that node. The outputs of the inject control unit 335 are connected to a fifth bus lane 307 and a sixth bus lane 312. The input of the consume control unit 325 is connected to the output of a multiplexer 321. The inputs of the multiplexer 321 are connected to the fourth bus lane 311 and the third bus lane 306. The output of the consume control unit 325 is connected to a bus 330, which is connected to the interface unit of the respective functional unit for that node. The fifth bus lane 307 and the third bus lane 306 are connected to the inputs of a multiplexer 308. The output of the multiplexer 308 is connected to the first bus lane 305. The fourth bus lane 311 and the sixth bus lane 312 are connected to the inputs of a multiplexer 313. The output of the multiplexer 313 is connected to the second bus lane 310.
  • [0063]
The nodes are the connection points where data leaves or enters the bus, and each node also forms part of the transport medium. The node forms part of the bus lane to which it connects, including both directions of the data path. It conveys data on the lane to which it connects, with one cycle of latency when not blocked. It also allows the connecting functional block to inject and consume data in either direction, via its interface unit. Arbitration of injected or passing data is performed entirely within the node. Internally, bus 175 consists of a first lane 305 and a second lane 310. The first and second lanes 305 and 310 are physically separate unidirectional buses that are multiplexed and de-multiplexed to the same interfaces within the node 135. As illustrated in FIG. 3, the direction of data flow of the first lane 305 is opposite to that of the second lane 310. Each lane 305 and 310 has a lane number. The lane number is a parameter that is passed from the interface unit to the node to determine to which lane (and hence which direction) each packet is sent. Of course, it is appreciated that the directions of data flow of the first and second lanes 305 and 310 can be the same. This would be desirable if the blocks transacting on the bus only need to send packets in one direction.
  • [0064]
    The node 135 is capable of concurrently receiving and injecting data on the same bus lane. At the same time it is possible to pass data through on the other lane. Each uni-directional lane 305, 310 carries a separate stream 306, 307, 311, 312 of data. These streams 306, 307, 311, 312 are multiplexed together at the point 321 where data leaves the node 135 into the on-chip module 105 a (not shown here) via the interface unit 115 a and 120 a (not shown here). The data streams 306, 307, 311, 312 are de-multiplexed from the on-chip block 105 a onto the bus lanes 305 and 310 in order to place data on the interconnection 110.
  • [0065]
    This is an example of local arbitration, where competition for resources is resolved in the block 105 a where those resources reside. In this case, it is competition for access to bus lanes, and for access to the single lane coming off the bus. This approach of using local arbitration is used throughout the interconnection system, and is key to its scalability. An alternative would be that both output buses come from the node to the functional unit and then the arbitration mux would not be needed.
  • [0066]
    Each lane can independently block or pass data through. Data can be consumed from one lane at a time, and injected on one lane at the same time. Concurrent inject and consume on the same lane is also permitted. Which lane each packet is injected on is determined within the interface unit.
  • [0067]
    Each input store (or register) 315 and 320 registers the data as it passes from node to node. Each store 315, 320 contains two flit-wide registers. When there is no competition for bus resources, only one of the registers is used. When the bus blocks, both registers are then used. It also implements the ‘block on header’ feature. This is needed to allow packets to be blocked at the header flit so that a new packet can be injected onto the bus.
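The store behaviour described in this paragraph can be illustrated with a simple software model. This is a behavioural sketch under assumed semantics, not the patented hardware; the class and method names are invented for illustration.

```python
# Behavioural sketch of a node input store: two flit-wide registers
# that absorb in-flight data when the bus blocks, plus a 'block on
# header' option that holds a packet at its header flit so that a
# locally injected packet can use the lane instead.

class InputStore:
    def __init__(self):
        self.regs = []            # 0, 1 or 2 buffered flits

    @property
    def occup(self):              # mirrors the 2-bit occup status
        return len(self.regs)     # 0 = empty, 1 = one flit, 2 = full

    def push(self, flit):
        assert self.occup < 2, "sender violated occup protocol"
        self.regs.append(flit)

    def pop(self, block_on_header=False):
        """Release the next flit downstream, or None if held."""
        if not self.regs:
            return None
        if block_on_header and self.regs[0].get("head"):
            return None           # hold the packet at its header flit
        return self.regs.pop(0)

store = InputStore()
store.push({"head": True, "data": 0xA})
assert store.pop(block_on_header=True) is None   # header is held
assert store.pop()["data"] == 0xA                # released when not blocking
```

When there is no contention only one register fills; when the downstream path blocks, the second register catches the flit already in flight.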
  • [0068]
    The output interface unit 321, 325 multiplexes both bus lanes 305, 310 onto one lane 330 that feeds into the on-chip functional unit 105 a via the interface unit which is connected to the node 135. The output interface unit 321, 325 also performs an arbitration function, granting one lane access to the on-chip functional unit, while blocking the other. Each node also comprises an input interface unit 335. The input interface unit 335 performs de-multiplexing of packets onto one of the bus lanes 305, 310. It also performs an arbitration function, blocking the packet that is being input until the requested lane is available.
  • [0069]
A plurality of repeater units 180 are provided at intervals along the length of the interconnection 110. Each repeater unit 180 is used to introduce extra registers on the data path. It adds an extra cycle of latency, but is only used where there is a difficulty meeting timing constraints. Each repeater unit 180 comprises a store similar to the store unit of the nodes. The store unit merely passes data onwards, and implements blocking behaviour. There is no switching carried out in the repeater unit. The repeater block allows for more freedom in chip layout. For example, it allows long lengths of wire between nodes. Also, where a block has a single node connecting to a single lane, repeaters may be inserted into the other lanes in order to produce uniform timing characteristics over all lanes. There may be more than one repeater between two nodes.
  • [0070]
    The system according to the embodiment of the present invention is protocol agnostic, that is to say, the data-transport blocks such as the nodes 135, 140, 145, 150, 155, 160, repeater units 180 and T-switch 185 simply route data packets from a source interface to a destination interface. Each packet will contain control information and data. The packing and unpacking of this information is performed in the interface units 115 a, 120 a, 125 b, 130 b, 115 c, 130 c. In respect of the preferred embodiment, these interface units are virtual component interfaces, but it is appreciated that any other protocol could be supported by creating customised interface units.
  • [0071]
    A large on-chip block may have several interfaces to the same bus.
  • [0072]
The targets and initiators 115 a, 120 a, 125 b, 130 b, 115 c, 130 c of the interface units perform conversion between the advanced virtual component interface and bus protocols in the initiator, and from the bus to the advanced virtual component interface in the target. The protocol is an asynchronous handshake on the advanced virtual component interface side, illustrated in FIG. 10. The interface unit initiator comprises a send path. This path performs conversion from the advanced virtual component interface communication protocol to the bus protocol. It extracts a destination module ID or interface ID from the address (a block may be connected to several buses, with a different module (interface) ID on each bus), packs it into the correct part of the packet, and uses the module ID in conjunction with a hardwired routing table to generate a lane number (e.g. 1 for right, 0 for left). The initiator blocks the data at the advanced virtual component interface when it cannot be sent onto the bus. The interface unit initiator also comprises a response path. The response path receives previously requested data, converting from the bus communication protocol to the virtual component interface protocol. It blocks data on the bus if the on-chip virtual component block is unable to receive it.
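The send-path routing step described above can be sketched as follows. The module-ID field position and the routing-table contents here are illustrative assumptions, not values from the specification:

```python
# Sketch of the initiator send path: extract the destination module
# (interface) ID from a fixed field of the global address, then use a
# hardwired table to pick a lane number (e.g. 1 for right, 0 for left).

MOD_ID_SHIFT = 24          # assumed position of the 8-bit module ID
ROUTING_TABLE = {0x10: 0, 0x11: 0, 0x20: 1, 0x21: 1}  # mod ID -> lane

def lane_route(address):
    mod_id = (address >> MOD_ID_SHIFT) & 0xFF
    lane = ROUTING_TABLE[mod_id]   # hardwired at silicon compile time
    return mod_id, lane

assert lane_route(0x10_000000) == (0x10, 0)   # route left
assert lane_route(0x21_0000FF) == (0x21, 1)   # route right
```

In the hardware this table is fixed at silicon compile time, which is why the bus powers up into a usable state with no routing configuration phase.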
  • [0073]
The interface unit target comprises a send path which receives incoming read and write requests. The target converts from bus communication protocol to advanced virtual component interface protocol. It blocks data on the bus if it cannot be accepted across the virtual component interface. The target also comprises a response path which carries read (and for verification purposes, write) responses. It converts advanced virtual component interface communication protocol to bus protocol and blocks data at the advanced virtual component interface if it cannot be sent onto the bus.
  • [0074]
The other type of interface unit utilised in the embodiment of the present invention is a peripheral virtual component unit. The main differences between the peripheral virtual component interface and the advanced virtual component interface are that the data interface of the peripheral virtual component interface is potentially narrower (up to 4 bytes) and that the peripheral virtual component interface is not split-transaction.
  • [0075]
    The peripheral virtual component interface units perform conversion between the peripheral virtual component interface and bus protocols in the initiator, and from the bus protocol to peripheral virtual component interface protocol in the target. The protocol is an asynchronous handshake on the peripheral virtual component interface side.
  • [0076]
    The interface unit initiator comprises a send path. It generates destination module ID and the transport lane number from memory address. The initiator blocks the data at the peripheral virtual component interface when it cannot be sent onto the bus. The initiator also comprises a response path. This path receives previously requested data, converting from bus communication protocol to the peripheral virtual component interface protocol. It also blocks data on the bus if the on-chip block (virtual component block) is unable to receive it.
  • [0077]
The peripheral virtual component interface unit target comprises a send path which receives incoming read and write requests. It blocks data on the bus if it cannot be accepted across the virtual component interface. The target also comprises a response path which carries read (and for verification purposes, write) responses. It converts peripheral virtual component interface communication protocol to bus protocol and blocks data at the virtual component interface if it cannot be sent onto the bus.
  • [0078]
    The peripheral virtual component interface initiator may comprise a combined initiator and target. This is so that the debug registers (for example) of an initiator can be read from.
  • [0079]
    With reference to FIG. 4, the virtual component (on-chip) blocks can be connected to each other over the interconnection system according to the present invention. A first virtual component (on-chip) block 425 is connected point to point to an interface unit target 430. The interface unit target 430 presents a virtual component initiator interface 440 to the virtual component target 445 of on-chip block 425. The interface unit target 430 uses a bus protocol conversion unit 448 to interface to the bus interconnect 450. The interface unit initiator 460 presents a target interface 470 to the initiator 457 of the second on-chip block 455 and, again, uses a bus protocol conversion unit 468 on the other side.
  • [0080]
    The T-switch 185 of FIG. 1 is a block that joins 3 nodes, allowing more complex interconnects than simple linear ones. At each input port the interface ID of each packet is decoded and translated into a single bit, which represents the two possible outgoing ports. A hardwired table inside the T-Switch performs this decoding. There is one such table for each input port on the T-Switch. Arbitration takes place for the output ports if there is a conflict. The winner may send the current packet, but must yield when the packet has been sent. FIGS. 5a and 5 b show an example of the structure of a T-switch.
  • [0081]
    The T-switch comprises three sets of input/output ports 505, 510, 515 connected to each pair of unidirectional bus lanes 520, 525, 530. Within the T-switch, a T-junction 535, 540, 545 is provided for each pair of bus lanes 520, 525, 530 such that an incoming bus 520 coming into an input port 515 can be output via output port 505 or 510, for example.
  • [0082]
    Packets do not change lanes at any point on the bus, so the T-switch can be viewed as a set of n 3-way switches, where n is the number of uni-directional bus lanes. The T-switch 185 comprises a lane selection unit. The lane selection unit takes in module ID of incoming packets and produces a 1-bit result corresponding to the two possible output ports on the switch. The T-switch also comprises a store block on each input lane. Each store block stores data flow control digits and allows them to block in place if the output port is temporarily unable to receive. It also performs a block on header function, which allows switching to occur at the packet level (rather than the flow control digit level). The T-switch also includes an arbiter for arbitration between requests to use output ports.
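The per-port decode described above can be sketched in software. The group/port assignments in the tables below are invented for the example; in the hardware, each input port's table is hardwired:

```python
# Illustrative model of T-switch output-port selection: each input
# port has its own hardwired table mapping the group ID of the
# destination module ID to a single bit naming one of the two
# possible outgoing ports (ports named A, B, C here).

def group_id(mod_id):
    return mod_id >> 4            # upper 4 bits: group, lower 4: subID

DECODE = {                        # one decode table per input port
    "A": {0: "B", 1: "C"},        # group -> outgoing port
    "B": {0: "A", 1: "C"},
    "C": {0: "A", 1: "B"},
}

def output_port(in_port, mod_id):
    return DECODE[in_port][group_id(mod_id)]

assert output_port("A", 0x05) == "B"   # group 0 traffic exits on B
assert output_port("A", 0x12) == "C"   # group 1 traffic exits on C
```

Because only the group bits are decoded, the tables stay small regardless of how many modules sit within each group.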
  • [0083]
    During initialisation, the interconnection system according to the embodiment of the present invention powers up into a usable state. Routing information is hardcoded into the bus components. A destination module interface ID (mod ID) for example as illustrated in FIG. 6 is all that is required to route a packet to another node. In order for that node to return a response packet, it must have been sent the module interface ID of the sender.
  • [0084]
There may be more than one interconnection in a processing system. On each bus, every interface (which includes an inject and consume port) has a unique ID. These IDs are hard-coded at silicon compile-time.
  • [0085]
    Units attached to the bus (on-chip blocks) are free to start communicating straight after reset. The interface unit will hold off communications (by not acknowledging them) until it is ready to begin operation.
  • [0086]
The interconnection system according to the present invention has an internal protocol that is used throughout. At the interfaces to the on-chip blocks this may be converted to some other protocol, for example the virtual component interface as described above. The internal protocol will be referred to as the bus protocol. This bus protocol allows single cycle latency for packets travelling along the bus when there is no contention for resources, and allows packets to block in place when contention occurs.
  • [0087]
    The bus protocol is used for all internal (non interface/virtual component interface) data transfers. It consists of five signals: occup 705, head 710, tail 715, data 720 and valid 725 between a sender 735 and a receiver 730. These are shown in FIG. 7.
  • [0088]
    The packets consist of one or more flow control digits. On each cycle that the sender asserts the valid signal, the receiver must accept the data on the next positive clock edge.
  • [0089]
    The receiver 730 informs the sender 735 about its current status using the occup signal 705. This is a two-bit wide signal.
TABLE I
Occup signal values and their meaning.

Occup [1:0]   Meaning
00            Receiver is empty - can send data.
01            Receiver has one flow control digit - if sending a flow
              control digit on this cycle, don't send a flow control
              digit on the next cycle.
10            Receiver is full. Don't send any flow control digits
              until Occup decreases.
11            Unused.
  • [0090]
    The occup signal 705 tells the sender 735 if and when it is able to send data. When the sender 735 is allowed to transmit a data flow control digit, it is qualified with a valid signal 725.
  • [0091]
    The first flow control digit in each packet is marked by head=‘1’. The last flow control digit is marked by tail=‘1’. A single flow control digit packet has signals head=tail=valid=‘1’. Each node and T-Switch use these signals to perform switching at the packet level.
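The sender's side of this protocol (occup throttling plus head/tail framing) can be modelled as follows. This is a cycle-by-cycle behavioural sketch, with the function name and flit representation invented for illustration:

```python
# Sketch of a bus-protocol sender: flits are qualified with 'valid',
# framed by head/tail bits, and throttled by the receiver's 2-bit
# occup status (00 = empty, 01 = one flit held, 10 = full).

def send_packet(flits, occup_trace):
    """Return (cycle, flit) pairs, given the receiver's occup per cycle."""
    sent, skip_next, i = [], False, 0
    for cycle, occup in enumerate(occup_trace):
        if i >= len(flits):
            break
        if occup == 0b00 or (occup == 0b01 and not skip_next):
            flit = dict(flits[i], valid=1,
                        head=int(i == 0), tail=int(i == len(flits) - 1))
            sent.append((cycle, flit))
            skip_next = (occup == 0b01)   # 01: skip the next cycle
            i += 1
        else:
            skip_next = False
    return sent

sent = send_packet([{"d": 1}, {"d": 2}], occup_trace=[0b00, 0b01, 0b01, 0b00])
assert sent[0][1]["head"] == 1 and sent[-1][1]["tail"] == 1
```

A single-flit packet comes out with head, tail and valid all set, matching the framing rule above.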
  • [0092]
FIG. 8 shows an example of blocking behaviour on the interconnect system according to an embodiment of the present invention. The occup signal is set to 01 (binary), meaning 'if sending a flow control digit this cycle, don't send one on the next cycle'.
  • [0093]
FIG. 9 shows an example of the blocking mechanism more completely. The occup signal is set to 01 (binary), then to 10 (binary). The sender can resume transmitting flow control digits when the occup signal is set back to 01—at that point it is not currently sending a flow control digit, so it is able to send one on the next cycle.
  • [0094]
The protocol at the boundary between the node and the interface unit is different from that just described, and is similar to that used by the virtual component interface. At the sending and receiving interfaces, there is a val and an ack signal. When val=ack=1, a flow control digit is exchanged, for the inject protocol. The consume (bus output) protocol is different from the inject protocol, but uses the minimum logic that allows registered outputs (and thus simplifies synthesis and integration into a system on chip). The consume protocol is defined as follows: on the rising clock edge, data is blocked on the next clock edge if CON_BLOCK=1; on the rising clock edge, data is unblocked on the next clock edge if CON_BLOCK=0. CON_BLOCK is the flow control (blocking) signal from the functional unit.
  • [0095]
Of course, the protocols at this interface can be varied without affecting the overall operation of the bus.
  • [0096]
    The difference between this and virtual component interface is that the ack signal is high by default, and is only asserted low on a cycle when data cannot be received. Without this restriction, the node would need additional banks of registers.
  • [0097]
The bus protocol allows the exchange of packets consisting of one or more flow control digits. Eight bits in the upper part of the first flow control digit of each packet carry the destination module ID, and are used by the bus system to deliver the packet. The top 2 bits are also used for internal bus purposes. In all other bit fields, the packing of the flow control digits is independent of the bus system.
  • [0098]
    At each interface unit, virtual component interface protocol is used. The interface control and data fields are packed into bus flow control digits by the sending interface and then unpacked at the receiving interface unit. The main, high-bandwidth, interface to the bus uses the advanced virtual component interface. All features of the advanced virtual component interface are implemented, with the exception of those used to optimise the internal operation of an OCB.
  • [0099]
    The virtual component interface protocol uses an asynchronous handshake as shown in FIG. 8. Data is valid when VAL=ACK=1. The bus interface converts data and control information from the virtual component interface protocol to the bus internal communication protocol.
  • [0100]
    The bus system does not distinguish between control information and data. Instead, the control bits and data are packed up into packets and sent to the destination interface unit, where they are unpacked and separated back into data and control.
  • [0101]
    Although in the preferred embodiment, virtual component interface compliant interface units are utilised, it is appreciated that different interface units may be used instead (e.g. ARM AMBA compliant interfaces).
  • [0102]
Table II shows the fields within the data flow control digits that are used by the interconnection system according to an embodiment of the present invention. All other information in the flow control digits is simply transported by the bus. The encoding and decoding is performed by the interface units. The interface units also insert the head and tail bits into the flow control digits, and insert the MOD ID in the correct bit fields.
TABLE II
Specific fields.

Name     Bit                               Comments
Head     FLOW CONTROL DIGIT_WIDTH - 1      Set to '1' to indicate first flow
                                           control digit of packet.
Tail     FLOW CONTROL DIGIT_WIDTH - 2      Set to '1' to indicate last flow
                                           control digit of packet.
Mod ID   FLOW CONTROL DIGIT_WIDTH - 3 :    ID of interface to which packet is
         FLOW CONTROL DIGIT_WIDTH - 10     to be sent. Virtual component
                                           interface calls this MOD ID. It is
                                           really an interface ID, since a
                                           large functional unit could have
                                           multiple bus interfaces, in which
                                           case it is necessary to
                                           distinguish between them.
  • [0103]
    The advanced virtual component interface packet types are read request, write request and read response. A read request is a single flow control digit packet and all of the relevant virtual component interface control fields are packed into the flow control digit. A write request consists of two or more flow control digits. The first flow control digit contains virtual component interface control information (e.g. address). The subsequent flow control digits contain data and byte enables. A read response consists of one or more flow control digits. The first and subsequent flow control digits all contain data plus virtual component interface response fields (e.g. RSCRID, RTRDID and RPKTID).
  • [0104]
    An example mapping of the advanced virtual component interface onto packets is now described. The example is for a bus with 128-bit wide data paths. It should be noted that the nodes extract the destination module ID from bits 159:152 in the first flow control digit of each packet. In the case of read response packets this corresponds with the virtual component interface RSCRID field.
TABLE III
Possible Virtual Component Interface fields for 128 bit wide bus

AVCI/BVCI          Flow control  Header  Read Responses
Signal Name  WIDTH digit Bits    Only?   Only?   Direction  Comments
CLOCK        1                                   IA
RESETN       1                                   IA
CMDACK       1                                   TI         Handshake signal.
CMDVAL       1                                   IT         Handshake signal.
WDATA        128   127:0                         IT         Only for write requests.
BE           16    143:128                       IT         Only for write requests.
ADDRESS      64    63:0          ✓               IT
CFIXED       1     64            ✓               IT
CLEN         8     72:65         ✓               IT         ***needs update***
CMD          2     75:74         ✓               IT
CONTIG       1     76            ✓               IT
EOP          1                                   IT         Handshake signal.
CONST        1     77            ✓               IT
PLEN         9     86:78         ✓               IT
WRAP         1     87            ✓               IT
RSPACK       1                                   IT         Handshake signal.
RSPVAL       1                                   TI         Handshake signal.
RDATA        128   127:0                 ✓       TI         Only for read responses.
REOP         1                                   TI         Handshake signal.
RERROR       2     143:142               ✓       TI         Only for read responses.
DEFD         1     88            ✓               IT
WRPLEN       5     93:89         ✓               IT
RFLAG        4     141:138               ✓       TI         Only for read responses.
SCRID        8     151:144       ✓               IT
TRDID        2     95:94         ✓               IT
PKTID        8     103:96        ✓               IT
RSCRID       8     159:152               ✓       TI         Only for read responses.
RTRDID       2     137:136               ✓       TI         Only for read responses.
RPKTID       8     135:128               ✓       TI         Only for read responses.
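The header-flit layout above can be made concrete with a small packing example. This is an illustrative sketch using only a few of the Table III bit positions (ADDRESS 63:0, CMD 75:74, SCRID 151:144, and the destination module ID in bits 159:152); the function name and chosen field values are invented:

```python
# Worked example of header-flit packing for the 128-bit-data bus,
# using a subset of the Table III bit positions. The nodes route each
# packet on the module ID carried in bits 159:152 of the header flit.

def pack_header(address, cmd, scrid, dest_mod_id):
    flit = address & ((1 << 64) - 1)       # ADDRESS in bits 63:0
    flit |= (cmd & 0x3) << 74              # CMD in bits 75:74
    flit |= (scrid & 0xFF) << 144          # SCRID in bits 151:144
    flit |= (dest_mod_id & 0xFF) << 152    # mod ID in bits 159:152
    return flit

flit = pack_header(address=0x1000, cmd=0b01, scrid=0x07, dest_mod_id=0x42)
assert (flit >> 152) & 0xFF == 0x42        # what the nodes route on
assert (flit >> 144) & 0xFF == 0x07        # returned as RSCRID bits
```

Because the response's RSCRID field occupies the same 159:152 bit positions, a read response is routed back using exactly the same extraction.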
  • [0105]
Peripheral virtual component interface burst-mode read and write transactions are not supported over the bus, as these cannot be efficiently implemented. For this reason, the peripheral virtual component interface EOP signal should be fixed at logic '1'. Any additional processing units or external units can be attached to the bus, but the EOP signal should again be fixed at logic '1'. With this change, the new unit should work normally.
  • [0106]
    The read request type is a single flow control digit packet carrying the 32-bit address of the data to be read. The read response is a single flow control digit response containing the requested 32 bits of data. The write request is a single flow control digit packet containing the 32-bit address of the location to be written, plus the 32 bits of data, and 4 bits of byte enable. The write response prevents a target responding to a write request in the same way that it would to a read request.
  • [0107]
    With all of the additional signals, 32 bit (data) peripheral virtual component interface occupies 69 bits on the bus.
TABLE IV
PVCI

Signal    Bit     Read     Read      Write
Name      Fields  Request  Response  Request  Comments
CLOCK                                         System signal.
RESETN                                        System signal.
VAL                                           Handshake signal.
ACK                                           Handshake signal.
EOP                                           Handshake signal.
ADDRESS   63:32   ✓                  ✓
RD        100     ✓                  ✓
BE        67:64                      ✓
WDATA     31:0                       ✓
RDATA     31:0             ✓
RERROR    68               ✓
  • [0108]
    The internal addressing mechanism of the bus is based on the assumption that all on-chip blocks in the system have a fixed 8-bit module ID.
  • [0109]
The virtual component interface specifies the use of a single global address space. Internally, the bus delivers packets based on the module ID of each block in the system. The module ID is 8 bits wide. All global addresses will contain the 8-bit module ID, and the interface unit will simply extract the destination module ID from the address. The location of the module ID bits within the address is predetermined.
  • [0110]
    The module IDs in each system are divided into groups. Each group may contain up to 16 modules. The T-switches in the system use the group ID to determine which output port to send each packet to.
  • [0111]
Within each group, there may be up to sixteen on-chip blocks, each with a unique subID. The inclusion of only sixteen modules within each group does not restrict the bus topology. Within each linear bus section, there may be more than one group, but modules from different groups may not interleave. There may be more than sixteen modules between T-switches. The only purpose of using a group ID and sub ID is to simplify the routing tables inside the T-switch(es). If there are no T-switches being used, the numbering of modules can be arbitrary. If a linear bus topology is used and the interfaces are numbered sequentially, this may simplify lane number generation, as a comparator can be used instead of a table. However, a table may still turn out to be smaller after logic minimisation. Two interfaces on different buses can have the same mod ID (=interface ID).
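The comparator optimisation mentioned above can be sketched in a line or two. The lane-number convention below (1 = towards higher IDs) is an assumption for illustration:

```python
# Sketch of lane-number generation on a linear bus with sequentially
# numbered interfaces: a comparison against this interface's own ID
# replaces the hardwired routing table.

def lane_for(my_id, dest_id):
    # assumed convention: lane 1 = towards higher IDs ("right"),
    # lane 0 = towards lower IDs ("left")
    return 1 if dest_id > my_id else 0

assert lane_for(my_id=5, dest_id=9) == 1   # send right
assert lane_for(my_id=5, dest_id=2) == 0   # send left
```

As the text notes, a minimised table may still be smaller than a comparator, so which form wins is a synthesis question rather than a rule.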
  • [0112]
An example of reducing erroneous traffic on the interconnection according to the embodiment of the present invention is described here. When packets that do not have a legal mod ID are presented to the interface unit, it will acknowledge them, but will also generate an error in the rerror virtual component interface field. The packet will not be sent onto the bus. The interface unit will "absorb" it and destroy it.
  • [0113]
In the preferred embodiment the bus system blocks operate at the same clock rate as synthesisable RTL blocks in any given process. For a 0.13 μm 40 G processor, the target clock rate is 400 MHz. There will be three separate buses. Each bus comprises a separate parameterised component. There will be a 64-bit wide peripheral virtual component interface bus connecting to all functional units on the chip. There will be two advanced virtual component interface buses, one with a 128-bit data-path (raw bandwidth 51.2 Gbits/sec on each unidirectional lane), the other with a 256-bit data-path (raw bandwidth 102.4 Gbits/sec on each unidirectional lane). Not all of this bandwidth can be fully utilised, due to the overhead of control and request packets, and because it is not always possible to achieve efficient packing of data into flow control digits. Some increase in bandwidth will be seen due to concurrent data transfers on the bus, but this can only be determined in system simulations.
  • [0114]
In the embodiment, the latency of packets on the bus is the sum of: one cycle at the sending interface unit, one cycle per bus block (nodes and repeaters) that data passes through, one or two additional cycle(s) at the node consume unit, one cycle at the receiving interface unit, and n−1 cycles, where n is the packet length in flow control digits. This figure gives the latency for the entire data transfer, meaning that latency increases with packet size.
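The latency figure above can be written as a small worked example, summing the per-stage cycle counts stated in the text:

```python
# Worked example of the end-to-end latency figure: 1 cycle at the
# sending interface unit, 1 cycle per bus block traversed, 1-2 cycles
# at the node consume unit, 1 cycle at the receiving interface unit,
# plus n - 1 cycles for an n-flit packet.

def latency(bus_blocks, flits, consume_cycles=1):
    return 1 + bus_blocks + consume_cycles + 1 + (flits - 1)

# a 4-flit packet crossing 3 nodes/repeaters, best-case consume unit:
assert latency(bus_blocks=3, flits=4) == 9
```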
  • [0115]
    It is possible to control bandwidth allocation, by introducing programmability into the injection controller in the node.
  • [0116]
It is not possible for packets to be switched between bus lanes. Once a packet has entered the bus system, it stays on the same bus lane until it is removed at the destination interface unit. Packets on the bus are never allowed to interleave. It is not necessary to inject new flow control digits on every clock cycle; in other words, it is possible to have gap cycles. These gaps will remain in the packet while it is inside the bus system and unblocked, thus wasting bandwidth. If the packet is blocked, only validated data will concatenate, and thus any intermediate non-valid data will be removed. In addition, to help minimise the number of gap cycles, it is necessary to ensure that enough FIFO buffering is provided to allow the block to keep injecting flow control digits until each packet has been completely sent, or to design the block in a manner that does not cause gaps to occur.
  • [0117]
In the system according to the present invention, the length of the packets (in flow control digits) is unlimited. However, consideration should be given to excessively large packets, as they will utilise a greater amount of the bus resource. Long packets can be more efficient than a number of shorter ones, due to the overhead of having a header flow control digit.
  • [0118]
The interconnection system according to the present invention does not guarantee that requested data items will be returned to a module in the order in which they were requested; it cannot be assumed that data will arrive at the on-chip block in the same order that it was requested from other blocks in the system. Where ordering is important, the receiving on-chip block must be able to re-order the packets. This is achieved using the advanced virtual component interface pktid field, which is used to tag and reorder outstanding transactions. Failure to adhere to this rule is likely to result in system deadlock.
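The receiver-side re-ordering described above can be sketched with a small model. The class name and tag discipline (sequential pktid tags, in-order release) are illustrative assumptions:

```python
# Sketch of re-ordering with the AVCI pktid field: outstanding
# requests are tagged 0, 1, 2, ... and responses are released to the
# on-chip block strictly in tag order, whatever order the bus
# delivered them in.

class ReorderBuffer:
    def __init__(self):
        self.pending = {}
        self.next_tag = 0

    def deliver(self, pktid, data):
        """Accept a response; return the run now releasable in order."""
        self.pending[pktid] = data
        released = []
        while self.next_tag in self.pending:
            released.append(self.pending.pop(self.next_tag))
            self.next_tag += 1
        return released

rob = ReorderBuffer()
assert rob.deliver(1, "B") == []            # tag 0 still outstanding
assert rob.deliver(0, "A") == ["A", "B"]    # in-order release
```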
  • [0119]
    The interconnection system according to the present invention offers considerable flexibility in the choice of interconnect topology. However, it is currently not advisable to have loops in the topology as these will introduce the possibility of deadlock.
  • [0120]
    However, it should be possible to program routing tables in a deadlock-free manner if loops were to be used. This would require some method of proving deadlock freedom, together with software to implement the necessary checks.
  • [0121]
    A further advantage of the interconnection system according to the present invention is that saturating the bus with packets will not cause it to fail. The packets will be delivered eventually, but the average latency will increase significantly. If the congestion is reduced to below the maximum throughput, then it will return to “normal operation”.
  • [0122]
In this respect the following rules should be considered: there should be no loops in the bus topology; on-chip blocks must not depend on transactions being returned in order; and where latency is important and multiple transactors need to use the same bus segments, there should be a maximum packet size. As mentioned above, if loops are required in the future, some deadlock prevention strategy must exist, ideally including a formal proof. Further, if ordering is important, the blocks must be able to re-order the transactions. Two transactions from the same target, travelling on the same lane, will be returned in the same order in which the target sent them. If requests were made to two different targets, the ordering is non-deterministic.
  • [0123]
Most of the interconnection components of the present invention will involve straightforward RTL synthesis followed by place & route. The interface units may be incorporated into either the on-chip blocks or an area reserved for the bus, depending on the requirements of the overall design. For example, there may be a lot of free area under the bus, so using this area rather than adding to the functional block area would make more sense, as it would reduce the overall chip area.
  • [0124]
The interconnection system forms the basis of a system-on-chip platform. In order to accelerate the process of putting a system on-chip together, it has been proposed that the nodes contain the necessary "hooks" to handle distribution of the system clock and reset signals. Looking first at an individual chip, the routing of transactions and responses between initiator and target is performed by the interface blocks that connect to the interconnection system, and any intervening T-switch elements in the system. Addressing of blocks is hardwired and geographic, and the routing information is compiled into the interface and T-switch logic at chip integration time. The platform requires some modularity at the chip level as well as the block level on chips. Therefore, knowledge of what other chips or their innards they are connected to cannot be hard-coded in the chips themselves, as this may vary on different line cards.
  • [0125]
    However, with the present invention, it is possible to provide flexibility of chip arrangement with hard-wired routing information by giving each chip some simple rules and designing the topology and enumeration of the chips to support this. This has the dual benefit of simplicity and of being a natural extension to the routing mechanisms within chips themselves.
  • [0126]
FIG. 11 illustrates an example of a linear chip arrangement. Of course, it is appreciated that different topologies can be realised according to the present invention. In such a linear arrangement, it is easy to number the chips 1100(0) to (2) etc. sequentially, so that any block in any chip knows that a transaction must be routed "up" or "down" the interconnection 1110 to reach its destination, as indicated in the chip ID field of the physical address. This is exactly the same process as the block performs to route to another block on the same chip. In this case a 2-level decision is utilised: if in the 'present chip', then route on Block ID, else route on Chip ID.
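The 2-level decision described above can be sketched as follows. The chip number, route names and per-block table are invented for the example:

```python
# Sketch of the 2-level routing decision: route on Chip ID first,
# then on Block ID within the present chip.

MY_CHIP = 1          # this chip's sequential number (assumed)

def route_transaction(chip_id, block_id, my_block_routes):
    if chip_id == MY_CHIP:
        return my_block_routes[block_id]          # route on Block ID
    return "up" if chip_id > MY_CHIP else "down"  # route on Chip ID

routes = {0: "left", 1: "right"}
assert route_transaction(chip_id=1, block_id=0, my_block_routes=routes) == "left"
assert route_transaction(chip_id=2, block_id=0, my_block_routes=routes) == "up"
```

Because the decision depends only on the relative ordering of chip numbers, each chip needs only its own number, not a map of the whole line card.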
  • [0127]
An alternative topology is shown in FIG. 12. It comprises a first bus lane 1201 and a second bus lane 1202 arranged in parallel. The first and second bus lanes correspond to the interconnection system of the embodiment of the present invention. A plurality of multi threaded array processors (MTAPs) 1210 are connected across the two bus lanes 1201, 1202. A network input device 1220, a collector device 1230, a distributor device 1240 and a network output device 1250 are connected to the second bus lane 1202, and a table lookup engine 1260 is connected to the first bus lane 1201. Details of the operation of the devices connected to the bus lanes are not provided here.
  • [0128]
As illustrated in FIG. 12, in an alternative topology, the first bus lane 1201 (256 bits wide, for example) is dedicated to fast path packet data. A second bus lane 1202 (128 bits wide, for example) is used for general non-packet data, such as table lookups, instruction fetching, external memory access, etc. Blocks accessing bus lanes 1201, 1202 use the AVCI protocol. An additional bus lane may be used (not shown here) for reading and writing block configuration and status registers. Blocks accessing this lane use the PVCI protocol.
  • [0129]
    More generally, where blocks are connected to the interconnection, and which lane or lanes they use, can be selected. Allowance must, of course, be made for floor planning constraints. Blocks can have multiple bus interfaces, for example. Lane widths can be configured to meet the bandwidth requirements of the system.
  • [0130]
    Since the interconnection system of the present invention uses point-to-point connections between interfaces, and uses distributed arbitration, it is possible to have several pairs of functional blocks communicating simultaneously without any contention or interference. Traffic between blocks can only interfere if that traffic travels along a shared bus segment in the same direction. This situation can be avoided by choosing a suitable layout. Thus, bus contention can be avoided in the fast path packet flow. This is important to achieve predictable and reliable performance, and to avoid overprovisioning the interconnection.
  • [0131]
    The example above avoids bus contention in the fast path, because the packet data flows left to right on bus lane 1201 via NIP-DIS-MTAP-COL-NOP. Since packets do not cross any bus segment more than once, there is no bus contention. There is no interference between the MTAP processors, because only one at a time is sending or receiving. Another way to avoid bus contention is to place the MTAP processors on a “spur” off the main data path, as shown in FIG. 13.
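The non-interference condition (traffic interferes only if it shares a bus segment in the same direction) can be checked mechanically. This sketch models positions along a linear bus lane as integers, a hypothetical encoding not taken from the specification:

```python
def segments(flow):
    """A flow along a linear bus lane, given as (start, end) node
    positions, occupies the directed segments it crosses."""
    a, b = flow
    step = 1 if b > a else -1
    return {(i, i + step) for i in range(a, b, step)}

def interfere(flow1, flow2):
    """Two flows interfere only if they share a bus segment
    travelled in the same direction."""
    return bool(segments(flow1) & segments(flow2))

# Opposite directions on the same physical segments do not interfere:
print(interfere((0, 3), (3, 0)))  # False
# The same direction over a shared segment does:
print(interfere((0, 3), (2, 4)))  # True
```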
  • [0132]
    This topology uses a T-junction 1305 to exploit the fact that traffic going in opposite directions on the same bus segment 1300 is non-interfering. Using the T-junction block 1305 may ease the design of the bus topology to account for layout and floor planning constraints.
  • [0133]
    At the lowest (hardware) level of abstraction, the interconnection system of the present invention preferably supports advanced virtual component interface (AVCI) transactions, which are simply variable size messages as defined in the virtual component interface standard, sent from an initiator interface to a target interface, possibly followed by a response at a later time. Because the response may be delayed, this is called a split transaction in the virtual component interface system. The network processing system architecture defines two higher levels of abstraction in the inter-block communication protocol: the chunk and the abstract datagram (frequently simply called a datagram). A chunk is a logical entity that represents a fairly small amount of data to be transferred from one block to another. An abstract datagram is a logical entity that represents the natural unit of data for the application. In network processing applications, abstract datagrams almost always correspond to network datagrams or packets. The distinction is made to allow the architecture blocks to be used in other applications besides networking. Chunks are somewhat analogous to CSIX C-frames, and are used for similar purposes, that is, to provide a convenient, small unit of data transfer. Chunks have a defined maximum size, typically about 512 bytes, while datagrams can be much larger, typically up to 9K bytes; the exact size limits are configurable. When a datagram needs to be transferred from one block to another, the actual transfer is done by sending a sequence of chunks. The chunks are packaged within a series of AVCI transactions at the bus interface.
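A minimal sketch of the datagram-to-chunk segmentation, assuming the typical configurable limits quoted above (512-byte chunks, 9K-byte datagrams); the function and constant names are illustrative only:

```python
MAX_CHUNK = 512          # typical configured maximum chunk size (bytes)
MAX_DATAGRAM = 9 * 1024  # typical configured maximum datagram size

def to_chunks(datagram: bytes):
    """Split a datagram into a sequence of chunks, each at most
    MAX_CHUNK bytes; each chunk would then be packaged into AVCI
    transactions at the bus interface."""
    assert len(datagram) <= MAX_DATAGRAM, "datagram exceeds configured limit"
    return [datagram[i:i + MAX_CHUNK]
            for i in range(0, len(datagram), MAX_CHUNK)]

chunks = to_chunks(bytes(1500))   # e.g. an Ethernet-sized packet
print([len(c) for c in chunks])   # [512, 512, 476]
```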
  • [0134]
    The system addressing scheme according to the embodiment of the present invention will now be described in more detail. The system according to an embodiment of the present invention may span a subsystem that is implemented in more than one chip.
  • [0135]
    Looking first at an individual chip, the routing of transactions and responses between initiators and targets is performed by the interface blocks that connect to the interconnection itself, and by the intervening T-switch elements in the interconnection. Addressing of the blocks according to an embodiment of the present invention is hardwired and geographic, and the routing information is compiled into the logic of the interface units, T-switch and node elements at chip integration.
  • [0136]
    The interface ID occupies the upper part of the physical 64-bit address, the lower bits being the offset within the block. Additional physical address bits are reserved for the chip ID to support multi-chip expansion.
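An illustrative layout of such an address might look as follows; the exact bit widths, and the precise placement of the chip ID bits relative to the 64 address bits, are assumptions for the sketch:

```python
# Assumed field widths (illustrative only): chip ID above interface ID,
# interface ID above the in-block offset.
CHIP_BITS, IFACE_BITS, OFFSET_BITS = 8, 8, 48

def make_addr(chip_id, iface_id, offset):
    """Pack chip ID, interface ID and block offset into one address."""
    assert offset < (1 << OFFSET_BITS)
    return ((chip_id << (IFACE_BITS + OFFSET_BITS))
            | (iface_id << OFFSET_BITS)
            | offset)

def split_addr(addr):
    """Recover (chip ID, interface ID, offset) from an address."""
    return (addr >> (IFACE_BITS + OFFSET_BITS),
            (addr >> OFFSET_BITS) & ((1 << IFACE_BITS) - 1),
            addr & ((1 << OFFSET_BITS) - 1))

print(split_addr(make_addr(3, 7, 0x1000)))  # (3, 7, 4096)
```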
  • [0137]
    Since the platform according to the embodiment of the present invention requires some modularity at the chip level as well as at the block level on chips, knowledge of what other chips a given chip is connected to, or of their internals, cannot be hard-coded, as this may vary on different line cards. This prevents the use of the same hard-wired bus routing information scheme as exists in the interface units for transactions within one chip.
  • [0138]
    However, it is possible to provide flexibility of chip arrangement with hardwired routing information by giving each chip some simple rules and designing the topology and enumeration of chips to support this. This has the dual benefits of simplicity and of being a natural extension to the routing mechanisms within the chips themselves.
  • [0139]
    An example of a traffic handler subsystem in which the packet queue memory is implemented around two memory hub chips is shown in FIGS. 14 and 15.
  • [0140]
    In the example, the four chips have four connections to other chips. This results in possible ambiguities about the route that a transaction takes from one chip to the next. Therefore, it is necessary to control the flow of transactions by configuring the hardware and software appropriately, but without having to include programmable routing functions in the interconnection.
  • [0141]
    This is achieved by making the chip ID an x,y coordinate instead of a single number. For example, the chip ID for chip 1401 may be 4,2, for chip 1403 5,1, for chip 1404 5,3 and for chip 1402 6,2. Simple, hardwired rules are applied about how to route the next hop of a transaction destined for another chip. The chips are thus located on a virtual “grid” such that the local rules produce the transaction flows desired. The grid can be “distorted” by leaving gaps or dislocations to achieve the desired effect.
  • [0142]
    Each chip has complete knowledge of itself, including how many external ports it has and their assignments to N, S, E, W compass points. This knowledge is wired into the interface units and T-switches. A chip has no knowledge at all of what other chips are connected to it, or of their x,y coordinates.
  • [0143]
    The local rules at each chip are as follows:
  • [0144]
    1. A transaction is routed out on the interface that forms the least angle with the destination's relative grid location.
  • [0145]
    2. In the event of a tie, N-S interfaces are favoured over E-W.
  • [0146]
    Applying these rules to the four-chip example above, transactions along the main horizontal axis through the Arrivals and Dispatch chips 1401, 1402 are simply routed up/down on the x coordinate, with y=2.
  • [0147]
    Transactions from Arrivals or Dispatch 1401, 1402 to one of the memory hubs 1403, 1404 have an angle of 45 degrees, and the second rule applies to route these N-S and not E-W.
  • [0148]
    Responses from memory hubs 1403, 1404 to any other chip have a clear E/W choice because no other chip has x=5.
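The two local rules, applied to the four-chip example, can be sketched as follows. The port direction vectors, coordinate orientation and angle computation are assumptions for the sketch; the patent specifies only the least-angle rule and the N-S tie-break:

```python
import math

# Assumed compass-point unit vectors on the virtual grid
# (orientation of N/S along y is an assumption).
PORTS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def next_hop(src, dst, ports=("N", "S", "E", "W")):
    """Rule 1: route out on the port forming the least angle with the
    destination's relative grid location. Rule 2: on a tie, favour
    N-S ports over E-W. `ports` lists this chip's external ports."""
    dx, dy = dst[0] - src[0], dst[1] - src[1]
    target = math.atan2(dy, dx)

    def angle(p):
        px, py = PORTS[p]
        a = abs(target - math.atan2(py, px))
        return min(a, 2 * math.pi - a)   # wrap to [0, pi]

    # Sort by angle (rounded to absorb float noise), N-S wins ties.
    return min(ports, key=lambda p: (round(angle(p), 9),
                                     p not in ("N", "S")))

# Main horizontal axis (y=2): Arrivals (4,2) -> Dispatch (6,2) goes E.
print(next_hop((4, 2), (6, 2)))  # E
# 45-degree tie towards a memory hub: rule 2 routes N-S, not E-W.
print(next_hop((4, 2), (5, 3)))  # N
```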
  • [0149]
    The conjecture is that there is no chip topology or transaction flow that cannot be expressed by suitable choice of chip coordinates and application of the above rules.
  • [0150]
    Although a preferred embodiment of the method and system of the present invention has been illustrated in the accompanying drawings and described in the foregoing detailed description, it will be understood that the invention is not limited to the embodiment disclosed, but is capable of numerous variations and modifications without departing from the scope of the invention as set out in the following claims.

Claims (30)

  1. An interconnection system for connecting a plurality of functional units, the interconnection system comprising a plurality of nodes, each node communicating with a functional unit, the interconnection system transporting a plurality of data packets between functional units, each data packet having routing information associated therewith to enable a node to direct the data packet via the interconnection system.
  2. An interconnection system for interconnecting a plurality of functional units and transporting a plurality of data packets, the interconnection system comprising a plurality of nodes, each node communicating with a functional unit, wherein, during transportation of the data packets between a first node and a second node, only the portion of the interconnection system between the first node and the second node is activated.
  3. An interconnection system for interconnecting a plurality of functional units, each functional unit connected to the interconnection system via an interface unit, the interconnection system comprising a plurality of nodes, the interconnection system transporting a plurality of data packets, wherein each interface unit translates between the protocol for transporting the data packets and the protocol of the functional units.
  4. An interconnection system for interconnecting a plurality of functional units and transporting a plurality of data packets between the functional units, wherein arbitration is distributed to each functional unit.
  5. An interconnection system according to any one of claims 2 to 4, wherein each data packet has routing information associated therewith to enable a node to direct the data packet via the interconnection system.
  6. An interconnection system according to any one of claims 1, 3 or 4, wherein, during transportation of a data packet between a first node and a second node, only the portion of the interconnection system between the first node and the second node is activated.
  7. An interconnection system according to claim 1, 2 or 4, wherein the interconnection system is protocol agnostic.
  8. An interconnection system according to any one of claims 1 to 3, wherein arbitration is distributed between the functional units.
  9. An interconnection system according to any one of the preceding claims, further comprising a plurality of repeater units spaced along the interconnection at predetermined distances such that the data packets are transported between consecutive repeater units and/or nodes in a single clock cycle.
  10. An interconnection system according to claim 9, wherein the data packets are pipelined between the nodes and/or repeater units.
  11. An interconnection system according to claim 9 or 10, wherein each repeater unit comprises means to compress data upon blockage of the interconnection.
  12. An interconnection system according to any one of the preceding claims, wherein the routing information includes the x,y coordinates of the destination.
  13. An interconnection system according to any one of the preceding claims, wherein the clocking along the length of the interconnection system is distributed.
  14. An interconnection system according to any one of the preceding claims, wherein each node comprises an input buffer, inject control and/or consume control.
  15. An interconnection system according to claim 14, wherein each node can inject and output data at the same time.
  16. An interconnection system according to any one of the preceding claims, wherein the interconnection system comprises a plurality of buses.
  17. An interconnection system according to claim 16, wherein each node is connected to at least one of the plurality of buses.
  18. An interconnection system according to claim 16 or 17, wherein at least a part of each bus comprises a pair of unidirectional bus lanes.
  19. An interconnection system according to claim 18, wherein data is transported on each bus lane in an opposite direction.
  20. An interconnection system according to any one of the preceding claims, further comprising at least one T-switch, the T-switch determining the direction to transport the data packets from the routing information associated with each data packet.
  21. An interconnection system according to any one of the preceding claims, wherein delivery of the data packet is guaranteed.
  22. A method for routing data packets between functional units, each data packet having routing information associated therewith, the method comprising the steps of:
    (a) reading the routing information;
    (b) determining the direction to transport the data packet from the routing information; and
    (c) transporting the data packet in the direction determined in step (b).
  23. A processing system incorporating the interconnection system according to any one of claims 1 to 21.
  24. A system according to claim 23, wherein each functional unit is connected to a node via an interface unit.
  25. A system according to claim 24, wherein each interface unit comprises means to set the protocol for data to be transported to the interconnection system and to be received from the interconnection system.
  26. A system according to claim 24 or 25, wherein the functional units access the interconnection system using distributed arbitration.
  27. A system according to any one of claims 24 to 26, wherein each functional unit comprises a reusable system on chip functional unit.
  28. An integrated circuit incorporating the interconnection system according to any one of claims 1 to 21.
  29. An integrated system comprising a plurality of chips, each chip incorporating the interconnection system according to any one of claims 1 to 21, wherein the interconnection system interconnects the plurality of chips.
  30. A method for transporting a plurality of data packets via an interconnection system, the interconnection system comprising a plurality of nodes, the method comprising the steps of:
    transporting a data packet between a first node and a second node; and
    during transportation, only activating the portion of the interconnection system between the first node and the second node.
US10468167 2001-02-14 2002-02-14 Interconnection system Abandoned US20040114609A1 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
GB0103678A GB0103678D0 (en) 2001-02-14 2001-02-14 Network processing
GB0103687.0 2001-02-14
GB0103678.9 2001-02-14
GB0103687A GB0103687D0 (en) 2001-02-14 2001-02-14 Network processing-architecture II
GB0121790.0 2001-09-10
GB0121790A GB0121790D0 (en) 2001-02-14 2001-09-10 Network processing systems
PCT/GB2002/000662 WO2002065700A3 (en) 2001-02-14 2002-02-14 An interconnection system

Publications (1)

Publication Number Publication Date
US20040114609A1 (en) 2004-06-17

Family

ID=27256074

Family Applications (10)

Application Number Title Priority Date Filing Date
US10074019 Abandoned US20020161926A1 (en) 2001-02-14 2002-02-14 Method for controlling the order of datagrams
US10468167 Abandoned US20040114609A1 (en) 2001-02-14 2002-02-14 Interconnection system
US10073948 Active 2023-04-24 US7856543B2 (en) 2001-02-14 2002-02-14 Data processing architectures for packet handling wherein batches of data packets of unpredictable size are distributed across processing elements arranged in a SIMD array operable to process different respective packet protocols at once while executing a single common instruction stream
US10468168 Active 2022-07-09 US7290162B2 (en) 2001-02-14 2002-02-14 Clock distribution system
US10074022 Abandoned US20020159466A1 (en) 2001-02-14 2002-02-14 Lookup engine
US11151271 Active 2025-10-11 US8200686B2 (en) 2001-02-14 2005-06-14 Lookup engine
US11151292 Abandoned US20050242976A1 (en) 2001-02-14 2005-06-14 Lookup engine
US11752299 Active 2022-08-01 US7818541B2 (en) 2001-02-14 2007-05-23 Data processing architectures
US11752300 Active 2022-06-20 US7917727B2 (en) 2001-02-14 2007-05-23 Data processing architectures for packet handling using a SIMD array
US12965673 Active US8127112B2 (en) 2001-02-14 2010-12-10 SIMD array operable to process different respective packet protocols simultaneously while executing a single common instruction stream


Country Status (5)

Country Link
US (10) US20020161926A1 (en)
JP (2) JP2004525449A (en)
CN (2) CN100367730C (en)
GB (5) GB2389689B (en)
WO (2) WO2002065700A3 (en)


Families Citing this family (169)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7549056B2 (en) 1999-03-19 2009-06-16 Broadcom Corporation System and method for processing and protecting content
US7174452B2 (en) * 2001-01-24 2007-02-06 Broadcom Corporation Method for processing multiple security policies applied to a data packet structure
GB2389689B (en) 2001-02-14 2005-06-08 Clearspeed Technology Ltd Clock distribution system
US20030078997A1 (en) * 2001-10-22 2003-04-24 Franzel Kenneth S. Module and unified network backplane interface for local networks
FI113113B (en) 2001-11-20 2004-02-27 Nokia Corp A method and apparatus for synchronizing time-integrated circuit
US6836808B2 (en) * 2002-02-25 2004-12-28 International Business Machines Corporation Pipelined packet processing
US7415723B2 (en) * 2002-06-11 2008-08-19 Pandya Ashish A Distributed network security system and a hardware processor therefor
US7487264B2 (en) * 2002-06-11 2009-02-03 Pandya Ashish A High performance IP processor
US20050108518A1 (en) * 2003-06-10 2005-05-19 Pandya Ashish A. Runtime adaptable security processor
US7408957B2 (en) * 2002-06-13 2008-08-05 International Business Machines Corporation Selective header field dispatch in a network processing system
US8015303B2 (en) * 2002-08-02 2011-09-06 Astute Networks Inc. High data rate stateful protocol processing
US7684400B2 (en) * 2002-08-08 2010-03-23 Intel Corporation Logarithmic time range-based multifield-correlation packet classification
US20040066779A1 (en) * 2002-10-04 2004-04-08 Craig Barrack Method and implementation for context switchover
US8478811B2 (en) 2002-10-08 2013-07-02 Netlogic Microsystems, Inc. Advanced processor with credit based scheme for optimal packet flow in a multi-processor system on a chip
US7346757B2 (en) 2002-10-08 2008-03-18 Rmi Corporation Advanced processor translation lookaside buffer management in a multithreaded system
US8037224B2 (en) 2002-10-08 2011-10-11 Netlogic Microsystems, Inc. Delegating network processor operations to star topology serial bus interfaces
US7627721B2 (en) 2002-10-08 2009-12-01 Rmi Corporation Advanced processor with cache coherency
US8015567B2 (en) 2002-10-08 2011-09-06 Netlogic Microsystems, Inc. Advanced processor with mechanism for packet distribution at high line rate
US8176298B2 (en) 2002-10-08 2012-05-08 Netlogic Microsystems, Inc. Multi-core multi-threaded processing systems with instruction reordering in an in-order pipeline
US7334086B2 (en) * 2002-10-08 2008-02-19 Rmi Corporation Advanced processor with system on a chip interconnect technology
US7984268B2 (en) 2002-10-08 2011-07-19 Netlogic Microsystems, Inc. Advanced processor scheduling in a multithreaded system
US7961723B2 (en) * 2002-10-08 2011-06-14 Netlogic Microsystems, Inc. Advanced processor with mechanism for enforcing ordering between information sent on two independent networks
US7924828B2 (en) * 2002-10-08 2011-04-12 Netlogic Microsystems, Inc. Advanced processor with mechanism for fast packet queuing operations
US20050033831A1 (en) * 2002-10-08 2005-02-10 Abbas Rashid Advanced processor with a thread aware return address stack optimally used across active threads
US20050044324A1 (en) * 2002-10-08 2005-02-24 Abbas Rashid Advanced processor with mechanism for maximizing resource usage in an in-order pipeline with multiple threads
US9088474B2 (en) * 2002-10-08 2015-07-21 Broadcom Corporation Advanced processor with interfacing messaging network to a CPU
US7814218B1 (en) 2002-10-17 2010-10-12 Astute Networks, Inc. Multi-protocol and multi-format stateful processing
US7596621B1 (en) * 2002-10-17 2009-09-29 Astute Networks, Inc. System and method for managing shared state using multiple programmed processors
US8151278B1 (en) 2002-10-17 2012-04-03 Astute Networks, Inc. System and method for timer management in a stateful protocol processing system
DE60233145D1 (en) * 2002-10-31 2009-09-10 Alcatel Lucent A method for processing data packets at layer three in a telecommunications device
US7107478B2 (en) * 2002-12-05 2006-09-12 Connex Technology, Inc. Data processing system having a Cartesian Controller
US7383421B2 (en) * 2002-12-05 2008-06-03 Brightscale, Inc. Cellular engine for a data processing system
US7715392B2 (en) * 2002-12-12 2010-05-11 Stmicroelectronics, Inc. System and method for path compression optimization in a pipelined hardware bitmapped multi-bit trie algorithmic network search engine
JP4157403B2 (en) * 2003-03-19 2008-10-01 株式会社日立製作所 Packet communication device
US8477780B2 (en) * 2003-03-26 2013-07-02 Alcatel Lucent Processing packet information using an array of processing elements
US8539089B2 (en) * 2003-04-23 2013-09-17 Oracle America, Inc. System and method for vertical perimeter protection
CN100422974C (en) 2003-05-07 2008-10-01 皇家飞利浦电子股份有限公司 Processing system and method for transmitting data
US7558268B2 (en) * 2003-05-08 2009-07-07 Samsung Electronics Co., Ltd. Apparatus and method for combining forwarding tables in a distributed architecture router
US7500239B2 (en) * 2003-05-23 2009-03-03 Intel Corporation Packet processing system
US7349958B2 (en) * 2003-06-25 2008-03-25 International Business Machines Corporation Method for improving performance in a computer storage system by regulating resource requests from clients
US7174398B2 (en) * 2003-06-26 2007-02-06 International Business Machines Corporation Method and apparatus for implementing data mapping with shuffle algorithm
US7702882B2 (en) * 2003-09-10 2010-04-20 Samsung Electronics Co., Ltd. Apparatus and method for performing high-speed lookups in a routing table
US7886307B1 (en) * 2003-09-26 2011-02-08 The Mathworks, Inc. Object-oriented data transfer system for data sharing
CA2442803A1 (en) * 2003-09-26 2005-03-26 Ibm Canada Limited - Ibm Canada Limitee Structure and method for managing workshares in a parallel region
US7120815B2 (en) * 2003-10-31 2006-10-10 Hewlett-Packard Development Company, L.P. Clock circuitry on plural integrated circuits
US7634500B1 (en) 2003-11-03 2009-12-15 Netlogic Microsystems, Inc. Multiple string searching using content addressable memory
EP1687998B1 (en) * 2003-11-26 2017-09-20 Cisco Technology, Inc. Method and apparatus to inline encryption and decryption for a wireless station
US6954450B2 (en) * 2003-11-26 2005-10-11 Cisco Technology, Inc. Method and apparatus to provide data streaming over a network connection in a wireless MAC processor
US7340548B2 (en) * 2003-12-17 2008-03-04 Microsoft Corporation On-chip bus
US7058424B2 (en) * 2004-01-20 2006-06-06 Lucent Technologies Inc. Method and apparatus for interconnecting wireless and wireline networks
GB0403237D0 (en) * 2004-02-13 2004-03-17 Imec Inter Uni Micro Electr A method for realizing ground bounce reduction in digital circuits adapted according to said method
US7903777B1 (en) * 2004-03-03 2011-03-08 Marvell International Ltd. System and method for reducing electromagnetic interference and ground bounce in an information communication system by controlling phase of clock signals among a plurality of information communication devices
US7478109B1 (en) * 2004-03-15 2009-01-13 Cisco Technology, Inc. Identification of a longest matching prefix based on a search of intervals corresponding to the prefixes
US20050254486A1 (en) * 2004-05-13 2005-11-17 Ittiam Systems (P) Ltd. Multi processor implementation for signals requiring fast processing
DE102004035843B4 (en) * 2004-07-23 2010-04-15 Infineon Technologies Ag Router Network Processor
GB2417105B (en) 2004-08-13 2008-04-09 Clearspeed Technology Plc Processor memory system
US7913206B1 (en) * 2004-09-16 2011-03-22 Cadence Design Systems, Inc. Method and mechanism for performing partitioning of DRC operations
US7508397B1 (en) * 2004-11-10 2009-03-24 Nvidia Corporation Rendering of disjoint and overlapping blits
US8170019B2 (en) * 2004-11-30 2012-05-01 Broadcom Corporation CPU transmission of unmodified packets
US20060156316A1 (en) * 2004-12-18 2006-07-13 Gray Area Technologies System and method for application specific array processing
US20060212426A1 (en) * 2004-12-21 2006-09-21 Udaya Shakara Efficient CAM-based techniques to perform string searches in packet payloads
US7818705B1 (en) * 2005-04-08 2010-10-19 Altera Corporation Method and apparatus for implementing a field programmable gate array architecture with programmable clock skew
WO2006127596A3 (en) * 2005-05-20 2007-03-22 Hillcrest Lab Inc Dynamic hyperlinking approach
US7373475B2 (en) * 2005-06-21 2008-05-13 Intel Corporation Methods for optimizing memory unit usage to maximize packet throughput for multi-processor multi-threaded architectures
US20070086456A1 (en) * 2005-08-12 2007-04-19 Electronics And Telecommunications Research Institute Integrated layer frame processing device including variable protocol header
US7904852B1 (en) 2005-09-12 2011-03-08 Cadence Design Systems, Inc. Method and system for implementing parallel processing of electronic design automation tools
US8218770B2 (en) * 2005-09-13 2012-07-10 Agere Systems Inc. Method and apparatus for secure key management and protection
US7353332B2 (en) * 2005-10-11 2008-04-01 Integrated Device Technology, Inc. Switching circuit implementing variable string matching
US7551609B2 (en) * 2005-10-21 2009-06-23 Cisco Technology, Inc. Data structure for storing and accessing multiple independent sets of forwarding information
US7451293B2 (en) * 2005-10-21 2008-11-11 Brightscale Inc. Array of Boolean logic controlled processing elements with concurrent I/O processing and instruction sequencing
US7835359B2 (en) * 2005-12-08 2010-11-16 International Business Machines Corporation Method and apparatus for striping message payload data over a network
CN101371264A (en) * 2006-01-10 2009-02-18 光明测量公司 Method and apparatus for processing sub-blocks of multimedia data in parallel processing systems
US20070162531A1 (en) * 2006-01-12 2007-07-12 Bhaskar Kota Flow transform for integrated circuit design and simulation having combined data flow, control flow, and memory flow views
US8301885B2 (en) * 2006-01-27 2012-10-30 Fts Computertechnik Gmbh Time-controlled secure communication
KR20070088190A (en) * 2006-02-24 2007-08-29 삼성전자주식회사 Subword parallelism for processing multimedia data
WO2007116560A1 (en) * 2006-03-30 2007-10-18 Nec Corporation Parallel image processing system control method and apparatus
US7617409B2 (en) * 2006-05-01 2009-11-10 Arm Limited System for checking clock-signal correspondence
US8390354B2 (en) * 2006-05-17 2013-03-05 Freescale Semiconductor, Inc. Delay configurable device and methods thereof
US8041929B2 (en) * 2006-06-16 2011-10-18 Cisco Technology, Inc. Techniques for hardware-assisted multi-threaded processing
JP2008004046A (en) * 2006-06-26 2008-01-10 Toshiba Corp Resource management device, and program for the same
US7584286B2 (en) * 2006-06-28 2009-09-01 Intel Corporation Flexible and extensible receive side scaling
US8448096B1 (en) 2006-06-30 2013-05-21 Cadence Design Systems, Inc. Method and system for parallel processing of IC design layouts
US7516437B1 (en) * 2006-07-20 2009-04-07 Xilinx, Inc. Skew-driven routing for networks
CN1909418B (en) 2006-08-01 2010-05-12 华为技术有限公司 Clock distributing equipment for universal wireless interface and method for realizing speed switching
US20080040214A1 (en) * 2006-08-10 2008-02-14 Ip Commerce System and method for subsidizing payment transaction costs through online advertising
JP4846486B2 (en) * 2006-08-18 2011-12-28 富士通株式会社 Information processing apparatus and control method thereof
CA2557343C (en) * 2006-08-28 2015-09-22 Ibm Canada Limited-Ibm Canada Limitee Runtime code modification in a multi-threaded environment
US20080059763A1 (en) * 2006-09-01 2008-03-06 Lazar Bivolarski System and method for fine-grain instruction parallelism for increased efficiency of processing compressed multimedia data
US9563433B1 (en) 2006-09-01 2017-02-07 Allsearch Semi Llc System and method for class-based execution of an instruction broadcasted to an array of processing elements
US20080059762A1 (en) * 2006-09-01 2008-03-06 Bogdan Mitu Multi-sequence control for a data parallel system
US20080244238A1 (en) * 2006-09-01 2008-10-02 Bogdan Mitu Stream processing accelerator
US20080055307A1 (en) * 2006-09-01 2008-03-06 Lazar Bivolarski Graphics rendering pipeline
US20080059764A1 (en) * 2006-09-01 2008-03-06 Gheorghe Stefan Integral parallel machine
US20080059467A1 (en) * 2006-09-05 2008-03-06 Lazar Bivolarski Near full motion search algorithm
US7657856B1 (en) 2006-09-12 2010-02-02 Cadence Design Systems, Inc. Method and system for parallel processing of IC design layouts
US7783654B1 (en) 2006-09-19 2010-08-24 Netlogic Microsystems, Inc. Multiple string searching using content addressable memory
JP4377899B2 (en) * 2006-09-20 2009-12-02 株式会社東芝 Resource management apparatus and program
US8010966B2 (en) * 2006-09-27 2011-08-30 Cisco Technology, Inc. Multi-threaded processing using path locks
US8179896B2 (en) 2006-11-09 2012-05-15 Justin Mark Sobaje Network processors and pipeline optimization methods
US9141557B2 (en) 2006-12-08 2015-09-22 Ashish A. Pandya Dynamic random access memory (DRAM) that comprises a programmable intelligent search memory (PRISM) and a cryptography processing engine
US7996348B2 (en) 2006-12-08 2011-08-09 Pandya Ashish A 100GBPS security and search architecture using programmable intelligent search memory (PRISM) that comprises one or more bit interval counters
JP4249780B2 (en) * 2006-12-26 2009-04-08 株式会社東芝 Resource management apparatus and program
US7917486B1 (en) 2007-01-18 2011-03-29 Netlogic Microsystems, Inc. Optimizing search trees by increasing failure size parameter
EP2132645B1 (en) * 2007-03-06 2011-05-04 NEC Corporation A data transfer network and control apparatus for a system with an array of processing elements each either self- or common controlled
JP2009086733A (en) * 2007-09-27 2009-04-23 Toshiba Corp Information processor, control method of information processor and control program of information processor
US9596324B2 (en) 2008-02-08 2017-03-14 Broadcom Corporation System and method for parsing and allocating a plurality of packets to processor core threads
US8250578B2 (en) * 2008-02-22 2012-08-21 International Business Machines Corporation Pipelining hardware accelerators to computer systems
US8726289B2 (en) * 2008-02-22 2014-05-13 International Business Machines Corporation Streaming attachment of hardware accelerators to computer systems
WO2009134223A1 (en) * 2008-04-30 2009-11-05 Hewlett-Packard Development Company, L.P. Intentionally skewed optical clock signal distribution
JP2009271724A (en) * 2008-05-07 2009-11-19 Toshiba Corp Hardware engine controller
KR101474478B1 (en) * 2008-05-30 2014-12-19 어드밴스드 마이크로 디바이시즈, 인코포레이티드 Local and global data share
US8958419B2 (en) * 2008-06-16 2015-02-17 Intel Corporation Switch fabric primitives
US8566487B2 (en) 2008-06-24 2013-10-22 Hartvig Ekner System and method for creating a scalable monolithic packet processing engine
US8311057B1 (en) 2008-08-05 2012-11-13 Xilinx, Inc. Managing formatting of packets of a communication protocol
US7804844B1 (en) * 2008-08-05 2010-09-28 Xilinx, Inc. Dataflow pipeline implementing actions for manipulating packets of a communication protocol
US7949007B1 (en) 2008-08-05 2011-05-24 Xilinx, Inc. Methods of clustering actions for manipulating packets of a communication protocol
US8160092B1 (en) 2008-08-05 2012-04-17 Xilinx, Inc. Transforming a declarative description of a packet processor
CN102112983A (en) * 2008-08-06 2011-06-29 Nxp股份有限公司 SIMD parallel processor architecture
CN101355482B (en) 2008-09-04 2011-09-21 中兴通讯股份有限公司 Equipment, method and system for implementing identification of embedded device address sequence
US8493979B2 (en) * 2008-12-30 2013-07-23 Intel Corporation Single instruction processing of network packets
JP5238525B2 (en) * 2009-01-13 2013-07-17 株式会社東芝 Resource management apparatus and program
KR101553652B1 (en) * 2009-02-18 2015-09-16 삼성전자 주식회사 Apparatus and method for compiling instructions for heterogeneous processors
US8140792B2 (en) * 2009-02-25 2012-03-20 International Business Machines Corporation Indirectly-accessed, hardware-affine channel storage in transaction-oriented DMA-intensive environments
US9461930B2 (en) 2009-04-27 2016-10-04 Intel Corporation Modifying data streams without reordering in a multi-thread, multi-flow network processor
US8874878B2 (en) * 2010-05-18 2014-10-28 Lsi Corporation Thread synchronization in a multi-thread, multi-flow network communications processor architecture
US8743877B2 (en) * 2009-12-21 2014-06-03 Steven L. Pope Header processing engine
US8332460B2 (en) * 2010-04-14 2012-12-11 International Business Machines Corporation Performing a local reduction operation on a parallel computer
EP2596470A1 (en) * 2010-07-19 2013-05-29 Advanced Micro Devices, Inc. Data processing using on-chip memory in multiple processing units
US8880507B2 (en) * 2010-07-22 2014-11-04 Brocade Communications Systems, Inc. Longest prefix match using binary search tree
US8904115B2 (en) * 2010-09-28 2014-12-02 Texas Instruments Incorporated Cache with multiple access pipelines
RU2436151C1 (en) * 2010-11-01 2011-12-10 Federal State Unitary Enterprise "Russian Federal Nuclear Center - All-Russian Research Institute of Experimental Physics" (FSUE "RFNC-VNIIEF") Method of determining structure of hybrid computer system
US9667539B2 (en) * 2011-01-17 2017-05-30 Alcatel Lucent Method and apparatus for providing transport of customer QoS information via PBB networks
US8869162B2 (en) 2011-04-26 2014-10-21 Microsoft Corporation Stream processing on heterogeneous hardware devices
US9020892B2 (en) * 2011-07-08 2015-04-28 Microsoft Technology Licensing, Llc Efficient metadata storage
US20130036083A1 (en) 2011-08-02 2013-02-07 Cavium, Inc. System and Method for Storing Lookup Request Rules in Multiple Memories
US8923306B2 (en) 2011-08-02 2014-12-30 Cavium, Inc. Phased bucket pre-fetch in a network processor
US8910178B2 (en) 2011-08-10 2014-12-09 International Business Machines Corporation Performing a global barrier operation in a parallel computer
US9154335B2 (en) * 2011-11-08 2015-10-06 Marvell Israel (M.I.S.L) Ltd. Method and apparatus for transmitting data on a network
US9542236B2 (en) * 2011-12-29 2017-01-10 Oracle International Corporation Efficiency sequencer for multiple concurrently-executing threads of execution
US20150033001A1 (en) * 2011-12-29 2015-01-29 Intel Corporation Method, device and system for control signalling in a data path module of a data stream processing engine
US9495135B2 (en) 2012-02-09 2016-11-15 International Business Machines Corporation Developing collective operations for a parallel computer
US9178730B2 (en) 2012-02-24 2015-11-03 Freescale Semiconductor, Inc. Clock distribution module, synchronous digital system and method therefor
WO2013141290A1 (en) * 2012-03-23 2013-09-26 株式会社Mush-A Data processing device, data processing system, data structure, recording medium, storage device and data processing method
JP2013222364A (en) * 2012-04-18 2013-10-28 Renesas Electronics Corp Signal processing circuit
US9082078B2 (en) 2012-07-27 2015-07-14 The Intellisis Corporation Neural processing engine and architecture using the same
CN103631315A (en) * 2012-08-22 2014-03-12 上海华虹集成电路有限责任公司 Clock design method facilitating timing sequence repair
US8775727B2 (en) 2012-08-31 2014-07-08 Lsi Corporation Lookup engine with pipelined access, speculative add and lock-in-hit function
US9185057B2 (en) * 2012-12-05 2015-11-10 The Intellisis Corporation Smart memory
US9639371B2 (en) 2013-01-29 2017-05-02 Advanced Micro Devices, Inc. Solution to divergent branches in a SIMD core using hardware pointers
US9391893B2 (en) * 2013-02-26 2016-07-12 Dell Products L.P. Lookup engine for an information handling system
US20140269690A1 (en) * 2013-03-13 2014-09-18 Qualcomm Incorporated Network element with distributed flow tables
US9185003B1 (en) * 2013-05-02 2015-11-10 Amazon Technologies, Inc. Distributed clock network with time synchronization and activity tracing between nodes
US20150120224A1 (en) * 2013-10-29 2015-04-30 C3 Energy, Inc. Systems and methods for processing data relating to energy usage
CN105794234A (en) * 2013-11-29 2016-07-20 日本电气株式会社 Apparatus, system and method for MTC
US9690713B1 (en) 2014-04-22 2017-06-27 Parallel Machines Ltd. Systems and methods for effectively interacting with a flash memory
US9547553B1 (en) 2014-03-10 2017-01-17 Parallel Machines Ltd. Data resiliency in a shared memory pool
US9372724B2 (en) * 2014-04-01 2016-06-21 Freescale Semiconductor, Inc. System and method for conditional task switching during ordering scope transitions
US9372723B2 (en) * 2014-04-01 2016-06-21 Freescale Semiconductor, Inc. System and method for conditional task switching during ordering scope transitions
US9781027B1 (en) 2014-04-06 2017-10-03 Parallel Machines Ltd. Systems and methods to communicate with external destinations via a memory network
US9733981B2 (en) 2014-06-10 2017-08-15 Nxp Usa, Inc. System and method for conditional task switching during ordering scope transitions
US9639473B1 (en) 2014-12-09 2017-05-02 Parallel Machines Ltd. Utilizing a cache mechanism by copying a data set from a cache-disabled memory location to a cache-enabled memory location
US9781225B1 (en) 2014-12-09 2017-10-03 Parallel Machines Ltd. Systems and methods for cache streams
US9753873B1 (en) 2014-12-09 2017-09-05 Parallel Machines Ltd. Systems and methods for key-value transactions
US9690705B1 (en) 2014-12-09 2017-06-27 Parallel Machines Ltd. Systems and methods for processing data sets according to an instructed order
US9594688B1 (en) 2014-12-09 2017-03-14 Parallel Machines Ltd. Systems and methods for executing actions using cached data
US9594696B1 (en) 2014-12-09 2017-03-14 Parallel Machines Ltd. Systems and methods for automatic generation of parallel data processing code
US9552327B2 (en) 2015-01-29 2017-01-24 Knuedge Incorporated Memory controller for a network on a chip device
US20160381136A1 (en) * 2015-06-24 2016-12-29 Futurewei Technologies, Inc. System, method, and computer program for providing rest services to fine-grained resources based on a resource-oriented network
US9595308B1 (en) 2016-03-31 2017-03-14 Altera Corporation Multiple-die synchronous insertion delay measurement circuit and methods

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5659781A (en) * 1994-06-29 1997-08-19 Larson; Noble G. Bidirectional systolic ring network
US5828858A (en) * 1996-09-16 1998-10-27 Virginia Tech Intellectual Properties, Inc. Worm-hole run-time reconfigurable processor field programmable gate array (FPGA)
US5923660A (en) * 1996-01-31 1999-07-13 Galileo Technologies Ltd. Switching ethernet controller
US6009488A (en) * 1997-11-07 1999-12-28 Microlinc, Llc Computer having packet-based interconnect channel
US6208619B1 (en) * 1997-03-27 2001-03-27 Kabushiki Kaisha Toshiba Packet data flow control method and device
US6366584B1 (en) * 1999-02-06 2002-04-02 Triton Network Systems, Inc. Commercial network based on point to point radios

Family Cites Families (149)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE2727180A1 (en) * 1976-06-23 1978-01-05 Lolli & C Spa Diffuser for air conditioning
GB8401805D0 (en) * 1984-01-24 1984-02-29 Int Computers Ltd Data processing apparatus
JPS6362010B2 (en) * 1984-12-27 1988-12-01
US4641571A (en) * 1985-07-15 1987-02-10 Enamel Products & Plating Co. Turbo fan vent
US4850027A (en) * 1985-07-26 1989-07-18 International Business Machines Corporation Configurable parallel pipeline image processing system
JP2564805B2 (en) * 1985-08-08 1996-12-18 日本電気株式会社 Information processing apparatus
US4755986A (en) * 1985-09-13 1988-07-05 Nec Corporation Packet switching system
US5021947A (en) * 1986-03-31 1991-06-04 Hughes Aircraft Company Data-flow multiprocessor architecture with three dimensional multistage interconnection network for efficient signal and data processing
GB8618943D0 (en) * 1986-08-02 1986-09-10 Int Computers Ltd Data processing apparatus
DE3751412T2 (en) * 1986-09-02 1995-12-14 Fuji Photo Film Co Ltd Method and apparatus for image processing of an image signal with gradation correction
US5418970A (en) * 1986-12-17 1995-05-23 Massachusetts Institute Of Technology Parallel processing system with processor array with processing elements addressing associated memories using host supplied address value and base register content
GB8723203D0 (en) * 1987-10-02 1987-11-04 Crosfield Electronics Ltd Interactive image modification
DE3742941C2 (en) * 1987-12-18 1989-11-16 Standard Elektrik Lorenz Ag, 7000 Stuttgart, De
JP2559262B2 (en) * 1988-10-13 1996-12-04 富士写真フイルム株式会社 Magnetic disk
JPH02105910A (en) * 1988-10-14 1990-04-18 Hitachi Ltd Logic integrated circuit
DE69033517T2 (en) * 1989-07-12 2000-12-21 Cabletron Systems Inc Search within a database by compressed prefix association
US5212777A (en) * 1989-11-17 1993-05-18 Texas Instruments Incorporated Multi-processor reconfigurable in single instruction multiple data (SIMD) and multiple instruction multiple data (MIMD) modes and method of operation
US5218709A (en) * 1989-12-28 1993-06-08 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Special purpose parallel computer architecture for real-time control and simulation in robotic applications
US5426610A (en) * 1990-03-01 1995-06-20 Texas Instruments Incorporated Storage circuitry using sense amplifier with temporary pause for voltage supply isolation
JPH04219859A (en) * 1990-03-12 1992-08-10 Hewlett Packard Co <Hp> Hardware distributor that distributes serial instruction stream data to parallel processors
US5327159A (en) * 1990-06-27 1994-07-05 Texas Instruments Incorporated Packed bus selection of multiple pixel depths in palette devices, systems and methods
US5121198A (en) * 1990-06-28 1992-06-09 Eastman Kodak Company Method of setting the contrast of a color video picture in a computer controlled photographic film analyzing system
US5765011A (en) 1990-11-13 1998-06-09 International Business Machines Corporation Parallel processing system having a synchronous SIMD processing with processing elements emulating SIMD operation using individual instruction streams
US5590345A (en) 1990-11-13 1996-12-31 International Business Machines Corporation Advanced parallel array processor(APAP)
US5708836A (en) 1990-11-13 1998-01-13 International Business Machines Corporation SIMD/MIMD inter-processor communication
US5625836A (en) 1990-11-13 1997-04-29 International Business Machines Corporation SIMD/MIMD processing memory element (PME)
US5963746A (en) 1990-11-13 1999-10-05 International Business Machines Corporation Fully distributed processing memory element
US5367643A (en) * 1991-02-06 1994-11-22 International Business Machines Corporation Generic high bandwidth adapter having data packet memory configured in three level hierarchy for temporary storage of variable length data packets
US5285528A (en) * 1991-02-22 1994-02-08 International Business Machines Corporation Data structures and algorithms for managing lock states of addressable element ranges
WO1992015960A1 (en) 1991-03-05 1992-09-17 Hajime Seki Electronic computer system and processor elements used for this system
US5313582A (en) * 1991-04-30 1994-05-17 Standard Microsystems Corporation Method and apparatus for buffering data within stations of a communication network
US5224100A (en) * 1991-05-09 1993-06-29 David Sarnoff Research Center, Inc. Routing technique for a hierarchical interprocessor-communication network between massively-parallel processors
JPH07500702A (en) * 1991-07-01 1995-01-19
US5404550A (en) * 1991-07-25 1995-04-04 Tandem Computers Incorporated Method and apparatus for executing tasks by following a linked list of memory packets
US5155484A (en) * 1991-09-13 1992-10-13 Salient Software, Inc. Fast data compressor with direct lookup table indexing into history buffer
JP2750968B2 (en) * 1991-11-18 1998-05-18 シャープ株式会社 Data driven information processor
US5307381A (en) * 1991-12-27 1994-04-26 Intel Corporation Skew-free clock signal distribution network in a microprocessor
US5603028A (en) * 1992-03-02 1997-02-11 Mitsubishi Denki Kabushiki Kaisha Method and apparatus for data distribution
JPH0696035A (en) 1992-09-16 1994-04-08 Sanyo Electric Co Ltd Processing element and parallel processing computer using the same
EP0601715A1 (en) * 1992-12-11 1994-06-15 National Semiconductor Corporation Bus of CPU core optimized for accessing on-chip memory devices
US5579223A (en) * 1992-12-24 1996-11-26 Microsoft Corporation Method and system for incorporating modifications made to a computer program into a translated version of the computer program
GB2277235B (en) * 1993-04-14 1998-01-07 Plessey Telecomm Apparatus and method for the digital transmission of data
US5640551A (en) * 1993-04-14 1997-06-17 Apple Computer, Inc. Efficient high speed trie search process
US5420858A (en) * 1993-05-05 1995-05-30 Synoptics Communications, Inc. Method and apparatus for communications from a non-ATM communication medium to an ATM communication medium
JP2629568B2 (en) * 1993-07-30 1997-07-09 日本電気株式会社 ATM cell exchange system
US5918061A (en) * 1993-12-29 1999-06-29 Intel Corporation Enhanced power managing unit (PMU) in a multiprocessor chip
US5524223A (en) 1994-01-31 1996-06-04 Motorola, Inc. Instruction accelerator for processing loop instructions with address generator using multiple stored increment values
DE69428186D1 (en) * 1994-04-28 2001-10-11 Hewlett Packard Co Multicast device
EP0681236B1 (en) * 1994-05-05 2000-11-22 Conexant Systems, Inc. Space vector data path
CA2165076C (en) 1994-05-06 2000-11-21 Michael J. Schellinger Call routing system for a wireless data device
US5463732A (en) * 1994-05-13 1995-10-31 David Sarnoff Research Center, Inc. Method and apparatus for accessing a distributed data buffer
US5682480A (en) * 1994-08-15 1997-10-28 Hitachi, Ltd. Parallel computer system for performing barrier synchronization by transferring the synchronization packet through a path which bypasses the packet buffer in response to an interrupt
US5949781A (en) * 1994-08-31 1999-09-07 Brooktree Corporation Controller for ATM segmentation and reassembly
US5586119A (en) * 1994-08-31 1996-12-17 Motorola, Inc. Method and apparatus for packet alignment in a communication system
US5754584A (en) * 1994-09-09 1998-05-19 Omnipoint Corporation Non-coherent spread-spectrum continuous-phase modulation communication system
WO1996014617A1 (en) * 1994-11-07 1996-05-17 Temple University - Of The Commonwealth System Higher Education Multicomputer system and method
WO1999014893A3 (en) * 1997-09-17 1999-07-29 Sony Electronics Inc Multi-port bridge with triplet architecture and periodical update of address look-up table
US5651099A (en) * 1995-01-26 1997-07-22 Hewlett-Packard Company Use of a genetic algorithm to optimize memory space
JPH08249306A (en) * 1995-03-09 1996-09-27 Sharp Corp Data-driven information processor
US5634068A (en) * 1995-03-31 1997-05-27 Sun Microsystems, Inc. Packet switched cache coherent multiprocessor system
US5835095A (en) * 1995-05-08 1998-11-10 Intergraph Corporation Visible line processor
JP3515263B2 (en) * 1995-05-18 2004-04-05 株式会社東芝 Router device, data communication network system, node device, data transfer method, and network connection method
US5689677A (en) 1995-06-05 1997-11-18 Macmillan; David C. Circuit for enhancing performance of a computer for personal use
US6147996A (en) * 1995-08-04 2000-11-14 Cisco Technology, Inc. Pipelined multiple issue packet switch
US6115802A (en) * 1995-10-13 2000-09-05 Sun Microsystems, Inc. Efficient hash table for use in multi-threaded environments
US5612956A (en) * 1995-12-15 1997-03-18 General Instrument Corporation Of Delaware Reformatting of variable rate data for fixed rate communication
US5822606A (en) * 1996-01-11 1998-10-13 Morton; Steven G. DSP having a plurality of like processors controlled in parallel by an instruction word, and a control processor also controlled by the instruction word
DE69627893T2 (en) 1996-02-06 2004-05-13 International Business Machines Corp. Parallel immediate processing of fixed length cells
US5781549A (en) * 1996-02-23 1998-07-14 Allied Telesyn International Corp. Method and apparatus for switching data packets in a data network
US6035193A (en) 1996-06-28 2000-03-07 AT&T Wireless Services Inc. Telephone system having land-line-supported private base station switchable into cellular network
US6101176A (en) 1996-07-24 2000-08-08 Nokia Mobile Phones Method and apparatus for operating an indoor CDMA telecommunications system
US6088355A (en) * 1996-10-11 2000-07-11 C-Cube Microsystems, Inc. Processing system with pointer-based ATM segmentation and reassembly
US6791947B2 (en) * 1996-12-16 2004-09-14 Juniper Networks In-line packet processing
JP3000961B2 (en) * 1997-06-06 2000-01-17 日本電気株式会社 Semiconductor integrated circuit
US5969559A (en) * 1997-06-09 1999-10-19 Schwartz; David M. Method and apparatus for using a power grid for clock distribution in semiconductor integrated circuits
US5828870A (en) * 1997-06-30 1998-10-27 Adaptec, Inc. Method and apparatus for controlling clock skew in an integrated circuit
JP3469046B2 (en) * 1997-07-08 2003-11-25 東芝情報システム株式会社 Function blocks and semiconductor integrated circuit device
US6047304A (en) * 1997-07-29 2000-04-04 Nortel Networks Corporation Method and apparatus for performing lane arithmetic to perform network processing
JPH11194850A (en) 1997-09-19 1999-07-21 Lsi Logic Corp Clock distribution network for integrated circuit, and clock distribution method
US5872993A (en) * 1997-12-01 1999-02-16 Advanced Micro Devices, Inc. Communications system with multiple, simultaneous accesses to a memory
US6081523A (en) * 1997-12-05 2000-06-27 Advanced Micro Devices, Inc. Arrangement for transmitting packet data segments from a media access controller across multiple physical links
US6219796B1 (en) * 1997-12-23 2001-04-17 Texas Instruments Incorporated Power reduction for processors by software control of functional units
US6301603B1 (en) * 1998-02-17 2001-10-09 Euphonics Incorporated Scalable audio processing on a heterogeneous processor array
JP3490286B2 (en) 1998-03-13 2004-01-26 株式会社東芝 Router device and a frame transfer method
JPH11272629A (en) * 1998-03-19 1999-10-08 Hitachi Ltd Data processor
US6052769A (en) * 1998-03-31 2000-04-18 Intel Corporation Method and apparatus for moving select non-contiguous bytes of packed data in a single instruction
US6275508B1 (en) * 1998-04-21 2001-08-14 Nexabit Networks, Llc Method of and system for processing datagram headers for high speed computer network interfaces at low clock speeds, utilizing scalable algorithms for performing such network header adaptation (SAPNA)
WO1999057858A1 (en) * 1998-05-07 1999-11-11 Cabletron Systems, Inc. Multiple priority buffering in a computer network
US6131102A (en) * 1998-06-15 2000-10-10 Microsoft Corporation Method and system for cost computation of spelling suggestions and automatic replacement
US6305001B1 (en) * 1998-06-18 2001-10-16 Lsi Logic Corporation Clock distribution network planning and method therefor
EP0991231B1 (en) 1998-09-10 2009-07-01 International Business Machines Corporation Packet switch adapter for variable length packets
US6393026B1 (en) * 1998-09-17 2002-05-21 Nortel Networks Limited Data packet processing system and method for a router
EP0992895A1 (en) * 1998-10-06 2000-04-12 Texas Instruments France Hardware accelerator for data processing systems
JP3504510B2 (en) * 1998-10-12 2004-03-08 日本電信電話株式会社 Packet switch
JP3866425B2 (en) * 1998-11-12 2007-01-10 株式会社日立コミュニケーションテクノロジー Packet switch
US6272522B1 (en) * 1998-11-17 2001-08-07 Sun Microsystems, Incorporated Computer data packet switching and load balancing system using a general-purpose multiprocessor architecture
US6256421B1 (en) * 1998-12-07 2001-07-03 Xerox Corporation Method and apparatus for simulating JPEG compression
JP3704438B2 (en) * 1998-12-09 2005-10-12 株式会社日立インフォメーションテクノロジー Variable-length packet communication device
US6338078B1 (en) * 1998-12-17 2002-01-08 International Business Machines Corporation System and method for sequencing packets for multiprocessor parallelization in a computer network system
US20030093613A1 (en) * 2000-01-14 2003-05-15 David Sherman Compressed ternary mask system and method
JP3587076B2 (en) 1999-03-05 2004-11-10 松下電器産業株式会社 Packet receiving apparatus
WO2000064545A8 (en) * 1999-04-23 2001-07-26 Dice Inc Z Gaming apparatus and method
GB9917127D0 (en) * 1999-07-21 1999-09-22 Element 14 Ltd Conditional instruction execution in a computer
GB2352595B (en) * 1999-07-27 2003-10-01 Sgs Thomson Microelectronics Data processing device
US6631422B1 (en) 1999-08-26 2003-10-07 International Business Machines Corporation Network adapter utilizing a hashing function for distributing packets to multiple processors for parallel processing
US6404752B1 (en) * 1999-08-27 2002-06-11 International Business Machines Corporation Network switch using network processor and methods
US6631419B1 (en) * 1999-09-22 2003-10-07 Juniper Networks, Inc. Method and apparatus for high-speed longest prefix and masked prefix table search
US6963572B1 (en) * 1999-10-22 2005-11-08 Alcatel Canada Inc. Method and apparatus for segmentation and reassembly of data packets in a communication switch
WO2001046777A3 (en) * 1999-10-26 2008-09-25 Pyxsys Corp MIMD arrangement of SIMD machines
JP2001177574A (en) 1999-12-20 2001-06-29 Kddi Corp Transmission controller in packet exchange network
GB2357601B (en) * 1999-12-23 2004-03-31 Ibm Remote power control
US6661794B1 (en) 1999-12-29 2003-12-09 Intel Corporation Method and apparatus for gigabit packet assignment for multithreaded packet processing
DE60015186D1 (en) * 2000-01-07 2004-11-25 Ibm Method and system for frame and protocol coordination
JP2001202345A (en) * 2000-01-21 2001-07-27 Hitachi Ltd Parallel processor
DE60026229T2 (en) * 2000-01-27 2006-12-14 International Business Machines Corp. Method and apparatus for classifying data packets
US6704794B1 (en) * 2000-03-03 2004-03-09 Nokia Intelligent Edge Routers Inc. Cell reassembly for packet based networks
JP2001251349A (en) * 2000-03-06 2001-09-14 Fujitsu Ltd Packet processor
US7139282B1 (en) * 2000-03-24 2006-11-21 Juniper Networks, Inc. Bandwidth division for packet processing
US7089240B2 (en) * 2000-04-06 2006-08-08 International Business Machines Corporation Longest prefix match lookup using hash function
US7107265B1 (en) * 2000-04-06 2006-09-12 International Business Machines Corporation Software management tree implementation for a network processor
US6718326B2 (en) * 2000-08-17 2004-04-06 Nippon Telegraph And Telephone Corporation Packet classification search device and method
US20020107903A1 (en) * 2000-11-07 2002-08-08 Richter Roger K. Methods and systems for the order serialization of information in a network processing environment
DE10059026A1 (en) 2000-11-28 2002-06-13 Infineon Technologies Ag Unit for distributing and processing data packets
GB2370381B (en) * 2000-12-19 2003-12-24 Picochip Designs Ltd Processor architecture
USD453960S1 (en) * 2001-01-30 2002-02-26 Molded Products Company Shroud for a fan assembly
US6832261B1 (en) 2001-02-04 2004-12-14 Cisco Technology, Inc. Method and apparatus for distributed resequencing and reassembly of subdivided packets
GB2389689B (en) 2001-02-14 2005-06-08 Clearspeed Technology Ltd Clock distribution system
GB2407673B (en) 2001-02-14 2005-08-24 Clearspeed Technology Plc Lookup engine
JP4475835B2 (en) * 2001-03-05 2010-06-09 富士通株式会社 Input line interface device and packet communication device
CA97495S (en) * 2001-03-20 2003-05-07 Flettner Ventilator Ltd Rotor
USD471971S1 (en) * 2001-03-20 2003-03-18 Flettner Ventilator Limited Ventilation cover
US6687715B2 (en) * 2001-06-28 2004-02-03 Intel Corporation Parallel lookups that keep order
US6922716B2 (en) 2001-07-13 2005-07-26 Motorola, Inc. Method and apparatus for vector processing
EP1423957A1 (en) * 2001-08-29 2004-06-02 Nokia Corporation Method and system for classifying binary strings
US7283538B2 (en) * 2001-10-12 2007-10-16 Vormetric, Inc. Load balanced scalable network gateway processor architecture
US7317730B1 (en) * 2001-10-13 2008-01-08 Greenfield Networks, Inc. Queueing architecture and load balancing for parallel packet processing in communication networks
US6941446B2 (en) 2002-01-21 2005-09-06 Analog Devices, Inc. Single instruction multiple data array cell
US7382782B1 (en) 2002-04-12 2008-06-03 Juniper Networks, Inc. Packet spraying for load balancing across multiple packet processors
US20030235194A1 (en) * 2002-06-04 2003-12-25 Mike Morrison Network processor with multiple multi-threaded packet-type specific engines
US7200137B2 (en) * 2002-07-29 2007-04-03 Freescale Semiconductor, Inc. On chip network that maximizes interconnect utilization between processing elements
US8015567B2 (en) 2002-10-08 2011-09-06 Netlogic Microsystems, Inc. Advanced processor with mechanism for packet distribution at high line rate
US7656799B2 (en) 2003-07-29 2010-02-02 Citrix Systems, Inc. Flow control system architecture
GB0226249D0 (en) * 2002-11-11 2002-12-18 Clearspeed Technology Ltd Traffic handling system
US7620050B2 (en) 2004-09-10 2009-11-17 Canon Kabushiki Kaisha Communication control device and communication control method
US7787454B1 (en) 2007-10-31 2010-08-31 Gigamon Llc. Creating and/or managing meta-data for data storage devices using a packet switch appliance
JP5231926B2 (en) 2008-10-06 2013-07-10 キヤノン株式会社 Information processing apparatus, control method, and computer program
US8493979B2 (en) * 2008-12-30 2013-07-23 Intel Corporation Single instruction processing of network packets
US8014295B2 (en) 2009-07-14 2011-09-06 Ixia Parallel packet processor with session active checker
JP6096035B2 (en) 2013-03-29 2017-03-15 株式会社タムラ製作所 Soldering flux composition and method of manufacturing an electronic substrate using the same

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7055123B1 (en) * 2001-12-31 2006-05-30 Richard S. Norman High-performance interconnect arrangement for an array of discrete functional modules
US7360007B2 (en) * 2002-08-30 2008-04-15 Intel Corporation System including a segmentable, shared bus
US20040042496A1 (en) * 2002-08-30 2004-03-04 Intel Corporation System including a segmentable, shared bus
US20050216625A1 (en) * 2004-03-09 2005-09-29 Smith Zachary S Suppressing production of bus transactions by a virtual-bus interface
US20100241746A1 (en) * 2005-02-23 2010-09-23 International Business Machines Corporation Method, Program and System for Efficiently Hashing Packet Keys into a Firewall Connection Table
US8112547B2 (en) * 2005-02-23 2012-02-07 International Business Machines Corporation Efficiently hashing packet keys into a firewall connection table
US20080276116A1 (en) * 2005-06-01 2008-11-06 Tobias Bjerregaard Method and an Apparatus for Providing Timing Signals to a Number of Circuits, an Integrated Circuit and a Node
US8112654B2 (en) 2005-06-01 2012-02-07 Teklatech A/S Method and an apparatus for providing timing signals to a number of circuits, an integrated circuit and a node
US20070017694A1 (en) * 2005-07-20 2007-01-25 Tomoyuki Kubo Wiring board and manufacturing method for wiring board
US8885673B2 (en) 2005-08-24 2014-11-11 Intel Corporation Interleaving data packets in a packet-based communication system
US8325768B2 (en) * 2005-08-24 2012-12-04 Intel Corporation Interleaving data packets in a packet-based communication system
US20070047584A1 (en) * 2005-08-24 2007-03-01 Spink Aaron T Interleaving data packets in a packet-based communication system
US20090089031A1 (en) * 2007-09-28 2009-04-02 Rockwell Automation Technologies, Inc. Integrated simulation of controllers and devices
US20090089029A1 (en) * 2007-09-28 2009-04-02 Rockwell Automation Technologies, Inc. Enhanced execution speed to improve simulation performance
US8548777B2 (en) 2007-09-28 2013-10-01 Rockwell Automation Technologies, Inc. Automated recommendations from simulation
US8417506B2 (en) 2013-04-09 Rockwell Automation Technologies, Inc. Simulation controls for model variability and randomness
US7801710B2 (en) * 2007-09-28 2010-09-21 Rockwell Automation Technologies, Inc. Simulation controls for model variability and randomness
US20090089227A1 (en) * 2007-09-28 2009-04-02 Rockwell Automation Technologies, Inc. Automated recommendations from simulation
US20090089027A1 (en) * 2007-09-28 2009-04-02 Rockwell Automation Technologies, Inc. Simulation controls for model variability and randomness
US20090089234A1 (en) * 2007-09-28 2009-04-02 Rockwell Automation Technologies, Inc. Automated code generation for simulators
US20090089030A1 (en) * 2007-09-28 2009-04-02 Rockwell Automation Technologies, Inc. Distributed simulation and synchronization
US8069021B2 (en) 2007-09-28 2011-11-29 Rockwell Automation Technologies, Inc. Distributed simulation and synchronization
US20100318339A1 (en) * 2007-09-28 2010-12-16 Rockwell Automation Technologies, Inc. Simulation controls for model variability and randomness
US7995618B1 (en) * 2007-10-01 2011-08-09 Teklatech A/S System and a method of transmitting data from a first device to a second device
US20090268727A1 (en) * 2008-04-24 2009-10-29 Allison Brian D Early header CRC in data response packets with variable gap count
US20090268736A1 (en) * 2008-04-24 2009-10-29 Allison Brian D Early header CRC in data response packets with variable gap count
US20090271532A1 (en) * 2008-04-24 2009-10-29 Allison Brian D Early header CRC in data response packets with variable gap count
US20140307748A1 (en) * 2009-04-29 2014-10-16 Mahesh Wagh Packetized Interface For Coupling Agents
US20100278195A1 (en) * 2009-04-29 2010-11-04 Mahesh Wagh Packetized Interface For Coupling Agents
US20120176909A1 (en) * 2009-04-29 2012-07-12 Mahesh Wagh Packetized Interface For Coupling Agents
US8811430B2 (en) * 2009-04-29 2014-08-19 Intel Corporation Packetized interface for coupling agents
US8170062B2 (en) * 2009-04-29 2012-05-01 Intel Corporation Packetized interface for coupling agents
US9736276B2 (en) * 2009-04-29 2017-08-15 Intel Corporation Packetized interface for coupling agents
US8823495B2 (en) * 2010-03-12 2014-09-02 Zte Corporation Sight spot guiding system and implementation method thereof
US20130038427A1 (en) * 2010-03-12 2013-02-14 Zte Corporation Sight Spot Guiding System and Implementation Method Thereof
US20130229290A1 (en) * 2012-03-01 2013-09-05 Eaton Corporation Instrument panel bus interface
CN104144827A (en) * 2012-03-01 2014-11-12 伊顿公司 Instrument panel bus interface
US20150012679A1 (en) * 2013-07-03 2015-01-08 Iii Holdings 2, Llc Implementing remote transaction functionalities between data processing nodes of a switched interconnect fabric

Also Published As

Publication number Publication date Type
GB2389689A (en) 2003-12-17 application
GB2389689B (en) 2005-06-08 grant
US20020161926A1 (en) 2002-10-31 application
WO2002065700A3 (en) 2002-11-21 application
GB2374443B (en) 2005-06-08 grant
GB2390506A (en) 2004-01-07 application
CN100367730C (en) 2008-02-06 grant
GB0321186D0 (en) 2003-10-08 grant
US8200686B2 (en) 2012-06-12 grant
GB0203634D0 (en) 2002-04-03 grant
GB0203633D0 (en) 2002-04-03 grant
US7818541B2 (en) 2010-10-19 grant
US20110083000A1 (en) 2011-04-07 application
JP2004525449A (en) 2004-08-19 application
US7917727B2 (en) 2011-03-29 grant
US20020159466A1 (en) 2002-10-31 application
GB2374443A (en) 2002-10-16 application
GB2377519B (en) 2005-06-15 grant
GB2390506B (en) 2005-03-23 grant
US20070220232A1 (en) 2007-09-20 application
US8127112B2 (en) 2012-02-28 grant
US20040130367A1 (en) 2004-07-08 application
US20030041163A1 (en) 2003-02-27 application
US20050242976A1 (en) 2005-11-03 application
CN1504035A (en) 2004-06-09 application
WO2002065259A1 (en) 2002-08-22 application
WO2002065700A2 (en) 2002-08-22 application
GB2377519A (en) 2003-01-15 application
US7856543B2 (en) 2010-12-21 grant
US7290162B2 (en) 2007-10-30 grant
CN1613041A (en) 2005-05-04 application
GB0319801D0 (en) 2003-09-24 grant
GB2374442B (en) 2005-03-23 grant
US20050243827A1 (en) 2005-11-03 application
US20070217453A1 (en) 2007-09-20 application
GB2374442A (en) 2002-10-16 application
JP2004524617A (en) 2004-08-12 application
GB0203632D0 (en) 2002-04-03 grant

Similar Documents

Publication Publication Date Title
Karim et al. An interconnect architecture for networking systems on chips
Moraes et al. HERMES: an infrastructure for low area overhead packet-switching networks on chip
Zeferino et al. SoCIN: a parametric and scalable network-on-chip
US4985830A (en) Interprocessor bus switching system for simultaneous communication in plural bus parallel processing system
US5583990A (en) System for allocating messages between virtual channels to avoid deadlock and to optimize the amount of message traffic on each type of virtual channel
US5991817A (en) Apparatus and method for a network router
US7412588B2 (en) Network processor system on chip with bridge coupling protocol converting multiprocessor macro core local bus to peripheral interfaces coupled system bus
Tamir et al. Dynamically-allocated multi-queue buffers for VLSI communication switches
US6460120B1 (en) Network processor, memory organization and methods
US6985431B1 (en) Network switch and components and method of operation
US7761687B2 (en) Ultrascalable petaflop parallel supercomputer
Tamir et al. High-performance multi-queue buffers for VLSI communications switches
US6272621B1 (en) Synchronization and control system for an arrayed processing engine
US7886084B2 (en) Optimized collectives using a DMA on a parallel computer
Galles Spider: A high-speed network interconnect
US5630162A (en) Array processor dotted communication network based on H-DOTs
US6842443B2 (en) Network switch using network processor and methods
US5161156A (en) Multiprocessing packet switching connection system having provision for error correction and recovery
US6751698B1 (en) Multiprocessor node controller circuit and method
Alverson et al. The gemini system interconnect
US6766381B1 (en) VLSI network processor and methods
Bainbridge et al. Chain: a delay-insensitive chip area interconnect
US6499079B1 (en) Subordinate bridge structure for a point-to-point computer interconnection bus
US20100158005A1 (en) System-On-a-Chip and Multi-Chip Systems Supporting Advanced Telecommunication Functions
US6769033B1 (en) Network processor processing complex and methods

Legal Events

Date Code Title Description
AS Assignment

Owner name: CLEARSPEED TECHNOLOGY LIMITED, UNITED KINGDOM

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SWARBRICK, IAN;WINSER, PAUL;RYAN, STUART;REEL/FRAME:015007/0932;SIGNING DATES FROM 20030911 TO 20040116

AS Assignment

Owner name: CLEARSPEED SOLUTIONS LIMITED, UNITED KINGDOM

Free format text: CHANGE OF NAME;ASSIGNOR:CLEARSPEED TECHNOLOGY LIMITED;REEL/FRAME:015317/0484

Effective date: 20040701