WO2015107344A1 - Reconfigurable computing system - Google Patents


Info

Publication number
WO2015107344A1
Authority
WO
WIPO (PCT)
Prior art keywords
rack
programmable
functional units
computing system
unit
Application number
PCT/GB2015/050070
Other languages
French (fr)
Inventor
Yan Yan
Georgios ZERVAS
Dimitra Simeonidou
Original Assignee
The University Of Bristol
Application filed by The University Of Bristol filed Critical The University Of Bristol
Publication of WO2015107344A1 publication Critical patent/WO2015107344A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/35: Switches specially adapted for specific applications
    • H04L 49/356: Switches specially adapted for specific applications for storage area networks
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08: Configuration management of networks or network elements
    • H04L 41/0803: Configuration setting
    • H04L 41/0813: Configuration setting characterised by the conditions triggering a change of settings
    • H04L 41/082: Configuration setting where the condition triggering a change of settings is an update or upgrade of network functionality
    • H04L 41/0895: Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
    • H04L 49/65: Re-configuration of fast packet switches
    • H04L 49/70: Virtual switches

Definitions

  • The top-of-rack switch module 16 is often implemented as a 48-port Ethernet or Infiniband® switch, in which the majority of ports, typically around 40 of the 48, face the servers 14 within the rack. As a result, some 83 per cent of the switching bandwidth of the top-of-rack switching module 16 is occupied by interconnecting servers 14 within the rack 12.
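By way of illustration, the 83 per cent figure can be reproduced with a short calculation. The 40/48 split between server-facing ports and uplink ports is an illustrative assumption about a typical 48-port switch, not a figure taken from the application itself:

```python
# Fraction of a top-of-rack switch's bandwidth consumed by intra-rack
# interconnection. Assumes (illustratively) a 48-port switch in which
# 40 ports face servers and the remaining 8 serve as uplinks.
TOTAL_PORTS = 48
SERVER_FACING_PORTS = 40

intra_rack_share = SERVER_FACING_PORTS / TOTAL_PORTS
print(f"{intra_rack_share:.0%}")  # 83%
```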
  • a reconfigurable computing system comprising: a programmable electronic device which includes a plurality of programmable functional units, wherein the plurality of programmable functional units comprises one or more programmable electrical functional units and one or more programmable optical functional units; and a controller, wherein the programmable electronic device is operative to program or reprogram at least one of the plurality of programmable functional units in accordance with configuration information received from the controller in order to implement desired functionality of the reconfigurable computing system.
  • the reconfigurable computing system of the first aspect can be used in a wide variety of applications, as it permits rapid, simple and seamless configuration and reconfiguration to accommodate the desired functionality.
  • the reconfigurable computing system is used in a data centre context, for example as a network interface card associated with a server, significant improvements in efficiency, latency and energy consumption can be achieved, as the flexibility of the system permits improved interconnections between individual racks and servers in the data centre.
  • the reconfigurable computing system may further comprise a global look up table associated with the controller, the global look up table containing configuration data for each of the plurality of programmable functional units relating particular settings of each of the plurality of programmable functional units to particular functional requirements.
  • the reconfigurable computing system may further comprise a plurality of interfaces permitting communication between the reconfigurable computing system and other components.
  • the plurality of interfaces may include one or more optical data interfaces.
  • the one or more optical data interfaces may comprise a dense wavelength division multiplexing (DWDM) data interface unit.
  • the plurality of interfaces may include one or more electrical data interfaces.
  • the one or more electrical data interfaces may comprise a peripheral component interconnect express (PCIe) data interface.
  • the plurality of programmable functional units may comprise one or more functional units for implementing network functions.
  • the plurality of programmable functional units may comprise one or more functional units for implementing intra-network functions.
  • the controller may comprise a software controller running on a PC or on an embedded processor.
  • a network interface card comprising a reconfigurable computing system according to the first aspect.
  • the plurality of functional units may include one or more units selected from the group consisting of: a traffic virtualisation unit; an inter/intra-rack switching unit; an inter rack transmit/receive unit; an intra-rack transmit/receive unit; an optical transport protocol switching unit; and a label interface.
  • a network architecture comprising: a plurality of racks, each of the plurality of racks housing a plurality of blades and a top-of-rack switch unit, wherein: each of the plurality of blades is provided with a network interface card according to the second aspect; each of the plurality of blades in each rack is directly connected to each of the other blades in that rack by means of its network interface card; and each of the plurality of racks is directly connected to each of the other racks by means of its top-of-rack switch unit.
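As a rough sketch of the wiring implied by this architecture, the number of direct links grows as a full mesh both within each rack and between racks. The rack and blade counts below are illustrative figures only, not values taken from the application:

```python
def full_mesh_links(n: int) -> int:
    """Number of point-to-point links needed to connect n endpoints directly."""
    return n * (n - 1) // 2

# Illustrative figures: 8 racks, each housing 40 blades.
racks, blades_per_rack = 8, 40
intra_rack_links = racks * full_mesh_links(blades_per_rack)  # one mesh per rack
inter_rack_links = full_mesh_links(racks)                    # rack-to-rack mesh
print(intra_rack_links, inter_rack_links)  # 6240 28
```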
  • the top-of-rack switch unit may be an optical top-of-rack switch unit.
  • the optical top-of-rack switch unit may be implemented as an arrayed waveguide grating (AWG) switch unit.
  • the optical top-of-rack switch unit may be implemented as an NxN reconfigurable arrayed waveguide grating (R-AWG) switch unit.
  • the optical top-of-rack switch unit may be implemented as a programmable 1xN or NxM wavelength selective switch (WSS) unit.
  • the plurality of blades may comprise one or more blades selected from the group comprising: servers; CPU arrays; memory arrays; and storage arrays.
  • Figure 1 is a schematic representation of a prior art data centre architecture;
  • Figure 2 is a schematic functional block diagram of a reconfigurable optical and electronic network interface system;
  • Figure 3 is a schematic functional block diagram of a programmable device used in the reconfigurable network interface system of Figure 2;
  • Figure 4 is a schematic representation illustrating functionality of the reconfigurable network interface system of Figure 2;
  • Figure 5 is a schematic representation showing a data centre architecture that can be implemented using the reconfigurable network interface system of Figure 2;
  • Figure 6 is a schematic representation showing an alternative data centre architecture that can be implemented using the reconfigurable network interface system of Figure 2;
  • Figure 7 is a schematic representation showing an alternative data centre architecture that can be implemented using the reconfigurable network interface system of Figure 2; and
  • Figure 8 is a further schematic representation showing an alternative data centre architecture that can be implemented using the reconfigurable network interface system of Figure 2.
  • a reconfigurable computing system is shown generally at 50.
  • the reconfigurable computing system is arranged as a reconfigurable network interface system.
  • the reconfigurable network interface system 50 may be used, for example, in place of the top-of-rack switch module 16, and/or to implement an improved network interface card 18 in the prior art data centre architecture 10 shown in Figure 1, as will be explained in more detail below.
  • the reconfigurable network interface system 50 is based around a programmable electronic device 52 which may be, for example, a field programmable gate array (FPGA), programmable logic controller (PLC) or System on Chip (SoC).
  • the programmable electronic device 52 includes a plurality of functional units.
  • the plurality of functional units are configured to implement network functions such as time division multiplexing (TDM) or Ethernet over wavelength division multiplexing (WDM) transport and switching, in accordance with configuration instructions received from a device controller 54.
  • These functional units may include, for example, an Ethernet/TDM (time division multiplexing) function unit 522, an Ethernet switch unit 524 and an elastic TDM switch 526.
  • the programmable electronic device 52 also includes a plurality of functional units for implementing intra-network functions, such as frame and time-slice lengths, aggregation mechanisms, overhead management for achieving quality of transmission (QoT) requirements and the like, again in accordance with configuration instructions received from the device controller 54.
  • These functional units may include, for example, an aggregation unit 528, a TDM framing unit 530 and a quality of transmission (QoT) overhead unit 532.
  • Additional functional units may be incorporated to support any type of signal or data processing function, as well as algorithm calculations, to support any layer of the network, or to aid highly reconfigurable computation. This can be complemented by optical processing functions provided by an optical backplane 57 and a reconfigurable optical system 58, to deliver programmable opto-electronic functions.
  • the device controller 54 itself includes a plurality of functional units such as a resource controller 542, a control and communications interface 544 and a node synchronisation unit 546.
  • the device controller 54 may be, for example, a dedicated hardware controller.
  • the device controller 54 may be a software controller running on a PC or on an embedded processor.
  • the reconfigurable computing system 50 may therefore be regarded as being software defined, since, as will be explained in more detail below, the device controller 54 defines the functionality of the reconfigurable computing system 50.
  • the programmable device 52 communicates with a driver sub-system 56 which controls the reconfigurable optical system 58 via the optical backplane 57.
  • the driver sub-system 56 contains functional units such as a switch control unit 562 and a multicasting unit 564.
  • the programmable device 52 not only controls the reconfigurable optical system 58, but also transports data through a set of transceivers to and from the reconfigurable optical system 58.
  • the reconfigurable optical system 58 includes functional units such as amplifiers 582, multicasting units 584, multiplexers/demultiplexers 586, switches 588 and the like which are used to implement desired optical functionality of the reconfigurable network interface system 50.
  • the interconnection of the programmable device 52 with the reconfigurable optical system 58 permits interconnection of electronic functions through the optical backplane 57, as well as optical processing if necessary.
  • Figure 3 is a schematic block diagram showing exemplary functional blocks of the programmable device 52.
  • Figure 3 shows particular exemplary functional blocks, to aid understanding of the programmable device 52, but it is to be understood that the programmable device 52 may include additional and/or alternative functional blocks, to permit any desired functionality to be implemented.
  • the programmable device 52 includes a control interface unit 70, to permit bidirectional communication between the programmable device 52 and the device controller 54 for exchanging control information between the programmable device 52 and the device controller 54.
  • the programmable device 52 also includes an electrical data interface, which in the example illustrated in Figure 3 is a peripheral component interconnect express (PCIe) data interface unit 72, to allow the programmable device 52 to communicate with a server 14, by transmitting and receiving Internet Protocol (IP) data packets to/from the server 14 via a PCIe interface of the server 14.
  • the programmable device 52 includes an optical data interface, which in the example shown is a dense wavelength division multiplexing (DWDM) data interface unit 74 supporting ten DWDM CFP channels, with a capacity of 100 Gbps, to permit the device 52 to communicate using a DWDM optical communications protocol, e.g. with other racks 12 (inter-rack communications) through a top-of-rack switch module 16.
  • the programmable device 52 also includes further optical data interface units 76, which in this example are vertical cavity surface emitting laser (VCSEL) interfaces supporting 12 channels, each having a capacity of 10 Gbps, to permit intra-rack communications between servers 14 housed in a single rack 12.
  • the data interface units 72, 74, 76 are each associated with a respective OpenFlow® look up table 73, 75, 77, which enable the well-known OpenFlow® functionality in their associated data interface units 72, 74, 76. Additionally, the control interface unit 70 is associated with a global look up table 78.
  • the programmable device 52 in this example includes a programmable traffic virtualisation unit 80, a programmable inter/intra-rack switch unit 82, one or more programmable inter-rack transmit/receive units 84, one or more programmable intra-rack transmit/receive units 86, a programmable optical communications protocol unit 88 and a programmable label unit 90.
  • Each of these programmable functional units 80, 82, 84, 86, 88, 90 is connected to the control interface unit 70, to permit programming (configuration) and reprogramming (reconfiguration) of the units in accordance with configuration information received via the control interface unit 70.
  • the programmable traffic virtualisation unit 80 is configured to perform traffic virtualisation operations that are appropriate to the desired configuration of the reconfigurable network interface system 50, as defined by the configuration information received via the control interface 70.
  • the traffic virtualisation unit 80 is configured to sort or isolate traffic data packets incoming from a server 14 based on the nature of the traffic. For example, the traffic virtualisation unit 80 may sort traffic based on a virtual machine ID associated with a virtual machine from which the traffic originated, or based on the destination IP address of the traffic, or based on other parameters such as a virtual local area network (VLAN) from which the traffic originated.
  • the sorted traffic can then be associated by the traffic virtualisation unit 80 with the relevant functional blocks 82, 84, 86, 88.
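The sorting performed by the traffic virtualisation unit 80 can be sketched as follows. This is an illustrative software model only; the packet field names (`vm_id`, `dst_ip`, `vlan`) are assumptions, not identifiers taken from the application:

```python
from collections import defaultdict

def virtualise_traffic(packets, key="vm_id"):
    """Sort incoming packets into per-flow groups by the chosen traffic
    attribute (virtual machine ID, destination IP or VLAN), mirroring the
    role of the traffic virtualisation unit. Field names are illustrative."""
    flows = defaultdict(list)
    for packet in packets:
        flows[packet[key]].append(packet)
    return dict(flows)

packets = [
    {"vm_id": "vm-1", "dst_ip": "10.0.0.5", "vlan": 100, "payload": b"a"},
    {"vm_id": "vm-2", "dst_ip": "10.0.0.6", "vlan": 200, "payload": b"b"},
    {"vm_id": "vm-1", "dst_ip": "10.0.0.7", "vlan": 100, "payload": b"c"},
]
print(sorted(virtualise_traffic(packets).keys()))  # ['vm-1', 'vm-2']
```

Each resulting flow group would then be handed to the relevant downstream functional block (82, 84, 86 or 88).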
  • the programmable traffic virtualisation unit 80 is connected for bidirectional communication with the programmable inter/intra-rack switch 82.
  • the programmable inter/intra-rack switch 82 is configured to switch between inter-rack traffic (i.e. transmission of data between racks 12 in a cluster or a data centre) and intra-rack traffic (i.e. transmission of data between servers 14 within a rack 12) in accordance with the configuration information received via the control interface 70.
  • Where the configuration information indicates that the reconfigurable network device should be configured for inter-rack traffic, the programmable inter/intra-rack switch 82 is set to select inter-rack traffic.
  • Where the configuration information indicates that the reconfigurable network device should be configured for intra-rack traffic, the programmable inter/intra-rack switch 82 is set to select intra-rack traffic.
  • the programmable inter/intra-rack switch 82 is connected for bidirectional communication with both the programmable inter-rack transmit/receive unit(s) 84 and the programmable intra-rack transmit/receive unit(s) 86, to pass virtualised traffic to the appropriate transmit/receive unit 84, 86 for transmission either to another server 14 within the same rack 12 in the case of intra-rack transmission or to another rack 12 in the case of inter-rack transmission.
  • the programmable inter-rack transmit/receive unit(s) 84 and the programmable intra-rack transmit/receive unit(s) 86 also receive inter-rack and intra-rack traffic respectively, which is passed on, via the switch unit 82, to any one of the output interfaces 72, 74, 76.
  • the programmable inter-rack transmit/receive unit(s) 84 is connected for bidirectional communication with the reprogrammable optical communications protocol unit 88.
  • the reprogrammable optical communications protocol unit 88 is configured to select either an optical packet switched (OPS) protocol or an optical circuit switched protocol (OCS) for data transmission, in accordance with the configuration information received.
  • OPS groups data to be transmitted into packets, and is primarily used for short-lived packet flows and to allow for point to multi-point connectivity and statistical multiplexing, whereas OCS sets up a dedicated channel for data transmission between a transmitting unit and a receiving unit, which remains open until all of the data has been transmitted from the transmitting unit to the receiving unit. OCS is therefore primarily used for longer-lived high capacity data flows. Where OPS is used, a label must be applied to each data packet. For this reason, the reprogrammable inter-rack transmit/receive unit 84 is connected for bidirectional communication with the programmable label unit 90, which provides appropriate labelling of data packets.
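The protocol selection and packet labelling described above can be sketched in software. The duration threshold and the one-byte label format are illustrative assumptions, not values taken from the application:

```python
def select_protocol(expected_duration_ms: float, point_to_multipoint: bool) -> str:
    """Choose OPS for short-lived or point-to-multipoint flows and OCS for
    longer-lived, high-capacity flows. The 100 ms threshold is an
    illustrative assumption."""
    if point_to_multipoint or expected_duration_ms < 100:
        return "OPS"
    return "OCS"

def label_packet(payload: bytes, destination: int) -> bytes:
    """Prepend a label to an OPS data packet (label format is illustrative),
    as required before optical packet switched transmission."""
    return bytes([destination]) + payload

print(select_protocol(10, False))           # OPS: short-lived flow
print(select_protocol(5000, False))         # OCS: long-lived flow
print(label_packet(b"data", 3))             # b'\x03data'
```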
  • a user of the reconfigurable network interface system 50 sets the functional and performance requirements of the device 50, for example bit rate, connectivity type, quality of service and quality of transmission.
  • the controller 54 evaluates these requirements and identifies the programmable functional units 80, 82, 84, 86, 88, 90 and data interface units 72, 74, 76 required to fulfil the requirements.
  • Configuration information is then generated and transmitted to the programmable device 52. This configuration information is received via the control interface unit 70, and passed to the global look up table 78 associated with the control interface unit 70.
  • the global look up table 78 associated with the control interface unit 70 contains configuration data for each of the programmable functional units 80, 82, 84, 86, 88, 90 and the data interface units 72, 74, 76, relating particular settings of those units to particular functional requirements.
  • the configuration data contained in the global look up table 78 associated with the control interface unit 70 is compared to the configuration information received from the controller 54, and the appropriate settings for each of the programmable functional units 80, 82, 84, 86, 88, 90 and the data interface units 72, 74, 76 are retrieved.
  • Each of the programmable functional units 80, 82, 84, 86, 88, 90 and the data interface units 72, 74, 76 are programmed with the retrieved settings to implement the required functionality and performance.
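The configuration flow above, matching received requirements against the global look up table 78 and retrieving per-unit settings, can be sketched as follows. The table contents, keys and unit names are illustrative assumptions, not data from the application:

```python
# Illustrative model of the global look up table 78: it relates particular
# functional requirements to particular settings of the programmable
# functional units and data interface units.
GLOBAL_LUT = {
    ("10G", "inter-rack"): {"switch_82": "inter-rack", "interface": "DWDM_74"},
    ("10G", "intra-rack"): {"switch_82": "intra-rack", "interface": "VCSEL_76"},
}

def configure(requirements, lut=GLOBAL_LUT):
    """Compare received configuration information against the look up table
    and return the settings with which each unit should be programmed."""
    key = (requirements["bit_rate"], requirements["connectivity"])
    if key not in lut:
        raise ValueError(f"no configuration data for requirements {key}")
    return lut[key]

settings = configure({"bit_rate": "10G", "connectivity": "inter-rack"})
print(settings["interface"])  # DWDM_74
```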
  • the reconfigurable network interface system 50 can be configured or programmed to perform multiple different functions, by configuring different "slices" to implement different functionality, as will now be explained with reference to Figure 4.
  • a first "slice" 100 of the reconfigurable network interface system 50 is configured to implement an Ethernet node, using the Ethernet switch functional unit 524 and the optical multiplexer/demultiplexer unit 586.
  • a second "slice" 102 uses the elastic TDM switch unit 526, the aggregation unit 528, the QoT overhead unit 532, the switch control and multicasting units 562, 564, the optical multiplexer/demultiplexer unit 586, the optical multicasting unit 584, the amplifier unit 582 and the switch unit 588 to implement a point to multipoint node with a traffic capacity of 2 Gbps.
  • This second slice 102 is configured by appropriate configuration information from the controller 54 and isolated from the first slice 100.
  • a third "slice" 104 implements a multipoint to multipoint node with a traffic capacity of 3 Gbps, using the elastic TDM switch unit 526, the aggregation unit 528, TDM framing unit 530, the switch control and multicasting units 562, 564, the optical multiplexer/demultiplexer unit 586, the optical multicasting unit 584 and the switch unit 588.
  • this third slice 104 is configured by appropriate configuration information from the controller 54 and is isolated from the first and second slices 100, 102.
  • the first, second and third slices 100, 102, 104 are operative at different times, depending on the incoming traffic to the reconfigurable network interface system 50.
  • the programmable device 52 determines the type of data packet that has been received and determines the configuration required to process the particular type of data packet received.
  • the programmable device 52 consults the global look up table 78 associated with the control interface unit 70 to determine the settings required for the programmable functional units 80, 82, 84, 86, 88, 90 and data interface units 72, 74, 76 to permit the received data packet to be processed, and the controller 54 programs the appropriate ones of the programmable functional units 80, 82, 84, 86, 88, 90 and data interface units 72, 74, 76 to enable a path for the received data packet.
  • reconfigurable network interface system 50 can be altered dynamically to meet changing requirements simply by providing new configuration information.
  • the reconfigurable network interface system 50 provides exceptional flexibility in dealing with changing network performance requirements.
  • Whilst the reconfigurable network interface system 50 has been described above in the context of a network interface card used in conjunction with a server, it will be appreciated that the reconfigurable network interface system 50 can be configured to perform a wide variety of functions for use in different applications.
  • the reconfigurable network interface system 50 may be configured for use as an IP router, Ethernet switch, top-of-rack switch module, or any other network device.
  • FIGs 5 to 8 illustrate data centre network architectures that are enabled by the use of the reconfigurable network interface system 50 with blades in a rack.
  • the blades may be, for example, servers, CPU arrays, memory arrays, storage arrays or the like.
  • the exemplary architectures illustrated in Figures 5 - 8 are described below as housing servers, but it will be appreciated that other blades could equally be housed in the racks.
  • Figure 5 illustrates, schematically, the replacement of the electronic top-of-rack switch module 16 in the prior art architecture illustrated in Figure 1 with an optical top-of-rack switch.
  • Figures 6 to 8 illustrate, schematically, different types of optical top-of-rack switch, which enable different network architectures.
  • racks 112, 114 each house a plurality of servers 116.
  • the servers 116 in each rack 112, 114 are directly optically interconnected using the optical interfaces of the reconfigurable network interface units 50 installed in each server 116.
  • direct intra-rack communication between servers 116 in a rack 112, 114 is possible, which reduces the latency of such communications.
  • the racks 112, 114 are each provided with an optical top-of-rack switch module 118, which may use active or passive optical devices.
  • the optical top-of-rack switch modules 118 are directly optically connected to each other, to enable direct inter-rack communications.
  • the optical top-of-rack switch module 118 enables transparent switching and transport of data from any server 116 of any rack 112, 114 to any other server 116 housed in the same or a different rack 112, 114.
  • the architecture 120 illustrated in Figure 6 is similar to that shown in Figure 5.
  • the top-of-rack switch unit is implemented as an arrayed waveguide grating (AWG) top-of-rack switch unit 122. Traffic from each server 116 in the same rack 112, 114 can be simply groomed using the AWG top-of-rack switch unit 122 and transported to an optical switching node 124, e.g. an OPS or OCS node.
  • the optical switching implemented by the AWG top-of-rack switch unit 122 in the architecture 120 illustrated in Figure 6 provides limited connectivity, which can lead to latency in communications between racks 112, 114.
  • a rack 112 can only be connected to one other rack 114 at one time using OCS, and so traffic from the rack 112 intended for other destinations must therefore wait or be transmitted using OPS.
  • reconfiguring the top-of-rack switch 122 in the architecture 120 illustrated in Figure 6 requires significant control time.
  • Figure 7 is a schematic illustration of an improvement of the architecture 120 shown in Figure 6.
  • an NxN reconfigurable AWG (R-AWG) top-of-rack switch unit 132 is used in place of the AWG top-of-rack switch 122, and direct connections are established between racks 112, 114.
  • the R-AWG top-of-rack switch unit 132 is an NxN passive device which guides each input wavelength of a particular input port to a specific output port. Some output channels of the R-AWG top-of-rack switch 132 are connected to an optical switch node 134, whilst others of the output channels are used for direct interconnections between different racks 112, 114.
  • the input wavelength for each channel of the R-AWG top-of-rack switch unit 132 can be tuned according to the transport route to be followed. In this way, channels connected between racks 112, 114 or from racks 112, 114 to the optical switch node 134 are flexible and reconfigurable.
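The wavelength-dependent routing of an NxN AWG can be modelled with the standard cyclic routing rule, under which the output port is determined jointly by the input port and the wavelength index, so tuning the transmit wavelength selects the destination. This is an illustrative model of generic cyclic-AWG behaviour, not a routing table taken from the application:

```python
def awg_output_port(input_port: int, wavelength_index: int, n: int) -> int:
    """Cyclic routing rule of an NxN arrayed waveguide grating: each
    wavelength arriving on a given input port is mapped deterministically
    to one output port. The (input + wavelength) mod N rule is the
    standard cyclic-AWG model, used here for illustration."""
    return (input_port + wavelength_index) % n

N = 8
# Tuning the transmit wavelength on input port 2 selects the output port:
routes = [awg_output_port(2, w, N) for w in range(N)]
print(routes)  # [2, 3, 4, 5, 6, 7, 0, 1]
```

Because every wavelength on a port reaches a distinct output, one tunable transmitter per blade suffices to reach any rack or the optical switch node without reconfiguring the passive device itself.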
  • FIG 8 schematically illustrates an alternative way to improve the architecture illustrated in Figure 6.
  • the architecture 140 of Figure 8 is similar to the architecture 130 illustrated in Figure 7, with the exception that in the architecture 140 of Figure 8, 1xN or NxM wavelength selective switch (WSS) units 142 are used in place of the R-AWG top-of-rack switch units 132.
  • the WSS units 142 are able to assign any input wavelength(s) to any output port, which provides even further flexibility for intra-cluster architecture construction without having to use a tuneable interface in a reconfigurable network interface system 50 used in the servers 116. This arrangement can also allow for additional transparent intra-rack communication.
  • Figures 5 to 8 are merely examples of architectures that can be implemented using servers and top-of-rack switch modules that employ the reconfigurable network interface system 50 described above.
  • the reconfigurable nature of the reconfigurable network interface system 50 permits many alternative implementations, to accommodate any network configuration or performance requirements.
  • the choice of top-of-rack switch module used will depend on factors such as the bandwidth required, cost and power consumption constraints.
  • AWG technology is capable of supporting 50 to 100 GHz per port, and as such any optical signal that falls within the supported range can be accommodated.
  • WSS technology is more flexible and can accommodate any bandwidth, and so offers greater flexibility and bit rate scalability.
  • WSS technology uses active components, which increases cost and power consumption as compared to passive AWG technology.
  • the reconfigurable network interface system 50 described above permits significant improvements in efficiency, latency and energy consumption. Moreover, the reconfigurable nature of the device 50 permits fast, simple and seamless reconfiguration and upgrading to accommodate increased network performance demands, and allows for ever-evolvable systems that allow the deployment of functions where and when needed.
  • the software/hardware functional blocks can be deployed in existing and future technology solutions.
  • Whilst the reconfigurable network interface system 50 has been described above in the exemplary context of data and computer communications systems, it will be appreciated that the reconfigurable network interface system concept described is equally applicable to other applications and use cases. Thus, the reconfigurable network interface system concept may be applied in sectors such as wireless communications, satellite communications, automotive, robotics, high performance computing, health or any other sector that uses electronic and/or optical systems.

Abstract

The present application relates to a reconfigurable computing system (50) which may be used, for example, in network interface cards associated with servers in a data centre, the reconfigurable computing system comprising: a programmable electronic device (52) which includes a plurality of programmable functional units, wherein the plurality of programmable functional units comprises one or more programmable electrical functional units and one or more programmable optical functional units; and a controller (54), wherein the programmable electronic device is operative to program or reprogram at least one of the plurality of programmable functional units in accordance with configuration information received from the controller in order to implement desired functionality of the reconfigurable computing system.

Description

RECONFIGURABLE COMPUTING SYSTEM
Technical Field
The present application relates to a reconfigurable computing system, such as a reconfigurable network interface system.
Background to the Invention
A typical data centre architecture, as shown generally at 10 in Figure 1, comprises a plurality of racks 12, each housing a plurality of servers 14. Each of the plurality of racks 12 is provided with an electronic top-of-rack switch module 16, whilst each of the plurality of servers 14 includes a network interface card 18, by means of which the server 14 communicates with the top-of-rack switch module 16 associated with the rack 12 in which that particular server 14 is installed. The racks 12 are arranged into a plurality of clusters, with each rack 12 of a cluster being connected to one of a plurality of aggregation switches 20 that has been allocated to that cluster, via the top-of-rack switch module 16 associated with the rack 12. The aggregation switches 20 are connected to a core switch 22, which in turn connects to the internet. The more racks 12 the data centre has, the more tiers of aggregation switches 20 need to be employed.
One feature of this architecture is that there is no direct connection between the individual servers 14 in a rack 12. Instead, any communications between servers 14 in a rack 12 must pass through the electronic top-of-rack switch module 16 associated with the rack. The servers 14 may have optical data input and output ports, whereas the top-of-rack switch module 16 is an electronic device, and so communication between individual servers 14 in a rack 12 via the top-of-rack switch module 16 involves an optical to electrical signal conversion followed by an electrical to optical signal conversion. This leads to high inefficiency, high latency and high energy consumption in the data centre architecture illustrated in Figure 1. Moreover, electronic top-of-rack switch modules 16 are limited in that each port operates at a specific bit rate. This means that any time a server 14 or network interface card 18 is upgraded to increase its capacity, a corresponding upgrade must be made to the electronic top-of-rack switch module 16. As will be appreciated, upgrading the top-of-rack switch module 16 involves considerable cost and disruption, and is thus undesirable.
It is estimated that by 2014 more than 80 per cent of traffic in a data centre network will be between servers 14. In a typical rack 12 containing 40 servers 14, the top-of-rack switch module 16 is often implemented as a 48 port Ethernet or Infiniband® switch. Therefore, 83 per cent of the switching bandwidth of the top-of-rack switching module 16 is occupied by interconnecting servers 14 within the rack 12.
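The quoted figure follows directly from the port counts, assuming all switch ports operate at the same rate:

```python
# Worked check of the figure quoted above: with 40 servers behind a
# 48-port top-of-rack switch, the share of switch ports (and hence
# switching bandwidth, for equal-rate ports) occupied by intra-rack
# server interconnection is 40/48.
servers_per_rack = 40
switch_ports = 48
intra_rack_share = servers_per_rack / switch_ports
print(f"{intra_rack_share:.0%}")  # → 83%
```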
Therefore a need exists for an improved means of facilitating intra-rack server communications to reduce latency, improve efficiency and flexibility, as well as reducing power consumption within data centres.
Summary of Invention
According to a first aspect of the present invention there is provided a reconfigurable computing system comprising: a plurality of programmable functional units, wherein the plurality of programmable functional units comprises one or more programmable electrical functional units and one or more programmable optical functional units; and a controller, wherein the reconfigurable computing system is operative to program or reprogram at least one of the plurality of programmable functional units in accordance with configuration information received from the controller in order to implement desired functionality of the reconfigurable computing system.
The reconfigurable computing system of the first aspect can be used in a wide variety of applications, as it permits rapid, simple and seamless configuration and reconfiguration to accommodate the desired functionality. For example, where the reconfigurable computing system is used in a data centre context, for example as a network interface card associated with a server, significant improvements in efficiency, latency and energy consumption can be achieved, as the flexibility of the system permits improved interconnections between individual racks and servers in the data centre.
The reconfigurable computing system may further comprise a global look up table associated with the controller, the global look up table containing configuration data for each of the plurality of programmable functional units relating particular settings of each of the plurality of programmable functional units to particular functional requirements.
The use of a global look up table in this manner enhances the speed and ease of configuration and reconfiguration of the system, permitting dynamic reconfiguration and upgrading to meet changing requirements.
The reconfigurable computing system may further comprise a plurality of interfaces permitting communication between the reconfigurable computing system and other components.
The plurality of interfaces may include one or more optical data interfaces.
For example, the one or more optical data interfaces may comprise a dense wavelength division multiplexing (DWDM) data interface unit.
The plurality of interfaces may include one or more electrical data interfaces.
For example, the one or more electrical data interfaces may comprise a peripheral component interconnect express (PCIe) data interface.
The plurality of programmable functional units may comprise one or more functional units for implementing network functions.
The plurality of programmable functional units may comprise one or more functional units for implementing intra-network functions.
The controller may comprise a software controller running on a PC or on an embedded processor.
According to a second aspect of the invention, there is provided a network interface card comprising a reconfigurable computing system according to the first aspect.
In the network interface card of the second aspect, the plurality of functional units may include one or more units selected from the group consisting of: a traffic virtualisation unit; an inter/intra-rack switching unit; an inter rack transmit/receive unit; an intra-rack transmit/receive unit; an optical transport protocol switching unit; and a label interface.
According to a third aspect of the invention there is provided a network architecture comprising: a plurality of racks, each of the plurality of racks housing a plurality of blades and a top-of-rack switch unit, wherein: each of the plurality of blades is provided with a network interface card according to the second aspect; each of the plurality of blades in each rack is directly connected to each of the other blades in that rack by means of its network interface card; and each of the plurality of racks is directly connected to each of the other racks by means of its top-of-rack switch unit.
The top-of-rack switch unit may be an optical top-of-rack switch unit.
For example, the optical top-of-rack switch unit may be implemented as an arrayed waveguide grating (AWG) switch unit. Alternatively, the optical top-of-rack switch unit may be implemented as an NxN reconfigurable arrayed waveguide grating (R-AWG) switch unit.
Alternatively, the optical top-of-rack switch unit may be implemented as a programmable 1xN or NxM wavelength selective switch (WSS) unit.
The plurality of blades may comprise one or more blades selected from the group comprising: servers; CPU arrays; memory arrays; and storage arrays.
Brief Description of the Drawings
Embodiments of the invention will now be described, strictly by way of example only, with reference to the accompanying drawings, of which:
Figure 1 is a schematic representation of a prior art data centre architecture;
Figure 2 is a schematic functional block diagram of a reconfigurable optical and electronic network interface system;
Figure 3 is a schematic functional block diagram of a programmable device used in the reconfigurable network interface system of Figure 2;
Figure 4 is a schematic representation illustrating functionality of the reconfigurable network interface system of Figure 2;
Figure 5 is a schematic representation showing a data centre architecture that can be implemented using the reconfigurable network interface system of Figure 2;
Figure 6 is a schematic representation showing an alternative data centre architecture that can be implemented using the reconfigurable network interface system of Figure 2;
Figure 7 is a schematic representation showing an alternative data centre architecture that can be implemented using the reconfigurable network interface system of Figure 2;
Figure 8 is a further schematic representation showing an alternative data centre architecture that can be implemented using the reconfigurable network interface system of Figure 2.
Description of the Embodiments
Referring now to Figure 2, a reconfigurable computing system is shown generally at 50. In this example, the reconfigurable computing system is arranged as a reconfigurable network interface system. The reconfigurable network interface system 50 may be used, for example, in place of the top-of-rack switch module 16, and/or to implement an improved network interface card 18 in the prior art data centre architecture 10 shown in Figure 1, as will be explained in more detail below.
The reconfigurable network interface system 50 is based around a programmable electronic device 52 which may be, for example, a field programmable gate array (FPGA), programmable logic controller (PLC) or System on Chip (SoC). The programmable electronic device 52 includes a plurality of functional units. In the exemplary device 52 illustrated in Figure 2, the plurality of functional units are configured to implement network functions such as time division multiplexing (TDM) or Ethernet over wavelength division multiplexing (WDM) transport and switching, in accordance with configuration instructions received from a device controller 54. These functional units may include, for example, an Ethernet/TDM (time division multiplexing) function unit 522, an Ethernet switch unit 524 and an elastic TDM switch 526. However, it will be appreciated that additional and/or alternative functional units may be provided, to enable any desired system to be synthesised by the reconfigurable network interface system 50. The programmable electronic device 52 also includes a plurality of functional units for implementing intra-network functions, such as frame and time-slice lengths, aggregation mechanisms, overhead management for achieving quality of transmission (QoT) requirements and the like, again in accordance with configuration instructions received from the device controller 54. These functional units may include, for example, an aggregation unit 528, a TDM framing unit 530 and a quality of transmission (QoT) overhead unit 532. Additional functional units may be incorporated to support any type of signal or data processing function, as well as algorithm calculations, to support any layer of the network, or to aid highly reconfigurable computation. This can be complemented by optical processing functions provided by an optical backplane 57 and a reconfigurable optical system 58, to deliver programmable opto-electronic functions.
The device controller 54 itself includes a plurality of functional units such as a resource controller 542, a control and communications interface 544 and a node synchronisation unit 546. The device controller 54 may be, for example, a dedicated hardware controller. Alternatively, the device controller 54 may be a software controller running on a PC or on an embedded processor. The reconfigurable computing system 50 may therefore be regarded as being software defined, since, as will be explained in more detail below, the device controller 54 defines the functionality of the reconfigurable computing system 50. The programmable device 52 communicates with a driver sub-system 56 which controls the reconfigurable optical system 58 via the optical backplane 57. The driver sub-system 56 contains functional units such as a switch control unit 562 and a multicasting unit 564. The programmable device 52 not only controls the reconfigurable optical system 58, but also transports data through a set of transceivers to and from the reconfigurable optical system 58. The reconfigurable optical system 58 includes functional units such as amplifiers 582, multicasting units 584, multiplexers/demultiplexers 586, switches 588 and the like which are used to implement desired optical functionality of the reconfigurable network interface system 50. The interconnection of the programmable device 52 with the reconfigurable optical system 58 permits interconnection of electronic functions through the optical backplane 57, as well as optical processing if necessary.
Figure 3 is a schematic block diagram showing exemplary functional blocks of the programmable device 52. Figure 3 shows particular exemplary functional blocks, to aid understanding of the programmable device 52, but it is to be understood that the programmable device 52 may include additional and/or alternative functional blocks, to permit any desired functionality to be implemented.
In the example illustrated in Figure 3, the programmable device 52 includes a control interface unit 70, to permit bidirectional communication between the programmable device 52 and the device controller 54 for exchanging control information between the programmable device 52 and the device controller 54.
The programmable device 52 also includes an electrical data interface, which in the example illustrated in Figure 3 is a peripheral component interconnect express (PCIe) data interface unit 72, to allow the programmable device 52 to communicate with a server 14, by transmitting and receiving Internet Protocol (IP) data packets to/from the server 14 via a PCIe interface of the server 14. It will be appreciated that additional and/or alternative electrical data interfaces may be provided in the programmable device 52, depending upon the interface functionality required.
Similarly, the programmable device 52 includes an optical data interface, which in the example shown is a dense wavelength division multiplexing (DWDM) data interface unit 74 supporting ten DWDM CFP channels, with a capacity of 100 Gbps, to permit the device 52 to communicate using a DWDM optical communications protocol, e.g. with other racks 12 (inter-rack communications) through a top-of-rack switch module 16. The programmable device 52 also includes further optical data interface units 76, which in this example are vertical cavity surface emitting laser (VCSEL) interfaces supporting 12 channels, each having a capacity of 10 Gbps, to permit intra-rack communications between servers 14 housed in a single rack 12. Again, it is to be understood that additional and/or alternative interface units may be provided, to permit implementation of any desired functionality.
The data interface units 72, 74, 76 are each associated with a respective OpenFlow® look up table 73, 75, 77, which enable the well-known OpenFlow® functionality in their associated data interface units 72, 74, 76. Additionally, the control interface unit 70 is associated with a global look up table 78.
The programmable device 52 in this example includes a programmable traffic virtualisation unit 80, a programmable inter/intra-rack switch unit 82, one or more programmable inter-rack transmit/receive units 84, one or more programmable intra-rack transmit/receive units 86, a programmable optical communications protocol unit 88 and a programmable label unit 90. Each of these programmable functional units 80, 82, 84, 86, 88, 90 is connected to the control interface unit 70, to permit programming (configuration) and reprogramming (reconfiguration) of the units in accordance with configuration information received via the control interface unit 70.
The programmable traffic virtualisation unit 80 is configured to perform traffic virtualisation operations that are appropriate to the desired configuration of the reconfigurable network interface system 50, as defined by the configuration information received via the control interface 70. The traffic virtualisation unit 80 is configured to sort or isolate traffic data packets incoming from a server 14 based on the nature of the traffic. For example, the traffic virtualisation unit 80 may sort traffic based on a virtual machine ID associated with a virtual machine from which the traffic originated, or based on the destination IP address of the traffic, or based on other parameters such as a virtual local area network (VLAN) from which the traffic originated. The sorted traffic can then be associated by the traffic virtualisation unit 80 with the relevant functional blocks 82, 84, 86, 88.
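The sorting behaviour of the traffic virtualisation unit described above can be sketched as follows. This is an illustrative model only: the packet field names, the sort keys and the grouping logic are assumptions made for illustration and are not specified in the present application.

```python
# Illustrative sketch of the traffic virtualisation step: incoming packets
# are sorted into isolated flows by virtual machine ID, destination IP
# address or VLAN, so each flow can be handed to the relevant functional
# block. All field names here are hypothetical.

def classify(packet, sort_key="vm_id"):
    """Return the flow identifier used to group this packet."""
    if sort_key == "vm_id":
        return ("vm", packet["vm_id"])
    if sort_key == "dest_ip":
        return ("ip", packet["dest_ip"])
    if sort_key == "vlan":
        return ("vlan", packet["vlan"])
    raise ValueError(f"unknown sort key: {sort_key}")

def virtualise(packets, sort_key="vm_id"):
    """Group packets into isolated flows, in the manner the traffic
    virtualisation unit 80 sorts traffic before dispatching it."""
    flows = {}
    for p in packets:
        flows.setdefault(classify(p, sort_key), []).append(p)
    return flows
```

The same packet stream yields different flow groupings depending on which sort key the configuration information selects.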
The programmable traffic virtualisation unit 80 is connected for bidirectional communication with the programmable inter/intra-rack switch 82. The programmable inter/intra-rack switch 82 is configured to switch between inter-rack traffic (i.e. transmission of data between racks 12 in a cluster or a data centre) and intra-rack traffic (i.e. transmission of data between servers 14 within a rack 12) in accordance with the configuration information received via the control interface 70. Thus, if the configuration information indicates that the reconfigurable network device should be configured for inter-rack traffic, the programmable inter/intra-rack switch 82 is set to select inter-rack traffic. Conversely, if the configuration information indicates that the reconfigurable network device should be configured for intra-rack traffic, the programmable inter/intra-rack switch 82 is set to select intra-rack traffic.
The programmable inter/intra-rack switch 82 is connected for bidirectional communication with both the programmable inter-rack transmit/receive unit(s) 84 and the programmable intra-rack transmit/receive unit(s) 86, to pass virtualised traffic to the appropriate transmit/receive unit 84, 86 for transmission either to another server 14 within the same rack 12 in the case of intra-rack transmission or to another rack 12 in the case of inter-rack transmission. The programmable inter-rack transmit/receive unit(s) 84 and the programmable intra-rack transmit/receive unit(s) 86 also receive inter-rack and intra-rack traffic respectively, which is passed on, via the switch unit 82, to any one of the output interfaces 72, 74, 76, e.g. through the reprogrammable optical communications protocol unit 88 to the inter-rack communications interface 74, or to the intra-rack communications interface 76, or, via the traffic virtualisation unit 80, to the server through the PCIe data interface unit 72. The reprogrammable inter-rack transmit/receive unit 84 is connected for bidirectional communication with the reprogrammable optical communications protocol unit 88. The reprogrammable optical communications protocol unit 88 is configured to select either an optical packet switched (OPS) protocol or an optical circuit switched (OCS) protocol for data transmission, in accordance with the configuration information received. OPS groups data to be transmitted into packets, and is primarily used for short-lived packet flows and to allow for point to multi-point connectivity and statistical multiplexing, whereas OCS sets up a dedicated channel for data transmission between a transmitting unit and a receiving unit, which remains open until all of the data has been transmitted from the transmitting unit to the receiving unit. OCS is therefore primarily used for longer-lived, high capacity data flows. Where OPS is used, a label must be applied to each data packet.
For this reason, the reprogrammable inter-rack transmit/receive unit 84 is connected for bidirectional communication with the programmable label unit 90, which provides appropriate labelling of data packets.
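The protocol selection and labelling described above can be sketched as follows. The flow-size threshold and the label format are purely illustrative assumptions; the present application does not prescribe either.

```python
# Illustrative sketch of the optical communications protocol unit 88 and
# label unit 90: short-lived flows go out as labelled OPS packets, while
# long-lived, high capacity flows are streamed over a dedicated OCS
# channel. The cut-over threshold and record layout are assumptions.

OPS_FLOW_LIMIT_BYTES = 10_000_000  # assumed OPS/OCS cut-over point

def select_protocol(expected_flow_bytes):
    """Choose OPS for short-lived flows, OCS for longer-lived ones."""
    return "OPS" if expected_flow_bytes < OPS_FLOW_LIMIT_BYTES else "OCS"

def emit(flow_id, payload_packets, expected_flow_bytes):
    """Prepare a flow for transmission under the selected protocol."""
    if select_protocol(expected_flow_bytes) == "OPS":
        # OPS: every packet carries a label identifying its flow.
        return [{"label": flow_id, "payload": p} for p in payload_packets]
    # OCS: set up one dedicated channel and stream the packets unlabelled;
    # the channel stays open until the whole flow has been sent.
    return [{"circuit": flow_id, "payload": payload_packets}]
```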
During initial configuration of the reconfigurable network interface system 50, a user of the reconfigurable network interface system 50 (e.g. an operator of a data centre or a service provider) sets the functional and performance requirements of the device 50, for example bit rate, connectivity type, quality of service and quality of transmission. The controller 54 evaluates these requirements and identifies the programmable functional units 80, 82, 84, 86, 88, 90 and data interface units 72, 74, 76 required to fulfil the requirements. Configuration information is then generated and transmitted to the programmable device 52. This configuration information is received via the control interface unit 70, and passed to the global look up table 78 associated with the control interface unit 70.
The global look up table 78 associated with the control interface unit 70 contains configuration data for each of the programmable functional units 80, 82, 84, 86, 88, 90 and the data interface units 72, 74, 76, relating particular settings of those units to particular functional requirements. Thus, the configuration data contained in the global look up table 78 associated with the control interface unit 70 is compared to the configuration information received from the controller 54, and the appropriate settings for each of the programmable functional units 80, 82, 84, 86, 88, 90 and the data interface units 72, 74, 76 are retrieved. Each of the programmable functional units 80, 82, 84, 86, 88, 90 and the data interface units 72, 74, 76 are programmed with the retrieved settings to implement the required functionality and performance.
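The lookup-driven configuration step described above can be sketched as follows. The table contents, requirement keys and unit settings shown are hypothetical examples; only the mechanism (match requirements against stored settings, then program each unit) is taken from the description.

```python
# Illustrative sketch of configuration via the global look up table 78:
# functional requirements received from the controller 54 are matched
# against stored per-unit settings, which are then pushed to each
# programmable unit. Keys and settings here are hypothetical.

GLOBAL_LUT = {
    ("inter-rack", "OPS"): {"switch_82": "inter", "protocol_88": "OPS"},
    ("inter-rack", "OCS"): {"switch_82": "inter", "protocol_88": "OCS"},
    ("intra-rack", None):  {"switch_82": "intra", "protocol_88": None},
}

def configure(requirements, program_unit):
    """Retrieve the settings matching the controller's requirements and
    program each functional unit; `program_unit` applies one setting."""
    key = (requirements["scope"], requirements.get("protocol"))
    settings = GLOBAL_LUT[key]
    for unit, value in settings.items():
        program_unit(unit, value)
    return settings
```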
The reconfigurable network interface system 50 can be configured or programmed to perform multiple different functions, by configuring different "slices" to implement different functionality, as will now be explained with reference to Figure 4.
As can be seen from Figure 4, a first "slice" 100 of the reconfigurable network interface system 50 is configured to implement an Ethernet node, using the Ethernet switch functional unit 524 and the optical multiplexer/demultiplexer unit 586.
A second "slice" 102 uses the elastic TDM switch unit 526, the aggregation unit 528, the QoT overhead unit 532, the switch control and multicasting units 562, 564, the optical multiplexer/demultiplexer unit 586, the optical multicasting unit 584, the amplifier unit 582 and the switch unit 588 to implement a point to multipoint node with a traffic capacity of 2 Gbps. This second slice 102 is configured by appropriate configuration information from the controller 54 and isolated from the first slice 100.
A third "slice" 104 implements a multipoint to multipoint node with a traffic capacity of 3 Gbps, using the elastic TDM switch unit 526, the aggregation unit 528, TDM framing unit 530, the switch control and multicasting units 562, 564, the optical multiplexer/demultiplexer unit 586, the optical multicasting unit 584 and the switch unit 588. Again, this third slice 104 is configured by appropriate configuration information from the controller 54 and is isolated from the first and second slices 100, 102.
The first, second and third slices 100, 102, 104 are operative at different times, depending on the incoming traffic to the reconfigurable network interface system 50.
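The slicing arrangement of Figure 4 can be sketched as a mapping from slices to the functional units they reserve. The unit reference numerals follow the description above; representing a slice as a set of unit numbers is an illustrative modelling choice, not something the application prescribes.

```python
# Illustrative sketch of "slicing": each slice is an isolated subset of
# the shared functional units, and the slices are operative at different
# times depending on the incoming traffic. Definitions mirror Figure 4.

SLICES = {
    "ethernet_node": {
        "units": {524, 586}, "capacity_gbps": None},
    "pt_to_multipoint": {
        "units": {526, 528, 532, 562, 564, 582, 584, 586, 588},
        "capacity_gbps": 2},
    "multipt_to_multipt": {
        "units": {526, 528, 530, 562, 564, 584, 586, 588},
        "capacity_gbps": 3},
}

def shared_units(a, b):
    """Units used by both slices; these are time-shared, since the
    slices are active at different times rather than concurrently."""
    return SLICES[a]["units"] & SLICES[b]["units"]
```

For example, the optical multiplexer/demultiplexer unit 586 appears in all three slices and is therefore time-shared between them.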
In use of the reconfigurable network interface system 50, when one or more data packets are received at the programmable device 52 through one of the data interface units 72, 74, 76, the programmable device 52 determines the type of data packet that has been received and determines the configuration required to process the particular type of data packet received. The programmable device 52 consults the global look up table 78 associated with the control interface unit 70 to determine the settings required for the programmable functional units 80, 82, 84, 86, 88, 90 and data interface units 72, 74, 76 to permit the received data packet to be processed, and the controller 54 programs the appropriate ones of the programmable functional units 80, 82, 84, 86, 88, 90 and data interface units 72, 74, 76 to enable a path for the received data packet.
It will be appreciated that the functionality implemented by the reconfigurable network interface system 50, or by any of multiple slices of the system 50, can be altered dynamically to meet changing requirements simply by providing new configuration information. Thus, the reconfigurable network interface system 50 provides exceptional flexibility in dealing with changing network performance requirements.
Whilst the reconfigurable network interface system 50 has been described above in the context of a network interface card used in conjunction with a server, it will be appreciated that the reconfigurable network interface system 50 can be configured to perform a wide variety of functions for use in different applications. For example, the reconfigurable network interface system 50 may be configured for use as an IP router, Ethernet switch, top-of-rack switch module, or any other network device.
Figures 5 to 8 illustrate data centre network architectures that are enabled by the use of the reconfigurable network interface system 50 with blades in a rack. The blades may be, for example, servers, CPU arrays, memory arrays, storage arrays or the like. For simplicity the exemplary architectures illustrated in Figures 5 - 8 are described below as housing servers, but it will be appreciated that other blades could equally be housed in the racks.
Figure 5 illustrates, schematically, the replacement of the electronic top-of-rack switch module 16 in the prior art architecture illustrated in Figure 1 with an optical top-of-rack switch, whilst Figures 6-8 illustrate, schematically, different types of optical top-of-rack switch, which enable different network architectures.
In the architecture 110 illustrated in Figure 5, racks 112, 114 each house a plurality of servers 116. The servers 116 in each rack 112, 114 are directly optically interconnected using the optical interfaces of the reconfigurable network interface units 50 installed in each server 116. Thus, direct intra-rack communication between servers 116 in a rack 112, 114 is possible, which reduces the latency of such communications. The racks 112, 114 are each provided with an optical top-of-rack switch module 118, which may use active or passive optical devices. The optical top-of-rack switch modules 118 are directly optically connected to each other, to enable direct inter-rack communications. The optical top-of-rack switch module 118 enables transparent switching and transport of data from any server 116 of any rack 112, 114 to any other server 116 housed in the same or a different rack 112, 114.
The architecture 120 illustrated in Figure 6 is similar to that shown in Figure 5. In the architecture 120, the top-of-rack switch unit is implemented as an arrayed waveguide grating (AWG) top-of-rack switch unit 122. Traffic from each server 116 in the same rack 112, 114 can be simply groomed using the AWG top-of-rack switch unit 122 and transported to an optical switching node 124, e.g. an OPS or OCS node. Once an optical path between two different racks 112, 114 is established by appropriate optical switching in the top-of-rack switch unit 122, communication between servers 116 in different racks 112, 114 is implemented by multiplexing/demultiplexing channels for each server 116.
The optical switching implemented by the AWG top-of rack switch unit 122 in the architecture 120 illustrated in Figure 6 provides limited connectivity, which can lead to latency in communications between racks 112, 114. For example, a rack 112 can only be connected to one other rack 114 at one time using OCS, and so traffic from the rack 112 intended for other destinations must therefore wait or be transmitted using OPS. Additionally, reconfiguring the top-of-rack switch 122 in the architecture 120 illustrated in Figure 6 requires significant control time.
Figure 7 is a schematic illustration of an improvement of the architecture 120 shown in Figure 6. In the architecture 130 of Figure 7, an NxN reconfigurable AWG (R-AWG) top-of-rack switch unit 132 is used in place of the AWG top-of-rack switch 122, and direct connections are established between racks 112, 114.
The R-AWG top-of-rack switch unit 132 is an NxN passive device which guides each input wavelength of a particular input port to a specific output port. Some output channels of the R-AWG top-of-rack switch 132 are connected to an optical switch node 134, whilst others of the output channels are used for direct interconnections between different racks 112, 114. By introducing a tuneable interface in the reconfigurable network interface unit 50 used in the servers 116 in the architecture 130, the input wavelength for each channel can be tuned according to the transport route to be followed. In this way, channels connected between racks 112, 114 or from racks 112, 114 to the optical switch node 134 are flexible and reconfigurable.
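The wavelength routing behaviour of an NxN AWG device can be sketched as follows. A common AWG property is cyclic routing, in which wavelength index w entering input port i exits output port (i + w) mod N; this rule is assumed here for illustration, as the present application does not spell out the exact port-to-port mapping of the unit 132.

```python
# Sketch of wavelength routing through an NxN (R-)AWG top-of-rack switch:
# a cyclic AWG sends wavelength index w on input port i to output port
# (i + w) mod N, so a tuneable transmitter selects its destination rack
# simply by retuning. The cyclic rule is an assumed, commonly used model.

def awg_output_port(input_port, wavelength_index, n_ports):
    """Output port reached by `wavelength_index` entering `input_port`."""
    return (input_port + wavelength_index) % n_ports

def wavelength_for(input_port, target_port, n_ports):
    """Wavelength a tuneable interface must select to reach `target_port`."""
    return (target_port - input_port) % n_ports
```

Under this model, the N wavelengths available at any one input port reach N distinct output ports, which is why retuning the interface suffices to redirect a channel between racks or towards the optical switch node.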
Figure 8 schematically illustrates an alternative way to improve the architecture illustrated in Figure 6. The architecture 140 of Figure 8 is similar to the architecture 130 illustrated in Figure 7, with the exception that in the architecture 140 of Figure 8, 1xN or NxM wavelength selective switch (WSS) units 142 are used in place of the R-AWG top-of-rack switch units 132. The WSS units 142 are able to assign any input wavelength(s) to any output port, which provides even further flexibility for intra-cluster architecture construction without having to use a tuneable interface in a reconfigurable network interface system 50 used in the servers 116. This arrangement can also allow for additional transparent intra-rack communication.
It is to be understood that the network architectures illustrated in Figures 5 to 8 are merely examples of architectures that can be implemented using servers and top-of-rack switch modules that employ the reconfigurable network interface system 50 described above. Those skilled in the art will recognise that the reconfigurable nature of the reconfigurable network interface system 50 permits many alternative implementations, to accommodate any network configuration or performance requirements. The choice of top-of-rack switch module used will depend on factors such as the bandwidth required, cost and power consumption constraints. For example, AWG technology is capable of supporting 50 to 100 GHz per port, and as such any optical signal that falls within the supported range can be accommodated. WSS technology is more flexible and can accommodate any bandwidth, and so offers greater flexibility and bit rate scalability. However, WSS technology uses active components, which increases cost and power consumption as compared to passive AWG technology.
It will be appreciated from the foregoing description that the reconfigurable network interface system 50 described above permits significant improvements in efficiency, latency and energy consumption. Moreover, the reconfigurable nature of the device 50 permits fast, simple and seamless reconfiguration and upgrading to accommodate increased network performance demands, and allows for ever-evolvable systems that allow the deployment of functions where and when needed. The software/hardware functional blocks can be deployed in existing and future technology solutions.
Although the reconfigurable network interface system 50 has been described above in the exemplary context of data and computer communications systems, it will be appreciated that the reconfigurable network interface system concept described is equally applicable to other applications and use cases. Thus, reconfigurable network interface system concept may be applied in sectors such as wireless communications, satellite communications, automotive, robotics, high performance computing, health or any other sector that uses electronic and/or optical systems.

Claims

1. A reconfigurable computing system comprising:
a plurality of programmable functional units, wherein the plurality of programmable functional units comprises one or more programmable electrical functional units and one or more programmable optical functional units; and
a controller, wherein
the reconfigurable computing system is operative to program or reprogram at least one of the plurality of programmable functional units in accordance with configuration information received from the controller in order to implement desired functionality of the reconfigurable computing system.
2. A reconfigurable computing system according to claim 1, further comprising a global look up table associated with the controller, the global look up table containing configuration data for each of the plurality of programmable functional units relating particular settings of each of the plurality of programmable functional units to particular functional requirements.
3. A reconfigurable computing system according to claim 1 or claim 2 further comprising a plurality of interfaces permitting communication between the reconfigurable computing system and other components.
4. A reconfigurable computing system according to claim 3 wherein the plurality of interfaces includes one or more optical data interfaces.
5. A reconfigurable computing system according to claim 4 wherein the one or more optical data interfaces comprises a dense wavelength division multiplexing (DWDM) data interface unit.
6. A reconfigurable computing system according to any one of claims 3 to 5 wherein the plurality of interfaces includes one or more electrical data interfaces.
7. A reconfigurable computing system according to claim 6 wherein the one or more electrical data interfaces comprises a peripheral component interconnect express (PCIe) data interface.
8. A reconfigurable computing system according to any one of the preceding claims wherein the plurality of programmable functional units comprises one or more functional units for implementing network functions.
9. A reconfigurable computing system according to any one of the preceding claims wherein the plurality of programmable functional units comprises one or more functional units for implementing intra-network functions.
10. A reconfigurable computing system according to any one of the preceding claims wherein the controller comprises a software controller running on a PC or on an embedded processor.
11. A network interface card comprising a reconfigurable computing system according to any one of the preceding claims.
12. A network interface card according to claim 11 wherein the plurality of functional units includes one or more units selected from the group consisting of: a traffic virtualisation unit; an inter/intra-rack switching unit; an inter rack transmit/receive unit; an intra-rack transmit/receive unit; an optical transport protocol switching unit; and a label interface.
13. A network architecture comprising: a plurality of racks, each of the plurality of racks housing a plurality of blades and a top-of-rack switch unit, wherein:
each of the plurality of blades is provided with a network interface card according to claim 11 or claim 12;
each of the plurality of blades in each rack is directly connected to each of the other blades in that rack by means of its network interface card; and
each of the plurality of racks is directly connected to each of the other racks by means of its top-of-rack switch unit.
14. A network architecture according to claim 13 wherein the top-of-rack switch unit is an optical top-of-rack switch unit.
15. A network architecture according to claim 14 wherein the optical top-of-rack switch unit is implemented as an arrayed waveguide grating (AWG) switch unit.
16. A network architecture according to claim 14 wherein the optical top-of-rack switch unit is implemented as an NxN reconfigurable arrayed waveguide grating (R-AWG) switch unit.
17. A network architecture according to claim 14 wherein the optical top-of-rack switch unit is implemented as a programmable 1xN or NxM wavelength selective switch (WSS) unit.
18. A network architecture according to any one of claims 13 to 17 wherein the plurality of blades comprises one or more blades selected from the group comprising the following: servers; CPU arrays; memory arrays; and storage arrays.
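The controller-and-look-up-table arrangement of claims 1 and 2 can be illustrated with a minimal sketch. All class names, unit names and the table contents below are hypothetical, invented for illustration only; the patent does not prescribe any particular data structure or settings.

```python
# Hypothetical global look-up table (claim 2): it relates particular
# functional requirements to particular settings of each programmable
# functional unit. The requirements and settings shown are invented.
GLOBAL_LOOKUP_TABLE = {
    "intra_rack_switching": {
        "electrical_unit_0": {"mode": "packet", "port_map": [0, 1, 2, 3]},
        "optical_unit_0": {"wavelength_nm": 1550.12},
    },
    "inter_rack_transmit": {
        "optical_unit_0": {"wavelength_nm": 1551.72},
    },
}

class ProgrammableUnit:
    """Stand-in for an electrical or optical programmable functional unit."""
    def __init__(self):
        self.settings = None

    def program(self, settings):
        # Programming or reprogramming replaces the unit's configuration.
        self.settings = dict(settings)

class Controller:
    """Sketch of the controller of claim 1, backed by the global table."""
    def __init__(self, table):
        self.table = table

    def configure(self, requirement, units):
        # Push the settings for the named functional requirement to each
        # programmable functional unit mentioned in the table.
        for unit_name, settings in self.table[requirement].items():
            units[unit_name].program(settings)

units = {
    "electrical_unit_0": ProgrammableUnit(),
    "optical_unit_0": ProgrammableUnit(),
}
Controller(GLOBAL_LOOKUP_TABLE).configure("intra_rack_switching", units)
```

After `configure` runs, each named unit holds the settings the table associates with the requested functional requirement; reprogramming for a different requirement (e.g. `"inter_rack_transmit"`) simply overwrites the relevant units' settings.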
PCT/GB2015/050070 2014-01-15 2015-01-15 Reconfigurable computing system WO2015107344A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1400666.2 2014-01-15
GBGB1400666.2A GB201400666D0 (en) 2014-01-15 2014-01-15 Reconfigurable network interface system

Publications (1)

Publication Number Publication Date
WO2015107344A1 true WO2015107344A1 (en) 2015-07-23

Family

ID=50238991

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2015/050070 WO2015107344A1 (en) 2014-01-15 2015-01-15 Reconfigurable computing system

Country Status (2)

Country Link
GB (1) GB201400666D0 (en)
WO (1) WO2015107344A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023062914A1 (en) * 2021-10-12 2023-04-20 日本電信電話株式会社 Optical communication device, optical communication system, and transfer method
WO2023062915A1 (en) * 2021-10-12 2023-04-20 日本電信電話株式会社 Optical communication device, optical communication system, and transfer method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130235870A1 (en) * 2010-05-03 2013-09-12 Sunay Tripathi Methods, Systems, and Fabrics Implementing a Distributed Network Operating System
US20130329548A1 (en) * 2012-06-06 2013-12-12 Harshad Bhaskar Nakil Re-routing network traffic after link failure

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHANNEGOWDA MAYUR ET AL: "Software-defined optical networks technology and infrastructure: Enabling software-defined optical network operations [invited]", IEEE/OSA JOURNAL OF OPTICAL COMMUNICATIONS AND NETWORKING, IEEE, USA, vol. 5, no. 10, 1 October 2013 (2013-10-01), XP011531156, ISSN: 1943-0620, [retrieved on 20131022], DOI: 10.1364/JOCN.5.00A274 *
OHLEN PETER ET AL: "Software-defined networking in a multi-purpose DWDM-centric metro/aggregation network", 2013 IEEE GLOBECOM WORKSHOPS (GC WKSHPS), IEEE, 9 December 2013 (2013-12-09), pages 1233 - 1238, XP032599999, DOI: 10.1109/GLOCOMW.2013.6825162 *
SIQUEIRA MARCOS ET AL: "An optical SDN Controller for Transport Network virtualization and autonomic operation", 2013 IEEE GLOBECOM WORKSHOPS (GC WKSHPS), IEEE, 9 December 2013 (2013-12-09), pages 1198 - 1203, XP032599848, DOI: 10.1109/GLOCOMW.2013.6825156 *

Also Published As

Publication number Publication date
GB201400666D0 (en) 2014-03-05

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15700779; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 15700779; Country of ref document: EP; Kind code of ref document: A1)