WO2012143942A2 - Network interface sharing in multi host computing systems - Google Patents

Network interface sharing in multi host computing systems

Info

Publication number
WO2012143942A2
Authority
WO
WIPO (PCT)
Prior art keywords
host
network interface
interface controller
network
hosts
Prior art date
Application number
PCT/IN2012/000272
Other languages
French (fr)
Other versions
WO2012143942A3 (en)
Inventor
Balaji Kanigicherla
Siva Raghuram VOLETI
Krishna Mohan Tandaboina
Suman KOPPARAPU
Sarveshwar BANDI
Kapil Hali
Original Assignee
Ineda Systems Pvt. Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ineda Systems Pvt. Ltd filed Critical Ineda Systems Pvt. Ltd
Publication of WO2012143942A2 publication Critical patent/WO2012143942A2/en
Publication of WO2012143942A3 publication Critical patent/WO2012143942A3/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L61/00 Network arrangements, protocols or services for addressing or naming
    • H04L61/09 Mapping addresses
    • H04L61/25 Mapping addresses of the same type
    • H04L61/2503 Translation of Internet protocol [IP] addresses
    • H04L61/2521 Translation architectures other than single NAT servers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L61/00 Network arrangements, protocols or services for addressing or naming
    • H04L61/50 Address allocation
    • H04L61/5007 Internet protocol [IP] addresses
    • H04L61/5014 Internet protocol [IP] addresses using dynamic host configuration protocol [DHCP] or bootstrap protocol [BOOTP]

Abstract

The present subject matter discloses methods and systems of network interface sharing in a multi-host computing system (100) running multiple operating systems. In one embodiment, the multi-host computing system (100) comprises a network interface controller (118) configured to provide access to at least one communication network to a plurality of hosts of the multi-host computing system (100); a device interconnect logic (DIL) unit (120), communicatively coupled to the network interface controller (118), wherein the DIL unit (120) is configured to determine a first host, from amongst the plurality of hosts, wherein the first host controls the network interface controller (118); and a peripheral component interconnect express (PCIe)-to-PCIe network redirection engine (134) configured to transmit data between the plurality of hosts, based in part on control signals generated by the first host.

Description

NETWORK INTERFACE SHARING IN MULTI HOST COMPUTING SYSTEMS
FIELD OF INVENTION
[0001] The present subject matter relates to network interfaces and, particularly but not exclusively, to network sharing in multi host computing systems running multiple operating systems.
BACKGROUND
[0002] Computing systems, such as laptops, netbooks, workstations, etc., usually have one or more network interface controllers, such as an ethernet interface controller, a wireless interface controller, etc., to connect to various other computing systems and/or networks such as a Local Area Network (LAN), a Wide Area Network (WAN), the internet, etc. Conventionally, computing systems include an ethernet interface controller to enable them to communicate with wired networks, whereas some computing systems include a wireless interface controller to connect to a wireless network.
[0003] Conventionally, network interface controllers implement the electronic circuitry required to communicate using a specific physical layer and data link layer standard, such as ethernet, Wi-Fi, or Token Ring, thus facilitating communication among computing systems on the same LAN as well as large-scale network communications through routable protocols, such as the internet protocol (IP).
[0004] Some network interface controllers can behave both as an Open Systems Interconnection (OSI) layer 1 (physical layer) and layer 2 (data link layer) device, as they provide physical access to a networking medium as well as a low-level addressing system through the use of Media Access Control (MAC) addresses.
[0005] Every network interface controller has a unique serial number called the media access control (MAC) address, which is stored in read-only memory carried on the interface controller. Every computing device on a network is identified by the unique MAC address of its network interface controller. Network interface controller vendors purchase blocks of addresses from the Institute of Electrical and Electronics Engineers (IEEE) and assign a unique address to each network interface controller at the time of manufacture, thus ensuring the uniqueness of the MAC address.
SUMMARY
[0006] This summary is provided to introduce concepts related to network sharing in multi host computing systems running multiple operating systems. This summary is not intended to identify essential features of the claimed subject matter nor is it intended for use in determining or limiting the scope of the claimed subject matter.
[0007] In one embodiment, the multi host computing system comprises a network interface controller configured to provide access to at least one communication network to a plurality of hosts of the multi-host computing system, a device interconnect logic (DIL) unit (120), communicatively coupled to the network interface controller, wherein the DIL unit is configured to determine a first host, from amongst the plurality of hosts, wherein the first host controls the network interface controller, and a peripheral component interconnect express (PCIe)-to-PCIe network redirection engine configured to transmit data between the plurality of hosts, based in part on control signals generated by the first host.
BRIEF DESCRIPTION OF THE FIGURES
[0008] The above features and aspects and other features and aspects of the subject matter will be better understood with regard to the following description, appended claims, and accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference number in different figures indicates similar or identical items:
[0009] Fig. 1 shows exemplary components of a multi host computing system, in accordance with an embodiment of the present subject matter.
[0010] Fig. 2 shows exemplary components of a network redirection engine, according to an embodiment of the present subject matter.
[0011] Fig. 3 illustrates an exemplary method for sharing of a network interface controller on system boot, according to an embodiment of the present subject matter.
[0012] Fig. 4 illustrates an exemplary method for transfer of ownership of the network interface controller, according to an embodiment of the present subject matter.
DETAILED DESCRIPTION
[0013] Systems and methods for network sharing in multi host computing systems running multiple operating systems are described herein. The systems and methods can be implemented in a variety of computing devices such as laptops, desktops, workstations, tablet- PCs, smart phones, etc. Although the description herein is with reference to certain computing systems, the systems and methods may be implemented in other electronic devices, albeit with a few variations, as will be understood by a person skilled in the art.
[0014] Conventional computing systems usually run one operating system, which uses a network interface controller to communicate with other computing devices or with a network. The network interface controller is used exclusively by the single operating system. In multi host computing systems, multiple hosts concurrently run multiple operating systems. For example, a first host may run OS-A, a second host may run OS-B, and so on. Further, the multiple operating systems may be the same, say Microsoft® Windows® XP®, or may be homogeneous, such that Microsoft® Windows® XP® runs on the first host, Microsoft® Windows® 7 runs on a second host, and so on. Further, the multiple operating systems may be totally different, such that Microsoft® Windows® XP® runs on the first host, Red Hat Enterprise Linux runs on the second host, and so on. In multi host systems, the hardware platform and the peripheral devices are shared among the multiple hosts. Thus, the network interface controller(s) are also shared among the multiple hosts.
[0015] Conventional techniques of sharing the network interface controller(s) include using a hypervisor. A hypervisor is a virtualization technique which enables multiple operating systems running concurrently to share one or more peripheral devices of a computing system. In one conventional implementation, the hypervisor virtualizes the peripheral devices and allocates the ownership of each peripheral device, say a physical network interface controller, to one of the multiple operating systems which are running concurrently. The other concurrently running operating systems access and use the peripheral device using a split or a pseudo device driver. In such an implementation, the operating system which has been allocated the ownership of the physical network interface controller acts as an L2 bridge or L3 router and directs incoming data from other computing/communication systems on the network to the operating system for which the data is intended.
[0016] In another conventional implementation, each of the multiple operating systems is assigned a pseudo network interface controller and a unique MAC address associated with the pseudo network interface controller. In said implementation, the hypervisor includes a switching module which acts as an L2 switch and directs incoming data to one of the concurrently running operating systems based on the MAC address. If the physical network interface controller does not support receiving data in the form of packets destined to multiple MAC addresses, the physical network interface controller is configured to run in a promiscuous mode. In promiscuous mode, the physical network interface controller passes all data it receives to the processor rather than just the frames addressed to the MAC address of the physical network interface controller. Each frame of the data includes the MAC address it is intended for. By default, the physical network interface controller is configured to drop a frame that is not addressed to that physical network interface controller. However, in the promiscuous mode, the physical network interface controller receives all frames, including the frames intended for other computing/communication systems.
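For illustration only, the following C sketch shows the kind of MAC-based dispatch such a hypervisor switching module performs; it is not taken from the patent, and all structure and function names (pseudo_nic, l2_switch_dispatch, deliver) are hypothetical.

    #include <stdint.h>
    #include <string.h>
    #include <stddef.h>

    #define NUM_PSEUDO_NICS 3
    #define MAC_LEN         6

    /* Hypothetical descriptor for a pseudo network interface controller
     * assigned to one of the concurrently running operating systems. */
    struct pseudo_nic {
        uint8_t mac[MAC_LEN];                               /* unique MAC assigned to the guest OS */
        void (*deliver)(const uint8_t *frame, size_t len);  /* hand the frame to that OS */
    };

    static struct pseudo_nic pseudo_nics[NUM_PSEUDO_NICS];

    /* Dispatch one Ethernet frame received in promiscuous mode. Returns 0 if
     * delivered to a guest, -1 if the frame is for no local guest and is dropped. */
    int l2_switch_dispatch(const uint8_t *frame, size_t len)
    {
        if (len < MAC_LEN)
            return -1;

        /* The destination MAC address occupies the first six bytes of an Ethernet frame. */
        for (int i = 0; i < NUM_PSEUDO_NICS; i++) {
            if (memcmp(frame, pseudo_nics[i].mac, MAC_LEN) == 0) {
                pseudo_nics[i].deliver(frame, len);
                return 0;
            }
        }
        return -1;  /* not addressed to any guest: drop */
    }

Every frame received in promiscuous mode passes through a comparison of this kind, which is why the per-frame processing load on the processor grows under this approach, as noted below.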
[0017] In another implementation, the multi host computing system uses a multi-root input output virtualization (MRIOV) switch electronically connected to at least one of the plurality of processors, and a peripheral and interface virtualization unit (PIVU) connected to the MRIOV switch, to enable peripheral sharing between multiple operating systems running on multiple processors. Various types of peripheral devices may be connected to the multi host computing system. For example, the multi host computing system may include or may be connected to various storage controllers, like Serial Advanced Technology Attachment (SATA), NAND flash memory, multimedia cards (MMC), Consumer Electronics Advanced Technology Attachment (CEATA); connectivity modules like baseband interfaces, Serial Peripheral Interfaces (SPI), Inter-integrated Circuit (I2C), infrared data association (IrDA) compliant devices; media controllers like cameras, integrated inter chip sound (I2S); media accelerators like audio encode-decode engines, video encode-decode engines, graphics accelerators; security modules like encryption engines, key generators; communication modules like Bluetooth, Wi-Fi, Ethernet; and universal serial bus (USB) connected devices like pen drives, memory sticks, etc.
[0018] The conventional techniques involving a hypervisor to share a physical network interface controller result in a significant increase in the processing load on the processor of the computing system, since every frame of data received by the computing system has to be processed by the routing module before being dropped or forwarded to one of the concurrently running operating systems. Further, the conventional techniques also increase the latency of the network, thus affecting the performance of the network adversely. Moreover, these conventional techniques are not applicable to a multi host computing system, where two or more hosts share the hardware platform. Moreover, the conventional techniques involving MRIOV require specialized system infrastructure and multi-root aware network devices, which are not commonly available. These techniques do not work with legacy, non-multi-root network devices.
[0019] The present subject matter discloses methods and systems of sharing a network interface controller, which may not necessarily be multi-root aware, in multiple host computing systems running multiple operating systems. The multi host computing system is a multi-processor computing system which has a plurality of processors, of the same or varying processing power, capable of running multiple operating systems simultaneously. Further, the multi host computing systems are capable of sharing the hardware platform, such as peripheral devices like display devices, audio devices, and input devices such as keyboard, mouse, touchpad, etc., among the plurality of processors running the multiple operating systems simultaneously.
[0020] In one embodiment, the multi host computing system includes a device interconnect logic unit, henceforth referred to as the DIL unit. The DIL unit determines to which host the ownership of the network interface controller should be assigned. For example, in a multi host computing system comprising Host A, Host B and Host C, when the multi host computing system is initially booted, the DIL unit may assign the ownership of the network interface controller to any of the hosts or may be configured to assign the ownership of the network interface controller to a particular host. In one implementation, when the multi host computing system is booted, the DIL unit may assign the ownership of the network interface controller to Host A. Now, if Host A is shut down, powered off, or put into sleep or hibernate mode, the DIL unit seamlessly transfers the ownership of the network interface controller from Host A to any of the active hosts, based on the configuration.
[0021] In one implementation, the DIL unit is configured to be controlled by a network virtualization processor. The network virtualization processor communicates with the hosts and detects whenever there is a change in the power state of any of the hosts. For example, when Host A is powered off, the network virtualization processor detects the change and sends an instruction to the DIL unit to assign the ownership of the network interface controller to any of the active hosts. When there are multiple active hosts in the multi host computing system, one of the hosts, say Host A, is assigned the ownership of the network interface controller, whereas the other hosts, such as Host B and Host C, access the network interface controller through Host A using a network redirection engine, henceforth referred to as the NERD. In one implementation, the NERD is a low latency, high bandwidth data communication mechanism which allows fast data transfer between the hosts. Each of the hosts sees the NERD as a second network interface controller. Thus, in said implementation, one of the hosts is assigned the ownership of the network interface controller, and the other hosts access the network interface through the host which has been assigned that ownership, using the NERD.
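As a minimal illustration of this ownership decision, the following C sketch (hypothetical names throughout, not the patent's implementation) shows the choice the network virtualization processor might make when the owning host leaves the active state: pick the next active host and instruct the DIL unit to reassign the controller.

    #include <stdbool.h>

    enum power_state { HOST_ACTIVE, HOST_SLEEP, HOST_OFF };

    struct host {
        int id;
        enum power_state state;
    };

    /* Hypothetical DIL-unit hook: reassigns the network interface controller. */
    void dil_assign_owner(int host_id);

    /* Called when the network virtualization processor detects a power-state change.
     * Returns the index of the new owner, or -1 if no host is active. */
    int nvp_on_power_change(struct host *hosts, int n_hosts, int current_owner)
    {
        if (hosts[current_owner].state == HOST_ACTIVE)
            return current_owner;              /* owner is still active: nothing to do */

        for (int i = 0; i < n_hosts; i++) {
            if (i != current_owner && hosts[i].state == HOST_ACTIVE) {
                dil_assign_owner(hosts[i].id); /* seamless transfer via the DIL unit */
                return i;
            }
        }
        return -1;                             /* no active host remains */
    }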
[0022] In another implementation, the operating system running on the host assigned the ownership of the network interface controller is configured to implement a bridge between the driver of the network interface controller and the driver for the NERD. For example, in said embodiment, the operating system running on any host, say Host A, has been assigned the ownership of the network interface controller. The operating system running on Host A may implement an L2 bridge between the driver of the network interface controller and the driver for the NERD. If the network interface controller is not capable of receiving packets destined for multiple MAC addresses, it is put in a promiscuous mode. The packets intended for other hosts are received by Host A and forwarded to the other hosts through the NERD.
[0023] In yet another implementation, the host assigned the ownership of the network interface controller, i.e., Host A, provides network connectivity to the other hosts using IP forwarding. IP forwarding is a process used to determine the network path over which a packet or datagram can be sent. IP forwarding uses routing information to transfer data packets and can be configured to send a data packet over multiple networks. In said implementation, Host A runs a dynamic host configuration protocol (DHCP) server service and allocates unique internet protocol (IP) addresses to the other active hosts. Further, Host A also runs a network address translation (NAT) service so as to modify IP address information in IP packet headers while the IP packets are in transit across Host A. In this implementation, Host A receives the data packets addressed to any of the hosts and acts as a router for the other hosts. Host A forwards the data to the hosts based on the IP addresses allocated to the hosts by the DHCP server service.
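By way of illustration, the routing decision made by Host A in this IP-forwarding variant can be sketched in C as follows; the structure and helper names (host_route, nerd_forward, ip_forward) are hypothetical, and the DHCP and NAT services themselves are assumed to be provided by Host A's operating system.

    #include <stdint.h>
    #include <stddef.h>

    struct host_route {
        uint32_t ip;        /* IPv4 address leased to a secondary host by Host A's DHCP server */
        int      nerd_chan; /* NERD channel used to reach that host */
    };

    /* Hypothetical helper that pushes a packet to another host over the NERD. */
    void nerd_forward(int nerd_chan, const uint8_t *pkt, size_t len);

    /* Forward an incoming IP packet to the secondary host it is addressed to.
     * Returns 1 if forwarded over the NERD, 0 if the packet is for Host A itself
     * (and is handed to the local stack, where the NAT service applies). */
    int ip_forward(const struct host_route *routes, int n_routes,
                   uint32_t dst_ip, const uint8_t *pkt, size_t len)
    {
        for (int i = 0; i < n_routes; i++) {
            if (routes[i].ip == dst_ip) {
                nerd_forward(routes[i].nerd_chan, pkt, len);
                return 1;
            }
        }
        return 0;
    }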
[0024] Thus, the disclosed methods of sharing network interface controllers enable multiple hosts of a multi host computing system to connect to and access a network concurrently, without compromising on performance, due to the use of the NERD for forwarding packets between the hosts. These and other features and advantages will be described in greater detail in conjunction with the following figures.
[0025] Fig. 1 shows the exemplary components of a multi host computing system 100, henceforth referred to as the system 100, according to an embodiment of the present subject matter. The system 100 can either be a portable electronic device, like a laptop, notebook, netbook, tablet computer, etc., or a non-portable electronic device like a desktop, workstation, server, etc. The system 100 includes a first processor 102 and a second processor 104. The first processor 102 and the second processor 104 are coupled to a first memory 106-1 and a second memory 106-2, respectively.
[0026] The first processor 102 and the second processor 104 can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, or any devices that manipulate signals based on operational instructions. Among other capabilities, the first processor 102 and the second processor 104 can be configured to fetch and execute computer-readable instructions and data stored in the first memory 106-1 and the second memory 106-2 respectively.
[0027] The first memory 106-1 and the second memory 106-2 can include any computer-readable medium known in the art including, for example, volatile memory (e.g., RAM) and/or non-volatile memory (e.g., FLASH, etc.). The first memory 106-1 and the second memory 106-2 include one or more modules which provide various functionalities to the system 100. The modules usually include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. Further, the first memory 106-1 and the second memory 106-2 also include one or more data repositories for storing data. The system 100 may also include other components 116 required to provide additional functionalities to the system 100.
[0028] The peripheral devices connected to the system 100 can be configured to be used exclusively by either the first processor 102 or the second processor 104, or by both the first processor 102 and the second processor 104 simultaneously. Additionally, the system 100 has a network interface controller 118 to connect to external networks, systems, peripherals, devices, etc. The network interface controller 118 may be an ethernet controller or any other controller which can be used to access and connect to a network.
[0029] Further, the system 100 includes a device interconnect logic unit 120, henceforth referred to as the DIL unit 120. The DIL unit 120 is controlled by a network virtualization processor 122. The network virtualization processor 122 can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, or any devices that manipulate signals based on operational instructions.
[0030] The system 100, for ease of explanation, has been depicted as having two hosts, viz. a first host 124 and a second host 126. However, it will be known to those skilled in the art that the same concepts may be extended to any number of hosts. The first host 124 includes the first processor 102, the first memory 106-1, a first control driver 128-1, and a first network interface driver 130-1. Similarly, the second host 126 includes the second processor 104, the second memory 106-2, a second control driver 128-2, and a second network interface driver 130-2. The other components of the system 100 are concurrently shared by the two hosts 124 and 126.
[0031] The first control driver 128-1 and the second control driver 128-2, collectively referred to as the control drivers 128, facilitate the interaction of the hosts 124 and 126 with the network virtualization processor 122. The network virtualization processor 122 detects whenever there is a change in the power state of any of the hosts 124 and 126. Whenever there is a power change in any of the hosts 124 and 126, the network virtualization processor 122 informs the DIL unit 120 of the same. The DIL unit 120 determines to which host the ownership of the network interface controller 118 should be assigned. When the first host 124 is assigned the ownership of the network interface controller 118, the first host 124 communicates with the network interface controller 118 using the first network interface driver 130-1. Similarly, when the second host 126 is assigned the ownership of the network interface controller 118, the second host 126 communicates with the network interface controller 118 using the second network interface driver 130-2. Each of the hosts 124 and 126 communicates with the network interface controller 118 through the DIL unit 120.
[0032] When the system 100 is booted, a primary operating system is loaded. In one example, a first operating system, referred to as OS-A, running on the first processor 102 may be designated as the primary operating system, while a second operating system, referred to as OS-B, running on the second processor 104 is treated as the secondary operating system. If multiple operating systems are present, the system 100 allows the user to designate any of the operating systems as the primary operating system. The user can change the primary operating system according to the user's choice and/or requirements. The system 100 also allows the user to switch from one operating system to another operating system seamlessly. The system 100 can concurrently run multiple operating systems on the first processor 102 and the second processor 104.
[0033] When the system 100 is booted, the ownership of the network interface controller 118 is assigned to any one of the operating systems, say the primary operating system OS-A. OS-A views the network interface controller 118 as a native device and accesses and communicates with it using the first network interface driver 130-1. In this scenario, the second host 126 accesses the network interface controller 118 using a PCIe-to-PCIe network redirection engine 134, henceforth referred to as the NERD 134. In one implementation, the NERD 134 is a low latency, high bandwidth data communication means. The first host 124 and the second host 126 access the NERD 134 using a first NERD driver 132-1 and a second NERD driver 132-2, respectively. When the second host 126 needs to access the network interface controller 118, the second host 126 transmits the data to the NERD 134 with the help of the second NERD driver 132-2. The NERD 134 forwards this data to the first NERD driver 132-1 of the first host 124. The first host 124 then forwards this data to the network interface controller 118 using the first network interface driver 130-1. In one embodiment, the data transfer between the two hosts 124 and 126 through the NERD 134 is facilitated by the network virtualization processor 122.
[0034] When the first host 124 is switched off or moved to a low power state, the first control driver 128-1 sends a signal to the network virtualization processor 122. The network virtualization processor 122 then configures the DIL unit 120 to seamlessly transfer the ownership of the network interface controller 118 to the second host 126. In one implementation, the DIL unit 120 generates a PCIe hot-unplug event for the first host 124 and a PCIe hot-plug event for the second host 126 to seamlessly transfer the ownership of the network interface controller 118 from the first host 124 to the second host 126. Thus, the system 100 facilitates sharing of the network interface controller 118 among multiple hosts concurrently. It should be appreciated by those skilled in the art that even though the system 100 has been described with respect to two hosts, the first host 124 and the second host 126, the same concept can be applied to a multi host computing system having any number of hosts, albeit with little or no modification.
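The hot-plug-based transfer described above amounts to a short, ordered sequence, sketched here in C; the two DIL primitives are hypothetical names for the events the DIL unit 120 generates, as the patent does not define a programming interface for them.

    /* Hypothetical DIL-unit primitives for the events described in paragraph [0034]. */
    void dil_generate_pcie_hot_unplug(int host_id);  /* old owner sees the controller removed */
    void dil_generate_pcie_hot_plug(int host_id);    /* new owner enumerates the controller */

    /* Transfer ownership of the network interface controller between two hosts. */
    void dil_transfer_ownership(int from_host, int to_host)
    {
        dil_generate_pcie_hot_unplug(from_host);  /* e.g. the first host 124 */
        dil_generate_pcie_hot_plug(to_host);      /* e.g. the second host 126 */
    }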
[0035] Fig. 2 shows exemplary components of the NERD 134 according to an embodiment of the present subject matter. In one embodiment, the NERD 134 includes a local memory 202. The local memory 202 can include any computer-readable medium known in the art including, for example, volatile memory (e.g., RAM) and/or non-volatile memory (e.g., FLASH, etc.). Whenever a host, say the second host 126, transfers data to the first host 124, the second NERD driver 132-2 initializes the allocation of a transmit circular buffer.
[0036] For the sake of explanation, the operation of the NERD 134 is elaborated in the context of data transfer from the second host 126 to the first host 124. The second host 126 communicates with the NERD 134 to initiate and establish a data communication channel with the first host 124. The head and tail pointers of a transmit data buffer descriptor circular buffer (TDBDCB) are initialized in the NERD 134. The TDBDCB includes pointers which define the location of the transmit data buffers. The NERD 134 reads the TDBDCB, enables a first direct memory access module 204-1, henceforth referred to as the first DMA module 204-1, and receives the transmit data buffer descriptor located at the tail pointer. On completion of the transmission of the transmit data buffer descriptor, the NERD 134 requests the next transmit data buffer descriptor. The NERD 134 then analyzes the transmit data buffer descriptors from the TDBDCB, which among other things include an end of packet field, and configures the first DMA module 204-1 to read the transmit data buffer from the memory, say the second memory 106-2 of the second host 126. The NERD 134 configures an interrupt generator 206 to signal the completion of the data transmission when it detects the end of packet field in the transmit data buffer descriptor.
[0037] The NERD 134 then signals the first host 124 that the NERD 134 has data sent by the other host, i.e., the second host 126. The first host 124 initializes the allocation of a receive circular buffer in the first memory 106-1 for the purpose of receiving data transmitted from the second host 126 to the first host 124. The head and tail pointers of the receive empty buffer descriptor circular buffer (REBDCB) are initialized in the NERD 134. Among other things, the REBDCB includes descriptors to indicate the location of the receive empty buffers in the first memory 106-1. The NERD 134 reads the REBDCB and enables a second direct memory access module 204-2, henceforth referred to as the second DMA module 204-2, to fetch the receive empty buffer descriptor present at the location defined by the tail pointer of the REBDCB. On completion of the fetching of one receive empty buffer descriptor, the NERD 134 initiates the receive operation of the next receive empty buffer descriptor. Also, the occupancy of the receive empty buffer is updated in the descriptors of the descriptor circular buffer. The NERD 134 also analyzes the descriptors of the receive empty buffers so as to detect the end of transmission. The NERD 134 then configures the second DMA module 204-2 to write data into the first memory 106-1 of the first host 124. In one embodiment, the end of packet field is used to signal the completion of the data transmission. In one implementation, the interrupt generator 206 signals the completion of the transmittal process.
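To make the descriptor mechanism concrete, here is a minimal C sketch of the transmit side (the receive side through the REBDCB is symmetric). The structure layout and field names are hypothetical, inferred from the description above rather than taken from the patent.

    #include <stdint.h>
    #include <stdbool.h>

    #define RING_SIZE 64    /* number of descriptors in the circular buffer */

    /* One transmit data buffer descriptor: locates a buffer in host memory
     * and flags whether it closes a packet. */
    struct tx_desc {
        uint64_t buf_addr;      /* address of the transmit data buffer in the sending host's memory */
        uint32_t buf_len;       /* length of valid data in the buffer */
        bool     end_of_packet; /* the interrupt generator fires on this flag */
    };

    /* Transmit data buffer descriptor circular buffer (TDBDCB): the sending host
     * advances head; the NERD's DMA module consumes descriptors from tail. */
    struct tdbdcb {
        struct tx_desc ring[RING_SIZE];
        uint32_t head;
        uint32_t tail;
    };

    /* Hypothetical hooks standing in for the first DMA module 204-1 and the
     * interrupt generator 206. */
    void dma_read_buffer(uint64_t addr, uint32_t len);
    void raise_completion_interrupt(void);

    /* Consume descriptors until an end-of-packet descriptor is reached. */
    void nerd_drain_tx(struct tdbdcb *cb)
    {
        while (cb->tail != cb->head) {
            struct tx_desc *d = &cb->ring[cb->tail];
            dma_read_buffer(d->buf_addr, d->buf_len);   /* fetch the buffer from host memory */
            cb->tail = (cb->tail + 1) % RING_SIZE;
            if (d->end_of_packet) {
                raise_completion_interrupt();           /* transmission of the packet is complete */
                break;
            }
        }
    }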
[0038] Fig. 3 illustrates an exemplary method 300 for sharing of the network interface controller 118 on system boot, according to an embodiment of the present subject matter. The method 300 may be described in the general context of computer executable instructions. Generally, these computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types. The method 300 may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, computer executable instructions may be located in both local and remote computer storage media, including memory storage devices.
[0039] The order in which the method 300 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 300, or an alternative method. Additionally, individual blocks may be deleted from the method 300 without departing from the spirit and scope of the subject matter described herein. Furthermore, the method 300 can be implemented in any suitable hardware, software, firmware, or combination thereof.
[0040] At block 302, a host of the system 100, say the first host 124, is booted up and the first operating system is loaded. The first host 124 then detects whether the network interface controller 118 is owned by the other host, i.e., the second host 126, as illustrated in block 304. If the network interface controller 118 is not owned by the second host 126, the first host 124 installs the first network interface driver 130-1, as shown in block 306. Then the first host 124 uses the network interface controller 118 directly, as depicted in block 308. In one implementation, the first host 124 accesses the network interface controller 118 through the DIL unit 120.
[0041] If, at block 304, the first host 124 determines the network interface controller 118 to be owned by the other host, i.e., the second host 126, the first host 124 sends a request to the network virtualization processor 122 to initiate the sharing of the network interface controller 118, as illustrated in block 310. At block 312, the network virtualization processor 122 forwards the request for sharing the network interface controller 118 to the second host 126.
[0042] At block 314, the network virtualization processor 122 initiates a high bandwidth, low latency data communication channel between the two hosts 124 and 126. In one implementation, the data communication channel is established using the NERD 134. At block 316, the NERD drivers 132-1 and 132-2 are installed for the hosts to facilitate the data communication between the hosts 124 and 126. Thus, the method 300 enables multiple hosts to use the network interface controller 118 to connect to a network or other computing devices with the help of the NERD 134.
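The boot-time decision of method 300 can be summarized in a short C sketch of the flow; the helper names are hypothetical stand-ins for the blocks they are labeled with, not functions defined by the patent.

    #include <stdbool.h>

    /* Hypothetical helpers corresponding to the blocks of method 300. */
    bool nic_owned_by_other_host(void);             /* block 304 */
    void install_network_interface_driver(void);    /* block 306 */
    void request_nic_sharing_from_nvp(void);        /* block 310 */
    void establish_nerd_channel(void);              /* block 314 */
    void install_nerd_driver(void);                 /* block 316 */

    /* Executed by a host after its operating system has loaded (block 302). */
    void on_host_boot(void)
    {
        if (!nic_owned_by_other_host()) {
            /* Blocks 306-308: take ownership and use the controller directly. */
            install_network_interface_driver();
        } else {
            /* Blocks 310-316: share the controller through the owning host. */
            request_nic_sharing_from_nvp();
            establish_nerd_channel();
            install_nerd_driver();
        }
    }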
[0043] Fig. 4 illustrates an exemplary method 400 for transfer of ownership of the network interface controller 1 18, according to an embodiment of the present subject matter. The method 400 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types. The method 400 may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, computer executable instructions may be located in both local and remote computer storage media, including memory storage devices.
[0044] The order in which the method 400 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 400, or an alternative method. Additionally, individual blocks may be deleted from the method 400 without departing from the spirit and scope of the subject matter described herein. Furthermore, the method 400 can be implemented in any suitable hardware, software, firmware, or combination thereof.
[0045] At block 402, the system 100 receives a request to shut down any of the hosts, say the first host 124. As illustrated in block 404, the first host 124 determines if it has the ownership of the network interface controller 118. If the first host 124 does not own the network interface controller 118, then, as shown in block 406, the first host 124 sends a request to the network virtualization processor 122 to terminate the data communication channel, implemented using the NERD 134, with the second host 126. As illustrated in block 408, the first host 124 then unloads the first NERD driver 132-1 and proceeds to shut down.
[0046] If, at block 404, it is determined that the first host 124 is the owner of the network interface controller 118, the first host 124 checks if the network interface controller 118 is being shared with the other host, i.e., the second host 126, as illustrated at block 410. If the network interface controller 118 is not being shared with the second host 126, as depicted in block 412, the first host 124 unloads the first network interface driver 130-1 and proceeds to shut down. If the network interface controller 118 is shared with the second host 126, the first host 124 sends a signal to the network virtualization processor 122 informing it about the shutdown, as depicted in block 414.
[0047] As shown in block 416, the network virtualization processor 122 now configures the DIL unit 120 to seamlessly assign the ownership of the network interface controller 118 to the second host 126. As illustrated in block 418, the second host 126 is assigned the ownership of the network interface controller 118 so that the second host 126 can control the network interface controller 118 directly. The second host 126 now installs the second network interface driver 130-2 so as to communicate with the network interface controller 118 through the DIL unit 120, which is depicted in block 420. As shown in block 422, the network virtualization processor 122 now terminates the data communication channel, which was implemented using the NERD 134, with the first host 124.
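Similarly, the shutdown-time flow of method 400 can be sketched as follows; again, the helper names are hypothetical labels for the blocks of the method, not an implementation prescribed by the patent.

    #include <stdbool.h>

    /* Hypothetical helpers corresponding to the blocks of method 400. */
    bool owns_nic(void);                            /* block 404 */
    bool nic_shared_with_other_host(void);          /* block 410 */
    void terminate_nerd_channel(void);              /* blocks 406 and 422 */
    void unload_nerd_driver(void);                  /* block 408 */
    void unload_network_interface_driver(void);     /* block 412 */
    void notify_nvp_of_shutdown(void);              /* block 414: the NVP then reassigns ownership */

    /* Executed by a host on receiving a shutdown request (block 402). */
    void on_host_shutdown(void)
    {
        if (!owns_nic()) {
            /* Blocks 406-408: this host was only sharing the controller. */
            terminate_nerd_channel();
            unload_nerd_driver();
        } else if (!nic_shared_with_other_host()) {
            /* Block 412: sole user; simply release the controller. */
            unload_network_interface_driver();
        } else {
            /* Blocks 414-422: hand ownership over before going down. */
            notify_nvp_of_shutdown();
        }
        /* The host then proceeds to shut down. */
    }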
[0048] Although implementations for network interface controller sharing in a multi host computing system have been described in language specific to structural features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as exemplary implementations for network interface controller sharing.

Claims

We claim:
1. A multi-host computing system (100), comprising:
a network interface controller (118) configured to provide access to at least one communication network to a plurality of hosts of the multi-host computing system (100); a device interconnect logic (DIL) unit (120), communicatively coupled to the network interface controller (118), wherein the DIL unit (120) is configured to determine a first host, from amongst the plurality of hosts, wherein the first host controls the network interface controller (118); and
a peripheral component interconnect express (PCIe)-to-PCIe network redirection engine (134) configured to transmit data between the plurality of hosts, based in part on control signals generated by the first host.
2. The multi-host computing system (100), as claimed in claim 1, wherein the first host is configured to route data between the network interface controller (118) and the at least one of the plurality of hosts.
3. The multi-host computing system (100), as claimed in claim 2, wherein the first host is configured to run at least one of a Dynamic Host Configuration Protocol (DHCP) service and a network address translation (NAT) service to assign an internet protocol (IP) address to the at least one of the plurality of hosts and route data between the network interface controller and the at least one of the plurality of hosts based in part on the IP address associated with the at least one of the plurality of hosts.
4. The multi-host computing system (100), as claimed in claim 1, wherein the first host is configured to implement an L2 switch, wherein the L2 switch is configured to exchange data between a driver for the PCIe-to-PCIe network redirection engine (134) and a driver for the network interface controller (118).
5. The multi-host computing system (100), as claimed in claim 1, wherein the network redirection engine (134) further comprises at least one direct memory access (DMA) module (204) configured to initiate and establish a data communication channel between the first host and at least one of the plurality of hosts.
6. The multi-host computing system (100), as claimed in claim 5, wherein the network redirection engine (134) further comprises an interrupt generator (206) configured to generate a trigger on determining an end of packet field in a transmit data buffer descriptor of at least one data packet transmitted over the data communication channel.
7. The multi-host computing system (100), as claimed in claim 1, wherein the network interface controller (118) is an Ethernet controller.
8. The multi-host computing system (100), as claimed in claim 1, wherein the network redirection engine (134) is at least one of a network device, a Peripheral Component Interconnect Express (PCIe) compliant device, a Peripheral Component Interconnect (PCI) compliant device, a non-PCI compliant device and a non-PCIe compliant device.
9. A method of sharing a network interface controller (118) amongst a plurality of hosts in a multi-host computing system (100) on booting a first host, the method comprising:
determining, by a network virtualization processor (122), whether the network interface controller (118) is controlled by a second host;
initiating running of a network interface driver, by the network virtualization processor (122), on determining the network interface controller (118) not to be controlled by the second host; and
generating control signals, by the network virtualization processor (122), for controlling the network interface controller (118).
10. The method as claimed in claim 9, wherein the method further comprises:
sending a request to a network virtualization processor to share the network interface controller (118) on determining the network interface controller (118) to be used by the second host;
initiating a data communication channel over a network redirection engine (134) between the first host and the second host; and
running network redirection engine drivers for at least one of the first host and the second host to initiate the data communication channel, wherein the sharing of the network interface controller (118) takes place over the data communication channel.
11. The method as claimed in claim 10, wherein the sending further comprises transmitting a request by the second host to share the network interface controller (118) on determining the network interface controller (118) to be used by the second host.
12. The method as claimed in claim 10, wherein the method further comprises:
determining whether the network interface controller (118) is controlled by the first host;
ascertaining whether the network interface controller (118) is shared with a second host based on the determination;
initiating a request to transfer the control of the network interface controller (118) to the second host on ascertaining the network interface controller (118) to be shared with the second host;
running a network interface driver for the second host; and
terminating a data communication channel between the first host and the second host.
PCT/IN2012/000272 2011-04-18 2012-04-17 Network interface sharing in multi host computing systems WO2012143942A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN1331CH2011 2011-04-18
IN1331/CHE/2011 2011-04-18

Publications (2)

Publication Number Publication Date
WO2012143942A2 (en) 2012-10-26
WO2012143942A3 WO2012143942A3 (en) 2013-01-03

Family

ID=47041986

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2012/000272 WO2012143942A2 (en) 2011-04-18 2012-04-17 Network interface sharing in multi host computing systems

Country Status (1)

Country Link
WO (1) WO2012143942A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10397140B2 (en) 2015-04-23 2019-08-27 Hewlett-Packard Development Company, L.P. Multi-processor computing systems

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040260842A1 (en) * 2003-04-18 2004-12-23 Nextio Inc. Switching apparatus and method for providing shared I/O within a load-store fabric
CN1617506A (en) * 2003-11-12 2005-05-18 杨骁翀 Shared access to internet technology
CN101067794A (en) * 2007-06-14 2007-11-07 中兴通讯股份有限公司 Multi-nuclear processor and serial port multiplexing method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040260842A1 (en) * 2003-04-18 2004-12-23 Nextio Inc. Switching apparatus and method for providing shared I/O within a load-store fabric
CN1617506A (en) * 2003-11-12 2005-05-18 杨骁翀 Shared access to internet technology
CN101067794A (en) * 2007-06-14 2007-11-07 中兴通讯股份有限公司 Multi-nuclear processor and serial port multiplexing method

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10397140B2 (en) 2015-04-23 2019-08-27 Hewlett-Packard Development Company, L.P. Multi-processor computing systems

Also Published As

Publication number Publication date
WO2012143942A3 (en) 2013-01-03

Similar Documents

Publication Publication Date Title
EP3343881B1 (en) Packet processing method in cloud computing system, host, and system
CN111490949B (en) Method, network card, host device and computer system for forwarding data packets
US9645956B2 (en) Delivering interrupts through non-transparent bridges in a PCI-express network
US9916269B1 (en) Packet queueing for network device
US10474606B2 (en) Management controller including virtual USB host controller
US10067900B2 (en) Virtualized I/O device sharing within a distributed processing node system
US10467179B2 (en) Method and device for sharing PCIe I/O device, and interconnection system
US10191865B1 (en) Consolidating write transactions for a network device
US9918270B2 (en) Wireless interface sharing
US11086801B1 (en) Dynamic resource management of network device
US10817448B1 (en) Reducing read transactions to peripheral devices
US11609866B2 (en) PCIe peripheral sharing
WO2023221847A1 (en) Data access method based on direct communication of virtual machine device, and device and system
US8996734B2 (en) I/O virtualization and switching system
US11741039B2 (en) Peripheral component interconnect express device and method of operating the same
KR20150081497A (en) Apparatus for Virtualizing a Network Interface and Method thereof
US11467998B1 (en) Low-latency packet processing for network device
WO2012140673A2 (en) Audio controller
US20170344511A1 (en) Apparatus assigning controller and data sharing method
WO2023125565A1 (en) Network node configuration and access request processing method and apparatus
WO2012143942A2 (en) Network interface sharing in multi host computing systems
US9921867B2 (en) Negotiation between virtual machine and host to determine executor of packet flow control policy with reduced address space
WO2012143947A2 (en) Multi-host peripheral controller
US20230350824A1 (en) Peripheral component interconnect express device and operating method thereof
US20210357351A1 (en) Computing device with safe and secure coupling between virtual machines and peripheral component interconnect express device

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12773710

Country of ref document: EP

Kind code of ref document: A2