US20170351639A1 - Remote memory access using memory mapped addressing among multiple compute nodes - Google Patents
- Publication number
 - US20170351639A1 (application US 15/174,718)
 - Authority
 - US
 - United States
 - Prior art keywords
 - compute node
 - bar
 - memory
 - region
 - adapter
 - Prior art date
 - Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
 - Abandoned
 
Classifications
- G—PHYSICS
 - G06—COMPUTING OR CALCULATING; COUNTING
 - G06F—ELECTRIC DIGITAL DATA PROCESSING
 - G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
 - G06F13/38—Information transfer, e.g. on bus
 - G06F13/42—Bus transfer protocol, e.g. handshake; Synchronisation
 - G06F13/4282—Bus transfer protocol, e.g. handshake; Synchronisation on a serial bus, e.g. I2C bus, SPI bus
 
- G—PHYSICS
 - G06—COMPUTING OR CALCULATING; COUNTING
 - G06F—ELECTRIC DIGITAL DATA PROCESSING
 - G06F15/00—Digital computers in general; Data processing equipment in general
 - G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
 - G06F15/163—Interprocessor communication
 - G06F15/173—Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
 - G06F15/17306—Intercommunication techniques
 - G06F15/17331—Distributed shared memory [DSM], e.g. remote direct memory access [RDMA]
 
- H—ELECTRICITY
 - H04—ELECTRIC COMMUNICATION TECHNIQUE
 - H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
 - H04L67/00—Network arrangements or protocols for supporting network services or applications
 - H04L67/01—Protocols
 - H04L67/10—Protocols in which an application is distributed across nodes in the network
 - H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
 
- H—ELECTRICITY
 - H04—ELECTRIC COMMUNICATION TECHNIQUE
 - H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
 - H04L67/00—Network arrangements or protocols for supporting network services or applications
 - H04L67/01—Protocols
 - H04L67/133—Protocols for remote procedure calls [RPC]
 
- H—ELECTRICITY
 - H04—ELECTRIC COMMUNICATION TECHNIQUE
 - H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
 - H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
 - H04L69/16—Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
 
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
 - Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
 - Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
 - Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
 
 
Definitions
- This disclosure relates in general to the field of communications and, more particularly, to remote memory access with memory mapped addressing among multiple compute nodes.
 - Compute nodes such as microservers and hypervisor-based virtual machines executing in a single chassis can provide scaled out workloads in hyper-scale data centers.
 - Microservers are an emerging trend of servers for processing lightweight workloads with large numbers (e.g., tens or even hundreds) of relatively lightweight server nodes bundled together in a shared chassis infrastructure, for example, sharing power, cooling fans, and input/output components, eliminating space and power consumption demands of duplicate infrastructure components.
 - The microserver topology facilitates high density, lower power consumption per node, reduced costs, and increased operational efficiency.
 - Microservers are generally based on small form-factor, system-on-a-chip (SoC) boards, which pack processing capability, memory, and system input/output onto a single integrated circuit.
 - FIG. 1 is a simplified block diagram illustrating a communication system for facilitating remote memory access with memory mapped addressing among multiple compute nodes.
 - FIG. 2 is a simplified block diagram illustrating other example details of embodiments of the communication system.
 - FIG. 3 is a simplified block diagram illustrating yet other example details of embodiments of the communication system.
 - FIG. 4 is a simplified block diagram illustrating yet other example details of embodiments of the communication system.
 - FIG. 5 is a simplified sequence diagram illustrating example operations that may be associated with an embodiment of the communication system.
 - FIG. 6 is a simplified sequence diagram illustrating other example operations that may be associated with an embodiment of the communication system.
 - FIG. 7 is a simplified flow diagram illustrating yet other example operations that may be associated with an embodiment of the communication system.
 - FIG. 8 is a simplified flow diagram illustrating yet other example operations that may be associated with an embodiment of the communication system.
 - FIG. 9 is a simplified flow diagram illustrating yet other example operations that may be associated with an embodiment of the communication system.
 - An example method for facilitating remote memory access with memory mapped addressing among multiple compute nodes is executed at an input/output (IO) adapter in communication with the compute nodes over a Peripheral Component Interconnect Express (PCIE) bus, the method including: receiving a memory request from a first compute node to permit access by a second compute node to a local memory region of the first compute node; generating a remap window region in a memory element of the IO adapter, the remap window region corresponding to a base address register (BAR) of the second compute node in the IO adapter; and configuring the remap window region to point to the local memory region of the first compute node, wherein access by the second compute node to the BAR corresponding with the remap window region results in direct access of the local memory region of the first compute node by the second compute node.
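The three claimed steps (receive the memory request, generate a remap window region, configure it to point at the requester's local memory) can be sketched as a small illustrative model. All names below (`IOAdapter`, `ComputeNode`, `handle_memory_request`) are hypothetical and do not appear in the patent; this is a sketch of the idea, not an implementation.

```python
class ComputeNode:
    """Toy compute node with a BAR 2 address and a local memory dict."""
    def __init__(self, bar2):
        self.bars = {"BAR2": bar2}
        self.memory = {}

class IOAdapter:
    """Illustrative model of the claimed method; all names are hypothetical."""
    def __init__(self):
        self.remap_windows = {}  # accessor's BAR address -> (owner node, region)

    def handle_memory_request(self, first_node, second_node, local_region):
        """First node asks the adapter to let second node access local_region."""
        # Generate a remap window region corresponding to a BAR of the second node,
        bar_address = second_node.bars["BAR2"]
        # and configure it to point at the first node's local memory region.
        self.remap_windows[bar_address] = (first_node, local_region)
        return bar_address

    def access(self, bar_address, offset):
        """An access by the second node to its BAR lands directly in the
        first node's local memory region."""
        node, region = self.remap_windows[bar_address]
        return node.memory[region + offset]
```

In this model, once `handle_memory_request` has run, any `access` through the second node's BAR address resolves into the first node's memory with no copy through the network stack.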
 - the term “compute node” refers to a hardware processing apparatus, in which user applications (e.g., software programs) are executed.
 - FIG. 1 is a simplified block diagram illustrating a communication system 10 for facilitating remote memory access with memory mapped addressing among multiple compute nodes in accordance with one example embodiment.
 - FIG. 1 illustrates a communication system 10 comprising a chassis 12 , which includes a plurality of compute nodes 14 that communicate with network 16 through a common input/output (I/O) adapter 18 .
 - An upstream switch 20 facilitates north-south traffic between compute nodes 14 and network 16 .
 - Shared IO adapter 18 presents network and storage devices on a Peripheral Component Interconnect Express (PCIE) bus 22 to compute nodes 14 .
 - each compute node appears as a PCIE device to other compute nodes in chassis 12 .
 - compute nodes 14 include capabilities for processing, memory, network and storage resources.
 - compute node Host 1 runs (e.g., executes) an operating system 24 and various applications 26 .
 - a device driver (also referred to herein as a driver) 28 operates or controls a particular type of device that is attached to compute node 14 .
 - each PCIE device visible to (e.g., accessible by) Host 1 may be associated with a separate device driver in some embodiments.
 - all PCIE endpoints visible to Host 1 may be associated with a single PCIE device driver.
 - device driver 28 provides a software interface to hardware devices, enabling operating system 24 and applications 26 to access hardware functions (e.g., memory access) without needing to know precise details of the hardware being used.
 - substantially all PCIE endpoints appear as hardware devices to the accessing compute node, irrespective of their actual form.
 - compute nodes 14 may comprise virtual machines; however, because one compute node is visible as a PCIE device to another compute node, they appear as hardware devices to each other and are associated with corresponding device drivers.
 - Driver 28 communicates with the hardware device through PCIE bus 22 . When one of applications 26 invokes a routine in driver 28 , driver 28 issues commands to the hardware device it is associated with. Thus, driver 28 facilitates communication (e.g., acts as a translator) between its associated hardware device and applications 26 .
 - Driver 28 is hardware dependent and operating-system-specific.
 - each of compute nodes 14 includes various hardware components, such as one or more sockets 30 (e.g., socket refers to a hardware receptacle that enables a collection of central processing unit (CPU) cores with a direct pipe to memory); each socket holds one processor 32 ; each processor comprises one or more CPU cores 34 ; each CPU core 34 executes instructions (e.g., computations, such as Floating-point Operations Per Second (FLOPS)); a memory element 36 may facilitate operations of CPU cores 34 .
 - Common IO adapter 18 facilitates communication to and from each of compute nodes 14 .
 - IO adapter 18 services both network and storage access requests from compute nodes 14 in chassis 12 , facilitating a cost efficient architecture.
 - a memory element 38 may be associated with (e.g., accessed by) IO adapter 18 .
 - Memory element 38 includes various base address registers (BARs) 40 and remap windows 42 for various operations as described herein.
 - a remap window helper register 44 and firmware 46 are also included (among other components) in IO adapter 18 .
 - firmware comprises machine-readable and executable instructions and associated data that are stored in (e.g., embedded in, forming an integral part of, etc.) hardware, such as a read-only memory, or flash memory, or an ASIC, or a field programmable gate array (FPGA) and executed by one or more processors (not shown) in IO adapter 18 to control the operations of IO adapter 18 .
 - firmware 46 comprises a combination of software and hardware used exclusively to control operations of IO adapter 18 .
 - network traffic between compute nodes 14 and network 16 may be termed as “North-South Traffic”; network traffic among compute nodes 14 may be termed as “East-West Traffic”.
 - compute nodes 14 are unaware of the physical location of other compute nodes, for example, whether they exist in same chassis 12 , or are located remotely, over network 16 .
 - compute nodes 14 are agnostic to the direction of network traffic they originate or terminate, such as whether the traffic is North-South, or East-West, and thereby use the same addressing mechanism (e.g., L2 Ethernet MAC address/IP address) for addressing nodes located in same chassis 12 or located in a remote node in same L2/L3 domain.
 - a memory access scheme using low latency and low overhead protocols implemented in IO adapter 18 allows any one (or more) compute nodes 14 , for example, Host 1 , to share and access remote memory of another compute node (e.g., across different servers; across a hypervisor; across different operating systems), for example, Host 2 .
 - Host 2 may include an operating system different from that of Host 1 without departing from the scope of the embodiments.
 - the protocols as described herein do not require any particularized (e.g., custom) support from the operating systems or networking stack of Host 1 or Host 2 .
 - the scheme is completely transparent to the operating systems of Host 1 and Host 2 , allowing suitable throughput while communicating in different memory domains.
 - Existing remote memory access technologies include Remote Direct Memory Access (RDMA) protocols, such as RDMA over Converged Ethernet (RoCE) and InfiniBand.
 - RDMA communication is based on a set of three queues: (i) a send queue and (ii) a receive queue, which together comprise a Queue Pair (QP), and (iii) a Completion Queue (CQ).
 - Posts in the QP are used to initiate the sending or receiving of data.
 - A sending application (e.g., a driver) places instructions, called Work Queue Elements (WQEs), on its work queues: a WQE placed on the send queue contains a pointer to the message to be sent; a WQE on the receive queue contains a pointer to a buffer where an incoming message can be placed.
 - the sender's adapter consumes WQEs from the send queue at the egress side and streams the data from the memory region to the remote receiver.
 - the receiver's adapter consumes the WQEs at the receive queue at the ingress side and places the received data in appropriate memory regions of the receiving application. Any memory sharing or access between a sending compute node and the receiving compute node thus requires tedious channel setup, RDMA protocols, etc.
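The queue-pair flow described above can be modeled with a simplified simulation. This is not a real RDMA verbs API; the class and function names are invented for illustration only.

```python
from collections import deque

class QueuePair:
    """Simplified model of an RDMA Queue Pair with its Completion Queue."""
    def __init__(self):
        self.send_queue = deque()      # WQEs naming messages to send
        self.receive_queue = deque()   # WQEs naming buffers for incoming data
        self.completion_queue = deque()

def rdma_transfer(sender, receiver, memory):
    """The sender's adapter consumes a WQE from the send queue and streams the
    named data into the buffer named by the receiver's next receive WQE."""
    send_wqe = sender.send_queue.popleft()        # pointer to the message
    recv_wqe = receiver.receive_queue.popleft()   # pointer to the buffer
    memory[recv_wqe] = memory[send_wqe]           # stream the data across
    sender.completion_queue.append(("send_done", send_wqe))
    receiver.completion_queue.append(("recv_done", recv_wqe))
```

Even in this toy form, the setup cost is visible: both sides must pre-post matching WQEs before any byte moves, which is the channel setup overhead the patent contrasts with its BAR-based scheme.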
 - Such remote memory access sharing protocols can have unnecessary overhead. For example, every packet from any compute node, say Host 1 , has to hit a port of upstream switch 20 and then return on the same pipe back to IO adapter 18 , which then redirects it to the destination compute node, say Host 2 .
 - Such east-west data sharing can cause inefficient utilization of bandwidth in the common pipe, which is potentially used by various other compute nodes performing extensive north-south traffic with network 16 .
 - the east-west traffic pattern also increases application response latency, for example, due to longer path to be traversed by network packets.
 - Communication system 10 is configured to address these issues (among others) to offer a system and method for facilitating remote memory access with memory mapped addressing among multiple compute nodes 14 sharing IO adapter 18 .
 - PCIE, which is typically supported by almost all operating systems, is used to share data from a memory region on one compute node, say Host 1 , with a different memory region of another compute node, say Host 2 .
 - memory region comprises a block (e.g., section, portion, slice, chunk, piece, space, etc.) of memory that can be accessed through a contiguous range of memory addresses (e.g., a memory address is a unique identifier (e.g., binary identifier) used by a processor for tracking a location of each memory byte stored in the memory).
 - window in the context of memory regions refers to a memory region comprising a contiguous range of memory addresses, either virtual or physical.
 - IO adapter 18 is connected to compute nodes 14 by means of PCIE bus 22 .
 - IO adapter 18 includes an embedded operating system hosting multiple VNICs configured with memory resources of memory element 38 .
 - Each VNIC accesses a separate, exclusive region of memory element 38 .
 - Each PCIE endpoint, namely each VNIC, is typically associated with a host software driver, namely device driver 28 .
 - each VNIC that requires a separate driver is considered a separate PCIE device.
 - a PCIe data transfer subsystem in a computing system includes a PCIe root complex comprising a computer hardware chipset that handles communications between the PCIE endpoints.
 - the root complex enables PCIe endpoints to be discovered, enumerated and worked upon by the host operating system.
 - the base PCIe switching structure of a single root complex has a tree topology, which addresses PCIe endpoints through a bus numbering scheme.
 - Configuration software on the root complex detects every bus, device and function (e.g., storage adapter, networking adapter, graphics adapter, hard drive interface, device controller, Ethernet controller, etc.) within a given PCIe topology.
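The discovery walk over the tree topology can be sketched as a toy recursive enumeration over a dictionary, standing in for real configuration-space probing; the data layout is an assumption for illustration.

```python
def enumerate_endpoints(tree, path=()):
    """Depth-first walk of a toy PCIe tree: dict values are either a nested
    dict (a bridge with a subtree) or None (an endpoint function)."""
    endpoints = []
    for name, subtree in tree.items():
        if subtree is None:
            endpoints.append(path + (name,))   # reached an endpoint function
        else:
            endpoints.extend(enumerate_endpoints(subtree, path + (name,)))
    return endpoints
```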
 - the IO adapter's operating system assigns address space in the IO adapter memory element 38 to each PCIe endpoint (e.g., VNIC) so that the PCIe endpoint can understand at what address space it is identified by the IO adapter and map the corresponding interrupts accordingly.
 - the PCIe's device driver 28 compatible with the host operating system 24 can work efficiently with the PCIe endpoint and facilitate appropriate device specific functionality.
 - Each PCIE endpoint is enabled on IO adapter 18 by being mapped into a memory-mapped address space in memory element 38 referred to as configuration space (e.g., registers, typically consisting of 256 bytes).
 - the configuration space contains a number of base address registers (BARs) 40 , comprising the starting address of a contiguous mapped address in IO adapter memory element 38 .
 - a 32-bit BAR 0 is located at offset 10h in the PCI-compatible configuration space and, post enumeration, contains the start address of the BAR.
 - Any other PCIE endpoint, to access (e.g., read data from or write data to) the PCIE endpoint associated with a specific BAR, would submit a request with the address of that BAR.
 - Enumeration software allocates memory for the PCIE endpoints and writes to the corresponding BARs.
 - Firmware 46 programs the PCIe endpoint's BARs to inform the PCIe endpoints of its address mapping. When the BAR for a particular PCIe endpoint is written, all memory transactions generated to that bus address range are claimed by the particular PCIe endpoint.
 - OS 24 provides a physical address to BAR 40 and allocates the address space for device driver 28 to interact with the flash memory device.
 - device driver 28 When device driver 28 is loaded, it requests the memory mapped address from OS 24 corresponding to the physical address so that it can work with the flash memory device using the address handle. Subsequent accesses to BAR 40 from device driver 28 are completely transparent to OS 24 as it has already carved out the address space sufficient to work with the flash memory device.
 - typical PCIE data access is between application 26 and the PCIE endpoint, such as the flash memory device.
 - PCIE data access is not typically used across two different compute nodes 14 . In other words, one compute node typically cannot share its memory space with another compute node using native PCIE protocols.
 - The remap window feature includes a remap window base and a remap window region for memory mapping, for the purpose of remapping root complex IO and memory BARs to address ranges that are directly addressable by the processor.
 - The remap window base is used to configure the start address of a memory region, which can be mapped to any other memory region.
 - the remap window region refers to the mapped region in memory element 38 differentiated according to a virtual network interface card (VNIC) identifier (ID) configured in remap window helper register 44 in IO adapter 18 .
 - the VNIC ID could map to any host-based VNIC or root complex VNIC.
 - four remap window regions each capable of addressing 4 MB may be allocated for the remap window feature, permitting easy access of up to 16 MB of memory either in host memory or Root Complex endpoint device memory.
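With the example sizing above (four remap window regions of 4 MB each, 16 MB in total), selecting a window and an offset for a given BAR access reduces to simple arithmetic, sketched here with assumed constants and a hypothetical `locate` helper:

```python
WINDOW_SIZE = 4 * 1024 * 1024   # each remap window region addresses 4 MB
NUM_WINDOWS = 4                 # four windows: up to 16 MB of remote memory

def locate(bar2_offset):
    """Map an offset within the BAR 2 region to (window index, offset in window)."""
    assert 0 <= bar2_offset < NUM_WINDOWS * WINDOW_SIZE, "outside the 16 MB remap area"
    return bar2_offset // WINDOW_SIZE, bar2_offset % WINDOW_SIZE
```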
 - multiple PCIE ports on PCIE bus 22 distinguish different PCIE lanes associated with distinct compute nodes 14 .
 - Each memory region in Root Complex endpoint device memory is associated with a distinct PCIE lane that is completely independent of each other such that no two memory regions share any PCIE activity with each other.
 - an administrator configures the VNIC ID of computing nodes 14 through respective service profiles.
 - a unified computing system manager (e.g., a network management application such as Cisco® UCSM) configures and deploys the service profiles in some embodiments.
 - firmware 46 populates the VNIC ID information in remap window helper register 44 and also makes the VNICs ready and discoverable from corresponding computing nodes 14 .
 - Firmware 46 adds the BAR size of a specific BAR, for example, BAR 3 , to the memory region allocated with each VNIC ID. In an example embodiment, 16 MB may be added to accommodate all four remap window regions.
 - the enumeration software (BIOS) of IO adapter 18 discovers the new PCIE device. Through the PCI enumeration protocol, the BIOS identifies BAR size requirements and associates a physical address with corresponding Host 1 in BAR 40 . In various embodiments, three separate BARs are provided for each VNIC, namely BAR 0 , BAR 1 and BAR 2 .
 - Device driver 28 , upon loading in Host 1 , requests OS 24 to provide the memory mapped equivalent of the physical address for each BAR. It identifies that BAR 2 is the remap window region according to a preconfigured protocol between firmware 46 and driver 28 .
 - the memory mapped IO address comprises an address handle given by OS 24 to access the BAR 2 region of memory element 38 in IO adapter 18 .
 - Applications 26 using device driver 28 understand the capability of the remap window exposed by device driver 28 . A similar sequence of events occurs in another compute node, say Host 2 , when it powers up and its device driver is loaded in its OS. Through a pre-determined protocol, applications 26 in compute nodes 14 , say Host 1 and Host 2 , exchange their respective address handles through firmware 46 and request corresponding memory access.
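The address-handle exchange through firmware can be sketched end to end. The `Firmware` class and `exchange` helper below are hypothetical, since the patent does not define a concrete API for this step.

```python
class Firmware:
    """Toy relay through which applications exchange BAR 2 address handles."""
    def __init__(self):
        self.handles = {}

    def register(self, host_id, handle):
        self.handles[host_id] = handle

    def lookup(self, host_id):
        return self.handles[host_id]

def exchange(firmware, host1_id, host1_handle, host2_id, host2_handle):
    """Each driver registers the handle its OS returned for BAR 2; each
    application then learns its peer's handle through firmware."""
    firmware.register(host1_id, host1_handle)
    firmware.register(host2_id, host2_handle)
    # Returns (peer handle seen by host1, peer handle seen by host2).
    return firmware.lookup(host2_id), firmware.lookup(host1_id)
```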
 - the memory access mechanisms described herein can present one of the lowest latency protocols to communicate with different servers, virtual machines, or other such compute nodes 14 .
 - the memory access mechanisms described herein can also be used as IPC between two compute nodes 14 .
 - the operating system or network stacks do not need any separate, or distinct configuration to enable such remote memory access.
 - IO adapter 18 servicing a hypervisor can use the described mechanisms to allow various applications executing in separate virtual machines (e.g., guest domains) to communicate with each other without having to go through specially installed IPC software (e.g., VMWARE ESX/ESXi) or other external memory management/sharing applications.
 - compute nodes 14 comprise microservers
 - the network ecosystem (e.g., of network 16 ) may support different classes and QoS policies for network traffic, which can result in different priority flows.
 - storage traffic does not typically have any associated QoS.
 - With differentiated traffic types (e.g., some traffic having QoS, other traffic not having QoS), the condition can become worse, with performance drops becoming noticeable in some servers. In other words, performance of some servers drops when other unrelated servers are experiencing heavy network traffic.
 - a cooperative IO scheduling scheme across the servers may be implemented. For example, every server monitors and records the number of IO requests issued to IO adapter 18 . Such IO statistics are shared with other servers through the local memory mapped scheme in BAR 3 as described herein. Such data sharing can facilitate decisions at the individual servers regarding whether to send a SCSI_BUSY message to the OS storage stack. Thus, even though the associated storage VNIC has bandwidth to push the IOs to IO adapter 18 , it will not schedule the IO requests, voluntarily relinquishing its claim on storage for some time, until the network traffic bottleneck clears up. Such actions can lead to other VNICs balancing out the storage traffic pattern in chassis 12 , maintaining IO equilibrium therein.
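The cooperative back-off decision described above can be sketched as follows; the threshold value and the function name are invented for illustration, not taken from the patent.

```python
def should_report_busy(my_outstanding, peer_stats, chassis_limit=1000):
    """Decide whether a server should return SCSI_BUSY to its OS storage stack.

    my_outstanding: this server's outstanding IO request count.
    peer_stats: outstanding-IO counts read from other servers through the
    shared memory mapped region. chassis_limit is an invented threshold.
    """
    total = my_outstanding + sum(peer_stats)
    # Back off voluntarily when the shared adapter is saturated, even if this
    # server's own storage VNIC still has bandwidth to push IOs.
    return total > chassis_limit
```

Note that the deciding server reads its peers' counters without any message exchange; that is exactly what the BAR 3 memory mapped sharing provides.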
 - IO adapter 18 receives a memory request from one of compute nodes 14 , say Host 1 , to permit access by another of compute nodes 14 , say Host 2 , to a local memory region of Host 1 (assume the local memory region is in memory element 36 ).
 - the memory request comprises a host identifier of Host 2 and an address of the local memory region of Host 1 in some embodiments.
 - the host identifier can be obtained from a resource map providing identifying information of compute nodes 14 in communication with IO adapter 18 over PCIE bus 22 .
 - Firmware 46 in IO adapter 18 generates remap window region 42 in memory element 38 of IO adapter 18 , remap window region 42 corresponding to BAR 40 (e.g., BAR 2 ) of Host 2 in IO adapter 18 .
 - Firmware 46 configures remap window region 42 to point to the local memory region of Host 1 , access by Host 2 to BAR 2 corresponding with remap window region 42 resulting in direct access of the local memory region of Host 1 by Host 2 .
 - compute nodes 14 are associated with unique PCIE endpoints on PCIE bus 22 ; therefore, each has distinct BARs 40 associated therewith.
 - BAR 2 associated with remap window region 42 can comprise one of a plurality of BARs associated with Host 2 .
 - device driver 28 of Host 2 associates BAR 2 with remap window region, such that application 26 executing in Host 2 can access the local memory region of Host 1 through appropriate access requests to BAR 2 using device driver 28 .
 - configuring remap window region 42 comprises configuring a remap window base in a BAR Resource Table (BRT) to be a start address of the local memory region.
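Configuring the remap window base in the BRT and translating a subsequent BAR 2 access can be modeled as below. The BRT layout shown is an assumption made for illustration; the patent does not specify the table's internal structure.

```python
class BarResourceTable:
    """Toy BAR Resource Table: remap window index -> remap window base."""
    def __init__(self):
        self.window_base = {}

    def configure(self, window, start_address):
        # Remap window base := start address of the target local memory region.
        self.window_base[window] = start_address

    def translate(self, window, offset):
        """An access at `offset` within a remapped BAR 2 window becomes a
        direct access at base + offset in the remote node's local memory."""
        return self.window_base[window] + offset
```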
 - network topology of the network including chassis 12 can include any number of compute nodes, servers, hardware accelerators, virtual machines, switches (including distributed virtual switches), routers, and other nodes inter-connected to form a large and complex network.
 - a node may be any electronic device, client, server, peer, service, application, or other object capable of sending, receiving, or forwarding information over communications channels in a network.
 - Elements of FIG. 1 may be coupled to one another through one or more interfaces employing any suitable connection (wired or wireless), which provides a viable pathway for electronic communications. Additionally, any one or more of these elements may be combined or removed from the architecture based on particular configuration needs.
 - Communication system 10 may include a configuration capable of TCP/IP communications for the electronic transmission or reception of data packets in a network. Communication system 10 may also operate in conjunction with a User Datagram Protocol/Internet Protocol (UDP/IP) or any other suitable protocol, where appropriate and based on particular needs. In addition, gateways, routers, switches, and any other suitable nodes (physical or virtual) may be used to facilitate electronic communication between various nodes in the network.
 - the example network environment may be configured over a physical infrastructure that may include one or more networks and, further, may be configured in any form including, but not limited to, local area networks (LANs), wireless local area networks (WLANs), VLANs, metropolitan area networks (MANs), VPNs, Intranet, Extranet, any other appropriate architecture or system, or any combination thereof that facilitates communications in a network.
 - a communication link may represent any electronic link supporting a LAN environment such as, for example, cable, PCIE, Ethernet, wireless technologies (e.g., IEEE 802.11x), ATM, fiber optics, etc. or any suitable combination thereof.
 - communication links may represent a remote connection through any appropriate medium (e.g., digital subscriber lines (DSL), telephone lines, T1 lines, T3 lines, wireless, satellite, fiber optics, cable, Ethernet, etc. or any combination thereof) and/or through any additional networks such as a wide area networks (e.g., the Internet).
 - chassis 12 may comprise a rack-mounted enclosure, blade enclosure, or a rack computer that accepts plug-in compute nodes 14 .
 - chassis 12 can include, in a general sense, any suitable network element, which encompasses computers, network appliances, servers, routers, switches, gateways, bridges, load-balancers, firewalls, processors, modules, or any other suitable device, component, element, or object operable to exchange information in a network environment.
 - the network elements may include any suitably configured hardware provisioned with suitable software, components, modules, interfaces, or objects that facilitate the operations thereof. This may be inclusive of appropriate algorithms and communication protocols that allow for the effective exchange of data or information.
 - Compute nodes 14 may comprise printed circuit boards, for example, manufactured with empty sockets. Each printed circuit board may hold more than one processor; processors within the same family but with differing core counts, a wide range of frequencies, and vastly differing memory cache structures may be included in a single processor/socket combination.
 - compute nodes 14 may include hypervisors and virtual machines.
 - IO adapter 18 may include an electronic circuit, expansion card or plug-in module that accepts input and generates output in a particular format. IO adapter 18 facilitates conversion of data format and electronic timing between input/output streams and internal computer circuits of chassis 12 .
 - IO adapter 18 may comprise a hypervisor, and compute nodes 14 may comprise separate virtual machines.
 - FIG. 2 is a simplified block diagram illustrating example details according to an embodiment of communication system 10 .
 - compute nodes 14 , namely Host 1 and Host 2 respectively, are to share data across memory regions according to embodiments of communication system 10 .
 - Each compute node 14 namely Host 1 and Host 2 connects to IO adapter 18 through a respective virtual network interface card (VNIC) 48 ( 1 ) and 48 ( 2 ) at the compute node side and a respective PCIE port 50 ( 1 ) and 50 ( 2 ) at the IO adapter side.
 - Firmware 46 exposes (e.g., creates, generates, provides, etc.) a separate VNIC 52 ( 1 ) and 52 ( 2 ) for corresponding PCIE ports 50 ( 1 ) and 50 ( 2 ).
 - VNIC 52 ( 1 ) and 52 ( 2 ) at IO adapter 18 act as standalone Ethernet network controller adapters for network traffic and/or as storage controller adapters for storage traffic from and to respective compute nodes 14 ( 1 ) and 14 ( 2 ). For example, all traffic from VNIC 48 ( 1 ) on Host 1 is sent to corresponding PCIE port 50 ( 1 ), through VNIC 52 ( 1 ), to the external facing port, if needed.
 - VNICs 48 ( 1 ), 48 ( 2 ), 52 ( 1 ) and 52 ( 2 ) are created based on user configurations, for example, as specified in a service profile and policy configured at the UCSM and deployed therefrom.
 - Each VNIC 52 ( 1 ) and 52 ( 2 ) at IO adapter 18 is associated with BAR 40 ( 1 ) and 40 ( 2 ) respectively, each comprising three separate memory spaces denoted as: BAR 0 , BAR 1 and BAR 2 .
 - BARs 40 ( 1 ) and 40 ( 2 ) predominantly expose hardware functionality, such as memory spaces that can be used by host software, such as applications 26 , to work with VNIC 52 ( 1 ) and 52 ( 2 ).
 - To explain further, consider Host 1 . Note that the descriptions herein for Host 1 apply equally to Host 2 .
 - Operating system 24 in Host 1 enumerates BARs 40 ( 1 ) associated with Host 1 and maps IO address space in host memory 36 to each BAR such that any access to the corresponding mapped addresses in the mapped IO address space in Host 1 will point to (e.g., correspond with, associate with) the appropriate one of BARs 40 ( 1 ): BAR 0 , BAR 1 and BAR 2 in IO adapter 18 .
 - Although the mapped addresses in Host 1 may be virtual, they point to the physical memory region in IO adapter 18 .
 - Device driver 28 accesses BARs 40 ( 1 ) using the memory mapped addresses returned by OS 24 .
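 - As an illustrative sketch of the memory mapped BAR access described above: on Linux, for example, the enumerated BARs of a PCIE function are exposed by the OS as resource files that a process can map into its address space. The path below is illustrative only, and an ordinary temporary file stands in for the device resource so the sketch is runnable anywhere; loads and stores through a real mapping would go directly to the device's BAR region.

```python
import mmap
import os
import tempfile

def map_bar(resource_path, size):
    """Map a BAR region into the process address space.

    On Linux the BARs of a PCIE function appear as files such as
    /sys/bus/pci/devices/<bdf>/resource2 (path illustrative).
    """
    fd = os.open(resource_path, os.O_RDWR)
    try:
        return mmap.mmap(fd, size,
                         flags=mmap.MAP_SHARED,
                         prot=mmap.PROT_READ | mmap.PROT_WRITE)
    finally:
        os.close(fd)  # the mapping stays valid after the fd is closed

# Demo: an ordinary 4 KB file stands in for the BAR resource file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * 4096)
    stand_in = f.name

bar = map_bar(stand_in, 4096)
bar[0:4] = b"\xde\xad\xbe\xef"  # a store through the mapping
```

 - In a real driver, the physical BAR address would come from the OS enumeration described above rather than from a file path chosen by the application.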
 - BAR 2 is reserved for remap window 42 , which is identified by the device driver in respective compute nodes 14 .
 - BAR 2 of BAR 40 ( 1 ) is reserved for remap window region 42 ( 1 ).
 - BAR 2 of BAR 40 ( 2 ) is reserved for remap window region 42 ( 2 ).
 - device driver 28 in Host 1 understands BAR 2 of BAR 40 ( 1 ) to be associated with remap window 42 ( 1 ).
 - firmware 46 configures remap window 42 ( 2 ) of Host 2 to point to memory address space 56 ( 1 ) of Host 1 .
 - firmware 46 configures remap window 42 ( 1 ) to point to memory address space 56 ( 2 ) of Host 2 .
 - BAR 2 of BAR 40 ( 1 ) associated with Host 1 refers to memory space 56 ( 2 ) of Host 2 ; likewise, BAR 2 of BAR 40 ( 2 ) associated with Host 2 refers to memory space 56 ( 1 ) of Host 1 . Anything written to BAR 2 of BAR 40 ( 1 ) by Host 1 will be as if written directly into memory space 56 ( 2 ) of Host 2 , without any intervening protocols or communication. Thus applications in separate compute nodes can easily access the memory present in their peer's memory domain.
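 - The effect described above can be illustrated with a toy model in which a single buffer plays the role of memory space 56 ( 2 ) and a view of the same bytes plays the role of Host 1 's mapped BAR 2 . The names are illustrative; in the disclosed system the aliasing is performed by IO adapter 18 in hardware, not by shared process memory.

```python
# Toy model: Host 1's mapped BAR 2 and Host 2's memory space 56(2)
# are two names for the same underlying bytes.
host2_memory = bytearray(4096)          # stands in for memory 56(2)
host1_bar2 = memoryview(host2_memory)   # stands in for Host 1's BAR 2 window

host1_bar2[0:5] = b"hello"              # Host 1 writes through BAR 2
# Host 2 reads the same bytes from its local memory,
# with no intervening protocol or copy.
```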
 - FIG. 3 is a simplified block diagram illustrating example details according to an embodiment of communication system 10 .
 - Memory and I/O requests in IO adapter 18 are handled using remap window helper register 44 comprising three cascaded hardware tables: BAR Match Table (BMT) 44 ( 1 ), BMT associated random access memory (RAM) 44 ( 2 ), and BAR Resource Table (BRT) 44 ( 3 ).
 - BMT 44 ( 1 ) provides a mechanism to determine whether a memory request (e.g., transaction) received from Host 1 matches a valid PCIE device, such as Host 2 .
 - BMT 44 ( 1 ) uses a search key comprising (among other parameters) a host ID and a BAR address, including length and offset.
 - a hit in BMT 44 ( 1 ) outputs a Hit Index, which indexes into an associated RAM entry in table 44 ( 2 ).
 - BRT 44 ( 3 ) provides a mechanism to flexibly map a single BAR to one or more possibly non-contiguous, adapter memory-mapped resources.
 - BRT 44 ( 3 ) comprises a logical table implemented in the hardware RAM of IO adapter 18 .
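 - A minimal software model of the three cascaded tables might look as follows. The key and entry fields are simplified assumptions for illustration: the actual hardware search key also carries length and offset, and the TCAM is modeled here with an exact-match dictionary.

```python
from typing import NamedTuple

class BmtKey(NamedTuple):
    host_id: int      # which compute node issued the request
    bar_addr: int     # matched BAR address (length/offset omitted)

class BrtEntry(NamedTuple):
    dest_vnic: int    # VNIC to tag on the forwarded request
    target_addr: int  # mapped adapter/peer memory address

class RemapHelper:
    """Toy model of remap window helper register 44: a BMT (TCAM
    stand-in), its associated RAM, and the BRT, looked up in cascade."""
    def __init__(self):
        self.bmt = {}   # BmtKey -> hit index
        self.ram = {}   # hit index -> BRT index
        self.brt = {}   # BRT index -> BrtEntry

    def lookup(self, key):
        hit = self.bmt.get(key)
        if hit is None:
            return None  # request did not match a valid PCIE device
        return self.brt[self.ram[hit]]

# Example: one path from (host 2, BAR 2) to Host 1's memory region
# (all addresses below are made up for the sketch).
helper = RemapHelper()
helper.bmt[BmtKey(host_id=2, bar_addr=0xF000_0000)] = 0  # BMT hit index 0
helper.ram[0] = 0                                        # -> BRT entry 0
helper.brt[0] = BrtEntry(dest_vnic=1, target_addr=0x5600_1000)

entry = helper.lookup(BmtKey(2, 0xF000_0000))
```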
 - Firmware 46 of IO adapter 18 presents a virtualized view of PCIE endpoints' configuration space to compute nodes 14 .
 - Host 1 configures memory/IO bar window(s) in the VNIC's configuration space
 - Host 1 's BAR address windows are translated by remap window helper register 44 to map them to the local root complex endpoint's BAR windows in IO adapter's local address space.
 - memory region 56 ( 1 ) of Host 1 is mapped to remap window region 42 ( 2 ) of Host 2 in memory element 38 .
 - the device drivers running on compute nodes 14 may post work requests using their assigned memory bar windows.
 - a memory request from Host 1 to allow access to a specific memory region 56 ( 1 ) by a remote PCIE endpoint, say Host 2 , may proceed as follows.
 - the memory request is converted into a search key to BMT 44 ( 1 ), triggering a lookup (e.g., a ternary content-addressable memory (TCAM) lookup) of BMT 44 ( 1 ), which outputs a hit index to RAM 44 ( 2 ) that activates a read of the appropriate entry in BRT 44 ( 3 ).
 - the memory request from Host 1 may reference a VNIC number, which may be converted into the corresponding host identifier by suitable modules.
 - Firmware 46 programs the appropriate entry in BRT 44 ( 3 ) to point to the provided address 56 ( 1 ) of Host 1 .
 - the specific memory region of the appropriate entry in BRT 44 ( 3 ) is already pre-mapped to BAR 2 of Host 2 as remap window 42 ( 2 ).
 - the entry in BRT 44 ( 3 ) references remap window region 42 ( 2 ), which now directly points to memory space 56 ( 1 ) of Host 1 after configuration by firmware 46 .
 - Any memory requests going through remap window region 42 ( 1 ) will be tagged with the VNIC of the destination compute node 14 . Any writes by Host 1 into local memory region 56 ( 1 ) can be directly accessed by Host 2 through its mapped remap window region 42 ( 2 ) without any intervention by operating systems or CPUs.
 - FIG. 4 is a simplified block diagram illustrating example details according to an embodiment of communication system 10 .
 - application A in Host 1 and application B in Host 2 exchange data according to mechanisms as described herein.
 - Application B takes the following actions:
 - Application B sends a memory mapped address (e.g., an IOMMU mapped address) of memory space 56 ( 2 ) to driver 28 in Host 2 , requesting access to the PCIE endpoint corresponding to Host 1 .
 - Driver 28 triggers firmware 46 in IO adapter 18 to configure a remap window base 58 in BRT 44 ( 3 ) with the memory mapped address and associate it with the destination VNIC of Host 1 as identified through a predetermined protocol.
 - Firmware 46 configures remap window base 58 with the given address and sets up application specific integrated circuit (ASIC) data structures to be ready for remap window region access.
 - Firmware 46 discovers the destination VNIC of Host 1 that wants to access the memory region as given by driver 28 .
 - Firmware 46 configures BRT 0 , corresponding to BAR 2 of the destination VNIC of Host 1 , with the remap window region address and offset that correspond to remap window base 58 .
 - Configured BRT 0 corresponds to remap window 42 ( 1 ) and points to memory region 56 ( 2 ) of Host 2 .
 - firmware 46 sends notification to driver 28 running in Host 1 that its BAR 2 is ready to access the Host 2 memory.
 - driver 28 running in Host 1 passes the notification to application A.
 - Application A already has memory mapped the BAR 2 region with appropriate IOMMU configuration (e.g., addresses).
 - application A's read/write access to BAR 2 of Host 1 maps to remote memory region 56 ( 2 ) present in Host 2 's memory domain.
 - application B's read/write access to memory region 56 ( 2 ) maps to BAR 2 of Host 1 .
 - FIG. 5 is a simplified sequence diagram illustrating example operations 60 according to an embodiment of communication system 10 associated with a driver load scenario and discovery of various resources presented to driver 28 including remap window 42 mapped in BAR 40 .
 - driver 28 corresponding to VNIC 0 of one of compute nodes 14 , say Host 1 , is loaded.
 - driver 28 reads BAR 40 and identifies BAR 2 as the remap BAR.
 - driver 28 maps the BARs and gets physical addresses from OS 24 .
 - OS 24 provides memory mapped address for the physical address of the BAR.
 - application 26 maps the address in user space (e.g., using MMAP).
 - firmware 46 prepares remap window 42 for usage by driver 28 .
 - FIG. 6 is a simplified sequence diagram illustrating example operations 70 according to an embodiment of communication system 10 between applications 26 running on two different compute nodes 14 and firmware 46 to enable the remap window configuration for the purpose of accessing remote memory.
 - Host 1 includes application 26 , which produces data, and is referred to as producer 72 ;
 - Host 2 includes another application 26 , which consumes the data, and is referred to as consumer 74 .
 - IO adapter 18 includes a resource map providing resource information, for example, its memory offset and length, associated with the corresponding VNIC.
 - the resource map associates memory address offsets (also referred to herein as “memory offsets,” or simply “offsets”) with the BAR of one or more I/O resources (the I/O resource corresponding to a PCIE device, such as VNIC).
 - the resource map may include information identifying each PCIE device on PCIE bus 22 and its corresponding BARs.
 - the resource map may be comprised in remap window helper register 44 .
 - BAR 0 of each PCIE endpoint may point to the resource map stored in IO adapter 18 .
 - the PCIE endpoints may be identified using host indices, or other suitable identifiers.
 - device driver 28 in producer 72 identifies other compute nodes 14 present in chassis 12 .
 - Consumer 74 notifies firmware 46 of its intent to read the contents of the memory of Host 1 through a resource update.
 - Firmware 46 decodes the request, identifies the source and destination VNICs and sends notification to the VNIC whose associated memory is to be read.
 - producer 72 creates data at memory offset with dirty bit reset.
 - a dirty bit is well known in the art to be associated with a block of memory and indicates whether or not the corresponding block of memory has been modified; in the convention used herein, the bit is reset when the data has been modified (e.g., newly written) since the last time it was read, and set once the data has been read.
 - device driver 28 in Host 1 notifies firmware 46 about the data availability.
 - the notification's meta-data describes the region to be read from its local memory space 56 ( 1 ), including: address, length, destination host index of the consumer, and a key.
 - firmware 46 configures remap window base 58 with the memory offset.
 - firmware 46 configures remap window region 42 ( 2 ) of BAR 2 associated with Host 2 to point to memory space 56 ( 1 ) of Host 1 .
 - firmware 46 configures remap window 42 ( 2 ) only once (e.g., for all transactions between producer 72 and same consumer 74 ).
 - firmware 46 notifies consumer 74 that the data is ready.
 - consumer 74 reads BAR 2 at the memory offset specified by firmware 46 . Reading BAR 2 at the memory offset is identical to accessing memory region 56 ( 1 ) of producer 72 .
 - consumer 74 marks the dirty bit in the meta data, indicating that the data has been read.
 - consumer 74 may notify firmware 46 that consumer 74 has completed reading the data.
 - firmware 46 may notify producer 72 that data has been consumed by consumer 74 .
 - producer 72 writes the next set of data to the memory offset, and the operations resume from 76 and continue thereafter.
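 - The producer/consumer handshake above can be sketched as follows. The single flag byte used as the dirty bit and the buffer layout are assumptions for illustration (the disclosure leaves the exact meta-data layout to the implementation); following the convention in the sequence above, the bit is reset when fresh data is produced and set once the consumer has read it.

```python
DIRTY = 1  # set by the consumer after reading; reset by the producer

class RemapBuffer:
    """Toy stand-in for a remap window: one flag byte, then payload."""
    def __init__(self, size=64):
        self.buf = bytearray(1 + size)

def produce(win, data):
    """Producer 72 writes data at the memory offset, dirty bit reset."""
    win.buf[0] = 0
    win.buf[1:1 + len(data)] = data

def consume(win, length):
    """Consumer 74 reads through its BAR 2 view, then marks the bit."""
    data = bytes(win.buf[1:1 + length])
    win.buf[0] = DIRTY
    return data

win = RemapBuffer()
produce(win, b"payload")
assert win.buf[0] == 0   # fresh data: dirty bit reset
msg = consume(win, 7)
```

 - In the disclosed system the notifications between the two sides are carried by firmware 46 rather than by polling the flag byte; the sketch shows only the memory-side convention.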
 - FIG. 7 is a simplified flow diagram illustrating example operations 100 according to an embodiment of communication system 10 .
 - producer 72 in Host 1 is providing data to consumer 74 in Host 2 , both Host 1 and Host 2 being connected over PCIE bus 22 with IO adapter 18 .
 - IO adapter 18 includes a resource map providing resource information, such as resource location (e.g., PCIE host index), length and offsets where firmware data is present.
 - device driver 28 in Host 1 identifies the resource layout from the resource map, including other compute nodes 14 in chassis 12 . In an example embodiment, identifying the resource layout comprises parsing the resource map.
 - consumer application 74 notifies firmware 46 of intent to read contents of memory 56 ( 1 ) of Host 1 through a resource update message (or other suitable mechanism).
 - firmware 46 decodes the request, identifies the source and destination VNICs and sends notification to Host 1 VNIC from which the memory is to be read.
 - producer application 72 provides to firmware 46 the address of memory region 56 ( 1 ) to be read from, through an appropriate memory request.
 - firmware 46 sends the remap window region information (e.g., remap window base 58 , remap window region 42 ( 2 )) to consumer application 74 (e.g., through the associated VNIC's BAR 2 at a known offset) and notifies the consumer VNIC.
 - firmware 46 maps the address belonging to the producer application VNIC, namely, address of memory region 56 ( 1 ) into the consumer VNIC's BAR 2 remap window region 42 ( 2 ) at known offset.
 - consumer VNIC passes the remap window information to consumer application 74 .
 - consumer application 74 reads the data from its memory mapped BAR 2 which corresponds to memory 56 ( 1 ) of producer application 72 .
 - FIG. 8 is a simplified flow diagram illustrating example operations 130 according to an embodiment of communication system 10 .
 - an administrator configures VNIC on Host 1 's service profile.
 - UCSM sends VNIC details configured by the administrator to IO adapter 18 's firmware 46 using a suitable control management protocol.
 - firmware 46 populates VNIC information and makes it ready and discoverable from Host 1 .
 - firmware 46 sets the BAR size of BAR 2 in the VNIC to 16 MB to accommodate four remap window regions.
 - Host 1 is powered up.
 - BIOS enumeration software discovers the PCIE endpoints and, through the PCIE enumeration protocol, identifies BAR size requirements and associates a physical address with each BAR.
 - device driver 28 of Host 1 , upon loading, requests OS 24 to provide the memory mapped equivalent of the physical address for each BAR.
 - device driver 28 identifies that BAR 2 is the remap window region according to a preconfigured protocol with firmware 46 .
 - application 26 , running in Host 1 's operating system 24 and making use of device driver 28 , understands the remap window capability exposed by device driver 28 .
 - application 26 exchanges handles with peer host (e.g., Host 2 ) through firmware 46 and permits memory to be accessed by the peer host.
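 - The 16 MB BAR 2 sizing in the flow above implies, assuming the four remap window regions partition the BAR equally (an assumption; the disclosure does not specify the split), a 4 MB region per remap window:

```python
BAR2_SIZE = 16 * 1024 * 1024            # 16 MB, per the example above
NUM_WINDOWS = 4                         # four remap window regions
WINDOW_SIZE = BAR2_SIZE // NUM_WINDOWS  # bytes per window (equal split)

def window_offset(index):
    """Byte offset of remap window `index` within BAR 2."""
    if not 0 <= index < NUM_WINDOWS:
        raise ValueError("no such remap window")
    return index * WINDOW_SIZE
```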
 - FIG. 9 is a simplified flow diagram illustrating example operations 160 according to an embodiment of communication system 10 .
 - producer application 72 (e.g., application 26 running in Host 1 ) provides a memory mapped address to device driver 28 .
 - device driver 28 triggers firmware 46 to configure remap window base 58 with destination VNIC (e.g., corresponding to Host 2 ) and associated address.
 - firmware 46 configures remap window base 58 with associated address.
 - firmware 46 sets up ASIC data structures to be ready for access to remap window region 42 ( 2 ).
 - firmware 46 discovers the destination VNIC associated with consumer application 74 .
 - firmware 46 configures BRT 0 , mapped in BAR 2 of the destination VNIC, with the remap window region address and offset in BRT 44 ( 3 ). In other words, firmware 46 maps the appropriate entry in BRT 44 ( 3 ) corresponding to remap window region 42 ( 2 ) to point to memory region 56 ( 1 ).
 - firmware 46 sends notification event to device driver 28 in Host 2 that its BAR 2 is ready to access producer's memory 56 ( 1 ).
 - device driver 28 in Host 2 passes event to consumer application 74 running therein with memory mapped to corresponding BAR 2 region in user space with appropriate IOMMU configuration.
 - consumer application 74 's read/write access to its BAR 2 maps to remote memory region 56 ( 1 ) in producer application 72 's memory domain.
 - references to various features (e.g., elements, structures, modules, components, steps, operations, characteristics, etc.) are intended to mean that any such features are included in one or more embodiments of the present disclosure, but may or may not necessarily be combined in the same embodiments.
 - references to “optimize,” “optimally efficient,” and related terms refer to improvements in speed and/or efficiency of a specified outcome and do not purport to indicate that a process for achieving the specified outcome has achieved, or is capable of achieving, an “optimal” or perfectly speedy/perfectly efficient state.
 - Embodiments described herein may be used as or to support firmware instructions executed upon some form of processing core (such as the processor of IO adapter 18 ) or otherwise implemented or realized upon or within a machine-readable medium.
 - a machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer).
 - a machine-readable medium can include, for example, a read only memory (ROM); a random access memory (RAM); magnetic disk storage media; optical storage media; a flash memory device; etc.
 - a machine-readable medium can include propagated signals, such as electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.).
 - At least some portions of the activities outlined herein may be implemented in software in, for example, IO adapter 18 .
 - one or more of these features may be implemented in hardware, provided external to these elements, or consolidated in any appropriate manner to achieve the intended functionality.
 - the various network elements may include software (or reciprocating software) that can coordinate in order to achieve the operations as outlined herein.
 - these elements may include any suitable algorithms, hardware, software, components, modules, interfaces, or objects that facilitate the operations thereof.
 - IO adapter 18 described and shown herein may also include suitable interfaces for receiving, transmitting, and/or otherwise communicating data or information in a network environment.
 - some of the processors and memory elements associated with the various nodes may be removed, or otherwise consolidated such that a single processor and a single memory element are responsible for certain activities.
 - the arrangements depicted in the FIGURES may be more logical in their representations, whereas a physical architecture may include various permutations, combinations, and/or hybrids of these elements. It is imperative to note that countless possible design configurations can be used to achieve the operational objectives outlined here. Accordingly, the associated infrastructure has a myriad of substitute arrangements, design choices, device possibilities, hardware configurations, software implementations, equipment options, etc.
 - one or more memory elements can store data used for the operations described herein.
 - a processor can execute any type of instructions associated with the data to achieve the operations detailed herein in this Specification.
 - processors could transform an element or an article (e.g., data) from one state or thing to another state or thing.
 - the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array (FPGA), an erasable programmable read only memory (EPROM), an electrically erasable programmable read only memory (EEPROM)), an ASIC that includes digital logic, software, code, electronic instructions, flash memory, optical disks, CD-ROMs, DVD ROMs, magnetic or optical cards, other types of machine-readable mediums suitable for storing electronic instructions, or any suitable combination thereof.
 - These devices may further keep information in any suitable type of non-transitory storage medium (e.g., random access memory (RAM), read only memory (ROM), field programmable gate array (FPGA), erasable programmable read only memory (EPROM), electrically erasable programmable ROM (EEPROM), etc.), software, hardware, or in any other suitable component, device, element, or object where appropriate and based on particular needs.
 - any of the memory items discussed herein should be construed as being encompassed within the broad term ‘memory element.’
 - any of the potential processing elements, modules, and machines described in this Specification should be construed as being encompassed within the broad term ‘processor.’
 - communication system 10 may be applicable to other exchanges or routing protocols.
 - communication system 10 has been illustrated with reference to particular elements and operations that facilitate the communication process, these elements, and operations may be replaced by any suitable architecture or process that achieves the intended functionality of communication system 10 .
 
Abstract
An example method for facilitating remote memory access with memory mapped addressing among multiple compute nodes is executed at an input/output (IO) adapter in communication with the compute nodes over a Peripheral Component Interconnect Express (PCIE) bus, the method including: receiving a memory request from a first compute node to permit access by a second compute node to a local memory region of the first compute node; generating a remap window region in a memory element of the IO adapter, the remap window region corresponding to a base address register (BAR) of the second compute node; and configuring the remap window region to point to the local memory region of the first compute node, wherein access by the second compute node to the BAR corresponding with the remap window region results in direct access of the local memory region of the first compute node by the second compute node.
  Description
-  This disclosure relates in general to the field of communications and, more particularly, to remote memory access with memory mapped addressing among multiple compute nodes.
 -  Compute nodes such as microservers and hypervisor-based virtual machines executing in a single chassis can provide scaled out workloads in hyper-scale data centers. Microservers are an emerging trend of servers for processing lightweight workloads with large numbers (e.g., tens or even hundreds) of relatively lightweight server nodes bundled together in a shared chassis infrastructure, for example, sharing power, cooling fans, and input/output components, eliminating space and power consumption demands of duplicate infrastructure components. The microserver topology facilitates density, lower power per node, reduced costs, and increased operational efficiency. Microservers are generally based on small form-factor, system-on-a-chip (SoC) boards, which pack processing capability, memory, and system input/output onto a single integrated circuit. Unlike the relatively newer microservers, hypervisor-based virtual machines have been in use for several years. Yet, sharing data across the compute nodes with more effective and efficient inter-process communication has always been a challenge.
 -  To provide a more complete understanding of the present disclosure and features and advantages thereof, reference is made to the following description, taken in conjunction with the accompanying figures, wherein like reference numerals represent like parts, in which:
 - FIG. 1 is a simplified block diagram illustrating a communication system for facilitating remote memory access with memory mapped addressing among multiple compute nodes;
 - FIG. 2 is a simplified block diagram illustrating other example details of embodiments of the communication system;
 - FIG. 3 is a simplified block diagram illustrating yet other example details of embodiments of the communication system;
 - FIG. 4 is a simplified block diagram illustrating yet other example details of embodiments of the communication system;
 - FIG. 5 is a simplified sequence diagram illustrating example operations that may be associated with an embodiment of the communication system;
 - FIG. 6 is a simplified sequence diagram illustrating other example operations that may be associated with an embodiment of the communication system;
 - FIG. 7 is a simplified flow diagram illustrating yet other example operations that may be associated with an embodiment of the communication system;
 - FIG. 8 is a simplified flow diagram illustrating yet other example operations that may be associated with an embodiment of the communication system; and
 - FIG. 9 is a simplified flow diagram illustrating yet other example operations that may be associated with an embodiment of the communication system.
 - An example method for facilitating remote memory access with memory mapped addressing among multiple compute nodes is executed at an input/output (IO) adapter in communication with the compute nodes over a Peripheral Component Interconnect Express (PCIE) bus, the method including: receiving a memory request from a first compute node to permit access by a second compute node to a local memory region of the first compute node; generating a remap window region in a memory element of the IO adapter, the remap window region corresponding to a base address register (BAR) of the second compute node in the IO adapter; and configuring the remap window region to point to the local memory region of the first compute node, wherein access by the second compute node to the BAR corresponding with the remap window region results in direct access of the local memory region of the first compute node by the second compute node. As used herein, the term “compute node” refers to a hardware processing apparatus, in which user applications (e.g., software programs) are executed.
 -  Turning to
FIG. 1 ,FIG. 1 is a simplified block diagram illustrating acommunication system 10 for facilitating remote memory access with memory mapped addressing among multiple compute nodes in accordance with one example embodiment.FIG. 1 illustrates acommunication system 10 comprising achassis 12, which includes a plurality ofcompute nodes 14 that communicate withnetwork 16 through a common input/output (I/O)adapter 18. Anupstream switch 20 facilitates north-south traffic betweencompute nodes 14 andnetwork 16. SharedIO adapter 18 presents network and storage devices on a Peripheral Component Interconnect Express (PCIE)bus 22 tocompute nodes 14. In various embodiments, each compute node appears as a PCIE device to other compute nodes inchassis 12. -  In a general sense,
compute nodes 14 include capabilities for processing, memory, network and storage resources. For example, as shown in greater detail in the figure, compute node Host1 runs (e.g., executes) anoperating system 24 andvarious applications 26. A device driver (also referred to herein as a driver) 28 operates or controls a particular type of device that is attached to computenode 14. For example, each PCIE device visible to (e.g., accessible by) Host1 may be associated with a separate device driver in some embodiments. In another example, all PCIE endpoints visible to Host1 may be associated with a single PCIE device driver. In a generalsense device driver 28 provides a software interface to hardware devices, enablingoperating system 24 andapplications 26 to access hardware functions (e.g., memory access) without needing to know precise details of the hardware being used. -  In many embodiments, substantially all PCIE endpoints appear as hardware device to the accessing compute node, irrespective of its actual form. For example, in some embodiments, compute
nodes 14 may comprise virtual machines; however, because one compute node is visible as a PCIE device to another compute node, they appear as hardware devices to each other and are associated with corresponding device drivers.Driver 28 communicates with the hardware device throughPCIE bus 22. When one ofapplications 26 invokes a routine indriver 28,driver 28 issues commands to the hardware device it is associated with. Thus,driver 28 facilitates communication (e.g., acts as a translator) between its associated hardware device andapplications 26.Driver 28 is hardware dependent and operating-system-specific. -  In various embodiments, each of
compute nodes 14, as shown using example Host1, includes various hardware components, such as one or more sockets 30 (e.g., socket refers to a hardware receptacle that enables a collection of central processing unit (CPU) cores with a direct pipe to memory); each socket holds oneprocessor 32; each processor comprises one ormore CPU cores 34; eachCPU core 34 executes instructions (e.g., computations, such as Floating-point Operations Per Second (FLOPS)); amemory element 36 may facilitate operations ofCPU cores 34. -  
Common IO adapter 18 facilitates communication to and from each ofcompute nodes 14. In various embodiments,IO adapter 18 services both network and storage access requests fromcompute nodes 14 inchassis 12, facilitating a cost efficient architecture. In various embodiments, amemory element 38 may be associated with (e.g., accessed by)IP adapter 18.Memory element 38 includes various base address registers (BARs) 40 and remapwindows 42 for various operations as described herein. A remapwindow helper register 44 andfirmware 46 are also included (among other components) inIO adapter 18. As used herein the term “firmware” comprises machine-readable and executable instructions and associated data that are stored in (e.g., embedded in, forming an integral part of, etc.) hardware, such as a read-only memory, or flash memory, or an ASIC, or a field programmable gate array (FPGA) and executed by one or more processors (not shown) inIO adapter 18 to control the operations ofIO adapter 18. In a general sense,firmware 46 comprises a combination of software and hardware used exclusively to control operations ofIO adapter 18. -  In a general sense, network traffic between
compute nodes 14 andnetwork 16 may be termed as “North-South Traffic”; network traffic amongcompute nodes 14 may be termed as “East-West Traffic”. Note thatcompute nodes 14 are unaware of the physical location of other compute nodes, for example, whether they exist insame chassis 12, or are located remotely, overnetwork 16. Thus, computenodes 14 are agnostic to the direction of network traffic they originate or terminate, such as whether the traffic is North-South, or East-West, and thereby use the same addressing mechanism (e.g., L2 Ethernet MAC address/IP address) for addressing nodes located insame chassis 12 or located in a remote node in same L2/L3 domain. -  According to various embodiments of
communication system 10, a memory access scheme using low latency and low overhead protocols implemented in IO adapter 18 allows any one (or more) of compute nodes 14, for example, Host1, to share and access remote memory of another compute node (e.g., across different servers; across a hypervisor; across different operating systems), for example, Host2. Host2 may include an operating system different from that of Host1 without departing from the scope of the embodiments. The protocols described herein do not require any particularized (e.g., custom) support from the operating systems or networking stacks of Host1 or Host2. The scheme is completely transparent to the operating systems of Host1 and Host2, allowing suitable throughput while communicating across different memory domains. -  For purposes of illustrating the techniques of
communication system 10, it is important to understand the communications that may be traversing the system shown in FIG. 1. The following foundational information may be viewed as a basis from which the present disclosure may be properly explained. Such information is offered earnestly for purposes of explanation only and, accordingly, should not be construed in any way to limit the broad scope of the present disclosure and its potential applications. -  In any server ecosystem, typical challenges to achieving better inter-process communication or sharing data across servers include reliable tunnels for the data sharing, low latency for the communication, low overhead while working with a remote server, etc. There are several solutions available in the market that predominantly use network tunnels to communicate between two distinct physical servers. Typical examples are Remote Direct Memory Access (RDMA), RDMA over Converged Ethernet (RoCE) and InfiniBand. Although proven and in use for several years, such network based communication can be limited by various parameters, such as latency, networking stack dependency (e.g., network stack awareness), OS interference (e.g., OS dependency, OS awareness, OS configuration), IO semantics and key exchanges for security, necessity for protocol awareness, complex channel semantics and tedious channel setup procedures. Moreover, not all operating systems support RDMA (and its variants).
 -  For example, RDMA communication is based on a set of three queues: a send queue and a receive queue, which together comprise a Queue Pair (QP), and a Completion Queue (CQ). Posts in the QP are used to initiate the sending or receiving of data. A sending application (e.g., driver) places instructions, called Work Queue Elements (WQE), on its work queues; the WQEs reference buffers in the sender's adapter used to send data. The WQE placed on the send queue contains a pointer to the message to be sent; the WQE on the receive queue contains a pointer to a buffer where an incoming message can be placed. The sender's adapter consumes WQEs from the send queue at the egress side and streams the data from the memory region to the remote receiver. When data arrives at the remote receiver, the receiver's adapter consumes the WQEs at the receive queue at the ingress side and places the received data in appropriate memory regions of the receiving application. Any memory sharing or access between a sending compute node and the receiving compute node thus requires tedious channel setup, RDMA protocols, etc.
 -  Moreover, in a chassis where several compute nodes share a common IO adapter, such remote memory access sharing protocols can have unnecessary overhead. For example, every packet from any compute node, say Host1, has to hit a port of
upstream switch 20 and then return on the same pipe back to IO adapter 18, which then redirects it to the destination compute node, say Host2. Such east-west data sharing can cause inefficient utilization of bandwidth in the common pipe, which is potentially used by various other compute nodes performing extensive north-south traffic with network 16. The east-west traffic pattern also increases application response latency, for example, due to the longer path to be traversed by network packets. -  
Communication system 10 is configured to address these issues (among others), offering a system and method for facilitating remote memory access with memory mapped addressing among multiple compute nodes 14 sharing IO adapter 18. In various embodiments, PCIE, which is typically supported by almost all operating systems, is used to share data from a memory region on one compute node, say Host1, with a different memory region of another compute node, say Host2. As used herein, the term “memory region” comprises a block (e.g., section, portion, slice, chunk, piece, space, etc.) of memory that can be accessed through a contiguous range of memory addresses (e.g., a memory address is a unique identifier (e.g., binary identifier) used by a processor for tracking a location of each memory byte stored in the memory). As used herein, the term “window” in the context of memory regions refers to a memory region comprising a contiguous range of memory addresses, either virtual or physical. -  In various embodiments,
IO adapter 18 is connected to compute nodes 14 by means of PCIE bus 22. IO adapter 18 includes an embedded operating system hosting multiple VNICs configured with memory resources of memory element 38. Each VNIC accesses a separate, exclusive region of memory element 38. Each PCIE endpoint, namely each VNIC, is typically associated with a host software driver, namely device driver 28. In an example embodiment, each VNIC that requires a separate driver is considered a separate PCIE device. -  For ease of explanation of various embodiments, a brief overview of the PCIE protocol is provided herein. A PCIe data transfer subsystem in a computing system (such as that of an IO adapter) includes a PCIe root complex comprising a computer hardware chipset that handles communications between the PCIE endpoints. The root complex enables PCIe endpoints to be discovered, enumerated and worked upon by the host operating system. The base PCIe switching structure of a single root complex has a tree topology, which addresses PCIe endpoints through a bus numbering scheme. Configuration software on the root complex detects every bus, device and function (e.g., storage adapter, networking adapter, graphics adapter, hard drive interface, device controller, Ethernet controller, etc.) within a given PCIe topology.
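The bus/device/function walk that enumeration software performs can be sketched as a brute-force scan of the configuration space. The sketch below is an illustrative simulation only: the mock topology and the `cfg_read_vendor` helper are invented stand-ins, not an actual PCIe API (real configuration reads go through hardware-specific mechanisms).

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for a config-space read: returns the vendor ID at
 * (bus, device, function), or 0xFFFF when no function is present there. */
static uint16_t cfg_read_vendor(int bus, int dev, int fn) {
    /* Mock topology: two functions on bus 0, one endpoint on bus 1. */
    if (bus == 0 && dev == 0 && fn == 0) return 0x1137; /* e.g., a bridge   */
    if (bus == 0 && dev == 1 && fn == 0) return 0x1137; /* e.g., a bridge   */
    if (bus == 1 && dev == 0 && fn == 0) return 0x1137; /* e.g., a VNIC     */
    return 0xFFFF;                                      /* nothing present  */
}

/* Walk every bus/device/function and count discovered functions, the way
 * configuration software builds its view of the PCIe tree. */
static int enumerate(void) {
    int found = 0;
    for (int bus = 0; bus < 256; bus++)
        for (int dev = 0; dev < 32; dev++)
            for (int fn = 0; fn < 8; fn++)
                if (cfg_read_vendor(bus, dev, fn) != 0xFFFF)
                    found++;
    return found;
}
```

The 256/32/8 loop bounds mirror the standard bus-numbering limits; a real implementation would also recurse behind bridges rather than flat-scanning.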
 -  The IO adapter's operating system assigns address space in the IO
adapter memory element 38 to each PCIe endpoint (e.g., VNIC) so that the PCIe endpoint can understand at what address space it is identified by the IO adapter and map the corresponding interrupts accordingly. After the configuration of the PCIe endpoint device is complete, the PCIe device driver 28 compatible with the host operating system 24 can work efficiently with the PCIe endpoint and facilitate appropriate device specific functionality. -  Each PCIE endpoint is enabled on
IO adapter 18 by being mapped into a memory-mapped address space in memory element 38 referred to as configuration space (e.g., a register block, typically consisting of 256 bytes). The configuration space contains a number of base address registers (BARs) 40, each comprising the starting address of a contiguous mapped address range in IO adapter memory element 38. For example, a 32-bit BAR0 is at offset 10h in the PCI Compatible Configuration Space, and post enumeration it would contain the start address of the BAR. Any other PCIE endpoint, to access (e.g., read data from or write data to) the PCIE endpoint associated with a specific BAR, would submit a request with the address of that BAR. Enumeration software allocates memory for the PCIE endpoints and writes to the corresponding BARs. Firmware 46 programs the PCIe endpoint's BARs to inform the PCIe endpoint of its address mapping. When the BAR for a particular PCIe endpoint is written, all memory transactions generated to that bus address range are claimed by the particular PCIe endpoint. -  Typically, when a PCIE endpoint, say a flash memory device, is discovered on one of the compute nodes, say, Host1,
OS 24 provides a physical address to BAR 40 and allocates the address space for device driver 28 to interact with the flash memory device. When device driver 28 is loaded, it requests the memory mapped address from OS 24 corresponding to the physical address so that it can work with the flash memory device using the address handle. Subsequent accesses to BAR 40 from device driver 28 are completely transparent to OS 24, as it has already carved out the address space sufficient to work with the flash memory device. Thus, typical PCIE data access is between application 26 and the PCIE endpoint, such as the flash memory device. PCIE data access is not typically used across two different compute nodes 14. In other words, one compute node typically cannot share its memory space with another compute node using native PCIE protocols. -  Nevertheless, according to various embodiments, appropriate configuration of
IO adapter 18 with multiple ports and a remap window feature can support memory sharing between compute nodes 14 using PCIE. The remap window feature includes a remap window base and a remap window region for memory mapping, for the purpose of remapping root complex IO and memory BARs to address ranges that are directly addressable by the processor. The remap window base is used to configure a start address of a memory region which can be mapped to any other memory region. The remap window region refers to the mapped region in memory element 38, differentiated according to a virtual network interface card (VNIC) identifier (ID) configured in remap window helper register 44 in IO adapter 18. -  The VNIC ID could map to any host-based VNIC or root complex VNIC. In some embodiments, four remap window regions, each capable of addressing 4 MB, may be allocated for the remap window feature, permitting easy access of up to 16 MB of memory either in host memory or Root Complex endpoint device memory. Moreover, multiple PCIE ports on
PCIE bus 22 distinguish different PCIE lanes associated with distinct compute nodes 14. Each memory region in Root Complex endpoint device memory is associated with a distinct PCIE lane that is completely independent of the others, such that no two memory regions share any PCIE activity with each other. -  In an example embodiment, an administrator configures the VNIC ID of
computing nodes 14 through respective service profiles. A unified computing system manager (e.g., a network management application such as Cisco® UCSM) programs the VNIC ID in IO adapter 18 through an appropriate control management protocol. Upon reception, firmware 46 populates the VNIC ID information in remap window helper register 44 and also makes the VNICs ready and discoverable from corresponding computing nodes 14. Firmware 46 adds the BAR size of a specific BAR, for example, BAR3, to the memory region allocated with each VNIC ID. In an example embodiment, 16 MB may be added to accommodate all four remap window regions. -  After one of
computing nodes 14, say Host1, is powered up, the enumeration software (BIOS) of IO adapter 18 discovers the new PCIE device. Through the PCI enumeration protocol, the BIOS identifies BAR size requirements, and associates a physical address with corresponding Host1 in BAR 40. In various embodiments, three separate BARs are provided for each VNIC, namely, BAR0, BAR1 and BAR2. Device driver 28, upon loading in Host1, requests OS 24 to provide the memory mapped equivalent of the physical address for each BAR. It identifies that BAR2 is the remap window region according to a preconfigured protocol between firmware 46 and driver 28. The memory mapped IO address comprises an address handle given by OS 24 to access the BAR2 region of memory element 38 in IO adapter 18. Applications 26 using device driver 28 understand the capability of the remap window exposed by device driver 28. A similar sequence of events occurs in another compute node, say Host2, when it powers up and its device driver is loaded in its OS. Through a pre-determined protocol, applications 26 in compute nodes 14, say Host1 and Host2, exchange their respective address handles through firmware 46 and request corresponding memory access. -  The memory access mechanisms described herein can present one of the lowest latency protocols to communicate with different servers, virtual machines, or other
such compute nodes 14. In some embodiments, the memory access mechanisms described herein can also be used as IPC between two compute nodes 14. Note that the operating system or network stacks do not need any separate, or distinct, configuration to enable such remote memory access. In some embodiments, IO adapter 18 servicing a hypervisor can use the described mechanisms to allow various applications executing in separate virtual machines (e.g., guest domains) to communicate with each other without having to go through specially installed IPC software (e.g., VMWARE ESX/ESXi) or other external memory management/sharing applications. -  In an example embodiment wherein compute
nodes 14 comprise microservers, the storage and network are shared across multiple servers (e.g., in some cases sixteen servers). The network ecosystem (e.g., of network 16) may support different classes and QoS policies for network traffic, which can result in different priority flows. However, storage traffic does not typically have any associated QoS. Such differentiated traffic types (e.g., some traffic having QoS, other traffic not having QoS) can create an imbalance of traffic performance across different servers, causing some servers to use (or be allocated) large bandwidths and other servers to use (or be allocated) poor bandwidth. With large amounts of input/output among (or from/to) servers, the condition can become worse, with performance drops becoming noticeable in some servers. In other words, performance of some servers drops when other unrelated servers are experiencing heavy network traffic. -  To have balanced throughput across the servers, a cooperative I/O scheduling across the servers may be implemented. For example, every server monitors and records a number of IO requests issued to
IO adapter 18. Such IO statistics are shared with other servers through the local memory mapped scheme in BAR3 as described herein. Such data sharing can facilitate decisions at the individual servers regarding whether to send a SCSI_BUSY message to its OS storage stack. Thus, even though the associated storage VNIC has bandwidth to push the IOs to IO adapter 18, it will not schedule the IO requests, voluntarily relinquishing its claim on storage for some time, until the network traffic bottleneck clears up. Such actions can lead to other VNICs balancing out the storage traffic pattern in chassis 12, maintaining the IO equilibrium therein. -  In various embodiments,
IO adapter 18 receives a memory request from one of compute nodes 14, say Host1, to permit access by another of compute nodes 14, say Host2, to a local memory region of Host1 (assume the local memory region is in memory element 36). The memory request comprises a host identifier of Host2 and the address of the local memory region of Host1 in some embodiments. The host identifier can be obtained from a resource map providing identifying information of compute nodes 14 in communication with IO adapter 18 over PCIE bus 22. -  
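A minimal sketch of what such a memory request might carry, with a firmware-side validity check against the resource map of known hosts. The struct layout, field names and `request_valid` helper are illustrative assumptions, not the adapter's actual message format:

```c
#include <assert.h>
#include <stdint.h>

#define REMOTE_MEMORY_ACCESS 1

/* The memory request Host1 sends to the adapter: which peer may access it,
 * and where the shared local memory region lives. Names are illustrative. */
typedef struct {
    uint16_t host_id;     /* identifier of the peer (e.g., Host2)        */
    uint16_t type;        /* request type, e.g., REMOTE_MEMORY_ACCESS    */
    uint64_t local_addr;  /* start address of the shared local region    */
    uint32_t length;      /* length of the shared region                 */
} mem_request_t;

/* Firmware-side validation: the named peer must appear in the resource
 * map of hosts known to be on the PCIE bus. */
static int request_valid(const mem_request_t *r,
                         const uint16_t *known_hosts, int nhosts) {
    if (r->type != REMOTE_MEMORY_ACCESS || r->length == 0)
        return 0;
    for (int i = 0; i < nhosts; i++)
        if (known_hosts[i] == r->host_id)
            return 1;
    return 0;   /* peer not present in the resource map */
}
```

A request naming an unknown host identifier is rejected before any remap window is touched.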
Firmware 46 in IO adapter 18 generates remap window region 42 in memory element 38 of IO adapter 18, remap window region 42 corresponding to BAR 40 (e.g., BAR2) of Host2 in IO adapter 18. Firmware 46 configures remap window region 42 to point to the local memory region of Host1, access by Host2 to BAR2 corresponding with remap window region 42 resulting in direct access of the local memory region of Host1 by Host2. Note that compute nodes 14 are associated with unique PCIE endpoints on PCIE bus 22; therefore, each has distinct BARs 40 associated therewith. Moreover, the direct access of the local memory region of Host1 by Host2 does not involve the operating systems of Host1 and/or Host2. BAR2 associated with remap window region 42 can comprise one of a plurality of BARs associated with Host2. -  In various embodiments,
device driver 28 of Host2 associates BAR2 with the remap window region, such that application 26 executing in Host2 can access the local memory region of Host1 through appropriate access requests to BAR2 using device driver 28. In various embodiments, configuring remap window region 42 comprises configuring a remap window base in a BAR Resource Table (BRT) to be a start address of the local memory region. -  Turning to the infrastructure of
communication system 10, the network topology of the network including chassis 12 can include any number of compute nodes, servers, hardware accelerators, virtual machines, switches (including distributed virtual switches), routers, and other nodes inter-connected to form a large and complex network. A node may be any electronic device, client, server, peer, service, application, or other object capable of sending, receiving, or forwarding information over communications channels in a network. Elements of FIG. 1 may be coupled to one another through one or more interfaces employing any suitable connection (wired or wireless), which provides a viable pathway for electronic communications. Additionally, any one or more of these elements may be combined or removed from the architecture based on particular configuration needs. -  
Communication system 10 may include a configuration capable of TCP/IP communications for the electronic transmission or reception of data packets in a network. Communication system 10 may also operate in conjunction with a User Datagram Protocol/Internet Protocol (UDP/IP) or any other suitable protocol, where appropriate and based on particular needs. In addition, gateways, routers, switches, and any other suitable nodes (physical or virtual) may be used to facilitate electronic communication between various nodes in the network. -  Note that the numerical and letter designations assigned to the elements of
FIG. 1 do not connote any type of hierarchy; the designations are arbitrary and have been used for purposes of teaching only. Such designations should not be construed in any way to limit their capabilities, functionalities, or applications in the potential environments that may benefit from the features of communication system 10. It should be understood that communication system 10 shown in FIG. 1 is simplified for ease of illustration. -  The example network environment may be configured over a physical infrastructure that may include one or more networks and, further, may be configured in any form including, but not limited to, local area networks (LANs), wireless local area networks (WLANs), VLANs, metropolitan area networks (MANs), VPNs, Intranet, Extranet, any other appropriate architecture or system, or any combination thereof that facilitates communications in a network.
 -  In some embodiments, a communication link may represent any electronic link supporting a LAN environment such as, for example, cable, PCIE, Ethernet, wireless technologies (e.g., IEEE 802.11x), ATM, fiber optics, etc., or any suitable combination thereof. In other embodiments, communication links may represent a remote connection through any appropriate medium (e.g., digital subscriber lines (DSL), telephone lines, T1 lines, T3 lines, wireless, satellite, fiber optics, cable, Ethernet, etc., or any combination thereof) and/or through any additional networks such as a wide area network (e.g., the Internet).
 -  In various embodiments,
chassis 12 may comprise a rack-mounted enclosure, blade enclosure, or a rack computer that accepts plug-in compute nodes 14. Note that chassis 12 can include, in a general sense, any suitable network element, which encompasses computers, network appliances, servers, routers, switches, gateways, bridges, load-balancers, firewalls, processors, modules, or any other suitable device, component, element, or object operable to exchange information in a network environment. Moreover, the network elements may include any suitably configured hardware provisioned with suitable software, components, modules, interfaces, or objects that facilitate the operations thereof. This may be inclusive of appropriate algorithms and communication protocols that allow for the effective exchange of data or information. -  
Compute nodes 14 may comprise printed circuit boards, for example, manufactured with empty sockets. Each printed circuit board may hold more than one processor (e.g., processors within the same processor family, with differing core counts, a wide range of frequencies and vastly differing memory cache structures, may be included in a single processor/socket combination). In some embodiments, compute nodes 14 may include hypervisors and virtual machines. IO adapter 18 may include an electronic circuit, expansion card or plug-in module that accepts input and generates output in a particular format. IO adapter 18 facilitates conversion of data format and electronic timing between input/output streams and internal computer circuits of chassis 12. In some embodiments, IO adapter 18 may comprise a hypervisor, and compute nodes 14 may comprise separate virtual machines. -  Turning to
FIG. 2, FIG. 2 is a simplified block diagram illustrating example details according to an embodiment of communication system 10. Assume, merely for example purposes and not as a limitation, that computing nodes 14, namely Host1 and Host2 respectively, are to share data across memory regions according to embodiments of communication system 10. Each compute node 14, namely Host1 and Host2, connects to IO adapter 18 through a respective virtual network interface card (VNIC) 48(1) and 48(2) at the compute node side and a respective PCIE port 50(1) and 50(2) at the IO adapter side. Firmware 46 exposes (e.g., creates, generates, provides, etc.) a separate VNIC 52(1) and 52(2) for corresponding PCIE ports 50(1) and 50(2). VNICs 52(1) and 52(2) at IO adapter 18 act as standalone Ethernet network controller adapters for network traffic and/or as storage controller adapters for storage traffic from and to respective compute nodes 14(1) and 14(2). For example, all traffic from VNIC 48(1) on Host1 is sent to corresponding PCIE port 50(1), through VNIC 52(1), to the external facing port, if needed. VNICs 48(1), 48(2), 52(1) and 52(2) are created based on user configurations, for example, as specified in a service profile and policy configured at the UCSM and deployed therefrom. Each VNIC 52(1) and 52(2) at IO adapter 18 is associated with BAR 40(1) and 40(2) respectively, each comprising three separate memory spaces denoted as BAR0, BAR1 and BAR2. BARs 40(1) and 40(2) predominantly expose hardware functionality, such as memory spaces that can be used by host software, such as applications 26, to work with VNICs 52(1) and 52(2). -  To explain further, consider Host1. Note that the descriptions herein for Host1 apply equally for Host2.
Operating system 24 in Host1 enumerates BARs 40(1) associated with Host1 and maps IO address space in host memory 36 to each BAR, such that any access to the corresponding mapped addresses in the mapped IO address space in Host1 will point to (e.g., correspond with, associate with) the appropriate one of BARs 40(1), namely BAR0, BAR1 and BAR2, in IO adapter 18. Note that whereas mapped addresses in Host1 may be virtual, they point to the physical memory region in IO adapter 18. Device driver 28 accesses BARs 40(1) using the memory mapped addresses returned by OS 24. -  In various embodiments, BAR2 is reserved for
remap window 42, which is identified by the device driver in respective compute nodes 14. For example, BAR2 of BAR 40(1) is reserved for remap window region 42(1) and BAR2 of BAR 40(2) is reserved for remap window region 42(2). In other words, device driver 28 in Host1 understands BAR2 of BAR 40(1) to be associated with remap window 42(1). When device driver 28 (or application 26) in Host1 wants to allow another compute node, such as Host2, to access its local memory 56(1), firmware 46 configures remap window 42(2) of Host2 to point to memory address space 56(1) of Host1. Similarly, when device driver 28 (or application 26) in Host2 wants to allow Host1 to access its local memory 56(2), firmware 46 configures remap window 42(1) to point to memory address space 56(2) of Host2.
 -  In other words, BAR2 of BAR 40(1) associated with Host1 refers to memory space 56(2) of Host2; likewise, BAR2 of BAR 40(2) associated with Host2 refers to memory space 56(1) of Host1. Anything written to BAR2 of BAR 40(1) by Host1 will be as if written directly into memory space 56(2) of Host2, without any intervening protocols or communication. Thus, applications in separate compute nodes can easily access the memory present in their peer's memory domain. -  Turning to
FIG. 3, FIG. 3 is a simplified block diagram illustrating example details according to an embodiment of communication system 10. Memory and I/O requests in IO adapter 18 are handled using remap window helper register 44, comprising three cascaded hardware tables: BAR Match Table (BMT) 44(1), BMT associated random access memory (RAM) 44(2), and BAR Resource Table (BRT) 44(3). These tables attempt to resolve memory and I/O transactions to an IO adapter memory address in memory element 38 without involving any processor of IO adapter 18 or operating system of compute nodes 14. BMT 44(1) provides a mechanism to determine whether a memory request (e.g., transaction) received from Host1 matches a valid PCIE device, such as Host2. BMT 44(1) uses a search key comprising (among other parameters) a host ID and a BAR address, including length and offset. A hit in BMT 44(1) outputs a Hit Index, which indexes into an associated RAM entry in table 44(2). BRT 44(3) provides a mechanism to flexibly map a single BAR to one or more possibly non-contiguous, adapter memory-mapped resources. In some embodiments, BRT 44(3) comprises a logical table implemented in the hardware RAM of IO adapter 18. -  
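The three-table cascade can be modeled in miniature: a BMT hit yields an index, the associated RAM entry selects a BRT slot, and the BRT slot holds the resolved adapter memory address. Everything below (entry layouts, table sizes, the `resolve` helper) is an illustrative assumption rather than the actual hardware table format:

```c
#include <assert.h>
#include <stdint.h>

#define NENTRIES 4

typedef struct { uint16_t host_id; uint8_t bar; int valid; } bmt_entry_t; /* match table    */
typedef struct { int brt_index; } ram_entry_t;                            /* hit index->BRT */
typedef struct { uint64_t mapped_addr; } brt_entry_t;                     /* resource addr  */

static bmt_entry_t bmt[NENTRIES];
static ram_entry_t ram[NENTRIES];
static brt_entry_t brt[NENTRIES];

/* Resolve a (host, BAR) transaction: a BMT hit produces an index into the
 * RAM, whose entry selects the BRT slot holding the final adapter memory
 * address -- no CPU or host operating system in the path. */
static uint64_t resolve(uint16_t host_id, uint8_t bar) {
    for (int i = 0; i < NENTRIES; i++)
        if (bmt[i].valid && bmt[i].host_id == host_id && bmt[i].bar == bar)
            return brt[ram[i].brt_index].mapped_addr;
    return 0;   /* no match: not a valid PCIE device/BAR */
}
```

In hardware the BMT match is a parallel (e.g., TCAM-style) lookup rather than the linear scan shown; the cascade of index lookups is the point of the sketch.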
Firmware 46 of IO adapter 18 presents a virtualized view of the PCIE endpoints' configuration space to compute nodes 14. When Host1 configures memory/IO bar window(s) in the VNIC's configuration space, Host1's BAR address windows are translated by remap window helper register 44 to map them to the local root complex endpoint's BAR windows in the IO adapter's local address space. For example, memory region 56(1) of Host1 is mapped to remap window region 42(2) of Host2 in memory element 38. After enumeration and virtualization of the configuration space of the PCIE endpoints, the device drivers running on compute nodes 14 may post work requests using their assigned memory bar windows. -  During operation, a memory request from Host1 to allow access to a specific memory region 56(1) by a remote PCIE endpoint, say Host2, may proceed as follows. Host1 sends a memory request to
firmware 46, including HostID=identifier of the remote peer, say Host2; type=remote_memory_access; address=address of local memory 56(1). The memory request is converted into a search key to BMT 44(1), triggering a lookup (e.g., a ternary content-addressable memory (TCAM) lookup) of BMT 44(1), which outputs a hit index to RAM 44(2) that activates a read of the appropriate entry in BRT 44(3). In some embodiments, the memory request from Host1 may reference a VNIC number, which may be converted into the corresponding host identifier by suitable modules. Firmware 46 programs the appropriate entry in BRT 44(3) to point to the provided address 56(1) of Host1. The specific memory region of the appropriate entry in BRT 44(3) is already pre-mapped to BAR2 of Host2 as remap window 42(2). In other words, the entry in BRT 44(3) references remap window region 42(2), which now directly points to memory space 56(1) of Host1 after configuration by firmware 46. Any memory requests going through remap window region 42(1) will be tagged with the VNIC of the destination compute node 14. Any writes by Host1 into local memory region 56(1) can be directly accessed by Host2 through its mapped remap window region 42(2) without any intervention by operating systems or CPUs. -  Turning to
FIG. 4, FIG. 4 is a simplified block diagram illustrating example details according to an embodiment of communication system 10. Assume that application A in Host1 and application B in Host2 exchange data according to mechanisms as described herein. Application B takes the following actions: Application B sends a memory mapped address (e.g., an IOMMU mapped address) of memory space 56(2) to driver 28 in Host2, requesting access to the PCIE endpoint corresponding to Host1. Driver 28 triggers firmware 46 in IO adapter 18 to configure a remap window base 58 in BRT 44(3) with the memory mapped address and associate it with the destination VNIC of Host1 as identified through a predetermined protocol. -  
Firmware 46 configures remap window base 58 with the given address and sets up application specific integrated circuit (ASIC) data structures to be ready for remap window region access. Firmware 46 discovers the destination VNIC of Host1 that wants to access the memory region as given by driver 28. Firmware 46 configures BRT0, corresponding to BAR2 of the destination VNIC of Host1, with the remap window region address and offset that correspond to remap window base 58. Configured BRT0 corresponds to remap window 42(1) and points to memory region 56(2) of Host2. -  After remap window region 42(1) is configured on behalf of the destination VNIC,
firmware 46 sends a notification to driver 28 running in Host1 that its BAR2 is ready to access the Host2 memory. Upon receiving the notification from firmware 46, driver 28 running in Host1 passes the notification to application A. Application A has already memory mapped the BAR2 region with appropriate IOMMU configuration (e.g., addresses). Subsequently, application A's read/write access to BAR2 of Host1 maps to remote memory region 56(2) present in Host2's memory domain. Likewise, application B's read/write access to memory region 56(2) maps to BAR2 of Host1. Thus both application A and application B, running in different compute nodes 14, can communicate with each other without any OS intervention. -  Turning to
FIG. 5, FIG. 5 is a simplified sequence diagram illustrating example operations 60 according to an embodiment of communication system 10, associated with a driver load scenario and discovery of various resources presented to driver 28, including remap window 42 mapped in BAR 40. At 62, driver 28 corresponding to VNIC0 of one of compute nodes 14, say Host1, is loaded. At 64, driver 28 reads BAR 40 and identifies BAR2 as the remap BAR. At 66, driver 28 maps the BARs and gets physical addresses from OS 24. At 68, OS 24 provides the memory mapped address for the physical address of the BAR. At 70, application 26 maps the address in user space (e.g., using MMAP). At 72, firmware 46 prepares remap window 42 for usage by driver 28. -  Turning to
FIG. 6, FIG. 6 is a simplified sequence diagram illustrating example operations 70 according to an embodiment of communication system 10, between applications 26 running on two different compute nodes 14 and firmware 46, to enable the remap window configuration for the purpose of accessing remote memory. Assume, merely for example purposes and not as a limitation, that Host1 includes application 26, which produces data, and is referred to as producer 72; Host2 includes another application 26, which consumes the data, and is referred to as consumer 74. -  
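The producer/consumer hand-off walked through in this sequence (data published with the dirty bit reset, the consumer reading through its remap window and then marking the bit) can be sketched over a shared region. The layout and helper names below are assumptions for illustration; in the actual scheme the "shared region" would be the producer's local memory as seen through the consumer's BAR2 remap window:

```c
#include <assert.h>
#include <string.h>

/* Shared region as seen by both sides: a data buffer plus a dirty bit
 * that flags "already read". Layout is illustrative. */
typedef struct {
    char data[64];
    int  dirty;          /* 0 = fresh data, 1 = consumer has read it */
} shared_region_t;

/* Producer publishes data with the dirty bit reset. */
static void produce(shared_region_t *r, const char *msg) {
    strncpy(r->data, msg, sizeof r->data - 1);
    r->dirty = 0;
}

/* Consumer reads only fresh data, then marks it consumed;
 * returns 1 on a successful read, 0 when nothing new is available. */
static int consume(shared_region_t *r, char *out, size_t outlen) {
    if (r->dirty)
        return 0;
    strncpy(out, r->data, outlen - 1);
    out[outlen - 1] = '\0';
    r->dirty = 1;
    return 1;
}
```

The dirty bit plays the role of the meta-data flag in the sequence: the producer waits for it before overwriting the offset with the next set of data.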
IO adapter 18 includes a resource map providing resource information, for example, its memory offset and length, associated with the corresponding VNIC. In some embodiments, the resource map associates memory address offsets (also referred to herein as “memory offsets,” or simply “offsets”) with the BAR of one or more I/O resources (the I/O resource corresponding to a PCIE device, such as a VNIC). For example, the resource map may include information identifying each PCIE device on PCIE bus 22 and its corresponding BARs. In many embodiments, the resource map may be comprised in remap window helper register 44. In various embodiments, BAR0 of each PCIE endpoint may point to the resource map stored in IO adapter 18. The PCIE endpoints may be identified using host indices, or other suitable identifiers. On parsing the resource map, device driver 28 in producer 72 identifies other compute nodes 14 present in chassis 12. Consumer 74 notifies firmware 46 of its intent to read the contents of the memory of Host1 through a resource update. Firmware 46 decodes the request, identifies the source and destination VNICs and sends a notification to the VNIC whose associated memory is to be read. -  At 76,
producer 72 creates data at a memory offset with the dirty bit reset. (Note that the dirty bit is well known in the art to be associated with a block of memory and indicates whether or not the corresponding block of memory has been modified; if the bit is set (or, depending on the convention, reset), the data has been modified since the last time it was read.) At 78, device driver 28 in Host1 notifies firmware 46 about the data availability. The notification's meta-data describes the region to be read from its local memory space 56(1), including: address, length, destination host index of the consumer, and a key. At 80, firmware 46 configures remap window base 58 with the memory offset. At 82, firmware 46 configures remap window region 42(2) of BAR2 associated with Host2 to point to memory space 56(1) of Host1. At 84, firmware 46 configures remap window 42(2) only once (e.g., for all transactions between producer 72 and the same consumer 74). -  At 86,
firmware 46 notifies consumer 74 that the data is ready. At 88, consumer 74 reads BAR2 at the memory offset specified by firmware 46. Reading BAR2 at the memory offset is identical to accessing memory region 56(1) of producer 72. At 90, consumer 74 marks the dirty bit in the meta-data, indicating that the data has been read. At 92, consumer 74 may notify firmware 46 that consumer 74 has completed reading the data. At 94, firmware 46 may notify producer 72 that the data has been consumed by consumer 74. At 96, producer 72 writes the next set of data to the memory offset, and the operations resume from 76 and continue thereafter. -  Turning to
FIG. 7, FIG. 7 is a simplified flow diagram illustrating example operations 100 according to an embodiment of communication system 10. Assume that producer 72 in Host1 is providing data to consumer 74 in Host2, both Host1 and Host2 being connected over PCIE bus 22 with IO adapter 18. At 102, IO adapter 18 includes a resource map providing resource information, such as resource location (e.g., PCIE host index), length, and offsets where firmware data is present. At 104, device driver 28 in Host1 identifies the resource layout from the resource map, including other compute nodes 14 in chassis 12. In an example embodiment, identifying the resource layout comprises parsing the resource map. -  At 108,
consumer application 74 notifies firmware 46 of its intent to read the contents of memory 56(1) of Host1 through a resource update message (or other suitable mechanism). At 110, firmware 46 decodes the request, identifies the source and destination VNICs, and sends a notification to the Host1 VNIC from which the memory is to be read. At 112, producer application 72 provides to firmware 46 the address of memory region 56(1) to be read from, through an appropriate memory request. At 114, firmware 46 sends the remap window region information (e.g., remap window base 58, remap window region 42(2)) to consumer application 74 (e.g., through the associated VNIC's BAR2 at a known offset) and notifies the consumer VNIC. -  At 116,
firmware 46 maps the address belonging to the producer application's VNIC, namely, the address of memory region 56(1), into the consumer VNIC's BAR2 remap window region 42(2) at a known offset. At 118, the consumer VNIC passes the remap window information to consumer application 74. At 120, consumer application 74 reads the data from its memory mapped BAR2, which corresponds to memory 56(1) of producer application 72. -  Turning to
FIG. 8, FIG. 8 is a simplified flow diagram illustrating example operations 130 according to an embodiment of communication system 10. At 132, an administrator configures a VNIC on Host1's service profile. At 134, UCSM sends the VNIC details configured by the administrator to IO adapter 18's firmware 46 using a suitable control management protocol. At 136, firmware 46 populates the VNIC information and makes it ready and discoverable from Host1. At 138, firmware 46 sets the BAR size of BAR2 in the VNIC to 16 MB to accommodate four remap window regions. -  At 140, Host1 is powered up. At 142, BIOS enumeration software discovers the PCIE endpoint and, through the PCIE enumeration protocol, identifies BAR size requirements and associates a physical address with each BAR. At 144,
device driver 28 of Host1, upon loading, requests OS 24 to provide the memory mapped equivalent of the physical address for each BAR. At 146, device driver 28 identifies that BAR2 is the remap window region according to a preconfigured protocol with firmware 46. At 148, application 26 running in Host1's operating system 24 and making use of device driver 28 learns the remap window capability exposed by device driver 28. At 150, application 26 exchanges handles with a peer host (e.g., Host2) through firmware 46 and permits its memory to be accessed by the peer host. -  Turning to
FIG. 9, FIG. 9 is a simplified flow diagram illustrating example operations 160 according to an embodiment of communication system 10. At 162, producer application 72 (e.g., application 26 running in Host1) sends a memory request comprising a destination host (e.g., Host2) and the IOMMU mapped address of memory region 56(1) to device driver 28. At 164, device driver 28 triggers firmware 46 to configure remap window base 58 with the destination VNIC (e.g., corresponding to Host2) and the associated address. At 166, firmware 46 configures remap window base 58 with the associated address. At 168, firmware 46 sets up ASIC data structures to be ready for access to remap window region 42(2). At 170, firmware 46 discovers the destination VNIC associated with consumer application 74. At 172, firmware 46 configures BRT0, mapped in BAR2 of the destination VNIC, with the remap window region address and offset in BRT 44(3). In other words, firmware 46 maps the appropriate entry in BRT 44(3) corresponding to remap window region 42(2) to point to memory region 56(1). At 174, firmware 46 sends a notification event to device driver 28 in Host2 that its BAR2 is ready to access the producer's memory 56(1). At 176, device driver 28 in Host2 passes the event to consumer application 74 running therein, with memory mapped to the corresponding BAR2 region in user space with appropriate IOMMU configuration. At 178, consumer application 74's read/write access to its BAR2 maps to remote memory region 56(1) of producer application 72's memory domain. -  Note that in this Specification, references to various features (e.g., elements, structures, modules, components, steps, operations, characteristics, etc.)
included in “one embodiment”, “example embodiment”, “an embodiment”, “another embodiment”, “some embodiments”, “various embodiments”, “other embodiments”, “alternative embodiment”, and the like are intended to mean that any such features are included in one or more embodiments of the present disclosure, but may or may not necessarily be combined in the same embodiments. Furthermore, the words “optimize,” “optimization,” and related terms are terms of art that refer to improvements in speed and/or efficiency of a specified outcome and do not purport to indicate that a process for achieving the specified outcome has achieved, or is capable of achieving, an “optimal” or perfectly speedy/perfectly efficient state.
 -  Embodiments described herein may be used as or to support firmware instructions executed upon some form of processing core (such as the processor of IO adapter 18) or otherwise implemented or realized upon or within a machine-readable medium. A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium can include a read only memory (ROM); a random access memory (RAM); magnetic disk storage media; optical storage media; a flash memory device; etc. In addition, a machine-readable medium can include propagated signals such as electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.).
 -  In example implementations, at least some portions of the activities outlined herein may be implemented in software in, for example,
IO adapter 18. In some embodiments, one or more of these features may be implemented in hardware, provided external to these elements, or consolidated in any appropriate manner to achieve the intended functionality. The various network elements may include software (or reciprocating software) that can coordinate in order to achieve the operations as outlined herein. In still other embodiments, these elements may include any suitable algorithms, hardware, software, components, modules, interfaces, or objects that facilitate the operations thereof. -  Furthermore,
IO adapter 18 described and shown herein (and/or its associated structures) may also include suitable interfaces for receiving, transmitting, and/or otherwise communicating data or information in a network environment. Additionally, some of the processors and memory elements associated with the various nodes may be removed, or otherwise consolidated such that a single processor and a single memory element are responsible for certain activities. In a general sense, the arrangements depicted in the FIGURES may be more logical in their representations, whereas a physical architecture may include various permutations, combinations, and/or hybrids of these elements. It is imperative to note that countless possible design configurations can be used to achieve the operational objectives outlined here. Accordingly, the associated infrastructure has a myriad of substitute arrangements, design choices, device possibilities, hardware configurations, software implementations, equipment options, etc. -  In some example embodiments, one or more memory elements (e.g.,
memory element 38, memory element 36) can store data used for the operations described herein. This includes the memory element being able to store instructions (e.g., software, logic, code, etc.) in non-transitory media, such that the instructions are executed to carry out the activities described in this Specification. A processor can execute any type of instructions associated with the data to achieve the operations detailed herein in this Specification. In one example, processors could transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array (FPGA), an erasable programmable read only memory (EPROM), an electrically erasable programmable read only memory (EEPROM)), an ASIC that includes digital logic, software, code, electronic instructions, flash memory, optical disks, CD-ROMs, DVD ROMs, magnetic or optical cards, other types of machine-readable mediums suitable for storing electronic instructions, or any suitable combination thereof. -  These devices may further keep information in any suitable type of non-transitory storage medium (e.g., random access memory (RAM), read only memory (ROM), field programmable gate array (FPGA), erasable programmable read only memory (EPROM), electrically erasable programmable ROM (EEPROM), etc.), software, hardware, or in any other suitable component, device, element, or object where appropriate and based on particular needs. The information being tracked, sent, received, or stored in
communication system 10 could be provided in any database, register, table, cache, queue, control list, or storage structure, based on particular needs and implementations, all of which could be referenced in any suitable timeframe. Any of the memory items discussed herein should be construed as being encompassed within the broad term ‘memory element.’ Similarly, any of the potential processing elements, modules, and machines described in this Specification should be construed as being encompassed within the broad term ‘processor.’ -  It is also important to note that the operations and steps described with reference to the preceding FIGURES illustrate only some of the possible scenarios that may be executed by, or within, the system. Some of these operations may be deleted or removed where appropriate, or these steps may be modified or changed considerably without departing from the scope of the discussed concepts. In addition, the timing of these operations may be altered considerably and still achieve the results taught in this disclosure. The preceding operational flows have been offered for purposes of example and discussion. Substantial flexibility is provided by the system in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings of the discussed concepts.
 -  Although the present disclosure has been described in detail with reference to particular arrangements and configurations, these example configurations and arrangements may be changed significantly without departing from the scope of the present disclosure. For example, although the present disclosure has been described with reference to particular communication exchanges involving certain network access and protocols,
communication system 10 may be applicable to other exchanges or routing protocols. Moreover, although communication system 10 has been illustrated with reference to particular elements and operations that facilitate the communication process, these elements and operations may be replaced by any suitable architecture or process that achieves the intended functionality of communication system 10. -  Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art, and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke paragraph six (6) of 35 U.S.C.
section 112 as it exists on the date of the filing hereof unless the words “means for” or “step for” are specifically used in the particular claims; and (b) does not intend, by any statement in the specification, to limit this disclosure in any way that is not otherwise reflected in the appended claims. 
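The remap mechanism of FIGS. 6 through 9 can be illustrated with a short, non-limiting sketch. The Python below models a BAR Resource Table (BRT) redirecting an access at an offset into the consumer's BAR2 remap window to an address in the producer's memory region (cf. remap window region 42(2) and memory region 56(1)); the class layout, field names, sizes, and addresses are illustrative assumptions, not part of the disclosure.

```python
class BRT:
    """Toy model of a BAR Resource Table: each entry backs one remap
    window region of the consumer's BAR2 and points it at a region of
    the producer's memory."""

    def __init__(self, num_regions, region_size):
        self.region_size = region_size
        self.entries = [None] * num_regions  # None == window not configured

    def configure(self, region, producer_base, length):
        # Firmware step: point a remap window region at producer memory.
        assert length <= self.region_size
        self.entries[region] = (producer_base, length)

    def resolve(self, bar2_offset):
        # Hardware step: translate a consumer BAR2 access into the
        # producer's memory space.
        region, off = divmod(bar2_offset, self.region_size)
        entry = self.entries[region]
        assert entry is not None and off < entry[1], "window not configured"
        return entry[0] + off

# Four 4 MB regions in a 16 MB BAR2, matching the sizing of FIG. 8.
brt = BRT(num_regions=4, region_size=4 * 1024 * 1024)
brt.configure(region=2, producer_base=0x7F00_0000, length=0x40_0000)

# A read at this BAR2 offset lands in the producer's memory region.
addr = brt.resolve(2 * 4 * 1024 * 1024 + 0x80)
print(hex(addr))  # 0x7f000080
```

Under these assumptions, the consumer never computes producer addresses itself: it simply reads its own BAR2, and the configured table entry supplies the redirection, which is the essence of the direct access recited in the claims below.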
Claims (20)
 1. A method executed at an input/output (IO) adapter in communication with a plurality of compute nodes over a Peripheral Component Interconnect Express (PCIE) bus, the method comprising:
    receiving a memory request from a first compute node to permit access by a second compute node to a local memory region of the first compute node;
 generating a remap window region in a memory element of the IO adapter, the remap window region corresponding to a base address register (BAR) of the second compute node in the IO adapter; and
 configuring the remap window region to point to the local memory region of the first compute node, wherein access by the second compute node to the BAR corresponding with the remap window region results in direct access of the local memory region of the first compute node by the second compute node.
  2. The method of claim 1 , wherein the compute nodes are associated with unique PCIE endpoints on the PCIE bus.
     3. The method of claim 1 , wherein the direct access of the local memory region of the first compute node by the second compute node does not involve operating systems of the first compute node and the second compute node.
     4. The method of claim 1 , wherein the BAR of the second compute node comprises one of a plurality of BARs associated with the second compute node.
 5. The method of claim 4, wherein a device driver of the second compute node associates the BAR with the remap window region, such that an application executing in the second compute node can access the local memory region of the first compute node through appropriate access requests to the BAR using the device driver.
     6. The method of claim 1 , wherein configuring the remap window region comprises configuring a remap window base in a BAR Resource Table (BRT) to be a start address of the local memory region.
     7. The method of claim 1 , wherein the memory request comprises a host identifier of the second compute node and address of the local memory region of the first compute node.
     8. The method of claim 7 , wherein the host identifier is obtained from a resource map providing identifying information of the compute nodes in communication with the IO adapter over the PCIE bus.
     9. The method of claim 1 , wherein the compute nodes comprise microservers executing in a single chassis.
     10. The method of claim 1 , wherein the compute nodes comprise virtual machines executing in a single hypervisor.
 11. Non-transitory tangible media that includes instructions for execution, which when executed by a processor of an IO adapter in communication with a plurality of compute nodes over a PCIE bus, is operable to perform operations comprising:
    receiving a memory request from a first compute node to permit access by a second compute node to a local memory region of the first compute node;
 generating a remap window region in a memory element of the IO adapter, the remap window region corresponding to a BAR of the second compute node in the IO adapter; and
 configuring the remap window region to point to the local memory region of the first compute node, wherein access by the second compute node to the BAR corresponding with the remap window region results in direct access of the local memory region of the first compute node by the second compute node.
  12. The media of claim 11 , wherein the BAR of the second compute node comprises one of a plurality of BARs associated with the second compute node.
     13. The media of claim 11 , wherein a device driver of the second compute node associates the BAR with the remap window region, such that an application executing in the second compute node can access the local memory region of the first compute node through appropriate access requests to the BAR using the device driver.
     14. The media of claim 11 , wherein configuring the remap window region comprises configuring a remap window base in a BRT to be a start address of the local memory region.
     15. The media of claim 11 , wherein the compute nodes comprise microservers executing in a single chassis.
     16. An apparatus, comprising:
    an IO adapter;
 a plurality of compute nodes in communication with the IO adapter over a PCIE bus;
 a physical memory for storing data; and
 a processor, wherein the processor executes instructions associated with the data, wherein the processor and the physical memory cooperate, such that the apparatus is configured for:
 receiving a memory request from a first compute node to permit access by a second compute node to a local memory region of the first compute node;
generating a remap window region in a memory element of the IO adapter, the remap window region corresponding to a BAR of the second compute node in the IO adapter; and
configuring the remap window region to point to the local memory region of the first compute node, wherein access by the second compute node to the BAR corresponding with the remap window region results in direct access of the local memory region of the first compute node by the second compute node.
 17. The apparatus of claim 16 , wherein the BAR of the second compute node comprises one of a plurality of BARs associated with the second compute node.
     18. The apparatus of claim 16 , wherein a device driver of the second compute node associates the BAR with the remap window region, such that an application executing in the second compute node can access the local memory region of the first compute node through appropriate access requests to the BAR using the device driver.
     19. The apparatus of claim 16 , wherein configuring the remap window region comprises configuring a remap window base in a BRT to be a start address of the local memory region.
     20. The apparatus of claim 16 , wherein the compute nodes comprise microservers executing in a single chassis.
    Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title | 
|---|---|---|---|
| US15/174,718 US20170351639A1 (en) | 2016-06-06 | 2016-06-06 | Remote memory access using memory mapped addressing among multiple compute nodes | 
| US16/542,952 US10872056B2 (en) | 2016-06-06 | 2019-08-16 | Remote memory access using memory mapped addressing among multiple compute nodes | 
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title | 
|---|---|---|---|
| US15/174,718 US20170351639A1 (en) | 2016-06-06 | 2016-06-06 | Remote memory access using memory mapped addressing among multiple compute nodes | 
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date | 
|---|---|---|---|
| US16/542,952 Division US10872056B2 (en) | 2016-06-06 | 2019-08-16 | Remote memory access using memory mapped addressing among multiple compute nodes | 
Publications (1)
| Publication Number | Publication Date | 
|---|---|
| US20170351639A1 true US20170351639A1 (en) | 2017-12-07 | 
Family
ID=60483270
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date | 
|---|---|---|---|
| US15/174,718 Abandoned US20170351639A1 (en) | 2016-06-06 | 2016-06-06 | Remote memory access using memory mapped addressing among multiple compute nodes | 
| US16/542,952 Active US10872056B2 (en) | 2016-06-06 | 2019-08-16 | Remote memory access using memory mapped addressing among multiple compute nodes | 
Family Applications After (1)
| Application Number | Title | Priority Date | Filing Date | 
|---|---|---|---|
| US16/542,952 Active US10872056B2 (en) | 2016-06-06 | 2019-08-16 | Remote memory access using memory mapped addressing among multiple compute nodes | 
Country Status (1)
| Country | Link | 
|---|---|
| US (2) | US20170351639A1 (en) | 
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title | 
|---|---|---|---|---|
| US20180322299A1 (en) * | 2017-05-04 | 2018-11-08 | Dell Products L.P. | Systems and methods for hardware-based security for inter-container communication | 
| US20180335956A1 (en) * | 2017-05-17 | 2018-11-22 | Dell Products L.P. | Systems and methods for reducing data copies associated with input/output communications in a virtualized storage environment | 
| US20190050341A1 (en) * | 2018-03-30 | 2019-02-14 | Intel Corporation | Memory-addressed maps for persistent storage device | 
| US11086813B1 (en) * | 2017-06-02 | 2021-08-10 | Sanmina Corporation | Modular non-volatile memory express storage appliance and method therefor | 
| CN113297114A (en) * | 2021-05-21 | 2021-08-24 | 清创网御(合肥)科技有限公司 | Method for supporting multiple processes and multiple threads based on PCIE (peripheral component interface express) independent IO (input/output) of encryption card | 
| US20220100582A1 (en) * | 2020-09-25 | 2022-03-31 | Intel Corporation | Disaggregated computing for distributed confidential computing environment | 
| US20220138102A1 (en) * | 2019-05-28 | 2022-05-05 | Micron Technology, Inc. | Intelligent Content Migration with Borrowed Memory | 
| US20220391348A1 (en) * | 2021-06-04 | 2022-12-08 | Microsoft Technology Licensing, Llc | Userspace networking with remote direct memory access | 
| US11972112B1 (en) * | 2023-01-27 | 2024-04-30 | Dell Products, L.P. | Host IO device direct read operations on peer memory over a PCIe non-transparent bridge | 
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title | 
|---|---|---|---|---|
| US20080288661A1 (en) * | 2007-05-16 | 2008-11-20 | Michael Galles | Method and system to map virtual i/o devices and resources to a standard i/o bus | 
| US20140164666A1 (en) * | 2012-12-07 | 2014-06-12 | Hon Hai Precision Industry Co., Ltd. | Server and method for sharing peripheral component interconnect express interface | 
| US20140189278A1 (en) * | 2012-12-27 | 2014-07-03 | Huawei Technologies Co., Ltd. | Method and apparatus for allocating memory space with write-combine attribute | 
| US20160294983A1 (en) * | 2015-03-30 | 2016-10-06 | Mellanox Technologies Ltd. | Memory sharing using rdma | 
| US20170277655A1 (en) * | 2016-03-25 | 2017-09-28 | Microsoft Technology Licensing, Llc | Memory sharing for working data using rdma | 
Family Cites Families (542)
| Publication number | Priority date | Publication date | Assignee | Title | 
|---|---|---|---|---|
| JPH023656Y2 (en) | 1984-10-23 | 1990-01-29 | ||
| GB2249460B (en) | 1990-09-19 | 1994-06-29 | Intel Corp | Network providing common access to dissimilar hardware interfaces | 
| US5544347A (en) | 1990-09-24 | 1996-08-06 | Emc Corporation | Data storage system controlled remote data mirroring with respectively maintained data indices | 
| US5430859A (en) | 1991-07-26 | 1995-07-04 | Sundisk Corporation | Solid state memory system including plural memory chips and a serialized bus | 
| US5263003A (en) | 1991-11-12 | 1993-11-16 | Allen-Bradley Company, Inc. | Flash memory circuit and method of operation | 
| JP2855019B2 (en) | 1992-02-10 | 1999-02-10 | 富士通株式会社 | External storage device data guarantee method and external storage device | 
| US5339445A (en) | 1992-11-16 | 1994-08-16 | Harris Corporation | Method of autonomously reducing power consumption in a computer sytem by compiling a history of power consumption | 
| US5812814A (en) | 1993-02-26 | 1998-09-22 | Kabushiki Kaisha Toshiba | Alternative flash EEPROM semiconductor memory system | 
| IL110891A (en) | 1993-09-14 | 1999-03-12 | Spyrus | System and method for data access control | 
| US5617421A (en) | 1994-06-17 | 1997-04-01 | Cisco Systems, Inc. | Extended domain computer network using standard links | 
| WO1996010787A1 (en) | 1994-10-04 | 1996-04-11 | Banctec, Inc. | An object-oriented computer environment and related method | 
| DE19540915A1 (en) | 1994-11-10 | 1996-05-15 | Raymond Engineering | Redundant arrangement of solid state memory modules | 
| JPH096548A (en) | 1995-06-22 | 1997-01-10 | Fujitsu Ltd | Disk array device | 
| US5690194A (en) | 1995-10-30 | 1997-11-25 | Illinois Tool Works Inc. | One-way pivoting gear damper | 
| US5812950A (en) | 1995-11-27 | 1998-09-22 | Telefonaktiebolaget Lm Ericsson (Publ) | Cellular telephone system having prioritized greetings for predefined services to a subscriber | 
| US5809285A (en) | 1995-12-21 | 1998-09-15 | Compaq Computer Corporation | Computer system having a virtual drive array controller | 
| US6035105A (en) | 1996-01-02 | 2000-03-07 | Cisco Technology, Inc. | Multiple VLAN architecture system | 
| US5742604A (en) | 1996-03-28 | 1998-04-21 | Cisco Systems, Inc. | Interswitch link mechanism for connecting high-performance network switches | 
| US5740171A (en) | 1996-03-28 | 1998-04-14 | Cisco Systems, Inc. | Address translation mechanism for a high-performance network switch | 
| US5764636A (en) | 1996-03-28 | 1998-06-09 | Cisco Technology, Inc. | Color blocking logic mechanism for a high-performance network switch | 
| US6101497A (en) | 1996-05-31 | 2000-08-08 | Emc Corporation | Method and apparatus for independent and simultaneous access to a common data set | 
| US6076105A (en) | 1996-08-02 | 2000-06-13 | Hewlett-Packard Corp. | Distributed resource and project management | 
| US6202135B1 (en) | 1996-12-23 | 2001-03-13 | Emc Corporation | System and method for reconstructing data associated with protected storage volume stored in multiple modules of back-up mass data storage facility | 
| US6185203B1 (en) | 1997-02-18 | 2001-02-06 | Vixel Corporation | Fibre channel switching fabric | 
| US6043777A (en) | 1997-06-10 | 2000-03-28 | Raytheon Aircraft Company | Method and apparatus for global positioning system based cooperative location system | 
| US6209059B1 (en) | 1997-09-25 | 2001-03-27 | Emc Corporation | Method and apparatus for the on-line reconfiguration of the logical volumes of a data storage system | 
| US6188694B1 (en) | 1997-12-23 | 2001-02-13 | Cisco Technology, Inc. | Shared spanning tree protocol | 
| US6208649B1 (en) | 1998-03-11 | 2001-03-27 | Cisco Technology, Inc. | Derived VLAN mapping technique | 
| US6260120B1 (en) | 1998-06-29 | 2001-07-10 | Emc Corporation | Storage mapping and partitioning among multiple host processors in the presence of login state changes and host controller replacement | 
| US6295575B1 (en) | 1998-06-29 | 2001-09-25 | Emc Corporation | Configuring vectors of logical storage units for data storage partitioning and sharing | 
| US6269381B1 (en) | 1998-06-30 | 2001-07-31 | Emc Corporation | Method and apparatus for backing up data before updating the data and for restoring from the backups | 
| US6542909B1 (en) | 1998-06-30 | 2003-04-01 | Emc Corporation | System for determining mapping of logical objects in a computer system | 
| US6269431B1 (en) | 1998-08-13 | 2001-07-31 | Emc Corporation | Virtual storage and block level direct access of secondary storage for recovery of backup data | 
| US6148414A (en) | 1998-09-24 | 2000-11-14 | Seek Systems, Inc. | Methods and systems for implementing shared disk array management functions | 
| US6266705B1 (en) | 1998-09-29 | 2001-07-24 | Cisco Systems, Inc. | Look up mechanism and associated hash table for a network switch | 
| US6226771B1 (en) | 1998-12-14 | 2001-05-01 | Cisco Technology, Inc. | Method and apparatus for generating error detection data for encapsulated frames | 
| JP2000242434A (en) | 1998-12-22 | 2000-09-08 | Hitachi Ltd | Storage system | 
| US6542961B1 (en) | 1998-12-22 | 2003-04-01 | Hitachi, Ltd. | Disk storage system including a switch | 
| US6400730B1 (en) | 1999-03-10 | 2002-06-04 | Nishan Systems, Inc. | Method and apparatus for transferring data between IP network devices and SCSI and fibre channel devices over an IP network | 
| US6564252B1 (en) | 1999-03-11 | 2003-05-13 | Microsoft Corporation | Scalable storage system with unique client assignment to storage server partitions | 
| GB2350028B (en) | 1999-05-11 | 2001-05-09 | Ioannis Papaefstathiou | On-line compression/decompression of ATM streams over the wan links in a lan interconnection network over wan | 
| US6219753B1 (en) | 1999-06-04 | 2001-04-17 | International Business Machines Corporation | Fiber channel topological structure and method including structure and method for raid devices and controllers | 
| US6408406B1 (en) | 1999-08-31 | 2002-06-18 | Western Digital Technologies, Inc. | Hard disk drive infant mortality test | 
| US8266367B2 (en) | 2003-12-02 | 2012-09-11 | Super Talent Electronics, Inc. | Multi-level striping and truncation channel-equalization for flash-memory system | 
| JP4162347B2 (en) | 2000-01-31 | 2008-10-08 | 富士通株式会社 | Network system | 
| US6877044B2 (en) | 2000-02-10 | 2005-04-05 | Vicom Systems, Inc. | Distributed storage management platform architecture | 
| US20020103889A1 (en) | 2000-02-11 | 2002-08-01 | Thomas Markson | Virtual storage layer approach for dynamically associating computer storage with processing hosts | 
| US20020120741A1 (en) | 2000-03-03 | 2002-08-29 | Webb Theodore S. | Systems and methods for using distributed interconnects in information management enviroments | 
| EP1282861A4 (en) | 2000-04-18 | 2008-03-05 | Storeage Networking Technologi | Storage virtualization in a storage area network | 
| JP3868708B2 (en) | 2000-04-19 | 2007-01-17 | 株式会社日立製作所 | Snapshot management method and computer system | 
| US6976090B2 (en) | 2000-04-20 | 2005-12-13 | Actona Technologies Ltd. | Differentiated content and application delivery via internet | 
| US6708227B1 (en) | 2000-04-24 | 2004-03-16 | Microsoft Corporation | Method and system for providing common coordination and administration of multiple snapshot providers | 
| US20020049980A1 (en) | 2000-05-31 | 2002-04-25 | Hoang Khoi Nhu | Controlling data-on-demand client access | 
| US6772231B2 (en) | 2000-06-02 | 2004-08-03 | Hewlett-Packard Development Company, L.P. | Structure and process for distributing SCSI LUN semantics across parallel distributed components | 
| US6779094B2 (en) | 2000-06-19 | 2004-08-17 | Storage Technology Corporation | Apparatus and method for instant copy of data by writing new data to an additional physical storage area | 
| US6675258B1 (en) | 2000-06-30 | 2004-01-06 | Lsi Logic Corporation | Methods and apparatus for seamless firmware update and propagation in a dual raid controller system | 
| US6715007B1 (en) | 2000-07-13 | 2004-03-30 | General Dynamics Decision Systems, Inc. | Method of regulating a flow of data in a communication system and apparatus therefor | 
| US6952734B1 (en) | 2000-08-21 | 2005-10-04 | Hewlett-Packard Development Company, L.P. | Method for recovery of paths between storage area network nodes with probationary period and desperation repair | 
| WO2002023345A1 (en) | 2000-09-13 | 2002-03-21 | Geodesic Systems, Incorporated | Conservative garbage collectors that can be used with general memory allocators | 
| US6847647B1 (en) | 2000-09-26 | 2005-01-25 | Hewlett-Packard Development Company, L.P. | Method and apparatus for distributing traffic over multiple switched fiber channel routes | 
| US6978300B1 (en) | 2000-10-19 | 2005-12-20 | International Business Machines Corporation | Method and apparatus to perform fabric management | 
| US7313614B2 (en) | 2000-11-02 | 2007-12-25 | Sun Microsystems, Inc. | Switching system | 
| US6553390B1 (en) | 2000-11-14 | 2003-04-22 | Advanced Micro Devices, Inc. | Method and apparatus for simultaneous online access of volume-managed data storage | 
| US6629198B2 (en) | 2000-12-08 | 2003-09-30 | Sun Microsystems, Inc. | Data storage system and method employing a write-ahead hash log | 
| US7165096B2 (en) | 2000-12-22 | 2007-01-16 | Data Plow, Inc. | Storage area network file system | 
| WO2002065275A1 (en) | 2001-01-11 | 2002-08-22 | Yottayotta, Inc. | Storage virtualization system and methods | 
| US6748502B2 (en) | 2001-01-12 | 2004-06-08 | Hitachi, Ltd. | Virtual volume storage | 
| US6880062B1 (en) | 2001-02-13 | 2005-04-12 | Candera, Inc. | Data mover mechanism to achieve SAN RAID at wire speed | 
| US7222255B1 (en) | 2001-02-28 | 2007-05-22 | 3Com Corporation | System and method for network performance testing | 
| US6625675B2 (en) | 2001-03-23 | 2003-09-23 | International Business Machines Corporation | Processor for determining physical lane skew order | 
| US6820099B1 (en) | 2001-04-13 | 2004-11-16 | Lsi Logic Corporation | Instantaneous data updating using snapshot volumes | 
| US20020156971A1 (en) | 2001-04-19 | 2002-10-24 | International Business Machines Corporation | Method, apparatus, and program for providing hybrid disk mirroring and striping | 
| US7305658B1 (en) | 2001-05-07 | 2007-12-04 | Microsoft Corporation | Method and system for application partitions | 
| US6738933B2 (en) | 2001-05-09 | 2004-05-18 | Mercury Interactive Corporation | Root cause analysis of server system performance degradations | 
| US6876656B2 (en) | 2001-06-15 | 2005-04-05 | Broadcom Corporation | Switch assisted frame aliasing for storage virtualization | 
| US7403987B1 (en) | 2001-06-29 | 2008-07-22 | Symantec Operating Corporation | Transactional SAN management | 
| US7143300B2 (en) | 2001-07-25 | 2006-11-28 | Hewlett-Packard Development Company, L.P. | Automated power management system for a network of computers | 
| US20030026267A1 (en) | 2001-07-31 | 2003-02-06 | Oberman Stuart F. | Virtual channels in a network switch | 
| US7325050B2 (en) | 2001-09-19 | 2008-01-29 | Dell Products L.P. | System and method for strategic power reduction in a computer system | 
| US7085827B2 (en) | 2001-09-20 | 2006-08-01 | Hitachi, Ltd. | Integrated service management system for remote customer support | 
| US7340555B2 (en) | 2001-09-28 | 2008-03-04 | Dot Hill Systems Corporation | RAID system for performing efficient mirrored posted-write operations | 
| US6976134B1 (en) | 2001-09-28 | 2005-12-13 | Emc Corporation | Pooling and provisioning storage resources in a storage network | 
| US20030154271A1 (en) | 2001-10-05 | 2003-08-14 | Baldwin Duane Mark | Storage area network methods and apparatus with centralized management | 
| US6920494B2 (en) | 2001-10-05 | 2005-07-19 | International Business Machines Corporation | Storage area network methods and apparatus with virtual SAN recognition | 
| US7200144B2 (en) | 2001-10-18 | 2007-04-03 | Qlogic, Corp. | Router and methods using network addresses for virtualization | 
| US7043650B2 (en) | 2001-10-31 | 2006-05-09 | Hewlett-Packard Development Company, L.P. | System and method for intelligent control of power consumption of distributed services during periods when power consumption must be reduced | 
| JP2003162439A (en) | 2001-11-22 | 2003-06-06 | Hitachi Ltd | Storage system and control method thereof | 
| US6986015B2 (en) | 2001-12-10 | 2006-01-10 | Incipient, Inc. | Fast path caching | 
| US6959373B2 (en) | 2001-12-10 | 2005-10-25 | Incipient, Inc. | Dynamic and variable length extents | 
| US7171668B2 (en) | 2001-12-17 | 2007-01-30 | International Business Machines Corporation | Automatic data interpretation and implementation using performance capacity management framework over many servers | 
| US7499410B2 (en) | 2001-12-26 | 2009-03-03 | Cisco Technology, Inc. | Fibre channel switch that enables end devices in different fabrics to communicate with one another while retaining their unique fibre channel domain_IDs |
| US7433948B2 (en) | 2002-01-23 | 2008-10-07 | Cisco Technology, Inc. | Methods and apparatus for implementing virtualization of storage within a storage area network | 
| US20070094465A1 (en) | 2001-12-26 | 2007-04-26 | Cisco Technology, Inc., A Corporation Of California | Mirroring mechanisms for storage area networks and network based virtualization | 
| US9009427B2 (en) | 2001-12-26 | 2015-04-14 | Cisco Technology, Inc. | Mirroring mechanisms for storage area networks and network based virtualization | 
| US7548975B2 (en) | 2002-01-09 | 2009-06-16 | Cisco Technology, Inc. | Methods and apparatus for implementing virtualization of storage within a storage area network through a virtual enclosure | 
| US7599360B2 (en) | 2001-12-26 | 2009-10-06 | Cisco Technology, Inc. | Methods and apparatus for encapsulating a frame for transmission in a storage area network | 
| US6895429B2 (en) | 2001-12-28 | 2005-05-17 | Network Appliance, Inc. | Technique for enabling multiple virtual filers on a single filer to participate in multiple address spaces with overlapping network addresses | 
| JP3936585B2 (en) | 2002-01-09 | 2007-06-27 | 株式会社日立製作所 | Storage device operation system and storage device rental service method | 
| US7155494B2 (en) | 2002-01-09 | 2006-12-26 | Sancastle Technologies Ltd. | Mapping between virtual local area networks and fibre channel zones | 
| US6728791B1 (en) | 2002-01-16 | 2004-04-27 | Adaptec, Inc. | RAID 1 read mirroring method for host adapters | 
| US7359321B1 (en) | 2002-01-17 | 2008-04-15 | Juniper Networks, Inc. | Systems and methods for selectively performing explicit congestion notification | 
| CA2369228A1 (en) | 2002-01-24 | 2003-07-24 | Alcatel Canada Inc. | System and method for managing configurable elements of devices in a network element and a network | 
| US7062568B1 (en) | 2002-01-31 | 2006-06-13 | Force10 Networks, Inc. | Point-to-point protocol flow control extension |
| US6983303B2 (en) | 2002-01-31 | 2006-01-03 | Hewlett-Packard Development Company, L.P. | Storage aggregator for enhancing virtualization in data storage networks |
| JP3993773B2 (en) | 2002-02-20 | 2007-10-17 | 株式会社日立製作所 | Storage subsystem, storage control device, and data copy method | 
| US6907419B1 (en) | 2002-02-27 | 2005-06-14 | Storage Technology Corporation | Method, system, and product for maintaining within a virtualization system a historical performance database for physical devices | 
| US20030174725A1 (en) | 2002-03-15 | 2003-09-18 | Broadcom Corporation | IP multicast packet replication process and apparatus therefore | 
| DE20204860U1 (en) | 2002-03-26 | 2003-07-31 | Alfit AG, Götzis | Drawer pull-out guides with automatic retraction with integrated damping | 
| US8051197B2 (en) | 2002-03-29 | 2011-11-01 | Brocade Communications Systems, Inc. | Network congestion management systems and methods | 
| US6848759B2 (en) | 2002-04-03 | 2005-02-01 | Illinois Tool Works Inc. | Self-closing slide mechanism with damping | 
| US20030189929A1 (en) | 2002-04-04 | 2003-10-09 | Fujitsu Limited | Electronic apparatus for assisting realization of storage area network system | 
| US6683883B1 (en) | 2002-04-09 | 2004-01-27 | Sancastle Technologies Ltd. | ISCSI-FCP gateway | 
| US7206288B2 (en) | 2002-06-12 | 2007-04-17 | Cisco Technology, Inc. | Methods and apparatus for characterizing a route in fibre channel fabric | 
| US7237045B2 (en) | 2002-06-28 | 2007-06-26 | Brocade Communications Systems, Inc. | Apparatus and method for storage processing through scalable port processors | 
| US7353305B2 (en) | 2002-06-28 | 2008-04-01 | Brocade Communications Systems, Inc. | Apparatus and method for data virtualization in a storage processing device | 
| US6986069B2 (en) | 2002-07-01 | 2006-01-10 | Newisys, Inc. | Methods and apparatus for static and dynamic power management of computer systems | 
| US7444263B2 (en) | 2002-07-01 | 2008-10-28 | Opnet Technologies, Inc. | Performance metric collection and automated analysis | 
| US7069465B2 (en) | 2002-07-26 | 2006-06-27 | International Business Machines Corporation | Method and apparatus for reliable failover involving incomplete raid disk writes in a clustering system | 
| US6907505B2 (en) | 2002-07-31 | 2005-06-14 | Hewlett-Packard Development Company, L.P. | Immediately available, statically allocated, full-logical-unit copy with a transient, snapshot-copy-like intermediate stage | 
| US7174354B2 (en) | 2002-07-31 | 2007-02-06 | Bea Systems, Inc. | System and method for garbage collection in a computer system, which uses reinforcement learning to adjust the allocation of memory space, calculate a reward, and use the reward to determine further actions to be taken on the memory space | 
| US7120728B2 (en) | 2002-07-31 | 2006-10-10 | Brocade Communications Systems, Inc. | Hardware-based translating virtualization switch | 
| US7269168B2 (en) | 2002-07-31 | 2007-09-11 | Brocade Communications Systems, Inc. | Host bus adaptor-based virtualization switch | 
| US7379990B2 (en) | 2002-08-12 | 2008-05-27 | Tsao Sheng Ted Tai | Distributed virtual SAN | 
| US7467406B2 (en) | 2002-08-23 | 2008-12-16 | Nxp B.V. | Embedded data set processing | 
| US7379959B2 (en) | 2002-09-07 | 2008-05-27 | Appistry, Inc. | Processing information using a hive of computing engines including request handlers and process handlers | 
| US8805918B1 (en) | 2002-09-11 | 2014-08-12 | Cisco Technology, Inc. | Methods and apparatus for implementing exchange management for virtualization of storage within a storage area network | 
| US20040054776A1 (en) | 2002-09-16 | 2004-03-18 | Finisar Corporation | Network expert analysis process | 
| US7343524B2 (en) | 2002-09-16 | 2008-03-11 | Finisar Corporation | Network analysis omniscent loop state machine | 
| US20040059807A1 (en) | 2002-09-16 | 2004-03-25 | Finisar Corporation | Network analysis topology detection | 
| US7352706B2 (en) | 2002-09-16 | 2008-04-01 | Finisar Corporation | Network analysis scalable analysis tool for multiple protocols | 
| US7277431B2 (en) | 2002-10-31 | 2007-10-02 | Brocade Communications Systems, Inc. | Method and apparatus for encryption or compression devices inside a storage area network fabric | 
| US8230066B2 (en) | 2002-11-04 | 2012-07-24 | International Business Machines Corporation | Location independent backup of data from mobile and stationary computers in wide regions regarding network and server activities | 
| US7774839B2 (en) | 2002-11-04 | 2010-08-10 | Riverbed Technology, Inc. | Feedback mechanism to minimize false assertions of a network intrusion | 
| US7433326B2 (en) | 2002-11-27 | 2008-10-07 | Cisco Technology, Inc. | Methods and devices for exchanging peer parameters between network devices | 
| US7143259B2 (en) | 2002-12-20 | 2006-11-28 | Veritas Operating Corporation | Preservation of intent of a volume creator with a logical volume | 
| US7010645B2 (en) | 2002-12-27 | 2006-03-07 | International Business Machines Corporation | System and method for sequentially staging received data to a write cache in advance of storing the received data | 
| US7263614B2 (en) | 2002-12-31 | 2007-08-28 | AOL LLC | Implicit access for communications pathway |
| WO2004077214A2 (en) | 2003-01-30 | 2004-09-10 | Vaman Technologies (R & D) Limited | System and method for scheduling server functions irrespective of server functionality | 
| US7383381B1 (en) | 2003-02-28 | 2008-06-03 | Sun Microsystems, Inc. | Systems and methods for configuring a storage virtualization environment | 
| US7904599B1 (en) | 2003-03-28 | 2011-03-08 | Cisco Technology, Inc. | Synchronization and auditing of zone configuration data in storage-area networks | 
| US20040190901A1 (en) | 2003-03-29 | 2004-09-30 | Xiaojun Fang | Bi-directional optical network element and its control protocols for WDM rings | 
| US7546475B2 (en) | 2003-05-13 | 2009-06-09 | Hewlett-Packard Development Company, L.P. | Power-aware adaptation in a data center | 
| US7318133B2 (en) | 2003-06-03 | 2008-01-08 | Hitachi, Ltd. | Method and apparatus for replicating volumes | 
| EP1483984B1 (en) | 2003-06-05 | 2017-08-09 | Grass GmbH | Drawer slide | 
| US7330931B2 (en) | 2003-06-26 | 2008-02-12 | Copan Systems, Inc. | Method and system for accessing auxiliary data in power-efficient high-capacity scalable storage system | 
| US8046809B2 (en) | 2003-06-30 | 2011-10-25 | World Wide Packets, Inc. | Multicast services control system and method | 
| US7287121B2 (en) | 2003-08-27 | 2007-10-23 | Aristos Logic Corporation | System and method of establishing and reconfiguring volume profiles in a storage system | 
| US20050050211A1 (en) | 2003-08-29 | 2005-03-03 | Kaul Bharat B. | Method and apparatus to manage network addresses | 
| US7474666B2 (en) | 2003-09-03 | 2009-01-06 | Cisco Technology, Inc. | Switch port analyzers | 
| US7441154B2 (en) | 2003-09-12 | 2008-10-21 | Finisar Corporation | Network analysis tool | 
| US20050076113A1 (en) | 2003-09-12 | 2005-04-07 | Finisar Corporation | Network analysis sample management process | 
| US20050060574A1 (en) | 2003-09-13 | 2005-03-17 | Finisar Corporation | Network analysis graphical user interface | 
| US7865907B2 (en) | 2003-09-25 | 2011-01-04 | Fisher-Rosemount Systems, Inc. | Method and apparatus for providing automatic software updates | 
| US20050091426A1 (en) | 2003-10-23 | 2005-04-28 | Horn Robert L. | Optimized port selection for command completion in a multi-ported storage controller system | 
| US7149858B1 (en) | 2003-10-31 | 2006-12-12 | Veritas Operating Corporation | Synchronous replication for system and data security | 
| US7234032B2 (en) | 2003-11-20 | 2007-06-19 | International Business Machines Corporation | Computerized system, method and program product for managing an enterprise storage system | 
| US7171514B2 (en) | 2003-11-20 | 2007-01-30 | International Business Machines Corporation | Apparatus and method to control access to logical volumes using parallel access volumes | 
| JP4400913B2 (en) | 2003-11-26 | 2010-01-20 | 株式会社日立製作所 | Disk array device | 
| US7934023B2 (en) | 2003-12-01 | 2011-04-26 | Cisco Technology, Inc. | Apparatus and method for performing fast fibre channel write operations over relatively high latency networks | 
| WO2005066830A1 (en) | 2004-01-08 | 2005-07-21 | Agency For Science, Technology & Research | A shared storage network system and a method for operating a shared storage network system | 
| US7707309B2 (en) | 2004-01-29 | 2010-04-27 | Brocade Communications Systems, Inc. | Isolation switch for fibre channel fabrics in storage area networks |
| US7843906B1 (en) | 2004-02-13 | 2010-11-30 | Habanero Holdings, Inc. | Storage gateway initiator for fabric-backplane enterprise servers | 
| US7397770B2 (en) | 2004-02-20 | 2008-07-08 | International Business Machines Corporation | Checking and repairing a network configuration | 
| JP2005242403A (en) | 2004-02-24 | 2005-09-08 | Hitachi Ltd | Computer system | 
| JP2005242555A (en) | 2004-02-25 | 2005-09-08 | Hitachi Ltd | Storage control system and method for mounting firmware in disk-type storage device of storage control system | 
| JP4485230B2 (en) | 2004-03-23 | 2010-06-16 | 株式会社日立製作所 | Migration execution method | 
| US20050235072A1 (en) | 2004-04-17 | 2005-10-20 | Smith Wilfred A | Data storage controller | 
| US7487321B2 (en) | 2004-04-19 | 2009-02-03 | Cisco Technology, Inc. | Method and system for memory leak detection | 
| WO2005109212A2 (en) | 2004-04-30 | 2005-11-17 | Commvault Systems, Inc. | Hierarchical systems and methods for providing a unified view of storage information |
| US20050283658A1 (en) | 2004-05-21 | 2005-12-22 | Clark Thomas K | Method, apparatus and program storage device for providing failover for high availability in an N-way shared-nothing cluster system | 
| US7542681B2 (en) | 2004-06-29 | 2009-06-02 | Finisar Corporation | Network tap with interface for connecting to pluggable optoelectronic module | 
| JP4870915B2 (en) | 2004-07-15 | 2012-02-08 | 株式会社日立製作所 | Storage device | 
| US8018936B2 (en) | 2004-07-19 | 2011-09-13 | Brocade Communications Systems, Inc. | Inter-fabric routing | 
| US7408883B2 (en) | 2004-09-01 | 2008-08-05 | Nettest, Inc. | Apparatus and method for performing a loopback test in a communication system | 
| US7835361B1 (en) | 2004-10-13 | 2010-11-16 | Sonicwall, Inc. | Method and apparatus for identifying data patterns in a file | 
| US7500053B1 (en) | 2004-11-05 | 2009-03-03 | Commvault Systems, Inc. | Method and system for grouping storage system components |
| US7492779B2 (en) | 2004-11-05 | 2009-02-17 | Atrica Israel Ltd. | Apparatus for and method of support for committed over excess traffic in a distributed queuing system | 
| US7392458B2 (en) | 2004-11-19 | 2008-06-24 | International Business Machines Corporation | Method and system for enhanced error identification with disk array parity checking | 
| US7631023B1 (en) | 2004-11-24 | 2009-12-08 | Symantec Operating Corporation | Performance-adjusted data allocation in a multi-device file system | 
| US8086755B2 (en) | 2004-11-29 | 2011-12-27 | Egenera, Inc. | Distributed multicast system and method in a network | 
| US7512705B2 (en) | 2004-12-01 | 2009-03-31 | Hewlett-Packard Development Company, L.P. | Truncating data units | 
| US9501473B1 (en) | 2004-12-21 | 2016-11-22 | Veritas Technologies Llc | Workflow process with temporary storage resource reservation | 
| US20060198319A1 (en) | 2005-02-01 | 2006-09-07 | Schondelmayer Adam H | Network diagnostic systems and methods for aggregated links | 
| US8041967B2 (en) | 2005-02-15 | 2011-10-18 | Hewlett-Packard Development Company, L.P. | System and method for controlling power to resources based on historical utilization data | 
| JP2006268524A (en) | 2005-03-24 | 2006-10-05 | Fujitsu Ltd | Storage device, control method thereof, and program | 
| US8335231B2 (en) | 2005-04-08 | 2012-12-18 | Cisco Technology, Inc. | Hardware based zoning in fibre channel networks | 
| US7574536B2 (en) | 2005-04-22 | 2009-08-11 | Sun Microsystems, Inc. | Routing direct memory access requests using doorbell addresses | 
| US7225103B2 (en) | 2005-06-30 | 2007-05-29 | Oracle International Corporation | Automatic determination of high significance alert thresholds for system performance metrics using an exponentially tailed model | 
| US7716648B2 (en) | 2005-08-02 | 2010-05-11 | Oracle America, Inc. | Method and apparatus for detecting memory leaks in computer systems | 
| US7447839B2 (en) | 2005-09-13 | 2008-11-04 | Yahoo! Inc. | System for a distributed column chunk data store | 
| US8161134B2 (en) | 2005-09-20 | 2012-04-17 | Cisco Technology, Inc. | Smart zoning to enforce interoperability matrix in a storage area network | 
| US7702851B2 (en) | 2005-09-20 | 2010-04-20 | Hitachi, Ltd. | Logical volume transfer method and storage network system | 
| US7657796B1 (en) | 2005-09-30 | 2010-02-02 | Symantec Operating Corporation | System and method for distributed storage verification | 
| US20070079068A1 (en) | 2005-09-30 | 2007-04-05 | Intel Corporation | Storing data with different specified levels of data redundancy | 
| US7760717B2 (en) | 2005-10-25 | 2010-07-20 | Brocade Communications Systems, Inc. | Interface switch for use with fibre channel fabrics in storage area networks | 
| US7484132B2 (en) | 2005-10-28 | 2009-01-27 | International Business Machines Corporation | Clustering process for software server failure prediction | 
| US7434105B1 (en) | 2005-11-07 | 2008-10-07 | Symantec Operating Corporation | Selective self-healing of memory errors using allocation location information | 
| US7907532B2 (en) | 2005-11-23 | 2011-03-15 | Jds Uniphase Corporation | Pool-based network diagnostic systems and methods | 
| US9122643B2 (en) | 2005-12-08 | 2015-09-01 | Nvidia Corporation | Event trigger based data backup services | 
| US7793138B2 (en) | 2005-12-21 | 2010-09-07 | Cisco Technology, Inc. | Anomaly detection for storage traffic in a data center | 
| US7697554B1 (en) | 2005-12-27 | 2010-04-13 | Emc Corporation | On-line data migration of a logical/virtual storage array by replacing virtual names | 
| US20070162969A1 (en) | 2005-12-30 | 2007-07-12 | Becker Wolfgang A | Provider-tenant systems, and methods for using the same | 
| US8032621B1 (en) | 2006-01-03 | 2011-10-04 | Emc Corporation | Methods and apparatus providing root cause analysis on alerts | 
| KR100735437B1 (en) | 2006-03-07 | 2007-07-04 | Samsung Electronics Co., Ltd. | RAID system and method in mobile communication terminal |
| US20070211640A1 (en) | 2006-03-10 | 2007-09-13 | McData Corporation | Switch testing in a communications network |
| US8104041B2 (en) | 2006-04-24 | 2012-01-24 | Hewlett-Packard Development Company, L.P. | Computer workload redistribution based on prediction from analysis of local resource utilization chronology data | 
| US20070258380A1 (en) | 2006-05-02 | 2007-11-08 | McData Corporation | Fault detection, isolation and recovery for a switch system of a computer network |
| US7669071B2 (en) | 2006-05-05 | 2010-02-23 | Dell Products L.P. | Power allocation management in an information handling system | 
| US20070263545A1 (en) | 2006-05-12 | 2007-11-15 | Foster Craig E | Network diagnostic systems and methods for using network configuration data | 
| US7707481B2 (en) | 2006-05-16 | 2010-04-27 | Pitney Bowes Inc. | System and method for efficient uncorrectable error detection in flash memory | 
| US20070276884A1 (en) | 2006-05-23 | 2007-11-29 | Hitachi, Ltd. | Method and apparatus for managing backup data and journal | 
| US7587570B2 (en) | 2006-05-31 | 2009-09-08 | International Business Machines Corporation | System and method for providing automated storage provisioning | 
| TW200801952A (en) | 2006-06-02 | 2008-01-01 | Via Tech Inc | Method for setting up a peripheral component interconnect express (PCIE) | 
| US8274993B2 (en) | 2006-06-16 | 2012-09-25 | Cisco Technology, Inc. | Fibre channel dynamic zoning | 
| US7706303B2 (en) | 2006-06-26 | 2010-04-27 | Cisco Technology, Inc. | Port pooling | 
| US7757059B1 (en) | 2006-06-29 | 2010-07-13 | Emc Corporation | Virtual array non-disruptive management data migration | 
| US20080034149A1 (en) | 2006-08-02 | 2008-02-07 | Feringo, Inc. | High capacity USB or 1394 memory device with internal hub | 
| TW200811663A (en) | 2006-08-25 | 2008-03-01 | Icreate Technologies Corp | Redundant array of independent disks system | 
| US8948199B2 (en) | 2006-08-30 | 2015-02-03 | Mellanox Technologies Ltd. | Fibre channel processing by a host channel adapter | 
| US8015353B2 (en) | 2006-08-31 | 2011-09-06 | Dell Products L.P. | Method for automatic RAID configuration on data storage media | 
| JP4890160B2 (en) | 2006-09-06 | 2012-03-07 | 株式会社日立製作所 | Storage system and backup / recovery method | 
| US8233380B2 (en) | 2006-11-06 | 2012-07-31 | Hewlett-Packard Development Company, L.P. | RDMA QP simplex switchless connection | 
| US7949847B2 (en) | 2006-11-29 | 2011-05-24 | Hitachi, Ltd. | Storage extent allocation method for thin provisioning storage | 
| US7643505B1 (en) | 2006-11-30 | 2010-01-05 | Qlogic, Corporation | Method and system for real time compression and decompression | 
| US8019938B2 (en) | 2006-12-06 | 2011-09-13 | Fusion-Io, Inc. | Apparatus, system, and method for solid-state storage as cache for high-capacity, non-volatile storage |
| US8489817B2 (en) | 2007-12-06 | 2013-07-16 | Fusion-Io, Inc. | Apparatus, system, and method for caching data | 
| US7774329B1 (en) | 2006-12-22 | 2010-08-10 | Amazon Technologies, Inc. | Cross-region data access in partitioned framework | 
| US8266238B2 (en) | 2006-12-27 | 2012-09-11 | Intel Corporation | Memory mapped network access | 
| US7681089B2 (en) | 2007-02-20 | 2010-03-16 | Dot Hill Systems Corporation | Redundant storage controller system with enhanced failure analysis capability | 
| US7668981B1 (en) | 2007-03-28 | 2010-02-23 | Symantec Operating Corporation | Storage paths | 
| US8095618B2 (en) | 2007-03-30 | 2012-01-10 | Microsoft Corporation | In-memory caching of shared customizable multi-tenant data | 
| US7689384B1 (en) | 2007-03-30 | 2010-03-30 | United Services Automobile Association (USAA) | Managing the performance of an electronic device |
| US8140676B2 (en) | 2007-04-10 | 2012-03-20 | Apertio Limited | Data access in distributed server systems | 
| US8416788B2 (en) | 2007-04-26 | 2013-04-09 | Microsoft Corporation | Compression of data packets while maintaining endpoint-to-endpoint authentication | 
| JP2008287631A (en) | 2007-05-21 | 2008-11-27 | Hitachi Ltd | Deployment target computer, deployment system, and deployment method | 
| US9678803B2 (en) | 2007-06-22 | 2017-06-13 | Red Hat, Inc. | Migration of network entities to a cloud infrastructure | 
| US9588821B2 (en) | 2007-06-22 | 2017-03-07 | Red Hat, Inc. | Automatic determination of required resource allocation of virtual machines | 
| JP4430093B2 (en) | 2007-08-29 | 2010-03-10 | 富士通株式会社 | Storage control device and firmware update method | 
| US7707371B1 (en) | 2007-09-10 | 2010-04-27 | Cisco Technology, Inc. | Storage area network (SAN) switch multi-pass erase of data on target devices | 
| US20090083484A1 (en) | 2007-09-24 | 2009-03-26 | Robert Beverley Basham | System and Method for Zoning of Devices in a Storage Area Network | 
| US8341121B1 (en) | 2007-09-28 | 2012-12-25 | Emc Corporation | Imminent failure prioritized backup | 
| US7895428B2 (en) | 2007-09-28 | 2011-02-22 | International Business Machines Corporation | Applying firmware updates to servers in a data center | 
| US8024773B2 (en) | 2007-10-03 | 2011-09-20 | International Business Machines Corporation | Integrated guidance and validation policy based zoning mechanism | 
| US7957295B2 (en) | 2007-11-02 | 2011-06-07 | Cisco Technology, Inc. | Ethernet performance monitoring | 
| JP2009116809A (en) | 2007-11-09 | 2009-05-28 | Hitachi Ltd | Storage control device, storage system, and virtual volume control method | 
| US7984259B1 (en) | 2007-12-17 | 2011-07-19 | Netapp, Inc. | Reducing load imbalance in a storage system | 
| US7979670B2 (en) | 2008-01-24 | 2011-07-12 | Quantum Corporation | Methods and systems for vectored data de-duplication | 
| US8930537B2 (en) | 2008-02-28 | 2015-01-06 | International Business Machines Corporation | Zoning of devices in a storage area network with LUN masking/mapping | 
| US20110035494A1 (en) | 2008-04-15 | 2011-02-10 | Blade Network Technologies | Network virtualization for a virtualized server data center environment | 
| US8429736B2 (en) | 2008-05-07 | 2013-04-23 | McAfee, Inc. | Named sockets in a firewall |
| US8297722B2 (en) | 2008-06-03 | 2012-10-30 | Rev-A-Shelf Company, Llc | Soft close drawer assembly and bracket | 
| AU2009259876A1 (en) | 2008-06-19 | 2009-12-23 | Servicemesh, Inc. | Cloud computing gateway, cloud computing hypervisor, and methods for implementing same | 
| US7930593B2 (en) | 2008-06-23 | 2011-04-19 | Hewlett-Packard Development Company, L.P. | Segment-based technique and system for detecting performance anomalies and changes for a computer-based service | 
| US8175103B2 (en) | 2008-06-26 | 2012-05-08 | Rockstar Bidco, LP | Dynamic networking of virtual machines | 
| US7840730B2 (en) | 2008-06-27 | 2010-11-23 | Microsoft Corporation | Cluster shared volumes | 
| US7975175B2 (en) | 2008-07-09 | 2011-07-05 | Oracle America, Inc. | Risk indices for enhanced throughput in computing systems | 
| US8887166B2 (en) | 2008-07-10 | 2014-11-11 | Juniper Networks, Inc. | Resource allocation and modification using access patterns | 
| CN101639835A (en) | 2008-07-30 | 2010-02-03 | 国际商业机器公司 | Method and device for partitioning application database in multi-tenant scene | 
| US8031703B2 (en) | 2008-08-14 | 2011-10-04 | Dell Products, Lp | System and method for dynamic maintenance of fabric subsets in a network | 
| US7903566B2 (en) | 2008-08-20 | 2011-03-08 | The Boeing Company | Methods and systems for anomaly detection using internet protocol (IP) traffic conversation data | 
| WO2010037147A2 (en) | 2008-09-29 | 2010-04-01 | Whiptail Technologies | Method and system for a storage area network | 
| US8442059B1 (en) | 2008-09-30 | 2013-05-14 | Gridiron Systems, Inc. | Storage proxy with virtual ports configuration | 
| US9473419B2 (en) | 2008-12-22 | 2016-10-18 | Ctera Networks, Ltd. | Multi-tenant cloud storage system | 
| US20100174968A1 (en) | 2009-01-02 | 2010-07-08 | Microsoft Corporation | Hierarchical erasure coding |
| EP2228719A1 (en) | 2009-03-11 | 2010-09-15 | Zimory GmbH | Method of executing a virtual machine, computing system and computer program | 
| US20100318609A1 (en) | 2009-06-15 | 2010-12-16 | Microsoft Corporation | Bridging enterprise networks into cloud | 
| US8140914B2 (en) | 2009-06-15 | 2012-03-20 | Microsoft Corporation | Failure-model-driven repair and backup | 
| US8352941B1 (en) | 2009-06-29 | 2013-01-08 | Emc Corporation | Scalable and secure high-level storage access for cloud computing platforms | 
| US20110010394A1 (en) | 2009-07-08 | 2011-01-13 | International Business Machines Corporation | Client-specific data customization for shared databases | 
| US8234377B2 (en) | 2009-07-22 | 2012-07-31 | Amazon Technologies, Inc. | Dynamically migrating computer networks | 
| US8700751B2 (en) | 2009-07-24 | 2014-04-15 | Cisco Technology, Inc. | Optimizing fibre channel zoneset configuration and activation | 
| US8291264B2 (en) | 2009-08-03 | 2012-10-16 | Siemens Aktiengesellschaft | Method and system for failure prediction with an agent | 
| US9973446B2 (en) | 2009-08-20 | 2018-05-15 | Oracle International Corporation | Remote shared server peripherals over an Ethernet network for resource virtualization | 
| US8935500B1 (en) | 2009-09-24 | 2015-01-13 | Vmware, Inc. | Distributed storage resource scheduler and load balancer | 
| WO2011037576A1 (en) | 2009-09-25 | 2011-03-31 | Hewlett-Packard Development Company, L.P. | Mapping non-prefetchable storage locations into memory mapped input/output space | 
| US8532108B2 (en) | 2009-09-30 | 2013-09-10 | Alcatel Lucent | Layer 2 seamless site extension of enterprises in cloud computing | 
| US8438251B2 (en) | 2009-10-09 | 2013-05-07 | Oracle International Corporation | Methods and systems for implementing a virtual storage network | 
| US8392760B2 (en) | 2009-10-14 | 2013-03-05 | Microsoft Corporation | Diagnosing abnormalities without application-specific knowledge | 
| US8205951B2 (en) | 2009-11-04 | 2012-06-26 | Knape & Vogt Manufacturing Company | Closing device for drawers | 
| US20120137367A1 (en) | 2009-11-06 | 2012-05-31 | Cataphora, Inc. | Continuous anomaly detection based on behavior modeling and heterogeneous information analysis | 
| US8392764B2 (en) | 2009-11-16 | 2013-03-05 | Cooper Technologies Company | Methods and systems for identifying and configuring networked devices | 
| US8705513B2 (en) | 2009-12-15 | 2014-04-22 | At&T Intellectual Property I, L.P. | Methods and apparatus to communicatively couple virtual private networks to virtual machines within distributive computing networks | 
| CN102111318B (en) | 2009-12-23 | 2013-07-24 | Hangzhou H3C Technologies Co., Ltd. | Method for distributing virtual local area network resource and switch | 
| US20110161496A1 (en) | 2009-12-28 | 2011-06-30 | Nicklin Jonathan C | Implementation and management of internet accessible services using dynamically provisioned resources | 
| US9959147B2 (en) | 2010-01-13 | 2018-05-01 | Vmware, Inc. | Cluster configuration through host ranking | 
| WO2011091056A1 (en) | 2010-01-19 | 2011-07-28 | Servicemesh, Inc. | System and method for a cloud computing abstraction layer | 
| US9152463B2 (en) | 2010-01-20 | 2015-10-06 | Xyratex Technology Limited—A Seagate Company | Electronic data store | 
| US8301746B2 (en) | 2010-01-26 | 2012-10-30 | International Business Machines Corporation | Method and system for abstracting non-functional requirements based deployment of virtual machines | 
| US20110239039A1 (en) | 2010-03-26 | 2011-09-29 | Dieffenbach Devon C | Cloud computing enabled robust initialization and recovery of it services | 
| US8407517B2 (en) | 2010-04-08 | 2013-03-26 | Hitachi, Ltd. | Methods and apparatus for managing error codes for storage systems coupled with external storage systems | 
| US8611352B2 (en) | 2010-04-20 | 2013-12-17 | Marvell World Trade Ltd. | System and method for adapting a packet processing pipeline | 
| US8345692B2 (en) | 2010-04-27 | 2013-01-01 | Cisco Technology, Inc. | Virtual switching overlay for cloud computing | 
| US8719804B2 (en) | 2010-05-05 | 2014-05-06 | Microsoft Corporation | Managing runtime execution of applications on cloud computing systems | 
| US8688792B2 (en) | 2010-05-06 | 2014-04-01 | Nec Laboratories America, Inc. | Methods and systems for discovering configuration data | 
| US8473515B2 (en) | 2010-05-10 | 2013-06-25 | International Business Machines Corporation | Multi-tenancy in database namespace | 
| US8910278B2 (en) | 2010-05-18 | 2014-12-09 | Cloudnexa | Managing services in a cloud computing environment | 
| US8477610B2 (en) | 2010-05-31 | 2013-07-02 | Microsoft Corporation | Applying policies to schedule network bandwidth among virtual machines | 
| US8493983B2 (en) | 2010-06-02 | 2013-07-23 | Cisco Technology, Inc. | Virtual fabric membership assignments for fiber channel over Ethernet network devices | 
| EP2577539B1 (en) | 2010-06-02 | 2018-12-19 | VMware, Inc. | Securing customer virtual machines in a multi-tenant cloud | 
| US8386431B2 (en) | 2010-06-14 | 2013-02-26 | Sap Ag | Method and system for determining database object associated with tenant-independent or tenant-specific data, configured to store data partition, current version of the respective convertor | 
| US9323775B2 (en) | 2010-06-19 | 2016-04-26 | Mapr Technologies, Inc. | Map-reduce ready distributed file system | 
| US8479211B1 (en) | 2010-06-29 | 2013-07-02 | Amazon Technologies, Inc. | Dynamic resource commitment management | 
| US9104619B2 (en) | 2010-07-23 | 2015-08-11 | Brocade Communications Systems, Inc. | Persisting data across warm boots | 
| US8473557B2 (en) | 2010-08-24 | 2013-06-25 | At&T Intellectual Property I, L.P. | Methods and apparatus to migrate virtual machines between distributive computing networks across a wide area network | 
| US8656023B1 (en) | 2010-08-26 | 2014-02-18 | Adobe Systems Incorporated | Optimization scheduler for deploying applications on a cloud | 
| US8768981B1 (en) | 2010-08-27 | 2014-07-01 | Disney Enterprises, Inc. | System and method for distributing and accessing files in a distributed storage system | 
| US8290919B1 (en) | 2010-08-27 | 2012-10-16 | Disney Enterprises, Inc. | System and method for distributing and accessing files in a distributed storage system | 
| US8798456B2 (en) | 2010-09-01 | 2014-08-05 | Brocade Communications Systems, Inc. | Diagnostic port for inter-switch link testing in electrical, optical and remote loopback modes | 
| US9311158B2 (en) | 2010-09-03 | 2016-04-12 | Adobe Systems Incorporated | Determining a work distribution model between a client device and a cloud for an application deployed on the cloud | 
| US8572241B2 (en) | 2010-09-17 | 2013-10-29 | Microsoft Corporation | Integrating external and cluster heat map data | 
| US8619599B1 (en) | 2010-09-17 | 2013-12-31 | Marvell International Ltd. | Packet processor verification methods and systems | 
| US9176677B1 (en) | 2010-09-28 | 2015-11-03 | Emc Corporation | Virtual provisioning space reservation | 
| US9154394B2 (en) | 2010-09-28 | 2015-10-06 | Brocade Communications Systems, Inc. | Dynamic latency-based rerouting | 
| US8413145B2 (en) | 2010-09-30 | 2013-04-02 | Avaya Inc. | Method and apparatus for efficient memory replication for high availability (HA) protection of a virtual machine (VM) | 
| US20120084445A1 (en) | 2010-10-05 | 2012-04-05 | Brock Scott L | Automatic replication and migration of live virtual machines | 
| EP2439637A1 (en) | 2010-10-07 | 2012-04-11 | Deutsche Telekom AG | Method and system of providing access to a virtual machine distributed in a hybrid cloud network | 
| US8626891B2 (en) | 2010-11-03 | 2014-01-07 | International Business Machines Corporation | Configured management-as-a-service connect process based on tenant requirements | 
| US8676710B2 (en) | 2010-11-22 | 2014-03-18 | Netapp, Inc. | Providing security in a cloud storage environment | 
| US8612615B2 (en) | 2010-11-23 | 2013-12-17 | Red Hat, Inc. | Systems and methods for identifying usage histories for producing optimized cloud utilization | 
| US8625595B2 (en) | 2010-11-29 | 2014-01-07 | Cisco Technology, Inc. | Fiber channel identifier mobility for fiber channel and fiber channel over ethernet networks | 
| US8533285B2 (en) | 2010-12-01 | 2013-09-10 | Cisco Technology, Inc. | Directing data flows in data centers with clustering services | 
| US20120159112A1 (en) | 2010-12-15 | 2012-06-21 | Hitachi, Ltd. | Computer system management apparatus and management method | 
| US9460176B2 (en) | 2010-12-29 | 2016-10-04 | Sap Se | In-memory database for multi-tenancy | 
| US8706772B2 (en) | 2010-12-30 | 2014-04-22 | Sap Ag | Strict tenant isolation in multi-tenant enabled systems | 
| US8495356B2 (en) | 2010-12-31 | 2013-07-23 | International Business Machines Corporation | System for securing virtual machine disks on a remote shared storage subsystem | 
| US20120179909A1 (en) | 2011-01-06 | 2012-07-12 | Pitney Bowes Inc. | Systems and methods for providing individual electronic document secure storage, retrieval and use | 
| US8625597B2 (en) | 2011-01-07 | 2014-01-07 | Jeda Networks, Inc. | Methods, systems and apparatus for the interconnection of fibre channel over ethernet devices | 
| US9106579B2 (en) | 2011-01-07 | 2015-08-11 | Jeda Networks, Inc. | Methods, systems and apparatus for utilizing an iSNS server in a network of fibre channel over ethernet devices | 
| US8559433B2 (en) | 2011-01-07 | 2013-10-15 | Jeda Networks, Inc. | Methods, systems and apparatus for the servicing of fibre channel fabric login frames | 
| US9071630B2 (en) | 2011-01-07 | 2015-06-30 | Jeda Networks, Inc. | Methods for the interconnection of fibre channel over ethernet devices using a trill network | 
| US8811399B2 (en) | 2011-01-07 | 2014-08-19 | Jeda Networks, Inc. | Methods, systems and apparatus for the interconnection of fibre channel over ethernet devices using a fibre channel over ethernet interconnection apparatus controller | 
| US9178944B2 (en) | 2011-01-07 | 2015-11-03 | Jeda Networks, Inc. | Methods, systems and apparatus for the control of interconnection of fibre channel over ethernet devices | 
| US9071629B2 (en) | 2011-01-07 | 2015-06-30 | Jeda Networks, Inc. | Methods for the interconnection of fibre channel over ethernet devices using shortest path bridging | 
| US8559335B2 (en) | 2011-01-07 | 2013-10-15 | Jeda Networks, Inc. | Methods for creating virtual links between fibre channel over ethernet nodes for converged network adapters | 
| US9379955B2 (en) | 2011-01-28 | 2016-06-28 | Telefonaktiebolaget Lm Ericsson (Publ) | Method for queuing data packets and node | 
| US9225656B2 (en) | 2011-02-07 | 2015-12-29 | Brocade Communications Systems, Inc. | Quality of service in a heterogeneous network | 
| US8805951B1 (en) | 2011-02-08 | 2014-08-12 | Emc Corporation | Virtual machines and cloud storage caching for cloud computing applications | 
| WO2012109401A1 (en) | 2011-02-09 | 2012-08-16 | Avocent | Infrastructure control fabric system and method | 
| US8272104B2 (en) | 2011-02-25 | 2012-09-25 | Lianhong Art Co., Ltd. | Hinge-slide cover mounting structure using a sheet metal bracket mechanism | 
| CN102682012A (en) | 2011-03-14 | 2012-09-19 | Chengdu Huawei Symantec Technologies Co., Ltd. | Method and device for reading and writing data in file system | 
| US8687491B2 (en) | 2011-04-05 | 2014-04-01 | Vss Monitoring, Inc. | Systems, apparatus, and methods for managing an overflow of data packets received by a switch | 
| US8839363B2 (en) | 2011-04-18 | 2014-09-16 | Bank Of America Corporation | Trusted hardware for attesting to authenticity in a cloud environment | 
| US8806015B2 (en) | 2011-05-04 | 2014-08-12 | International Business Machines Corporation | Workload-aware placement in private heterogeneous clouds | 
| US9253252B2 (en) | 2011-05-06 | 2016-02-02 | Citrix Systems, Inc. | Systems and methods for cloud bridging between intranet resources and cloud resources | 
| US9379970B2 (en) | 2011-05-16 | 2016-06-28 | Futurewei Technologies, Inc. | Selective content routing and storage protocol for information-centric network | 
| US8924392B2 (en) | 2011-05-23 | 2014-12-30 | Cisco Technology, Inc. | Clustering-based resource aggregation within a data center | 
| US8984104B2 (en) | 2011-05-31 | 2015-03-17 | Red Hat, Inc. | Self-moving operating system installation in cloud-based network | 
| US9104460B2 (en) | 2011-05-31 | 2015-08-11 | Red Hat, Inc. | Inter-cloud live migration of virtualization systems | 
| US8837322B2 (en) | 2011-06-20 | 2014-09-16 | Freescale Semiconductor, Inc. | Method and apparatus for snoop-and-learn intelligence in data plane | 
| US8751675B2 (en) | 2011-06-21 | 2014-06-10 | Cisco Technology, Inc. | Rack server management | 
| US8537810B2 (en) | 2011-06-29 | 2013-09-17 | Telefonaktiebolaget L M Ericsson (Publ) | E-tree using two pseudowires between edge routers with enhanced learning methods and systems | 
| US8990292B2 (en) | 2011-07-05 | 2015-03-24 | Cisco Technology, Inc. | In-network middlebox compositor for distributed virtualized applications | 
| US20130036213A1 (en) | 2011-08-02 | 2013-02-07 | Masum Hasan | Virtual private clouds | 
| US20130036212A1 (en) | 2011-08-02 | 2013-02-07 | Jibbe Mahmoud K | Backup, restore, and/or replication of configuration settings in a storage area network environment using a management interface | 
| US9141785B2 (en) | 2011-08-03 | 2015-09-22 | Cloudbyte, Inc. | Techniques for providing tenant based storage security and service level assurance in cloud storage environment | 
| US20140156557A1 (en) | 2011-08-19 | 2014-06-05 | Jun Zeng | Providing a Simulation Service by a Cloud-Based Infrastructure | 
| US8595460B2 (en) | 2011-08-26 | 2013-11-26 | Vmware, Inc. | Configuring object storage system for input/output operations | 
| US8775773B2 (en) | 2011-08-26 | 2014-07-08 | Vmware, Inc. | Object storage system | 
| US8630983B2 (en) | 2011-08-27 | 2014-01-14 | Accenture Global Services Limited | Backup of data across network of devices | 
| US9250969B2 (en) | 2011-08-30 | 2016-02-02 | At&T Intellectual Property I, L.P. | Tagging a copy of memory of a virtual machine with information for fetching of relevant portions of the memory | 
| US9063822B2 (en) | 2011-09-02 | 2015-06-23 | Microsoft Technology Licensing, Llc | Efficient application-aware disaster recovery | 
| US8793443B2 (en) | 2011-09-09 | 2014-07-29 | Lsi Corporation | Methods and structure for improved buffer allocation in a storage controller | 
| US8819476B2 (en) | 2011-09-26 | 2014-08-26 | Imagine Communications Corp. | System and method for disaster recovery | 
| US8560663B2 (en) | 2011-09-30 | 2013-10-15 | Telefonaktiebolaget L M Ericsson (Publ) | Using MPLS for virtual private cloud network isolation in openflow-enabled cloud computing | 
| CN103036930B (en) | 2011-09-30 | 2015-06-03 | International Business Machines Corporation | Method and apparatus for managing storage device | 
| US9785491B2 (en) | 2011-10-04 | 2017-10-10 | International Business Machines Corporation | Processing a certificate signing request in a dispersed storage network | 
| US8804572B2 (en) | 2011-10-25 | 2014-08-12 | International Business Machines Corporation | Distributed switch systems in a trill network | 
| US8789179B2 (en) | 2011-10-28 | 2014-07-22 | Novell, Inc. | Cloud protection techniques | 
| US8819661B2 (en) | 2011-11-28 | 2014-08-26 | Echostar Technologies L.L.C. | Systems and methods for determining times to perform software updates on receiving devices | 
| US8832249B2 (en) | 2011-11-30 | 2014-09-09 | At&T Intellectual Property I, L.P. | Methods and apparatus to adjust resource allocation in a distributive computing network | 
| US20130152076A1 (en) | 2011-12-07 | 2013-06-13 | Cisco Technology, Inc. | Network Access Control Policy for Virtual Machine Migration | 
| US9113376B2 (en) | 2011-12-09 | 2015-08-18 | Cisco Technology, Inc. | Multi-interface mobility | 
| WO2013095381A1 (en) | 2011-12-20 | 2013-06-27 | Intel Corporation | Method and system for data de-duplication | 
| US8718064B2 (en) | 2011-12-22 | 2014-05-06 | Telefonaktiebolaget L M Ericsson (Publ) | Forwarding element for flexible and extensible flow processing software-defined networks | 
| US8730980B2 (en) | 2011-12-27 | 2014-05-20 | Cisco Technology, Inc. | Architecture for scalable virtual network services | 
| US9838269B2 (en) | 2011-12-27 | 2017-12-05 | Netapp, Inc. | Proportional quality of service based on client usage and system metrics | 
| US8683296B2 (en) | 2011-12-30 | 2014-03-25 | Streamscale, Inc. | Accelerated erasure coding system and method | 
| US8555339B2 (en) | 2012-01-06 | 2013-10-08 | International Business Machines Corporation | Identifying guests in web meetings | 
| US8908698B2 (en) | 2012-01-13 | 2014-12-09 | Cisco Technology, Inc. | System and method for managing site-to-site VPNs of a cloud managed network | 
| US8732291B2 (en) | 2012-01-13 | 2014-05-20 | Accenture Global Services Limited | Performance interference model for managing consolidated workloads in QOS-aware clouds | 
| US9529348B2 (en) | 2012-01-24 | 2016-12-27 | Emerson Process Management Power & Water Solutions, Inc. | Method and apparatus for deploying industrial plant simulators using cloud computing technologies | 
| US9223564B2 (en) | 2012-01-26 | 2015-12-29 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Update systems responsive to ongoing processing at a storage system | 
| US9367360B2 (en) | 2012-01-30 | 2016-06-14 | Microsoft Technology Licensing, Llc | Deploying a hardware inventory as a cloud-computing stamp | 
| US8660129B1 (en) | 2012-02-02 | 2014-02-25 | Cisco Technology, Inc. | Fully distributed routing over a user-configured on-demand virtual network for infrastructure-as-a-service (IaaS) on hybrid cloud networks | 
| US8788658B2 (en) | 2012-02-03 | 2014-07-22 | International Business Machines Corporation | Allocation and balancing of storage resources | 
| US9749403B2 (en) | 2012-02-10 | 2017-08-29 | International Business Machines Corporation | Managing content distribution in a wireless communications environment | 
| US20130212065A1 (en) | 2012-02-15 | 2013-08-15 | Flybits, Inc. | Zone Oriented Applications, Systems and Methods | 
| JP2015510198A (en) | 2012-02-16 | 2015-04-02 | Empire Technology Development LLC | Local access to cloud-based storage | 
| US8953463B2 (en) | 2012-02-29 | 2015-02-10 | Hamilton Sundstrand Corporation | Channel interleaved multiplexed databus | 
| US9244951B2 (en) | 2012-03-08 | 2016-01-26 | International Business Machines Corporation | Managing tenant-specific data sets in a multi-tenant environment | 
| JP5910727B2 (en) | 2012-03-14 | 2016-04-27 | NEC Corporation | Operation management apparatus, operation management method, and program | 
| US9164795B1 (en) | 2012-03-30 | 2015-10-20 | Amazon Technologies, Inc. | Secure tunnel infrastructure between hosts in a hybrid network environment | 
| US8930747B2 (en) | 2012-03-30 | 2015-01-06 | Sungard Availability Services, Lp | Private cloud replication and recovery | 
| US8966466B2 (en) | 2012-04-04 | 2015-02-24 | Avago Technologies General Ip (Singapore) Pte. Ltd. | System for performing firmware updates on a number of drives in an array with minimum interruption to drive I/O operations | 
| US8856339B2 (en) | 2012-04-04 | 2014-10-07 | Cisco Technology, Inc. | Automatically scaled network overlay with heuristic monitoring in a hybrid cloud environment | 
| US9313048B2 (en) | 2012-04-04 | 2016-04-12 | Cisco Technology, Inc. | Location aware virtual service provisioning in a hybrid cloud environment | 
| US9201704B2 (en) | 2012-04-05 | 2015-12-01 | Cisco Technology, Inc. | System and method for migrating application virtual machines in a network environment | 
| US9203784B2 (en) | 2012-04-24 | 2015-12-01 | Cisco Technology, Inc. | Distributed virtual switch architecture for a hybrid cloud | 
| US8918510B2 (en) | 2012-04-27 | 2014-12-23 | Hewlett-Packard Development Company, L. P. | Evaluation of cloud computing services | 
| EP2680155A1 (en) | 2012-05-02 | 2014-01-01 | Agora Tech Developments Ltd. | Hybrid computing system | 
| US9223634B2 (en) | 2012-05-02 | 2015-12-29 | Cisco Technology, Inc. | System and method for simulating virtual machine migration in a network environment | 
| US8855116B2 (en) | 2012-05-15 | 2014-10-07 | Cisco Technology, Inc. | Virtual local area network state processing in a layer 2 ethernet switch | 
| US8949677B1 (en) | 2012-05-23 | 2015-02-03 | Amazon Technologies, Inc. | Detecting anomalies in time series data | 
| GB2502337A (en) | 2012-05-25 | 2013-11-27 | Ibm | System providing storage as a service | 
| US8990639B1 (en) | 2012-05-31 | 2015-03-24 | Amazon Technologies, Inc. | Automatic testing and remediation based on confidence indicators | 
| US8959185B2 (en) | 2012-06-06 | 2015-02-17 | Juniper Networks, Inc. | Multitenant server for virtual networks within datacenter | 
| US9445302B2 (en) | 2012-06-14 | 2016-09-13 | Sierra Wireless, Inc. | Method and system for wireless communication with machine-to-machine devices | 
| US20140007189A1 (en) | 2012-06-28 | 2014-01-02 | International Business Machines Corporation | Secure access to shared storage resources | 
| US8677485B2 (en) | 2012-07-13 | 2014-03-18 | Hewlett-Packard Development Company, L.P. | Detecting network anomaly | 
| US9390055B2 (en) | 2012-07-17 | 2016-07-12 | Coho Data, Inc. | Systems, methods and devices for integrating end-host and network resources in distributed memory | 
| US8711708B2 (en) | 2012-07-24 | 2014-04-29 | Accedian Networks Inc. | Automatic setup of reflector instances | 
| US9960982B2 (en) | 2012-07-24 | 2018-05-01 | Accedian Networks Inc. | Multi-hop reflector sessions | 
| CN104520806B (en) | 2012-08-01 | 2016-09-14 | Empire Technology Development LLC | Anomaly detection for cloud monitoring | 
| US9251103B2 (en) | 2012-08-08 | 2016-02-02 | Vmware, Inc. | Memory-access-resource management | 
| US9075638B2 (en) | 2012-08-14 | 2015-07-07 | Atlassian Corporation Pty Ltd. | Efficient hosting of virtualized containers using read-only operating systems | 
| US9819737B2 (en) | 2012-08-23 | 2017-11-14 | Cisco Technology, Inc. | System and method for policy based fibre channel zoning for virtualized and stateless computing in a network environment | 
| US9280504B2 (en) | 2012-08-24 | 2016-03-08 | Intel Corporation | Methods and apparatus for sharing a network interface controller | 
| US9378060B2 (en) | 2012-08-28 | 2016-06-28 | Oracle International Corporation | Runtime co-location of executing logic and frequently-accessed application data | 
| US9009704B2 (en) | 2012-09-07 | 2015-04-14 | Red Hat, Inc. | Application partitioning in a multi-tenant platform-as-a-service environment in a cloud computing system | 
| WO2014052485A1 (en) | 2012-09-26 | 2014-04-03 | Huawei Technologies Co. Ltd. | Overlay virtual gateway for overlay networks | 
| US9262423B2 (en) | 2012-09-27 | 2016-02-16 | Microsoft Technology Licensing, Llc | Large scale file storage in cloud computing | 
| US8924720B2 (en) | 2012-09-27 | 2014-12-30 | Intel Corporation | Method and system to securely migrate and provision virtual machine images and content | 
| KR102050725B1 (en) | 2012-09-28 | 2019-12-02 | Samsung Electronics Co., Ltd. | Computing system and method for managing data in the system | 
| US8918586B1 (en) | 2012-09-28 | 2014-12-23 | Emc Corporation | Policy-based storage of object fragments in a multi-tiered storage system | 
| TW201415365A (en) | 2012-10-15 | 2014-04-16 | Askey Computer Corp | Method for updating operating system and handheld electronic apparatus | 
| US9015212B2 (en) | 2012-10-16 | 2015-04-21 | Rackspace Us, Inc. | System and method for exposing cloud stored data to a content delivery network | 
| US9369255B2 (en) | 2012-10-18 | 2016-06-14 | Massachusetts Institute Of Technology | Method and apparatus for reducing feedback and enhancing message dissemination efficiency in a multicast network | 
| US8948181B2 (en) | 2012-10-23 | 2015-02-03 | Cisco Technology, Inc. | System and method for optimizing next-hop table space in a dual-homed network environment | 
| US9003086B1 (en) | 2012-10-27 | 2015-04-07 | Twitter, Inc. | Dynamic distribution of replicated data | 
| US8726342B1 (en) | 2012-10-31 | 2014-05-13 | Oracle International Corporation | Keystore access control system | 
| US9298525B2 (en) | 2012-12-04 | 2016-03-29 | Accenture Global Services Limited | Adaptive fault diagnosis | 
| US8996969B2 (en) | 2012-12-08 | 2015-03-31 | Lsi Corporation | Low density parity check decoder with miscorrection handling | 
| US8924950B2 (en) | 2012-12-17 | 2014-12-30 | Itron, Inc. | Utility node software/firmware update through a multi-type package | 
| US9207882B2 (en) | 2012-12-18 | 2015-12-08 | Cisco Technology, Inc. | System and method for in-band LUN provisioning in a data center network environment | 
| US9621460B2 (en) | 2013-01-14 | 2017-04-11 | Versa Networks, Inc. | Connecting multiple customer sites over a wide area network using an overlay network | 
| US9141554B1 (en) | 2013-01-18 | 2015-09-22 | Cisco Technology, Inc. | Methods and apparatus for data processing using data compression, linked lists and de-duplication techniques | 
| US9094285B2 (en) | 2013-01-25 | 2015-07-28 | Argela Yazilim ve Bilisim Teknolojileri San. ve Tic. A.S. | Automatic discovery of multiple controllers in Software Defined Networks (SDNs) | 
| US9015527B2 (en) | 2013-01-29 | 2015-04-21 | Hewlett-Packard Development Company, L.P. | Data backup and recovery | 
| US9083633B2 (en) | 2013-02-04 | 2015-07-14 | Cisco Technology, Inc. | System and method for distributed netflow exporter with a single IP endpoint in a network environment | 
| US9274818B2 (en) | 2013-02-06 | 2016-03-01 | International Business Machines Corporation | Reliable and scalable image transfer for data centers with low connectivity using redundancy detection | 
| US9043668B2 (en) | 2013-02-08 | 2015-05-26 | Seagate Technology Llc | Using ECC data for write deduplication processing | 
| US9060019B2 (en) | 2013-02-25 | 2015-06-16 | Quantum RDL, Inc. | Out-of band IP traceback using IP packets | 
| US20140244897A1 (en) | 2013-02-26 | 2014-08-28 | Seagate Technology Llc | Metadata Update Management In a Multi-Tiered Memory | 
| US10706025B2 (en) | 2013-02-28 | 2020-07-07 | Amazon Technologies, Inc. | Database system providing single-tenant and multi-tenant environments | 
| US9218246B2 (en) | 2013-03-14 | 2015-12-22 | Microsoft Technology Licensing, Llc | Coordinating fault recovery in a distributed system | 
| US8996837B1 (en) | 2013-03-15 | 2015-03-31 | Emc Corporation | Providing multi-tenancy within a data storage apparatus | 
| US10514977B2 (en) | 2013-03-15 | 2019-12-24 | Richard B. Jones | System and method for the dynamic analysis of event data | 
| US9448877B2 (en) | 2013-03-15 | 2016-09-20 | Cisco Technology, Inc. | Methods and apparatus for error detection and correction in data storage systems using hash value comparisons | 
| US9531620B2 (en) | 2013-03-15 | 2016-12-27 | Ixia | Control plane packet traffic statistics | 
| US9880773B2 (en) | 2013-03-27 | 2018-01-30 | Vmware, Inc. | Non-homogeneous disk abstraction for data oriented applications | 
| US9258185B2 (en) | 2013-04-10 | 2016-02-09 | Cisco Technology, Inc. | Fibre channel over Ethernet support in a trill network environment | 
| US9483431B2 (en) | 2013-04-17 | 2016-11-01 | Apeiron Data Systems | Method and apparatus for accessing multiple storage devices from multiple hosts without use of remote direct memory access (RDMA) | 
| US9756128B2 (en) | 2013-04-17 | 2017-09-05 | Apeiron Data Systems | Switched direct attached shared storage architecture | 
| US20140324862A1 (en) | 2013-04-30 | 2014-10-30 | Splunk Inc. | Correlation for user-selected time ranges of values for performance metrics of components in an information-technology environment with log data from that information-technology environment | 
| US9392022B2 (en) | 2013-05-03 | 2016-07-12 | Vmware, Inc. | Methods and apparatus to measure compliance of a virtual computing environment | 
| US9203738B2 (en) | 2013-05-21 | 2015-12-01 | Cisco Technology, Inc. | Optimal forwarding for trill fine-grained labeling and VXLAN interworking | 
| US9007922B1 (en) | 2013-05-23 | 2015-04-14 | Juniper Networks, Inc. | Systems and methods for testing and analyzing controller-based networks | 
| US8832330B1 (en) | 2013-05-23 | 2014-09-09 | Nimble Storage, Inc. | Analysis of storage system latency by correlating activity of storage system components with latency measurements | 
| US9014007B2 (en) | 2013-05-31 | 2015-04-21 | Dell Products L.P. | VXLAN based multicasting systems having improved load distribution | 
| US8661299B1 (en) | 2013-05-31 | 2014-02-25 | Linkedin Corporation | Detecting abnormalities in time-series data from an online professional network | 
| US9270754B2 (en) | 2013-06-06 | 2016-02-23 | Cisco Technology, Inc. | Software defined networking for storage area networks | 
| US20140366155A1 (en) | 2013-06-11 | 2014-12-11 | Cisco Technology, Inc. | Method and system of providing storage services in multiple public clouds | 
| US9304815B1 (en) | 2013-06-13 | 2016-04-05 | Amazon Technologies, Inc. | Dynamic replica failure detection and healing | 
| EP2910003B1 (en) | 2013-06-18 | 2016-11-23 | Telefonaktiebolaget LM Ericsson (publ) | Duplicate mac address detection | 
| US20140376550A1 (en) | 2013-06-24 | 2014-12-25 | Vmware, Inc. | Method and system for uniform gateway access in a virtualized layer-2 network domain | 
| US9244761B2 (en) | 2013-06-25 | 2016-01-26 | Microsoft Technology Licensing, Llc | Erasure coding across multiple zones and sub-zones | 
| US20150003458A1 (en) | 2013-06-27 | 2015-01-01 | Futurewei Technologies, Inc. | Boarder Gateway Protocol Signaling to Support a Very Large Number of Virtual Private Networks | 
| WO2014210483A1 (en) | 2013-06-28 | 2014-12-31 | Huawei Technologies Co., Ltd. | Multiprotocol label switching transport for supporting a very large number of virtual private networks | 
| US9148290B2 (en) | 2013-06-28 | 2015-09-29 | Cisco Technology, Inc. | Flow-based load-balancing of layer 2 multicast over multi-protocol label switching label switched multicast | 
| US9749231B2 (en) | 2013-07-02 | 2017-08-29 | Arista Networks, Inc. | Method and system for overlay routing with VXLAN on bare metal servers | 
| US9231863B2 (en) | 2013-07-23 | 2016-01-05 | Dell Products L.P. | Systems and methods for a data center architecture facilitating layer 2 over layer 3 communication | 
| WO2015023288A1 (en) | 2013-08-15 | 2015-02-19 | Hewlett-Packard Development Company, L.P. | Proactive monitoring and diagnostics in storage area networks | 
| WO2015029104A1 (en) | 2013-08-26 | 2015-03-05 | Hitachi, Ltd. | Vertically integrated system and firmware update method | 
| GB201315435D0 (en) | 2013-08-30 | 2013-10-16 | Ibm | Cache management in a computerized system | 
| US9565105B2 (en) | 2013-09-04 | 2017-02-07 | Cisco Technology, Inc. | Implementation of virtual extensible local area network (VXLAN) in top-of-rack switches in a network environment | 
| US20150081880A1 (en) | 2013-09-17 | 2015-03-19 | Stackdriver, Inc. | System and method of monitoring and measuring performance relative to expected performance characteristics for applications and software architecture hosted by an iaas provider | 
| US9503523B2 (en) | 2013-09-20 | 2016-11-22 | Cisco Technology, Inc. | Hybrid fibre channel storage with end-to-end storage provisioning and external connectivity in a storage area network environment | 
| US9083454B2 (en) | 2013-10-01 | 2015-07-14 | Ixia | Systems and methods for beamforming measurements | 
| US9547654B2 (en) | 2013-10-09 | 2017-01-17 | Intel Corporation | Technology for managing cloud storage | 
| US9021296B1 (en) | 2013-10-18 | 2015-04-28 | Hitachi Data Systems Engineering UK Limited | Independent data integrity and redundancy recovery in a storage system | 
| US9264494B2 (en) | 2013-10-21 | 2016-02-16 | International Business Machines Corporation | Automated data recovery from remote data object replicas | 
| US9699032B2 (en) | 2013-10-29 | 2017-07-04 | Virtual Instruments Corporation | Storage area network queue depth profiler | 
| ES2779551T3 (en) * | 2013-10-29 | 2020-08-18 | Huawei Tech Co Ltd | Data processing system and data processing method | 
| US9529652B2 (en) | 2013-11-07 | 2016-12-27 | Salesforce.Com, Inc. | Triaging computing systems | 
| US10628411B2 (en) | 2013-11-20 | 2020-04-21 | International Business Machines Corporation | Repairing a link based on an issue | 
| JP6629729B2 (en) | 2013-11-26 | 2020-01-15 | Koninklijke Philips N.V. | Automatic setting of window width/level based on reference image context in radiation report | 
| US9946889B2 (en) | 2013-11-27 | 2018-04-17 | Nakivo, Inc. | Systems and methods for multi-tenant data protection application | 
| JP2015122640A (en) | 2013-12-24 | 2015-07-02 | Hitachi Metals, Ltd. | Relay system and switch device | 
| WO2015100656A1 (en) | 2013-12-31 | 2015-07-09 | Huawei Technologies Co., Ltd. | Method and device for implementing virtual machine communication | 
| US9882841B2 (en) | 2014-01-23 | 2018-01-30 | Virtual Instruments Corporation | Validating workload distribution in a storage area network | 
| WO2015119934A1 (en) | 2014-02-04 | 2015-08-13 | Dipankar Sarkar | System and method for reliable multicast data transport | 
| US9319288B2 (en) | 2014-02-12 | 2016-04-19 | Vmware, Inc. | Graphical user interface for displaying information related to a virtual machine network | 
| WO2015138245A1 (en) | 2014-03-08 | 2015-09-17 | Datawise Systems, Inc. | Methods and systems for converged networking and storage | 
| US9887008B2 (en) | 2014-03-10 | 2018-02-06 | Futurewei Technologies, Inc. | DDR4-SSD dual-port DIMM device | 
| US20150261446A1 (en) | 2014-03-12 | 2015-09-17 | Futurewei Technologies, Inc. | Ddr4-onfi ssd 1-to-n bus adaptation and expansion controller | 
| US9374324B2 (en) | 2014-03-14 | 2016-06-21 | International Business Machines Corporation | Determining virtual adapter access controls in a computing environment | 
| US9548890B2 (en) * | 2014-03-17 | 2017-01-17 | Cisco Technology, Inc. | Flexible remote direct memory access resource configuration in a network environment | 
| US9436411B2 (en) | 2014-03-28 | 2016-09-06 | Dell Products, Lp | SAN IP validation tool | 
| US10187088B2 (en) | 2014-04-21 | 2019-01-22 | The Regents Of The University Of California | Cost-efficient repair for storage systems using progressive engagement | 
| WO2015170942A1 (en) | 2014-05-09 | 2015-11-12 | 엘지전자 주식회사 | Method and apparatus for power saving mode operation in wireless lan | 
| US20150341238A1 (en) | 2014-05-21 | 2015-11-26 | Virtual Instruments Corporation | Identifying slow draining devices in a storage area network | 
| US20150341237A1 (en) | 2014-05-22 | 2015-11-26 | Virtual Instruments Corporation | Binning of Network Transactions in a Storage Area Network | 
| US10216853B2 (en) | 2014-06-27 | 2019-02-26 | Arista Networks, Inc. | Method and system for implementing a VXLAN control plane | 
| US20170068630A1 (en) | 2014-06-30 | 2017-03-09 | Hewlett Packard Enterprise Development Lp | Runtime drive detection and configuration | 
| WO2016003489A1 (en) | 2014-06-30 | 2016-01-07 | Nicira, Inc. | Methods and systems to offload overlay network packet encapsulation to hardware | 
| US9424151B2 (en) | 2014-07-02 | 2016-08-23 | Hedvig, Inc. | Disk failure recovery for virtual disk with policies | 
| US9734007B2 (en) | 2014-07-09 | 2017-08-15 | Qualcomm Incorporated | Systems and methods for reliably storing data using liquid distributed storage | 
| US10282100B2 (en) | 2014-08-19 | 2019-05-07 | Samsung Electronics Co., Ltd. | Data management scheme in virtualized hyperscale environments | 
| US9763518B2 (en) | 2014-08-29 | 2017-09-19 | Cisco Technology, Inc. | Systems and methods for damping a storage system | 
| US10380026B2 (en) | 2014-09-04 | 2019-08-13 | Sandisk Technologies Llc | Generalized storage virtualization interface | 
| US20160088083A1 (en) | 2014-09-21 | 2016-03-24 | Cisco Technology, Inc. | Performance monitoring and troubleshooting in a storage area network environment | 
| US9858104B2 (en) | 2014-09-24 | 2018-01-02 | Pluribus Networks, Inc. | Connecting fabrics via switch-to-switch tunneling transparent to network servers | 
| WO2016045055A1 (en) * | 2014-09-25 | 2016-03-31 | Intel Corporation | Network communications using pooled memory in rack-scale architecture | 
| KR102320044B1 (en) * | 2014-10-02 | 2021-11-01 | 삼성전자주식회사 | PCI device, interface system including same, and computing system including same | 
| US9832031B2 (en) | 2014-10-24 | 2017-11-28 | Futurewei Technologies, Inc. | Bit index explicit replication forwarding using replication cache | 
| US10701151B2 (en) | 2014-10-27 | 2020-06-30 | Netapp, Inc. | Methods and systems for accessing virtual storage servers in a clustered environment | 
| US9588690B2 (en) | 2014-11-19 | 2017-03-07 | International Business Machines Corporation | Performance-based grouping of storage devices in a storage system | 
| US9602197B2 (en) | 2014-11-26 | 2017-03-21 | Brocade Communications Systems, Inc. | Non-intrusive diagnostic port for inter-switch and node link testing | 
| US9853873B2 (en) | 2015-01-10 | 2017-12-26 | Cisco Technology, Inc. | Diagnosis and throughput measurement of fibre channel ports in a storage area network environment | 
| US9678762B2 (en) | 2015-01-21 | 2017-06-13 | Cisco Technology, Inc. | Dynamic, automated monitoring and controlling of boot operations in computers | 
| US9489137B2 (en) | 2015-02-05 | 2016-11-08 | Formation Data Systems, Inc. | Dynamic storage tiering based on performance SLAs | 
| US9733968B2 (en) | 2015-03-16 | 2017-08-15 | Oracle International Corporation | Virtual machine (VM) migration from switched fabric based computing system to external systems | 
| US9582377B1 (en) | 2015-03-19 | 2017-02-28 | Amazon Technologies, Inc. | Dynamic sizing of storage capacity for a remirror buffer | 
| US9900250B2 (en) | 2015-03-26 | 2018-02-20 | Cisco Technology, Inc. | Scalable handling of BGP route information in VXLAN with EVPN control plane | 
| US9830240B2 (en) | 2015-05-14 | 2017-11-28 | Cisco Technology, Inc. | Smart storage recovery in a distributed storage system | 
| US10222986B2 (en) | 2015-05-15 | 2019-03-05 | Cisco Technology, Inc. | Tenant-level sharding of disks with tenant-specific storage modules to enable policies per tenant in a distributed storage system | 
| US11588783B2 (en) | 2015-06-10 | 2023-02-21 | Cisco Technology, Inc. | Techniques for implementing IPV6-based distributed storage space | 
| US9859974B2 (en) | 2015-06-25 | 2018-01-02 | International Business Machines Corporation | Rerouting bus data signals from faulty signal carriers to existing healthy signal carriers | 
| US9983959B2 (en) | 2015-06-29 | 2018-05-29 | Microsoft Technology Licensing, Llc | Erasure coding of data within a group of storage units based on connection characteristics | 
| US20170010874A1 (en) | 2015-07-06 | 2017-01-12 | Cisco Technology, Inc. | Provisioning storage devices in a data center | 
| US20170010930A1 (en) | 2015-07-08 | 2017-01-12 | Cisco Technology, Inc. | Interactive mechanism to view logs and metrics upon an anomaly in a distributed storage system | 
| US9575828B2 (en) | 2015-07-08 | 2017-02-21 | Cisco Technology, Inc. | Correctly identifying potential anomalies in a distributed storage system | 
| US10778765B2 (en) | 2015-07-15 | 2020-09-15 | Cisco Technology, Inc. | Bid/ask protocol in scale-out NVMe storage | 
| US10860520B2 (en) * | 2015-11-18 | 2020-12-08 | Oracle International Corporation | Integration of a virtualized input/output device in a computer system | 
| US9892075B2 (en) | 2015-12-10 | 2018-02-13 | Cisco Technology, Inc. | Policy driven storage in a microserver computing environment | 
| US10002247B2 (en) | 2015-12-18 | 2018-06-19 | Amazon Technologies, Inc. | Software container registry container image deployment | 
| US10423568B2 (en) * | 2015-12-21 | 2019-09-24 | Microsemi Solutions (U.S.), Inc. | Apparatus and method for transferring data and commands in a memory management environment | 
| US9396251B1 (en) | 2016-01-07 | 2016-07-19 | International Business Machines Corporation | Detecting and tracking virtual containers | 
| US10248468B2 (en) * | 2016-01-11 | 2019-04-02 | International Business Machines Corporation | Using hypervisor for PCI device memory mapping | 
| US10210121B2 (en) | 2016-01-27 | 2019-02-19 | Quanta Computer Inc. | System for switching between a single node PCIe mode and a multi-node PCIe mode | 
| US10140172B2 (en) | 2016-05-18 | 2018-11-27 | Cisco Technology, Inc. | Network-aware storage repairs | 
| US10664169B2 (en) | 2016-06-24 | 2020-05-26 | Cisco Technology, Inc. | Performance of object storage system by reconfiguring storage devices based on latency that includes identifying a number of fragments that has a particular storage device as its primary storage device and another number of fragments that has said particular storage device as its replica storage device | 
- 2016-06-06 US US15/174,718 patent/US20170351639A1/en not_active Abandoned
- 2019-08-16 US US16/542,952 patent/US10872056B2/en active Active
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title | 
|---|---|---|---|---|
| US20080288661A1 (en) * | 2007-05-16 | 2008-11-20 | Michael Galles | Method and system to map virtual i/o devices and resources to a standard i/o bus | 
| US20140164666A1 (en) * | 2012-12-07 | 2014-06-12 | Hon Hai Precision Industry Co., Ltd. | Server and method for sharing peripheral component interconnect express interface | 
| US20140189278A1 (en) * | 2012-12-27 | 2014-07-03 | Huawei Technologies Co., Ltd. | Method and apparatus for allocating memory space with write-combine attribute | 
| US20160294983A1 (en) * | 2015-03-30 | 2016-10-06 | Mellanox Technologies Ltd. | Memory sharing using rdma | 
| US20170277655A1 (en) * | 2016-03-25 | 2017-09-28 | Microsoft Technology Licensing, Llc | Memory sharing for working data using rdma | 
Cited By (22)
| Publication number | Priority date | Publication date | Assignee | Title | 
|---|---|---|---|---|
| US10503922B2 (en) * | 2017-05-04 | 2019-12-10 | Dell Products L.P. | Systems and methods for hardware-based security for inter-container communication | 
| US20180322299A1 (en) * | 2017-05-04 | 2018-11-08 | Dell Products L.P. | Systems and methods for hardware-based security for inter-container communication | 
| US20180335956A1 (en) * | 2017-05-17 | 2018-11-22 | Dell Products L.P. | Systems and methods for reducing data copies associated with input/output communications in a virtualized storage environment | 
| US11086813B1 (en) * | 2017-06-02 | 2021-08-10 | Sanmina Corporation | Modular non-volatile memory express storage appliance and method therefor | 
| US20190050341A1 (en) * | 2018-03-30 | 2019-02-14 | Intel Corporation | Memory-addressed maps for persistent storage device | 
| US10635598B2 (en) * | 2018-03-30 | 2020-04-28 | Intel Corporation | Memory-addressed maps for persistent storage device | 
| US20220138102A1 (en) * | 2019-05-28 | 2022-05-05 | Micron Technology, Inc. | Intelligent Content Migration with Borrowed Memory | 
| US12019549B2 (en) * | 2019-05-28 | 2024-06-25 | Micron Technology, Inc. | Intelligent content migration with borrowed memory | 
| US11893425B2 (en) * | 2020-09-25 | 2024-02-06 | Intel Corporation | Disaggregated computing for distributed confidential computing environment | 
| US12164973B2 (en) | 2020-09-25 | 2024-12-10 | Intel Corporation | Disaggregated computing for distributed confidential computing environment | 
| US20220100582A1 (en) * | 2020-09-25 | 2022-03-31 | Intel Corporation | Disaggregated computing for distributed confidential computing environment | 
| US11941457B2 (en) | 2020-09-25 | 2024-03-26 | Intel Corporation | Disaggregated computing for distributed confidential computing environment | 
| US12405838B2 (en) | 2020-09-25 | 2025-09-02 | Intel Corporation | Disaggregated computing for distributed confidential computing environment | 
| US11989595B2 (en) | 2020-09-25 | 2024-05-21 | Intel Corporation | Disaggregated computing for distributed confidential computing environment | 
| US12260263B2 (en) | 2020-09-25 | 2025-03-25 | Intel Corporation | Disaggregated computing for distributed confidential computing environment | 
| US12033005B2 (en) | 2020-09-25 | 2024-07-09 | Intel Corporation | Disaggregated computing for distributed confidential computing environment | 
| US12229605B2 (en) | 2020-09-25 | 2025-02-18 | Intel Corporation | Disaggregated computing for distributed confidential computing environment | 
| US12093748B2 (en) | 2020-09-25 | 2024-09-17 | Intel Corporation | Disaggregated computing for distributed confidential computing environment | 
| CN113297114A (en) * | 2021-05-21 | 2021-08-24 | 清创网御(合肥)科技有限公司 | Method for supporting multiple processes and multiple threads based on PCIE (peripheral component interface express) independent IO (input/output) of encryption card | 
| US20220391348A1 (en) * | 2021-06-04 | 2022-12-08 | Microsoft Technology Licensing, Llc | Userspace networking with remote direct memory access | 
| US12066973B2 (en) * | 2021-06-04 | 2024-08-20 | Microsoft Technology Licensing, Llc | Userspace networking with remote direct memory access | 
| US11972112B1 (en) * | 2023-01-27 | 2024-04-30 | Dell Products, L.P. | Host IO device direct read operations on peer memory over a PCIe non-transparent bridge | 
Also Published As
| Publication number | Publication date | 
|---|---|
| US10872056B2 (en) | 2020-12-22 | 
| US20190370216A1 (en) | 2019-12-05 | 
Similar Documents
| Publication | Publication Date | Title | 
|---|---|---|
| US10872056B2 (en) | | Remote memory access using memory mapped addressing among multiple compute nodes |
| US11102117B2 (en) | | In NIC flow switching |
| JP6016984B2 (en) | | Local service chain using virtual machines and virtualized containers in software defined networks |
| US8521941B2 (en) | | Multi-root sharing of single-root input/output virtualization |
| US9154451B2 (en) | | Systems and methods for sharing devices in a virtualization environment |
| CN115668886A (en) | | Resource allocation and software execution for switch management |
| US9548890B2 (en) | | Flexible remote direct memory access resource configuration in a network environment |
| EP3776230A1 (en) | | Virtual RDMA switching for containerized applications |
| US10911405B1 (en) | | Secure environment on a server |
| US9910687B2 (en) | | Data flow affinity for heterogenous virtual machines |
| US11412059B2 (en) | | Technologies for paravirtual network device queue and memory management |
| US20230109396A1 (en) | | Load balancing and networking policy performance by a packet processing pipeline |
| US9344376B2 (en) | | Quality of service in multi-tenant network |
| US10103992B1 (en) | | Network traffic load balancing using rotating hash |
| US10761939B1 (en) | | Powering-down or rebooting a device in a system fabric |
| US10719475B2 (en) | | Method or apparatus for flexible firmware image management in microserver |
| US20180091447A1 (en) | | Technologies for dynamically transitioning network traffic host buffer queues |
| US20230185624A1 (en) | | Adaptive framework to manage workload execution by computing device including one or more accelerators |
| EP4187868A1 (en) | | Load balancing and networking policy performance by a packet processing pipeline |
| US20240354143A1 (en) | | Techniques for cooperative host/guest networking |
| US20240211392A1 (en) | | Buffer allocation |
| Nanos et al. | | Xen2MX: towards high-performance communication in the cloud |
Legal Events
| Date | Code | Title | Description | 
|---|---|---|---|
| | AS | Assignment | Owner name: CISCO TECHNOLOGY, INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BORIKAR, SAGAR;REEL/FRAME:038821/0619. Effective date: 20160523 |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: AWAITING RESPONSE FOR INFORMALITY, FEE DEFICIENCY OR CRF ACTION |
| | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |