US11086703B2 - Distributed input/output virtualization - Google Patents

Distributed input/output virtualization

Info

Publication number
US11086703B2
US11086703B2
Authority
US
United States
Prior art keywords
host computing
computing device
dvc
virtualized
virtualization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US16/055,247
Other versions
US20180341536A1 (en)
Inventor
Yves Tchapda
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Micron Technology Inc
Original Assignee
Micron Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Micron Technology Inc filed Critical Micron Technology Inc
Priority to US16/055,247 priority Critical patent/US11086703B2/en
Assigned to MICRON TECHNOLOGY, INC. reassignment MICRON TECHNOLOGY, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: TCHAPDA, YVES
Assigned to MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT reassignment MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT SUPPLEMENT NO. 10 TO PATENT SECURITY AGREEMENT Assignors: MICRON TECHNOLOGY, INC.
Assigned to JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT reassignment JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT SUPPLEMENT NO. 1 TO PATENT SECURITY AGREEMENT Assignors: MICRON TECHNOLOGY, INC.
Publication of US20180341536A1 publication Critical patent/US20180341536A1/en
Assigned to MICRON TECHNOLOGY, INC. reassignment MICRON TECHNOLOGY, INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT
Assigned to MICRON TECHNOLOGY, INC. reassignment MICRON TECHNOLOGY, INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT
Application granted granted Critical
Publication of US11086703B2 publication Critical patent/US11086703B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0706Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment
    • G06F11/0712Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment in a virtual computing platform, e.g. logically partitioned systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/16Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0895Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0706Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment
    • G06F11/0709Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment in a distributed system consisting of a plurality of standalone computer nodes, e.g. clusters, client-server systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1415Saving, restoring, recovering or retrying at system level
    • G06F11/142Reconfiguring to eliminate the error
    • G06F11/1423Reconfiguring to eliminate the error by reconfiguration of paths
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3003Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/3006Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3003Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/301Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is a virtual computing platform, e.g. logically partitioned systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/32Monitoring with visual or acoustical indication of the functioning of the machine
    • G06F11/324Display of status information
    • G06F11/327Alarm or error message display
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/10Program control for peripheral devices
    • G06F13/102Program control for peripheral devices where the programme performs an interfacing function, e.g. device driver
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/20Handling requests for interconnection or transfer for access to input/output bus
    • G06F13/24Handling requests for interconnection or transfer for access to input/output bus using interrupt
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38Information transfer, e.g. on bus
    • G06F13/40Bus structure
    • G06F13/4004Coupling between buses
    • G06F13/4022Coupling between buses using switching circuits, e.g. switching matrix, connection or expansion network
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38Information transfer, e.g. on bus
    • G06F13/42Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F13/4282Bus transfer protocol, e.g. handshake; Synchronisation on a serial bus, e.g. I2C bus, SPI bus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30098Register arrangements
    • G06F9/3012Organisation of register space, e.g. banked or distributed register file
    • G06F9/3013Organisation of register space, e.g. banked or distributed register file according to data content, e.g. floating-point registers, address registers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3854Instruction completion, e.g. retiring, committing or graduating
    • G06F9/3856Reordering of instructions, e.g. using queues or age tags
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0803Configuration setting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45579I/O management, e.g. providing access to device drivers or storage

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Mathematical Physics (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer And Data Communications (AREA)
  • Bus Control (AREA)

Abstract

The present disclosure includes apparatuses and methods related to distributed input/output (I/O) virtualization. A number of embodiments include an apparatus comprising a host computing device, a distributed virtualization controller (DVC) disposed on the host computing device, and a virtualized input/output (I/O) device in communication with the DVC.

Description

PRIORITY INFORMATION
This application is a Continuation of U.S. application Ser. No. 15/041,207, filed Feb. 11, 2016, the contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates generally to distributed computing architectures, and more particularly, to systems, methods, and apparatuses related to distributed input/output (I/O) virtualization in computing architectures.
BACKGROUND
Distributed computing architectures are typically characterized by the sharing of components of a software system and/or hardware system among multiple host computing devices (e.g., physical computing resources, computers, servers, clusters, etc. that are connected to a computer network). For example, a distributed computing architecture can include a plurality of host computing devices that share one or more software components and/or physical computing resources (e.g., access to hardware components). The host computing devices can be distributed within a limited geographic area, or they may be widely distributed across various geographic areas. To facilitate sharing of the software and/or physical computing resources, the host computing devices can be in communication with a network switch, management host, and/or other device(s) that can allow for interaction between the host computing devices.
In the example of a host computing device configured to be in communication with a network switch, the switch can route data packets from an output of one host computing device to an input of one or more other host computing devices. In this manner, various software components and/or hardware components may be shared among host computing devices in a distributed computing architecture.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of a system for distributed I/O virtualization in accordance with a number of embodiments of the present disclosure.
FIG. 2 is a block diagram of a distributed virtualization controller architecture in accordance with a number of embodiments of the present disclosure.
FIG. 3 is a diagram illustrating a queuing interface for a distributed I/O virtualization system in accordance with a number of embodiments of the present disclosure.
DETAILED DESCRIPTION
The present disclosure includes systems, methods, and apparatuses related to distributed input/output (I/O) virtualization in computing architectures. A number of embodiments include an apparatus comprising a host computing device, a distributed virtualization controller (DVC) disposed on the host computing device, and a virtualized input/output (I/O) device in communication with the DVC.
A number of embodiments of the present disclosure include a method for distributed input/output (I/O) virtualization comprising intercepting an input/output (I/O) transaction at a distributed virtualization controller (DVC) disposed on a host computing device, identifying, in a virtualization layer associated with the DVC, a physical I/O to receive the I/O transaction, and forwarding the I/O transaction to a physical layer associated with the DVC.
As discussed above, certain software components and/or physical computing resources are commonly shared among host computing devices in a distributed computing architecture. However, some conventional distributed computing architectures do not share input/output (I/O) devices. For example, in some conventional distributed computing architectures, there are one or more respective, dedicated I/O devices associated with each host computing device.
Some attempts to allow I/O devices to be shared among host computing devices have been made using what is referred to as I/O virtualization. I/O virtualization can allow a particular I/O device to be shared among a plurality of host computing devices. Although I/O virtualization can allow an I/O device to appear to function as more than one I/O device, with each virtual I/O device associated with a particular host computing device, some currently available I/O virtualization schemes can suffer from a number of shortcomings.
For example, some approaches to I/O virtualization have limited scalability because they are either software-based or rely on a centralized controller to provide I/O virtualization functionality. However, such approaches can be inadequate as bandwidth increases and I/O processing requirements become more stringent. In addition, as the number of host computing devices (e.g., servers and/or clusters) in a distributed computing architecture increases, system requirements likewise increase, further compounding adverse effects in the performance of such systems. In order to address these shortcomings, some approaches to I/O virtualization have included adding an additional controller or controllers, for example, a centralized virtual controller. However, adding additional controllers adds expense and complexity to the system, and can suffer from limited scalability.
In contrast, embodiments of the present disclosure include a distributed computing architecture where the virtualization functionality is distributed to host computing devices in the distributed computing architecture and implemented in hardware. In at least one embodiment, the distributed computing architecture described herein can include the use of a protocol with a multi-queue interface (e.g., non-volatile memory host controller interface working group (NVMe), Intel® virtual machine device queues (VMDq), etc.), or can include the use of a multi-function peripheral component interconnect express (PCIe) I/O such as a single root I/O virtualization compliant I/O device.
Some embodiments of the present disclosure can allow for increased performance in comparison to some previous approaches that rely on centralized virtualization controllers, for example, because, in contrast to such approaches, transaction completions do not necessarily traverse the switch fabric multiple times. For example, an I/O transaction from a virtualized I/O may only traverse the network switch once. As a result, latency can be improved in comparison to some previous approaches. In addition, some embodiments can allow for improvements to scalability versus some previous approaches, because distributing the virtualization functionality to one or more host computing devices in the distributed computing architecture does not require additional controllers. Further, a performance footprint of a distributed I/O virtualization architecture can be increased because the virtualization functionality is partitioned across multiple devices.
In some embodiments, memory allocated for virtualization can be distributed, as opposed to being associated with a single location, as in some previous approaches. For example, in some embodiments, memory allocated for virtualization can be provided by a plurality of host computing devices in a distributed computing architecture. In some embodiments, some portion of the memory associated with one or more of the host computing devices can be used to augment the virtualization functionality.
In addition, embodiments of the present disclosure can allow for a decrease in complexity associated with deploying a plurality of centralized controllers and/or in managing errors. Moreover, extra switching ports that can be used by centralized controllers in some previous approaches can be made available for additional host computing devices and/or I/Os, and/or the size of the switching fabric could be reduced, thereby decreasing costs associated with the distributed computing architecture.
In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how one or more embodiments of the disclosure may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of this disclosure, and it is to be understood that other embodiments may be utilized and that process, electrical, and/or structural changes may be made without departing from the scope of the present disclosure. As used herein, designators such as “N”, “M”, etc., particularly with respect to reference numerals in the drawings, indicate that a number of the particular feature so designated can be included. As used herein, “a number of” a particular thing can refer to one or more of such things (e.g., a number of memory arrays can refer to one or more memory arrays). A “plurality of” is intended to refer to more than one of such things.
The figures herein follow a numbering convention in which the first digit or digits correspond to the drawing figure number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar digits. For example, 150 may reference element “50” in FIG. 1, and a similar element may be referenced as 250 in FIG. 2. As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, and/or eliminated so as to provide a number of additional embodiments of the present disclosure. In addition, as will be appreciated, the proportion and the relative scale of the elements provided in the figures are intended to illustrate certain embodiments of the present invention, and should not be taken in a limiting sense.
FIG. 1 is a block diagram of a system for distributed I/O virtualization in accordance with a number of embodiments of the present disclosure. As illustrated in FIG. 1, a plurality of host computing devices 120-1, 120-2, . . . , 120-N, each including a respective distributed virtualization controller (DVC) 128-1, 128-2, . . . , 128-N, are communicatively coupled to a plurality of virtualized I/O devices 126. The plurality of host computing devices 120-1, 120-2, . . . , 120-N can be communicatively coupled to the plurality of virtualized I/O devices 126 through a switch 122 (e.g., a network switch). The virtualized I/O devices 126 can be network interface cards, storage devices, graphics rendering devices, or other virtualized I/O devices. In the example of FIG. 1, a single switch 122 is illustrated; however, other switching topologies can be used. For example, a multi-stage/cascading switching topology such as a tree structure (e.g., fat tree) can be used. In some embodiments, the system includes a management host 124 that includes a DVC 128-M.
In some embodiments, the DVCs 128-1, 128-2, . . . , 128-N can be configured to provide various functionality that allows I/O devices among the virtualized I/O devices 126 to be effectively shared between the host computing devices 120-1, 120-2, . . . , 120-N, and/or the management host computing device 124. For example, each respective DVC 128-1, 128-2, . . . , 128-N can provide virtualization functionality to the host computing device on which it is disposed for one or more of the virtualized I/O devices 126. That is, in some embodiments, DVC 128-1 disposed on host computing device 120-1 can be configured such that one or more of the virtualized I/O devices 126 can be used by host computing device 120-1 while DVC 128-2 can be configured such that one or more of the virtualized I/O devices 126 can be used by host computing device 120-2. In some embodiments, each respective DVC 128-1, 128-2, . . . , 128-N can be in communication with the management host computing device DVC 128-M through the switch 122 to coordinate virtualization functionality among the respective host computing devices 120-1, 120-2, . . . , 120-N of the system. The functionality of the DVCs 128-1, 128-2, . . . , 128-N are described in more detail in connection with FIG. 2, herein.
FIG. 2 is a block diagram of a distributed virtualization controller (DVC) architecture in accordance with a number of embodiments of the present disclosure. As illustrated in FIG. 2, the DVC includes a virtualization layer 230 and a physical layer 240. The virtualization layer 230 can communicate with system software (e.g., operating system software, BIOS software, etc.) running on the host computing devices (e.g., host computing devices 120-1, 120-2, . . . , 120-N). In some embodiments, the virtualization layer 230 can expose one or more peripheral component interconnect (PCI) and/or one or more peripheral component interconnect express (PCIe) functions to the system software. Embodiments disclosed herein make reference to PCIe for simplicity; however, as one of ordinary skill in the art would appreciate, other interconnection systems such as PCI, among others, are contemplated.
The virtualization layer 230 can include a virtual I/O configuration space interface 232, a virtual I/O register interface 234, virtual layer I/O processing 236, and/or a virtual layer queuing interface 238. The physical layer 240 can include a physical I/O register interface 242, error processing engine 244, physical layer I/O processing 246, and/or physical layer queuing interface 248. In some embodiments, the components, blocks, and/or engines of the virtualization layer 230 and the physical layer 240 can include hardware and/or software, but include at least hardware, configured to perform certain tasks or actions. For example, the components of the virtualization layer 230 and the physical layer 240 can be in the form of an application specific integrated circuit (ASIC).
In some embodiments, the system software can detect a PCIe function that has been exposed by the DVC and, in response, configure a virtual I/O device. Detection and configuration of the PCIe function can be carried out during enumeration, for example. The system software can then load any relevant device driver(s) for the virtual I/O device and attach them to the appropriate network driver stack. For example, if the I/O is a network interface card (NIC), the system software can load a NIC driver and attach it to the network driver stack. In some embodiments, the newly loaded driver configures the virtual I/O device as if it were directly addressing the physical I/O device. In some embodiments, the driver may set up various registers on its virtual I/O device.
In some embodiments, the DVC can receive support from a multi-queue interface. Since the host computing devices expect the I/O to support a queuing interface, the virtualization layer 230 can expose a queuing interface to the host computing devices. In some embodiments, virtual queuing interface 238 can be independent of a queuing interface associated with a physical I/O. For example, a single queue associated with the physical I/O being virtualized can be assigned to a DVC; however, the DVC can still expose multiple queues to the host computing devices.
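The queue decoupling described above can be sketched roughly as follows. This is a hypothetical illustration only; the class and method names, and the simple merge policy, are assumptions for the sake of the example and are not taken from the disclosure:

```python
from collections import deque

class VirtualQueuingInterface:
    """Hypothetical sketch: several virtual queues exposed to host
    software, all backed by a single queue on the physical I/O."""

    def __init__(self, num_virtual_queues):
        self.virtual_queues = [deque() for _ in range(num_virtual_queues)]
        self.physical_queue = deque()  # the one queue assigned on the physical I/O

    def submit(self, vq_index, entry):
        # Host software sees, and submits to, its own virtual queue.
        self.virtual_queues[vq_index].append(entry)

    def drain_to_physical(self):
        # The DVC merges pending virtual-queue entries into the single
        # physical queue, tagging each with its originating virtual queue
        # so completions can be routed back later.
        for i, vq in enumerate(self.virtual_queues):
            while vq:
                self.physical_queue.append((i, vq.popleft()))
```

In this sketch the number of virtual queues visible to the host is independent of the single physical queue, which mirrors the independence between the virtual queuing interface 238 and the physical I/O's queuing interface noted above.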
The virtualization layer 230 can include virtual layer I/O processing 236. In some embodiments, virtual layer I/O processing 236 can include intercepting I/O transactions from the host computing devices by the DVC. These transactions can then be processed locally, for example, by virtual layer I/O processing 236. In some embodiments, virtual layer I/O processing 236 can be carried out using a push or pull methodology for I/O processing. For example, whether a push or pull methodology is used for I/O processing can be transparent to the architecture described herein.
Virtual layer I/O processing 236 is responsible for examining an I/O transaction from the host computing devices and determining if the I/O transaction is to be forwarded to a physical I/O. In some embodiments, the virtual layer I/O processing 236 block is responsible for identifying the physical I/O to receive the I/O transaction. For example, virtual layer I/O processing 236 can identify a physical I/O device among a plurality of physical I/O devices in the system that is to receive a particular I/O transaction.
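The identification step can be pictured as a simple lookup from an intercepted transaction to a target physical I/O. The function, field names, and routing table below are illustrative assumptions, not the patent's method:

```python
def identify_physical_io(transaction, routing_table):
    """Hypothetical sketch of the virtual-layer routing decision:
    return the physical I/O mapped to the transaction's virtual
    device, or None if the transaction is handled locally."""
    return routing_table.get(transaction["virtual_device"])

# Illustrative routing table mapping virtual devices to physical I/Os.
routing_table = {"vnic0": "phys_nic_1", "vhba0": "phys_hba_0"}
target = identify_physical_io({"virtual_device": "vnic0"}, routing_table)
```

A transaction whose virtual device has no entry is simply not forwarded, which corresponds to local processing in the virtualization layer.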
Upon completion of the I/O transaction, a notification can be sent to the host computing devices to indicate that the transaction has completed. The method of notification depends on schemes that are supported by the device drivers; however, in at least one embodiment, the notification can be in the form of an interrupt. For example, interrupt mechanisms such as INTx, MSI, and MSI-X can be used to provide the notification. Embodiments are not so limited, however, and polling (e.g., polled I/O or software-driven I/O) mechanisms may be used to provide the notification as well.
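The two notification styles above (interrupt-driven versus polled) can be modeled side by side. This is a software analogy only; the class and its methods are invented for illustration and do not describe the hardware interrupt mechanisms themselves:

```python
class Completion:
    """Hypothetical model of a transaction completion that can be
    observed either via a callback (interrupt-style, e.g. an MSI-X
    analogue) or by polling a completion flag (polled-I/O analogue)."""

    def __init__(self):
        self.done = False
        self._callback = None

    def on_complete(self, fn):
        # Interrupt-style: register a handler fired on completion.
        self._callback = fn

    def complete(self):
        self.done = True
        if self._callback is not None:
            self._callback()

    def poll(self):
        # Polled-I/O style: the driver repeatedly checks this flag.
        return self.done
```

Either observation style sees the same underlying completion event, which matches the point that the choice of notification scheme depends on what the device drivers support.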
In some embodiments, the DVC is responsible for virtualizing I/Os to a single host computing device. This can simplify error isolation because each host computing device can be isolated from problems that occur on other host computing devices in the distributed computing architecture.
The physical layer 240 of the DVC can include a physical layer queuing interface 248 that can interface with actual, physical hardware in the system, and can be configured by management software. For example, during system initialization, management software can create device queues in the DVC. In some embodiments, the DVC can be disposed on a card that is coupled to host computing devices. For example, the DVC can be on a card that is physically coupled to host computing devices. The queues can be part of a multi-queue interface on a single device, or the queues can be part of a multi-queue interface on a multi-function I/O device (e.g., a single root I/O virtualization (SR-IOV) device).
In some embodiments, at least one queue on the DVC is assigned to physical I/O that supports a multi-queue interface. In the example of a SR-IOV-compliant I/O device, a virtual function on the physical I/O is mapped to a virtual I/O device on the DVC. In some embodiments, each I/O queue register (e.g., each I/O queue base address register) can be mapped 1:1 to physical layer queuing interface 248. For example, a first I/O queue register can be mapped to a respective queue on the DVC through the physical layer queuing interface.
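The 1:1 register-to-queue mapping can be sketched as a small table that refuses duplicate mappings. The class and method names below are assumptions for illustration, not the disclosed interface:

```python
class PhysicalLayerQueuingInterface:
    """Hypothetical sketch of a 1:1 map from I/O queue base address
    registers to DVC-side queues."""

    def __init__(self):
        self._map = {}

    def map_register(self, register_addr, queue_id):
        # Enforce the 1:1 property: a register maps to exactly one queue.
        if register_addr in self._map:
            raise ValueError("register already mapped")
        self._map[register_addr] = queue_id

    def queue_for(self, register_addr):
        return self._map[register_addr]
```

Rejecting a second mapping for the same register is one simple way to keep the correspondence strictly one-to-one, as the description requires.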
The DVC can also include physical layer I/O processing 246. In some embodiments, when the virtualization layer 230 identifies a physical I/O to receive an I/O transaction, the transaction can be forwarded to the physical layer 240. In some embodiments, physical layer I/O processing 246 on the physical layer 240 can modify the address that points to the data associated with the transaction and can post the transaction to the physical I/O. In some embodiments, the transaction can be a storage command for a host bus adapter (HBA) and/or a descriptor for a NIC.
In some embodiments, the address can be a 64-bit memory address with 8 bits reserved for a routing field. The high address bits can be used for routing and/or to allow the I/O to access data from memory associated with one or more of the host computing devices. For example, address modification on the transaction can be for routing purposes and can allow the I/O to access the data from the relevant storage location (e.g., host computing device memory). Embodiments are not so limited, however, and the size of the memory address and/or routing field can be smaller or larger provided the address includes enough bits to be useful for addressing purposes.
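The routing-field layout described above can be sketched concretely. This is an assumed encoding (the patent does not specify exact bit positions beyond "8 bits reserved for the routing field"): the host identifier occupies the high 8 bits of a 64-bit address, leaving 56 bits for the local address.

```python
# Assumed layout: high 8 bits of a 64-bit address carry a routing field
# identifying the host computing device whose memory holds the data.
ROUTING_BITS = 8
ROUTING_SHIFT = 64 - ROUTING_BITS  # routing field occupies bits 63..56

def encode_address(host_id: int, local_addr: int) -> int:
    """Embed a routing field in the high bits of a 64-bit memory address."""
    assert 0 <= host_id < (1 << ROUTING_BITS)
    assert 0 <= local_addr < (1 << ROUTING_SHIFT)
    return (host_id << ROUTING_SHIFT) | local_addr

def decode_address(addr: int):
    """Recover (host_id, local_addr) so the I/O can reach the right host memory."""
    return addr >> ROUTING_SHIFT, addr & ((1 << ROUTING_SHIFT) - 1)

addr = encode_address(host_id=3, local_addr=0x1000)
assert decode_address(addr) == (3, 0x1000)
```

With this layout, 256 hosts can be distinguished while each host retains a 56-bit local address space.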
In some embodiments, distributed I/O virtualization allows for in-flight address modification. For example, an address that points to the data associated with an I/O transaction can be modified as it is being fetched by the I/O device. This can allow for a reduction in the memory used by a virtualization system. In some embodiments, in-flight address modification can be dictated by the I/O interface protocol.
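In-flight modification can be sketched with a generator: the routing field is patched into each descriptor as it is fetched, rather than storing a rewritten copy of the whole queue. This is an illustrative model only (descriptor fields and the 56-bit shift are assumptions carried over from the example layout above).

```python
# Hypothetical sketch of in-flight address modification: descriptors are
# rewritten lazily at fetch time, so no modified copy of the queue is stored.
def fetch_descriptors(descriptors, host_id, routing_shift=56):
    """Yield each descriptor with the routing field patched into its address
    as it is fetched by the I/O device."""
    for desc in descriptors:
        yield {**desc, "addr": (host_id << routing_shift) | desc["addr"]}

queue = [{"addr": 0x2000, "len": 512}]
fetched = list(fetch_descriptors(queue, host_id=1))
assert fetched[0]["addr"] == (1 << 56) | 0x2000
assert queue[0]["addr"] == 0x2000  # original queue memory is untouched
```

Because the rewrite happens per fetch, the memory cost is independent of queue depth, which is the reduction the text refers to.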
Physical layer I/O processing 246 can receive and/or process notifications from a physical I/O that a transaction is complete. In some embodiments, a transaction notification at the physical layer 240 is independent of a transaction notification at the virtualization layer 230. In some embodiments, a notification can be implied based on a response from the physical I/O, while a notification of completion to the host computing device can be an explicit interrupt mechanism.
In some embodiments, the physical layer 240 of the DVC includes a physical I/O register interface 242. The physical I/O register interface can provide access to physical I/O registers to the DVC. For example, in some embodiments, physical I/O registers that are not part of the queuing interface are only written by the management host (e.g., management host 124 illustrated in FIG. 1). In some embodiments, the DVC can access the physical I/O registers as read-only. For example, physical I/O register interface 242 can allow the DVC to receive values associated with the physical I/O registers such that the DVC can communicate with the physical I/O registers without having to modify the values.
The physical layer 240 of the DVC can include error processing engine 244. In some embodiments, error processing engine 244 can emulate a response to indicate an error if communication between the DVC and one or more virtualized I/Os fails. For example, error processing engine 244 can emulate a completion abort (CA) or unsupported request (UR) response to indicate the error to the driver. For example, a PCIe CA or PCIe UR response can be emulated by the error processing engine 244. In some embodiments, this can prevent failure of the entire system and/or can prevent operating system crashes on the host computing devices. Should a failure occur, a recovery mechanism can be initiated on the physical I/O by the management host. For example, the management host can initiate a recovery mechanism in the form of a reset and/or power cycle of the physical I/O in response to a failure of the physical I/O or an indication that the physical I/O is non-responsive. In some embodiments, the recovery mechanism can be initiated using an out-of-band control path (e.g., a path that is independent of an in-band data path).
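The error-emulation behavior can be sketched as follows. This is a simplified, hypothetical model (the function names, the timeout-based failure detection, and the dictionary-shaped responses are illustrative assumptions): on a failure to reach the physical I/O, the error processing engine returns a well-formed error completion, analogous to a completion abort (CA) or unsupported request (UR), instead of leaving the transaction hanging.

```python
# Hypothetical sketch of an error processing engine: emulate an error
# completion when the physical I/O fails, so the host driver sees a
# well-formed response rather than a hung transaction.
COMPLETION_ABORT = "CA"      # analogous to a PCIe Completer Abort status
UNSUPPORTED_REQUEST = "UR"   # analogous to a PCIe Unsupported Request status

def issue(transaction, physical_io, timeout_s=1.0):
    """Issue a transaction; return an emulated error completion on failure."""
    try:
        return physical_io(transaction, timeout_s)
    except TimeoutError:
        # device unresponsive: emulate a completion abort for the driver
        return {"status": COMPLETION_ABORT, "tag": transaction["tag"]}
    except ValueError:
        # device rejected the request outright
        return {"status": UNSUPPORTED_REQUEST, "tag": transaction["tag"]}

def dead_device(txn, timeout_s):
    """Stand-in for a non-responsive physical I/O."""
    raise TimeoutError

resp = issue({"tag": 7, "op": "read"}, dead_device)
assert resp == {"status": "CA", "tag": 7}
```

Because the driver receives a normal (if erroneous) completion, the host operating system can handle the failure gracefully while the management host resets or power-cycles the physical I/O out of band.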
In some embodiments, quality of service (QoS) can be implemented on the virtualization layer 230 and the physical layer 240. On the virtualization layer, the DVC can implement any QoS scheme (e.g., round robin, weighted round robin, weighted fair queuing, etc.) across multiple queues associated with a virtual I/O. In some embodiments, these queues can be instanced or assigned to one or more virtual machines. Each queue can be assigned a weight, or a strict priority scheme can be imposed.
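One of the schemes mentioned, weighted round robin, can be sketched in a few lines. This is a minimal illustrative arbiter, not the patent's implementation: each arbitration pass serves up to `weight` entries from each queue in turn.

```python
# Minimal weighted round robin sketch: one arbitration pass serves up to
# `weight` entries from each queue before moving to the next queue.
def weighted_round_robin(queues, weights):
    """Return the entries served in one pass, draining queues in place."""
    served = []
    for q, w in zip(queues, weights):
        for _ in range(min(w, len(q))):
            served.append(q.pop(0))
    return served

q0 = ["a1", "a2", "a3"]   # e.g., a high-weight virtual machine's queue
q1 = ["b1", "b2", "b3"]   # e.g., a low-weight virtual machine's queue
order = weighted_round_robin([q0, q1], weights=[2, 1])
assert order == ["a1", "a2", "b1"]
```

A strict priority scheme would instead always drain the highest-priority non-empty queue first; the weight vector is the knob an adaptive DVC could adjust to fit changing QoS requirements.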
At the physical layer 240, each physical I/O can be configured with arbitration priorities for each queue mapped to a DVC. For example, the physical I/O device can then use the queue priority to differentiate traffic between the host computing devices. In some embodiments, the DVC can implement an adaptive approach where one or more parameters can be adjusted to fit various QoS requirements.
FIG. 3 is a diagram illustrating a queuing interface for a distributed I/O virtualization system in accordance with a number of embodiments of the present disclosure. In the example of FIG. 3, a system with N host computing devices and M I/Os is illustrated from the perspective of the DVC and the I/Os. As illustrated in FIG. 3, qDVC[I/Ox] represents a queue on the DVC that is mapped to I/Ox. Address registers associated with each I/O queue 350-1, 350-2, . . . , 350-N can be mapped to each respective DVC 328-1, 328-2, . . . , 328-N.
In some embodiments, each I/O register 350-1, 350-2, . . . , 350-N can have a plurality of queue base address registers (QBAR) associated therewith. For example, a first I/O register 350-1 can have a first QBAR qDVC0[I/Ox] 351-1, a second QBAR qDVC1[I/Ox] 351-2, and an (n−1)th QBAR qDVCN-1[I/Ox] 351-N associated therewith. Similarly, additional I/O registers (e.g., I/O registers 350-2, . . . , 350-N) can have a plurality of QBARs associated therewith.
As illustrated in FIG. 3, the QBARs associated with the various I/Os 350-1, 350-2, . . . , 350-N can be mapped to respective DVCs 328-1, 328-2, . . . , 328-N such that a respective QBAR (e.g., QBAR 351-1, 351-2, . . . , 351-N, 352-1, 352-2, . . . 352-N) associated with each I/O 350-1, 350-2, . . . , 350-N is exposed to the respective DVC 328-1, 328-2, . . . , 328-N. For example, QBAR 351-1 associated with I/O 350-1 can be mapped to the first queue of DVC0 328-1, QBAR 351-2 associated with I/O 350-1 can be mapped to the first queue of DVC1 328-2, and QBAR 351-N associated with I/O 350-1 can be mapped to the first queue of DVCN-1 328-N. Similarly, a QBAR 352-1 associated with I/O 350-2 can be mapped to the second queue of DVC0 328-1, QBAR 352-2 associated with I/O 350-2 can be mapped to the second queue of DVC1 328-2, and QBAR 352-N associated with I/O 350-2 can be mapped to the second queue of DVCN-1 328-N. This mapping can continue for an nth DVC and an nth I/O such that QBAR 353-1 associated with I/O 350-N can be mapped to an nth queue of DVC0 328-1, QBAR 353-2 associated with I/O 350-N can be mapped to an nth queue of DVC1 328-2, and QBAR 353-N associated with I/O 350-N can be mapped to an nth queue of DVCN-1 328-N.
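The FIG. 3 mapping pattern reduces to a simple rule: the host-th QBAR of I/O x maps to queue x on DVC host. The sketch below builds that mapping table for N hosts and M I/Os; indices and the dictionary shape are illustrative, not from the patent.

```python
# Build the FIG. 3 style mapping: I/O x's QBAR for host y -> queue x on DVC y.
def build_qbar_map(num_hosts, num_ios):
    """Return a dict keyed by (io_index, host_index) describing which DVC
    queue each QBAR is exposed to."""
    mapping = {}
    for io in range(num_ios):
        for host in range(num_hosts):
            mapping[(io, host)] = {"dvc": host, "queue": io}
    return mapping

m = build_qbar_map(num_hosts=3, num_ios=2)
assert m[(0, 2)] == {"dvc": 2, "queue": 0}  # I/O 0's 3rd QBAR -> 1st queue of DVC 2
assert m[(1, 0)] == {"dvc": 0, "queue": 1}  # I/O 1's 1st QBAR -> 2nd queue of DVC 0
```

Every (I/O, host) pair thus gets exactly one queue, giving N×M queues in total across the DVCs.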
Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of one or more embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. The scope of the one or more embodiments of the present disclosure includes other applications in which the above structures and methods are used. Therefore, the scope of one or more embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.
In the foregoing Detailed Description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.

Claims (20)

What is claimed is:
1. An apparatus, comprising:
a management host computing device configured to coordinate virtualization functionality among a plurality of host computing devices communicatively coupled to the management host computing device;
a virtualized input/output (I/O) device sharing resources between the management host computing device and the plurality of host computing devices, wherein the virtualized I/O device is a network interface card (NIC) or a graphics rendering device; and
respective distributed virtualization controllers (DVCs) physically disposed on each of the management host computing device and the plurality of host computing devices, wherein:
the respective DVCs comprise respective application specific integrated circuits (ASICs); and
each respective DVC is to virtualize the I/O device to the host computing device on which the DVC is physically disposed.
2. The apparatus of claim 1, wherein the management host computing device is in communication with a network switch.
3. The apparatus of claim 2, wherein the management host computing device is configured to initiate a recovery mechanism on a physical I/O device in communication with the DVC in response to an error condition.
4. The apparatus of claim 1, wherein the DVC is configured to expose at least one queue to the management host computing device and at least one of the plurality of host computing devices.
5. The apparatus of claim 4, wherein the at least one queue is part of a multi-queue interface.
6. The apparatus of claim 4, wherein the at least one queue is associated with a multi-function I/O device.
7. The apparatus of claim 1, wherein each respective DVC comprises a virtualization layer and a physical layer, the virtualization layer exposes at least one peripheral device to the management host computing device and at least one of the plurality of host computing devices, and the physical layer provides an interface between a physical I/O device and the management host computing device and at least one of the plurality of host computing devices.
8. The apparatus of claim 7, wherein the at least one peripheral device is a peripheral component interconnect express (PCIe) device.
9. The apparatus of claim 1, wherein each respective DVC configures a virtual I/O device based at least in part on the management host computing device or at least one of the plurality of computing devices detecting a peripheral device.
10. The apparatus of claim 1, wherein each respective DVC is configured to map a function associated with a physical I/O device to a virtualized I/O device on the respective DVC to provide communication between the virtualized I/O device and the respective DVC.
11. A system, comprising:
a first host computing device including a first distributed virtualization controller (DVC) comprising circuitry physically disposed on the first host computing device;
a second host computing device including a second DVC comprising circuitry physically disposed on the second host computing device, wherein the first DVC is configured to virtualize the at least one virtualized I/O device to the first host computing device, and wherein the second DVC is configured to virtualize the at least one virtualized I/O device to the second host computing device;
a management host computing device including a third DVC comprising circuitry physically disposed on the management host computing device, wherein the management host computing device is configured to coordinate virtualization functionality among the first host computing device and the second host computing device, and wherein the first DVC, the second DVC, and the third DVC each comprise respective application specific integrated circuits (ASICs);
a virtualized input/output (I/O) device sharing resources between the first host computing device and the second host computing device, wherein the virtualized I/O device is a network interface card (NIC) or a graphics rendering device; and
a network switch in communication with the first host computing device and the second host computing device.
12. The system of claim 11, wherein the management host computing device is in communication with the first host computing device and the second host computing device via a switch.
13. The system of claim 11, further comprising a plurality of queue base address registers (QBARs) associated with the at least one virtualized I/O device.
14. The system of claim 13, wherein a first QBAR associated with the at least one virtualized I/O device is mapped to the first host computing device, and a second QBAR associated with the at least one virtualized I/O device is mapped to the second host computing device.
15. The system of claim 11, wherein an I/O transaction from the I/O device traverses the network switch only once.
16. A method, comprising:
receiving an input/output (I/O) transaction via a virtualized input/output (I/O) device configured to share resources between a first computing device and a second computing device at a distributed virtualization controller (DVC) comprising an application-specific integrated circuit physically coupled to a host computing device, wherein the virtualized I/O device is a network interface card (NIC) or a graphics rendering device;
coordinating, by a management host computing device comprising a management host DVC comprising circuitry disposed on the management host, wherein the management host is communicatively coupled to the first computing device and the second computing device, virtualization functionality of the first computing device and the second computing device; and
virtualizing the I/O transaction to the host computing device on which the DVC is physically coupled.
17. The method of claim 16, further comprising modifying an address associated with the I/O transaction.
18. The method of claim 17, further comprising modifying an address associated with the I/O transaction concurrently with receiving the I/O transaction at the DVC.
19. The method of claim 16, further comprising generating, by the DVC, an error indication in response to an error being detected by the DVC.
20. The method of claim 19, further comprising initiating a recovery mechanism on the physical I/O in response to the error indication being generated.
US16/055,247 2016-02-11 2018-08-06 Distributed input/output virtualization Active 2036-06-13 US11086703B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/041,207 US10073725B2 (en) 2016-02-11 2016-02-11 Distributed input/output virtualization
US16/055,247 US11086703B2 (en) 2016-02-11 2018-08-06 Distributed input/output virtualization

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/041,207 Continuation US10073725B2 (en) 2016-02-11 2016-02-11 Distributed input/output virtualization

Publications (2)

Publication Number Publication Date
US20180341536A1 US20180341536A1 (en) 2018-11-29
US11086703B2 true US11086703B2 (en) 2021-08-10

Family

ID=59561528
Country Status (6)

Country Link
US (2) US10073725B2 (en)
EP (1) EP3414669A4 (en)
KR (1) KR101942228B1 (en)
CN (1) CN108701115A (en)
TW (1) TWI649658B (en)
WO (1) WO2017139116A1 (en)