CN107810475B - Method and apparatus for software lifecycle management for virtual computing environments - Google Patents


Publication number
CN107810475B
CN107810475B
Authority
CN
China
Prior art keywords
software
installation
physical
virtual
computing resources
Prior art date
Legal status
Active
Application number
CN201680038585.3A
Other languages
Chinese (zh)
Other versions
CN107810475A (en)
Inventor
D. Newell
A. Panda
M. Kamat
R. Sen
S. Mukhopadhyay
Current Assignee
Weirui LLC
Original Assignee
Individual
Priority date
Filing date
Publication date
Priority claimed from US 15/187,480 (now US 10,740,081 B2)
Application filed by Individual
Publication of CN107810475A
Application granted
Publication of CN107810475B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/60Software deployment
    • G06F8/61Installation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/60Software deployment
    • G06F8/65Updates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • G06F9/5088Techniques for rebalancing the load in a distributed system involving task migration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1001Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1031Controlling of the operation of servers by a load balancer, e.g. adding or removing servers that serve requests

Abstract

Methods and apparatus for software lifecycle management for virtual computing environments are disclosed. An example method includes determining, by executing instructions with a processor (912), a plurality of software updates to be installed on physical computing resources in a virtual server rack system (102), the determining based on a manifest file received from a software manager associated with the virtual server rack system (102), determining, by executing instructions with the processor (912), dependency requirements for installing the software updates identified in the manifest file, determining, by executing instructions with the processor (912), an order for installing the software updates that meets the dependency requirements, and scheduling, by executing instructions with the processor (912), installation of the software updates identified in the manifest file.
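The ordering step described in the abstract amounts to a topological sort over the dependency requirements listed in the manifest file. The following is a minimal illustrative sketch; the function name and data shapes are hypothetical and are not taken from the patent:

```python
# Hypothetical sketch of dependency-ordered update scheduling, as described
# in the abstract. `manifest` maps each update to the set of updates it
# depends on; the result is an install order satisfying all dependencies.
from collections import deque

def install_order(manifest):
    """Return an install order that meets each update's dependency requirements.

    Raises ValueError if the dependencies are cyclic (unsatisfiable).
    """
    indegree = {name: len(deps) for name, deps in manifest.items()}
    dependents = {name: [] for name in manifest}
    for name, deps in manifest.items():
        for dep in deps:
            dependents[dep].append(name)  # dep must install before name
    ready = deque(sorted(n for n, d in indegree.items() if d == 0))
    order = []
    while ready:
        current = ready.popleft()
        order.append(current)
        for dependent in dependents[current]:
            indegree[dependent] -= 1
            if indegree[dependent] == 0:
                ready.append(dependent)
    if len(order) != len(manifest):
        raise ValueError("cyclic dependency in manifest")
    return order
```

The same ordering could be produced by any topological-sort routine; the point is only that "determining an order for installing the software updates that meets the dependency requirements" is a well-defined graph problem.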

Description

Method and apparatus for software lifecycle management for virtual computing environments
RELATED APPLICATIONS
Indian provisional patent application serial No. 3344/CHE/2015, filed on June 30, 2015, U.S. patent application serial No. 15/187,452, filed on June 20, 2016, and U.S. patent application serial No. 15/187,480, filed on June 20, 2016, are incorporated herein by reference in their entirety.
Technical Field
The present disclosure relates generally to virtualized computing environments, and more particularly, to methods and apparatus for software lifecycle management for virtualized computing environments.
Background
Virtualizing a computer system provides advantages such as the ability to execute multiple computer systems on a single hardware computer, replicating a computer system, moving a computer system between multiple hardware computers, and the like. An example system for virtualizing a computer system is described in U.S. patent application No. 11/903,374, entitled "Method and System for Managing Virtual and Real Machines," filed on September 21, 2007 (now U.S. Patent No. 8,171,485), U.S. provisional patent application No. 60/919,965, entitled "Method and System for Managing Virtual and Real Machines," filed on March 26, 2007, and U.S. provisional patent application No. 61/736,422, entitled "Methods and Apparatus for Virtualized Computing," filed on December 12, 2012, all of which are incorporated herein by reference in their entirety.
"infrastructure as a service" (also commonly referred to as "IaaS") generally describes a suite of technologies provided by a service provider as an overall solution that allows for the flexible creation of virtualized, networked, and consolidated computing platforms (sometimes referred to as "cloud computing platforms"). Enterprises may use IaaS as a business-internal organization cloud computing platform (sometimes also referred to as a "private cloud") that enables application developers to access infrastructure resources, such as virtual servers, storage, and network resources. By providing instant access to the hardware resources required to run an application, the cloud computing platform enables developers to build, deploy, and manage the lifecycle of a web application (or any other type of networked application) on a larger scale and at a faster rate than ever before.
A cloud computing environment may be comprised of a number of processing units (e.g., servers). The processing units may be mounted in standardized frames, known as racks, which provide efficient use of floor space by allowing the processing units to be stacked vertically. The rack may additionally include other components of the cloud computing environment, such as storage devices, networking devices (e.g., switches), and so forth.
Drawings
FIG. 1A is a block diagram of an example environment in which physical racks are prepared by an example system integrator for distribution to customers.
FIG. 1B is a block diagram of an example environment in which an example physical rack is deployed at an example client (customer premises).
FIG. 2 depicts an example physical rack in an example virtual server rack deployment.
FIG. 3 is a block diagram of an example implementation of the software manager of FIG. 1A and/or FIG. 1B.
FIG. 4 is a block diagram of an example implementation of the example lifecycle manager of FIG. 2.
FIG. 5 is a flow diagram representing example machine readable instructions that may be executed to implement the example software manager of FIG. 1A, FIG. 1B, and/or FIG. 3.
FIGS. 6-8 are flow diagrams representing machine readable instructions that may be executed to implement the example lifecycle manager of FIG. 2 and/or FIG. 4.
FIG. 9 is a block diagram of an example processing platform capable of executing the example machine readable instructions of FIG. 5 to implement the example software manager of FIG. 1A, FIG. 1B, and/or FIG. 3.
FIG. 10 is a block diagram of an example processing platform capable of executing the example machine readable instructions of FIGS. 6-8 to implement the example lifecycle manager of FIG. 2 and/or FIG. 4.
Detailed Description
Cloud computing is based on the deployment of many physical resources across a network, the virtualization of physical resources as virtual resources, and the provisioning of virtual resources for use in providing cloud computing services and applications. When starting up a cloud computing environment or adding resources to an established cloud computing environment, data center operators strive to provide cost-effective services while having the resources of the infrastructure (e.g., storage hardware, computing hardware, and networking hardware) cooperate to enable pain-free installation/operation and to optimize resources to improve performance. Existing techniques for building and maintaining data centers to provide cloud computing services often lock a data center into single-source hardware resources because of the need to use custom virtualization software specifically designed for a particular type of hardware. Examples disclosed herein enable the use of vendor-agnostic virtualization software to build and maintain a data center. In this manner, a data center operator may flexibly select from among multiple hardware manufacturers to meet the physical hardware requirements of the data center while enabling the data center operator to initialize, virtualize, and provision new resources with relative ease. That is, a data center operator may use examples disclosed herein to acquire hardware resources from any of a number of manufacturers without being burdened with developing new software to initialize, virtualize, and provision such resources.
Example methods and apparatus disclosed herein facilitate managing the software lifecycle of data center computing elements. For example, the methods and apparatus facilitate updating, patching, upgrading, etc. the multiple hardware elements that implement a computing element in a cluster of computing elements. (For example, a computing element may be a self-contained physical rack of multiple computing components (e.g., network switches, processors, memory, etc.) that may be combined with other self-contained physical racks to form a cloud or cluster of computing resources.) In many computing environments, it is undesirable to interrupt or interfere with the operation of the computing environment (e.g., the computing environment may run 24 hours per day, leaving no natural downtime during which the system may be taken offline for maintenance). Example methods and apparatus facilitate scheduling and performing operations such as updating, patching, upgrading, etc. by utilizing redundant and/or offline/standby computing resources to reduce and/or eliminate impact on the operating computing environment.
FIG. 1A depicts an example environment 100 in which physical racks 102 are prepared by an example system integrator 104 for distribution to customers. FIG. 1B depicts an example environment 117 in which an example physical rack 102 is deployed at an example client 118.
The example environment 100 of FIG. 1A includes an example physical rack 102, an example system integrator 104, one or more example hardware/software vendors 106, an example network 108, an example virtual system solution provider 110, and an example virtual imaging appliance 112.
The system integrator 104 of the illustrated example receives and fulfills customer orders for computing hardware. The example system integrator 104 of FIG. 1A obtains computer hardware and/or software from other vendors, such as the one or more example hardware/software vendors 106, and assembles the various hardware components and/or software into functional computing units to fulfill customer orders. Alternatively, the system integrator 104 may design and/or build some or all of the hardware components and/or software used to assemble the computing units. According to the illustrated example, the system integrator 104 prepares computing units for other entities (e.g., businesses and/or individuals that do not own/employ, and are not owned/employed by, the system integrator 104). Alternatively, the system integrator 104 may assemble computing units for use by the same entity as the system integrator 104 (e.g., the system integrator 104 may be a department of a company, where the company orders and/or utilizes the assembled computing units). As used herein, the term "customer" refers to any person and/or entity that receives and/or operates a computing unit provided by the system integrator 104. In some examples, the system integrator 104 is an entity independent of the equipment manufacturer, such as a white-label equipment manufacturer that provides hardware without branding. In other examples, the system integrator 104 is an original equipment manufacturer (OEM) partner or an original design manufacturer (ODM) partner that cooperates with an OEM or ODM (e.g., a non-white-label equipment manufacturer) that provides branded hardware. Exemplary OEM/ODM hardware includes OEM/ODM servers (such as Hewlett-Packard (HP) servers), OEM/ODM switches (such as Arista switches), and/or any other OEM/ODM servers, switches, or equipment that is branded by the original manufacturer.
According to the illustrated example, one type of computing unit ordered and/or assembled by the example system integrator 104 is the physical rack 102. The physical rack 102 is a combination of computing hardware and installed software that may be utilized by customers to create and/or add to a virtual computing environment. For example, the physical racks 102 may include processing units (e.g., a plurality of blade servers), network switches to interconnect the processing units and connect the physical racks 102 with other computing units (e.g., other computing units of the physical racks 102 in a network environment, such as a cloud computing environment) and/or data storage units (e.g., network attached storage, storage area network hardware, etc.). The physical racks 102 of the illustrated example are prepared by the system integrator 104 in a partially configured state to enable computing devices to be quickly deployed at customer locations (e.g., less than 2 hours). For example, the system integrator 104 may install operating systems, drivers, operating software, management software, and the like. The installed components may be configured with some system details (e.g., system details that facilitate intercommunication between components of the physical rack 102) and/or may be equipped with software to gather more information from customers when the virtual server rack is installed and first powered by the customer.
To assist in the preparation of the physical rack 102 for distribution to customers, the example system integrator 104 utilizes the virtual imaging appliance 112 to prepare and configure the operating systems, system configurations, software, etc. of the physical rack 102 prior to shipping the example physical rack 102 to a customer. The virtual imaging appliance 112 of the illustrated example is a virtual computing appliance provided to the system integrator 104 by the example virtual system solution provider 110 via the example network 108. The example virtual imaging appliance 112 is executed by the example system integrator 104 in a virtual computing environment of the system integrator 104. For example, the virtual imaging appliance 112 may be a virtual computing image, a virtual application, a container virtual machine image, a software application installed in an operating system of a computing unit of the system integrator 104, or the like. The virtual imaging appliance 112 may alternatively be provided by any other entity and/or may be a physical computing device, may be multiple physical computing devices, and/or may be any combination of virtual and physical computing components.
The virtual imaging appliance 112 of the illustrated example retrieves software images and configuration data from the virtual system solution provider 110 via the network 108 during preparation of the physical rack 102, for installation on the physical rack 102. The virtual imaging appliance 112 of the illustrated example pushes (e.g., transfers, sends, etc.) the software images and configuration data to the components of the physical rack 102. For example, the virtual imaging appliance 112 of the illustrated example includes multiple network connections (e.g., virtual network connections, physical network connections, and/or any combination of virtual and physical network connections). For example, the virtual imaging appliance 112 of the illustrated example connects to a management interface of one or more network switches installed in the physical rack 102, installs network configuration information on the one or more network switches, and reboots the one or more switches to load the installed configuration, thereby communicatively coupling the virtual imaging appliance 112 with the one or more computing units communicatively coupled via the one or more network switches. The example virtual imaging appliance 112 also connects to a management network interface (e.g., an out-of-band (OOB) interface) of one or more servers installed in the example physical rack 102 to enable one or more operating systems to be installed (e.g., a Preboot Execution Environment (PXE) boot with an operating system installer). The example virtual imaging appliance 112 is also used to install virtual environment management components (described in further detail below in conjunction with FIGS. 3-6) and to cause the virtual environment management components to boot so that they may take over deployment of the example physical rack 102.
The example virtual imaging appliance 112 is configured to perform many of the operations of the deployment without user intervention and without requiring a user of the example system integrator 104 to manually connect to various interfaces of the components of the example physical rack 102. Further, the user of the example virtual imaging appliance 112 is relieved of the burden of locating the various software images required to configure the example physical rack 102 (e.g., firmware images of one or more network switches, operating system images of one or more servers, one or more operating system drivers for hardware components installed in the physical rack 102, etc.). In addition, the virtual environment management components deployed by the example virtual imaging appliance 112 are configured by the virtual imaging appliance 112 to facilitate easy deployment of the physical rack 102 at the customer location. For example, the virtual management components installed on the physical rack 102 by the example virtual imaging appliance 112 include a graphical user interface that guides the customer through the process to enter configuration parameters (e.g., details of the customer's network, information about the existing virtual environment, etc.). Additionally, the example virtual management component automatically discovers some information about the guest system (e.g., automatically discovers information about the existing virtual environment).
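The provisioning sequence described above (configure and reboot the rack's switches, network-boot operating systems onto the servers over the OOB interface, then hand off to the virtual environment management components) can be sketched as a simple ordered workflow. All names below are hypothetical illustrations, not the patent's implementation; the callables stand in for the appliance's real switch, PXE, and deployment operations:

```python
# Illustrative sketch of the imaging workflow described above. The callables
# `push_config`, `pxe_boot`, and `deploy_mgmt` are stand-ins for the
# appliance's real operations; the returned step log is for illustration only.
def image_rack(switches, servers, push_config, pxe_boot, deploy_mgmt):
    steps = []
    for switch in switches:            # configure and reboot each switch first,
        push_config(switch)            # establishing connectivity to the servers
        steps.append(("switch-configured", switch))
    for server in servers:             # PXE-boot an OS installer via the OOB interface
        pxe_boot(server)
        steps.append(("os-installed", server))
    deploy_mgmt()                      # boot the management components, which
    steps.append(("management-deployed", None))  # take over deployment from here
    return steps
```

The ordering matters: the switches must carry configuration before the servers can be reached, which is why switch configuration precedes the PXE boots in the sketch.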
The network 108 of the illustrated example communicatively couples the example system integrator 104 with the virtual system solution provider 110 and communicatively couples the example hardware/software vendor 106 with the example virtual system solution provider 110. According to the illustrated example, the network 108 is the internet. Alternatively, the network 108 may be any type of local area network, wide area network, wireless network, wired network, any combination of networks, and the like. Although the network 108 of FIG. 1A is shown as a single network, the network may be any number and/or type of networks. For example, the network 108 may be implemented by one or more of a local area network, a wide area network, a wireless network, a wired network, a virtual network, and so forth.
Referring to FIG. 1B, the example client 118 is the location where the example physical rack 102 (e.g., a deployment of multiple physical racks 102) resides. For example, the client 118 may be a data center, a location of a cloud provider, a business location, or any other location at which a virtual computing environment comprised of one or more physical racks 102 is to be implemented. In accordance with the illustrated example, the example client 118 (and the example physical rack 102 located at the example client 118) is communicatively coupled to the example network 108 to communicatively couple the example client 118 with the example virtual system solution provider 110.
The virtual system solution provider 110 of the illustrated example distributes (e.g., sells) and/or supports the example virtual imaging appliance 112. The virtual system solution provider 110 of the illustrated example also provides a repository 116 of images and/or other types of software (e.g., virtual machine images, drivers, operating systems, etc.) that may be a) retrieved by the virtual imaging appliance 112 and installed on the physical rack 102 and/or b) retrieved by the example physical rack 102 after the example physical rack 102 is deployed at the example client 118 (as shown in FIG. 1B). The virtual system solution provider 110 may alternatively be implemented by multiple entities (e.g., one or more manufacturers of the software) and/or any other type of entity.
The example virtual system solution provider 110 of the illustrated example of FIGS. 1A and 1B includes an example software manager 114 and an example repository 116. The example software manager 114 and the example repository 116 together provide software a) to the example virtual imaging appliance 112 of FIG. 1A for provisioning the example physical rack 102 at the example system integrator 104 and/or b) to one or more of the example physical racks 102 at the example client 118 of FIG. 1B for updating, upgrading, patching, etc. the computing resources included in the one or more example physical racks 102.
The example software manager 114 receives software from the one or more example hardware/software vendors 106 and stores the software in the example repository 116. The software may include new and/or updated drivers, operating systems, firmware, etc. for the computing resources included in the example physical rack 102. For example, the software may include firmware/operating systems for the network switches installed in the physical rack 102, hypervisors for execution on server hardware installed in the physical rack 102, drivers for storage devices installed in the physical rack 102, security updates for operating systems installed in the computing environment provided by the physical rack 102, and so forth.
The example software manager 114 receives a request for a rack image from the example virtual imaging appliance 112, retrieves the requested one or more images, and transmits the requested one or more images to the example virtual imaging appliance 112 via the network 108 to facilitate installation of the one or more images on the example physical rack 102 by the example virtual imaging appliance 112. The example software manager 114 may additionally provide one or more updated images to the example virtual imaging appliance 112 after receiving updated software from the one or more example hardware/software vendors 106. For example, the example virtual imaging appliance 112 may periodically send a request for one or more updated images, and/or the example software manager 114 may notify the example virtual imaging appliance 112 when an updated image is ready (e.g., after new software has been received, tested, and added to a new image).
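The request/retrieve/transmit flow described above can be illustrated with a small sketch. The repository is modeled as a plain mapping and all names are hypothetical, not the patent's interfaces:

```python
# Hypothetical sketch of the software manager's image-request handling:
# look up each requested image in the repository and return what was found,
# along with any names that could not be satisfied.
def handle_image_request(repository, requested_names):
    """Return (found images by name, names not present in the repository)."""
    found = {name: repository[name] for name in requested_names if name in repository}
    missing = [name for name in requested_names if name not in repository]
    return found, missing
```

In a real system the transmit step would stream image data over the network; the sketch only captures the lookup behavior.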
The example software manager 114 also receives a request from the example physical rack 102 to update software after the example physical rack 102 has been deployed at the example client 118. For example, when the example physical rack 102 is deployed as part of a cluster of physical racks 102 at the example client 118, one of the physical racks 102 may periodically send a request for an updated software package (e.g., a set of software that includes software associated with a plurality of computing resources installed in the example physical rack 102). In response to such a request, the example software manager 114 retrieves a manifest file that includes a version of the software package so that the physical rack 102 can determine whether the software package includes software that is new, updated, improved, etc., relative to the software currently installed on the computing resources of the example physical rack 102. For example, if the manifest file identifies a version that is newer than the version of the software package currently installed on the example physical rack 102, then the software package includes new software (e.g., new firmware that has been selected for installation on the network switch installed in the example physical rack 102). In some instances, the virtual system solution provider 110 may support a number of different physical rack implementations (e.g., different combinations of computing resources and/or software installed in the example physical rack 102). In such a case, the manifest file may additionally include an identifier of a particular combination of components in the example physical rack 102. For example, the inventory file may identify a Stock Keeping Unit (SKU) associated with the example physical rack 102 to allow the physical rack 102 to confirm that the received inventory file identifies the software for the particular physical rack 102.
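The manifest check described above (confirm the SKU matches this rack's particular component combination, then compare the package version against what is currently installed) might look like the following hypothetical sketch; the field names and version representation are illustrative assumptions:

```python
# Hypothetical manifest check mirroring the description above: the physical
# rack confirms the manifest identifies software for its own SKU, and treats
# the package as containing new software only if its version is newer.
def should_install(manifest, rack_sku, installed_version):
    if manifest["sku"] != rack_sku:      # manifest is for a different rack combination
        return False
    # Tuple comparison gives component-wise version ordering, e.g. (2,1,0) > (2,0,5).
    return tuple(manifest["version"]) > tuple(installed_version)
```

Representing versions as tuples keeps the "newer than" comparison well defined; a production implementation would likely use a richer version scheme.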
An example implementation of the example software manager 114 is described in conjunction with FIG. 3.
The example repository 116 stores software received from one or more example hardware/software vendors 106, as well as a manifest file for the example software generated by the example software manager 114. The repository 116 of the illustrated example is communicatively coupled with the example software manager 114 to allow the example software manager 114 to store and retrieve software. The example repository 116 is a database. Alternatively, the example repository may be any other type of storage, such as a network attached storage, a hard drive, a shared network drive, a file, a folder, and so forth.
FIG. 2 depicts example physical racks 202, 204 in an example deployment of a virtual server rack 206. For example, the physical racks 202, 204 may be ones of the physical racks 102 assembled by the example system integrator 104 of FIG. 1A. In the illustrated example, the first physical rack 202 has an example top-of-rack (ToR) switch A 210, an example ToR switch B 212, an example management switch 207, and an example server host node (0) 209. In the illustrated example, the management switch 207 and the server host node (0) 209 run a hardware management system (HMS) 208 for the first physical rack 202. The second physical rack 204 of the illustrated example also has an example ToR switch A 216, an example ToR switch B 218, an example management switch 213, and an example server host node (0) 211. In the illustrated example, the management switch 213 and the server host node (0) 211 run an HMS 214 for the second physical rack 204.
In the illustrated example, the management switches 207, 213 of the respective physical racks 202, 204 run respective out-of-band (OOB) agents and OOB plugins of the respective HMSs 208, 214. Further, in the illustrated example, the server host nodes (0) 209, 211 of the respective physical racks 202, 204 run respective in-band (IB) agents, IB plugins, HMS service APIs, and aggregators.
In the illustrated example, the HMSs 208, 214 are connected to server management ports (e.g., using baseboard management controllers (BMCs)) of the server host nodes (0) 209, 211, to ToR switch management ports (e.g., using 1Gbps links) of the ToR switches 210, 212, 216, 218, and also to backbone switch management ports of one or more backbone (spine) switches 222. These example connections form a non-routable private Internet Protocol (IP) management network for OOB management. The HMSs 208, 214 of the illustrated example use this OOB management interface to the server management ports of the server host nodes (0) 209, 211 for server hardware management. In addition, the HMSs 208, 214 of the illustrated example use this OOB management interface to the ToR switch management ports of the ToR switches 210, 212, 216, 218 and to the backbone switch management ports of the one or more backbone switches 222 for switch management. In examples disclosed herein, the ToR switches 210, 212, 216, 218 are connected to server network interface card (NIC) ports of server hosts in the physical racks 202, 204 (e.g., using 10Gbps links) for downlink communications and to the one or more backbone switches (e.g., using 40Gbps links) for uplink communications. In the illustrated example, the management switches 207, 213 are also connected to the ToR switches 210, 212, 216, 218 (e.g., using 10Gbps links) for internal management communications between the management switches 207, 213 and the ToR switches 210, 212, 216, 218. Further, in the illustrated example, the HMSs 208, 214 have IB connections to individual server nodes of the physical racks 202, 204 (e.g., server nodes in the example physical hardware resources 224, 226).
In the illustrated example, the IB connections interface to the physical hardware resources 224, 226 via an operating system running on the server nodes, using OS-specific APIs (such as a vSphere API), command line interfaces (CLIs), and/or interfaces such as the Common Information Model (CIM) from the Distributed Management Task Force (DMTF).
The HMSs 208, 214 of the respective physical racks 202, 204 interface with virtual rack managers (VRMs) 225, 227 of the respective physical racks 202, 204 to instantiate and manage the virtual server rack 206 using the physical hardware resources 224, 226 (e.g., processors, network interface cards, servers, switches, storage devices, peripherals, power supplies, etc.) of the physical racks 202, 204. In the illustrated example, the VRM 225 of the first physical rack 202 runs on three server host nodes of the first physical rack 202, one of which is server host node (0) 209. As used herein, the term "host" refers to a functionally indivisible unit of the physical hardware resources 224, 226, such as a physical server, that is configured or allocated, as a whole, to a virtual rack and/or workload; that is powered on or off in its entirety; or that may otherwise be considered a complete functional unit. Further, in the illustrated example, the VRM 227 of the second physical rack 204 runs on three server host nodes of the second physical rack 204, one of which is server host node (0) 211. In the illustrated example, the VRMs 225, 227 of the respective physical racks 202, 204 communicate with each other through the one or more backbone switches 222. Further, in the illustrated example, communications between the physical hardware resources 224, 226 of the physical racks 202, 204 are exchanged between the ToR switches 210, 212, 216, 218 of the physical racks 202, 204 through the one or more backbone switches 222. In the illustrated example, each of the ToR switches 210, 212, 216, 218 is connected to each of two backbone switches 222. In other examples, fewer or more backbone switches may be used. For example, additional backbone switches may be added when physical racks are added to the virtual server rack 206.
In examples disclosed herein, the ToR switches 210, 212, 216, 218 are managed using Command Line Interfaces (CLIs) and APIs. For example, the HMS 208, 214 uses CLIs/APIs to populate switch objects corresponding to the ToR switches 210, 212, 216, 218. At HMS startup, the HMS 208, 214 populates the initial switch objects with statically available information. In addition, the HMS 208, 214 uses a periodic polling mechanism as part of the HMS switch management application thread to collect statistics and health data (e.g., link status, packet statistics, availability, etc.) from the ToR switches 210, 212, 216, 218. Each switch object also includes a configuration buffer that stores configuration information to be applied at the switch.
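The switch-object arrangement described above can be sketched as follows. This is an illustrative model only; the class, field names, and query function are assumptions for exposition and are not part of the patent:

```python
from dataclasses import dataclass, field

@dataclass
class SwitchObject:
    # Illustrative stand-in for an HMS switch object; field names are assumed.
    switch_id: str
    static_info: dict = field(default_factory=dict)    # filled once at HMS startup
    stats: dict = field(default_factory=dict)          # refreshed by the polling thread
    config_buffer: list = field(default_factory=list)  # configuration to apply later

def poll_switch(switch, query_fn):
    """One iteration of the periodic polling thread: refresh health/statistics."""
    switch.stats = query_fn(switch.switch_id)

# Stand-in for a CLI/API query against a physical ToR switch.
def fake_query(switch_id):
    return {"link_status": "up", "packets_rx": 1200, "available": True}

tor = SwitchObject("tor-0", static_info={"model": "example-48p"})
poll_switch(tor, fake_query)
print(tor.stats["link_status"])  # up
```

In a real HMS, `fake_query` would be replaced by CLI/API calls to the physical switch, and `poll_switch` would run on a timer inside the switch management thread.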
The example VRMs 225, 227 of the example shown in FIG. 2 include example lifecycle managers (LCMs) 228, 230. The example LCMs 228, 230 are responsible for requesting software from the example virtual system solution provider 110 and managing the installation of that software. Upon receiving a manifest file identifying information about a software package from the example virtual system solution provider 110, the example LCM 228, 230 determines whether the manifest applies to the example physical rack 202, 204, verifies the existence of dependencies required to install the software components of the software package (or resolves such dependencies), ensures that there are sufficient computing resources for installing the software components of the software package, schedules the software components for installation, and performs the installation of the software components.
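The sequence of checks above can be condensed into a single decision function. All field names and return strings here are illustrative assumptions, not terminology from the patent:

```python
def handle_manifest(manifest, rack_state):
    """Illustrative sequence of LCM checks on a received manifest (names assumed)."""
    if manifest["rack_sku"] != rack_state["sku"]:
        return "not-applicable"            # manifest targets a different rack
    missing = [d for d in manifest["dependencies"]
               if d not in rack_state["installed"]]
    if missing:
        return "resolve-dependencies:" + ",".join(missing)
    if manifest["required_capacity"] > rack_state["free_capacity"]:
        return "insufficient-capacity"
    return "schedule-install"

rack = {"sku": "RACK-GEN2", "installed": {"driver-2.0"}, "free_capacity": 4}
decision = handle_manifest({"rack_sku": "RACK-GEN2",
                            "dependencies": ["driver-2.0"],
                            "required_capacity": 2}, rack)
print(decision)  # schedule-install
```

Each branch of this sketch corresponds to one of the dedicated components described later (package manager, dependency analyzer, capacity analyzer, installation coordinator).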
Fig. 4 shows an example implementation of the LCMs 228, 230. For simplicity, the remainder of this description will refer to the example LCM 228. However, the descriptions and example implementations included therein may also be applied to the example LCM 230.
FIG. 3 is a block diagram of an example implementation of the software manager 114 of FIGS. 1A and/or 1B. The example software manager 114 of FIG. 3 includes an example software receiver 302, an example package manager 304, an example repository interface 306, and an example request processor 308.
The example software receiver 302 of the illustrated example receives software components (e.g., drivers, firmware, operating systems, applications, etc.) from the example hardware/software provider(s) 106 and transmits the software components to the example package manager 304. For example, the software receiver 302 may receive notifications from the one or more example hardware/software providers 106 when new software is available, and/or may periodically query the one or more example hardware/software providers 106 for the availability of new software.
The example package manager 304 receives software from the example software receiver 302, coordinates testing of the software, and adds the software to the example repository 116 after testing. When software is added to the example repository 116, the example package manager 304 adds a reference to the software to a manifest file associated with the software package to which the software is added (e.g., the collection of software for a particular physical rack version/implementation). For example, the package manager 304 may add a new entry for the software to the manifest file and/or may replace a previous version of the software identified in the manifest file with the new version of the software. Testing of the software may be accomplished by an administrator installing the software on a test physical rack and verifying that the software installs as expected and does not interfere with the operation of the test physical rack (e.g., does not cause errors, does not conflict with other software or hardware, etc.). During testing of the software, the example package manager 304 collects dependency information (e.g., information about what software components may be needed to install the software). The example package manager 304 stores the dependency information in the manifest file associated with the software package to which the software is added. For example, the example package manager 304 may receive user input identifying software dependencies, may receive input files identifying software dependencies, may monitor software installations to programmatically determine software dependencies, and so on.
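The add-or-replace behavior described above can be sketched with a minimal manifest structure. The manifest schema (a dictionary with a `components` list carrying `depends_on` fields) is an assumption made for illustration:

```python
def add_to_manifest(manifest, component):
    """Append a new component entry, or replace a previous version of the same
    component, recording its dependency information (schema is illustrative)."""
    manifest["components"] = [c for c in manifest["components"]
                              if c["name"] != component["name"]]
    manifest["components"].append(component)
    return manifest

m = {"package": "rack-bundle-2.1",
     "components": [{"name": "nic-driver", "version": "1.0", "depends_on": []}]}
add_to_manifest(m, {"name": "nic-driver", "version": "2.0",
                    "depends_on": ["nic-firmware-3.1"]})
print(len(m["components"]), m["components"][0]["version"])  # 1 2.0
```

Because the old entry is filtered out before the new one is appended, publishing the manifest always advertises exactly one version of each component.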
The example repository interface 306 interfaces with the example repository 116. For example, repository interface 306 may be a database interface of example software manager 114. Alternatively, the repository interface 306 may be any other type of interface that assists the example package manager 304 and/or the example request processor 308 in storing and/or retrieving manifest files and/or software from the example repository 116.
The example request processor 308 receives requests for a software image and/or an updated software package from the example virtual imaging appliance 112 of FIG. 1A and/or the example physical rack 102 at the example client 118 of FIG. 1B. In response to a request, the example request processor 308 retrieves the requested information. For example, the request processor 308 may retrieve the manifest file and send the manifest file to the request source (e.g., the example virtual imaging appliance 112 and/or the example physical rack 102) to allow the request source to determine whether the manifest file identifies software associated with the physical rack 102 that is to be installed on the physical rack 102. For example, the manifest file may identify a SKU that is checked against the SKU associated with the physical rack 102. When the request source indicates that the manifest file identifies the desired software, the request processor 308 may retrieve the software (e.g., a software image, a software package, etc.) from the example repository 116 and transmit the software to the request source.
A flowchart illustrating example instructions for implementing the example software manager 114 of fig. 1A, 1B, and/or 3 is described in connection with fig. 5.
Although an example manner of implementing the software manager 114 of FIGS. 1A and/or 1B is illustrated in FIG. 3, one or more of the elements, processes and/or devices illustrated in FIG. 3 may be combined, divided, rearranged, omitted, eliminated and/or implemented in any other way. Further, the example software receiver 302, the example package manager 304, the example repository interface 306, the example request processor 308, and/or, more generally, the example software manager 114 of FIG. 3 may be implemented by hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. Thus, for example, any of the example software receiver 302, the example package manager 304, the example repository interface 306, the example request processor 308, and/or, more generally, the example software manager 114 of FIG. 3 may be implemented by one or more analog or digital circuits, logic circuits, one or more programmable processors, one or more Application Specific Integrated Circuits (ASICs), one or more Programmable Logic Devices (PLDs), and/or one or more Field Programmable Logic Devices (FPLDs). When any apparatus or system claims of this patent are read to cover a purely software and/or firmware implementation, at least one of the example software receiver 302, the example package manager 304, the example repository interface 306, the example request processor 308, and/or, more generally, the example software manager 114 of FIG. 3 is expressly defined herein to include a tangible computer-readable storage device or storage disk, such as a memory, a Digital Versatile Disk (DVD), a Compact Disk (CD), a Blu-ray disk, etc., that stores the software and/or firmware. Additionally, the example software manager 114 of FIGS. 1A and/or 1B may include one or more elements, processes, and/or devices in addition to or in place of those shown in FIG. 3, and/or may include more than one of any or all of the illustrated elements, processes, and devices.
Fig. 4 is a block diagram of an example implementation of the LCM 228 in the example VRM 225 of the example physical rack 202 of fig. 2 (e.g., the physical rack 102 deployed at the example client 118). The example LCM 228 includes an example package manager 402, an example lifecycle repository 404, an example user interface 406, an example dependency analyzer 408, an example capacity analyzer 410, and an example installation coordinator 412.
The example package manager 402 interfaces with the example software manager 114 of the example virtual system solution provider 110 of FIG. 1B to receive the manifest file and software to be deployed at the example physical rack 202 and/or at other physical racks deployed in the same cluster. The example package manager 402 periodically polls the example software manager 114 to determine whether there is an updated manifest file to be analyzed for applicability to the example physical rack 202. Alternatively, the package manager 402 may receive a notification when a new manifest file is available (e.g., if the package manager 402 has registered with the virtual system solution provider 110 to receive such notifications). When the package manager 402 receives a new manifest, the package manager 402 determines whether the manifest applies to the physical rack 202 and, if so, the example package manager 402 notifies the example user interface 406 that a new manifest has been received. If the example user interface 406 notifies the package manager 402 that the administrator has approved/scheduled the download of the software package, the example package manager 402 retrieves the software identified by the example manifest file and stores the software in the example lifecycle repository 404 along with the manifest file.
The lifecycle repository 404 of the illustrated example stores the manifest files and software received from the example virtual system solution provider 110 via the example package manager 402. Example lifecycle repository 404 is a software database. Alternatively, lifecycle repository 404 can be implemented by any type of file and/or data store, such as, for example, a network-attached storage, a hard drive, a shared network drive, a file, a folder, and so forth.
The example user interface 406 of FIG. 4 provides notifications to and receives instructions from an administrator of the physical rack 202. For example, when the example package manager 402 notifies the example user interface 406 that a new manifest/software package has been received, the example user interface 406 presents a notification to the administrator and requests administrator input regarding whether the administrator wants to download the software package and/or schedule installation of the software package. When the example user interface 406 receives an instruction from the administrator to download the software package, the example user interface 406 notifies the example package manager 402. When the example user interface 406 receives an instruction from the administrator to schedule an installation of the software package, the example user interface 406 notifies the example dependency analyzer 408, the example capacity analyzer 410, and/or the example installation coordinator 412. The example user interface 406 additionally presents any error notifications generated by the example dependency analyzer 408, the example capacity analyzer 410, and/or the example installation coordinator 412.
The dependency analyzer 408 of the illustrated example receives a notification from the example user interface 406 that an administrator has requested installation of a software package. In response to the notification, the example dependency analyzer 408 determines the dependency requirements of the software package by analyzing the manifest file, checks the dependency requirements against the current state of the hardware and software components installed on the physical rack 202, and notifies the installation coordinator 412 of the installation order expected and/or required by the dependency requirements of the software package. For example, the dependency analyzer 408 may determine that version 3.0 of a driver requires version 2.0 to be currently installed and, after determining that version 1.0 is currently installed, add version 2.0 to the installation schedule ahead of version 3.0. In another example, the manifest file may indicate that a web server update requires a database update that is also identified in the manifest file. In this case, the example dependency analyzer 408 will notify the example installation coordinator 412 that the database update should be scheduled before the web server update.
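The ordering behavior described above is, in effect, a dependency-respecting sort of the manifest's components. The sketch below uses a simple repeated-pass resolution; the component schema (`name`/`depends_on`) is an assumption for illustration:

```python
def order_installs(components):
    """Order components so each one installs only after its dependencies.
    Simple repeated-pass resolution; component schema is assumed."""
    ordered, placed, pending = [], set(), list(components)
    while pending:
        progress = False
        for comp in list(pending):
            if all(dep in placed for dep in comp["depends_on"]):
                ordered.append(comp["name"])
                placed.add(comp["name"])
                pending.remove(comp)
                progress = True
        if not progress:
            raise ValueError("circular or unsatisfiable dependency")
    return ordered

comps = [{"name": "web-server-update", "depends_on": ["database-update"]},
         {"name": "database-update", "depends_on": []}]
print(order_installs(comps))  # ['database-update', 'web-server-update']
```

The web server/database example from the text falls out directly: the database update is emitted first because the web server update lists it as a dependency.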
The capacity analyzer 410 of the illustrated example receives a notification from the example user interface 406 that an administrator has requested installation of a software package. In response to the notification, the example capacity analyzer 410 retrieves the manifest file for the software package from the example lifecycle repository 404. The example capacity analyzer 410 determines which hardware and/or software computing resources of the example physical rack 202 (and possibly of other physical racks 202 in the cluster) will be affected by the installation of the software package. For example, the capacity analyzer 410 determines which hardware and/or software components will need to be restarted in order to perform the installation of the software package. The capacity analyzer 410 compares the computing resource impact to the available computing resources of the example physical rack 202 (and the cluster of physical racks 202) and to the operational requirements of the example physical rack 202 (e.g., a service level agreement indicating required computing resource availability and/or redundancy). The example capacity analyzer 410 determines whether there are sufficient computing resources to perform the software installation without interfering with the operational requirements. The example capacity analyzer 410 determines available computing resources by determining the impacted computing resources (e.g., determining workload domains that have been scheduled for updating) and querying an Application Programming Interface (API) associated with the operating environment (e.g., querying a VMware vCenter server). For example, the capacity analyzer 410 may determine that two ToR switches 210, 212 are installed in the example physical rack 202, such that when a software installation requires a switch reboot to update the switches, the ToR switches 210, 212 may be rebooted one at a time without affecting the performance of the physical rack 202. Alternatively, the capacity analyzer 410 may determine that all of the processing resources allocated to a particular workload domain (or any other type of cluster of computing resources) are in use (e.g., that a workload is currently executing on all of the computing resources such that no computing resource may be temporarily deactivated for updating). In this case, the example capacity analyzer 410 will allocate (or attempt to allocate) one or more additional computing resources (e.g., add another server to the workload domain) so that, when the computing resources are updated, the executing workload may be temporarily migrated from the computing resources in the workload domain (e.g., one at a time) to the one or more additional computing resources. For example, after a workload is migrated off one of the computing resources, that computing resource may be moved to a maintenance mode, updated, restarted, and returned to an operational mode. Thus, in addition to analyzing the capacity for installing software packages, the example capacity analyzer 410 increases auxiliary capacity as needed. The capacity analyzer 410 communicates information regarding capacity scheduling to the example installation coordinator 412 for use in scheduling the installation (e.g., for notifying the installation coordinator 412 of the availability of additional computing resources that may be used during the installation).
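The migrate/update/migrate-back sequence described above can be sketched as a rolling update that borrows a spare host. Host names and the step log are illustrative assumptions:

```python
def rolling_update(hosts, spare, log):
    """Sketch of the sequence described above: drain each host onto a borrowed
    spare, update the host in maintenance mode, then return its workload."""
    for host in hosts:
        log.append(("migrate", host, spare))   # move the running workload off the host
        log.append(("maintenance", host))      # enter maintenance mode
        log.append(("update+reboot", host))    # apply the update and restart
        log.append(("migrate", spare, host))   # return the workload to the host

steps = []
rolling_update(["node-1", "node-2"], "node-spare", steps)
print(steps[0])   # ('migrate', 'node-1', 'node-spare')
print(len(steps)) # 8
```

Because only one host is drained at a time onto the spare, the workload domain never loses capacity during the update, which is the property the capacity analyzer is checking for.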
The example installation coordinator 412 receives the information collected by the example dependency analyzer 408 and the example capacity analyzer 410 and schedules installation of the software of the software package identified in the received manifest. The example installation coordinator 412 schedules (or attempts to schedule) the installation of the software to meet the dependency requirements and to avoid disrupting the operation of the physical rack 202 (and/or other physical racks 202 in the cluster). According to the illustrated example, the installation coordinator 412 schedules devices to be unavailable independently of one another (e.g., schedules the unavailability of redundant devices such that at least one redundant device is always available). Further, the example installation coordinator 412 schedules temporary movement/migration of virtual machines during installation.
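Scheduling redundant devices to be unavailable independently can be sketched by grouping devices into redundancy groups and taking down at most one member of each group per round. The `group` field and device list shape are assumptions for illustration:

```python
from collections import defaultdict

def schedule_rounds(devices):
    """Place devices sharing a redundancy group in different rounds, so at
    least one member of each group stays available (schema is assumed)."""
    groups = defaultdict(list)
    for dev in devices:
        groups[dev["group"]].append(dev["name"])
    rounds, i = [], 0
    while any(len(members) > i for members in groups.values()):
        rounds.append([members[i] for members in groups.values()
                       if len(members) > i])
        i += 1
    return rounds

devs = [{"name": "tor-a", "group": "tor"},
        {"name": "tor-b", "group": "tor"},
        {"name": "spine-1", "group": "spine"}]
print(schedule_rounds(devs))  # [['tor-a', 'spine-1'], ['tor-b']]
```

In the ToR switch example from the text, the two switches land in different rounds, so one switch of the pair is always carrying traffic while its peer reboots.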
A flowchart illustrating example instructions for implementing the example LCM 228 of fig. 2 and/or 4 is described in connection with fig. 6-8.
Although an example manner of implementing the LCM 228 (and/or the example LCM 230) of fig. 2 is shown in fig. 4, one or more elements, processes and/or devices shown in fig. 4 may be combined, divided, rearranged, omitted, eliminated and/or implemented in any other way. Further, the example package manager 402, the example lifecycle repository 404, the example user interface 406, the example dependency analyzer 408, the example capacity analyzer 410, the example installation coordinator 412, and/or, more generally, the example LCM 228 of fig. 4 may be implemented by one or more analog or digital circuits, logic circuits, one or more programmable processors, one or more Application Specific Integrated Circuits (ASICs), one or more Programmable Logic Devices (PLDs), and/or one or more Field Programmable Logic Devices (FPLDs). When any apparatus or system claims of this patent are read to cover a purely software and/or firmware implementation, at least one of the example package manager 402, the example lifecycle repository 404, the example user interface 406, the example dependency analyzer 408, the example capacity analyzer 410, the example installation coordinator 412, and/or, more generally, the example LCM 228 of fig. 4 is expressly defined herein to include a tangible computer-readable storage device or storage disk, such as a memory, a Digital Versatile Disk (DVD), a Compact Disk (CD), a blu-ray disk, etc., that stores software and/or firmware. Further, the example LCM 228 of fig. 2 may include one or more elements, processes, and/or devices in addition to or instead of those shown in fig. 4, and/or may include more than one of any or all of the illustrated elements, processes, and/or devices.
A flowchart representative of example machine readable instructions for implementing the example software manager 114 of FIGS. 1A, 1B, and/or 3 is shown in FIG. 5. In this example, the machine readable instructions comprise a program for execution by a processor (such as the processor 912 shown in the example processor platform 900 discussed below in connection with FIG. 9). The program may be embodied in software stored on a tangible computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a Digital Versatile Disk (DVD), a Blu-ray disk, or a memory associated with the processor 912, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor 912 and/or embodied in firmware or dedicated hardware. Further, although the example program is described with reference to the flowchart shown in FIG. 5, many other methods of implementing the example software manager 114 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined.
As described above, the example process of fig. 5 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a tangible computer readable storage medium such as a hard disk drive, a flash memory, a Read Only Memory (ROM), a Compact Disk (CD), a Digital Versatile Disk (DVD), a cache, a Random Access Memory (RAM), and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended periods of time, permanently, brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term tangible computer readable storage medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. As used herein, "tangible computer-readable storage medium" and "tangible machine-readable storage medium" are used interchangeably. Additionally or alternatively, the example processes of fig. 5 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium (e.g., a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory) and/or a storage disk in which information is stored for any duration (e.g., for extended periods of time, permanently, brief instances, for temporarily buffering, and/or for caching the information).
As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. As used herein, the phrase "at least" when used as a transitional word in the preamble of a claim is open-ended, in the same way that the term "comprising" is open-ended.
The routine of FIG. 5 begins when the example software receiver 302 of the example software manager 114 receives software from a hardware/software vendor (block 502). For example, software may include applications, drivers, operating systems, configuration files, and the like. The software may be received in a notification from a hardware/software provider, may be received by the example software receiver 302 in response to a poll to a software provider, and so on.
The example software receiver 302 then presents a request to include the new software in the appropriate package (block 504). For example, the software receiver 302 may add an item to a task list requesting approval to add software to a package, may present a notification on a graphical user interface, and so forth. The example software receiver 302 determines whether an instruction to add software to the package has been received (block 506). When an instruction has been received to not add software to the package, the example software receiver 302 discards the received software (block 508).
When an instruction to add software to a package has been received (block 506), the example package manager 304 stores the software for the package (e.g., stores the software in the example repository 116) (block 510). The example package manager then marks the software for testing (block 512). For example, an administrator may install software on the example physical rack 102 and/or in the example virtual server rack 206 to verify that the software operates as intended, and does not interfere with other operations, and so forth. The example package manager 304 then determines whether an instruction to continue adding software to the package has been received after the test (block 514). When an instruction has been received to not add software to the package (e.g., because testing of the software identified a problem), the example package manager 304 discards the software (block 508).
When the example package manager 304 receives an instruction to continue adding the software to the package (block 514), the example package manager 304 captures the dependencies of the software (block 516). Dependencies may be captured by monitoring tests of the software to record the dependencies accessed during testing, by receiving a dependency record (e.g., a file) identifying the dependencies required by the software, by receiving user input identifying the dependencies, and so forth. According to the illustrated example, the dependencies are captured by recording them in the manifest file to be distributed with the package that includes the software. Alternatively, the dependencies may be captured in any other manner (e.g., stored in a database that is accessed when building the manifest).
The example repository interface 306 publishes the manifest file generated with the dependency information (block 518). According to the illustrated example, the repository interface 306 stores the manifest (e.g., and the software identified in the manifest) in the example repository 116 of fig. 1 to enable the request processor 308 to service requests. For example, the manifest may be identified as the most current manifest (e.g., replacing the previous most current manifest) such that software requests received by the example request processor 308 are serviced by transmitting the most current manifest and/or software.
According to the illustrated example, when the software package associated with the virtual server rack is updated, the example request processor 308 updates the virtual server rack image used by the example virtual imaging appliance 112 of FIG. 1 to ensure that the virtual imaging appliance 112 will use the most up-to-date software when deploying a virtual server rack (block 520).
The request processor 308 of the illustrated example determines whether a package acceleration instruction is received (block 522). A package acceleration instruction indicates that the software package should be deployed to virtual server racks sooner than the next scheduled software release. For example, distribution of a package may be expedited when an existing software package contains a vulnerability that is patched by the most current software package. The package acceleration instruction may be received from a user, may be identified in an attribute of the software received by the example software receiver 302, and so on.
When the example request processor 308 determines that a package acceleration instruction has not been received (block 522), the routine of FIG. 5 terminates. When the example request processor 308 determines that a package acceleration instruction has been received, the example request processor 308 and/or the example repository interface 306 publishes the package for accelerated release (block 524). According to the illustrated example, the example request processor 308 notifies the example repository interface 306 to store an indication (e.g., a flag) in the example repository 116 that the package is scheduled for accelerated release. Thus, when the example request processor 308 receives a request (e.g., a software update request) from a virtual server rack, the example request processor 308 will detect the flag and notify the requesting virtual server rack that an accelerated package is available (e.g., to suggest that the virtual server rack should deploy the package even though the scheduled release time has not arrived). The routine of FIG. 5 then terminates.
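The expedited-release flag described above can be sketched as follows. The repository layout, the `expedited` flag name, and the response shape are illustrative assumptions:

```python
def answer_update_request(repository, package_id):
    """Request-processor sketch: surface the expedited flag so a requesting
    rack knows to deploy ahead of the scheduled release (names assumed)."""
    pkg = repository[package_id]
    when = "immediately" if pkg.get("expedited") else pkg["scheduled_release"]
    return {"package": package_id, "deploy": when}

repo = {"pkg-7": {"expedited": True, "scheduled_release": "next-window"},
        "pkg-8": {"expedited": False, "scheduled_release": "next-window"}}
print(answer_update_request(repo, "pkg-7")["deploy"])  # immediately
print(answer_update_request(repo, "pkg-8")["deploy"])  # next-window
```

The flag lives with the package in the repository, so every rack that polls for updates sees the same accelerated-release advice without any per-rack state.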
FIGS. 6-8 illustrate flowcharts representative of example machine readable instructions to implement the example lifecycle manager 228 of FIG. 2 and/or FIG. 4. In these examples, the machine readable instructions comprise one or more programs for execution by a processor (such as the processor 1012 shown in the example processor platform 1000 discussed below in connection with FIG. 10). The programs may be embodied in software stored on a tangible computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a Digital Versatile Disk (DVD), a Blu-ray disk, or a memory associated with the processor 1012, but the entire programs and/or parts thereof could alternatively be executed by a device other than the processor 1012 and/or embodied in firmware or dedicated hardware. In addition, although the example programs are described with reference to the flowcharts shown in FIGS. 6-8, many other methods of implementing the example lifecycle manager 228 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined.
As described above, the example processes of FIGS. 6-8 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a tangible computer readable storage medium (e.g., a hard disk drive, a flash memory, a Read Only Memory (ROM), a Compact Disc (CD), a Digital Versatile Disc (DVD), a cache, a Random Access Memory (RAM), and/or any other storage device or storage disk) in which information is stored for any duration (e.g., for extended periods of time, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term tangible computer readable storage medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. As used herein, "tangible computer-readable storage medium" and "tangible machine-readable storage medium" are used interchangeably. Additionally or alternatively, the example processes of FIGS. 6-8 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium (e.g., a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory, and/or any other storage device or storage disk) in which information is stored for any duration (e.g., for extended periods of time, permanently, for brief instances, for temporarily buffering, and/or for caching of the information).
The process of FIG. 6 begins when the example package manager 402 determines whether a received manifest is new (block 602). The example package manager 402 may receive a manifest in response to sending a request for a new package to the example request processor 308 of the example software manager 114, may receive a manifest in a notification sent by the example request processor 308 to the example package manager 402, and so on. The example package manager 402 may determine whether the manifest is new (e.g., previously unprocessed) by analyzing information stored at the lifecycle manager (e.g., analyzing a list of processed packages, analyzing version numbers to determine whether the version number identified in the newly received manifest is greater than the version number of the most recently installed package, etc.). When the example package manager 402 determines that the manifest is not new, the package manager 402 discards the manifest (block 604).
When the example package manager 402 determines that the manifest is new (block 602), the example package manager 402 determines whether the manifest revokes a previously received manifest (block 606). For example, rather than identifying new and/or updated software, a manifest may indicate that a previously received manifest has been revoked and that installation of the software identified in the revoked manifest should be prevented and/or that such software should be uninstalled (block 608).
When the example package manager 402 determines that the manifest does not revoke a previous manifest (block 606), the example package manager 402 compares identification information of the virtual server rack 206 with identification information contained in the manifest to determine whether the manifest identifies the virtual server rack 206 (block 610). For example, different virtual server racks may be assigned different iterations of the software. When the manifest does not match the virtual server rack 206, the example package manager 402 discards the manifest (block 604).
When the example package manager 402 determines that the manifest matches the example virtual server rack 206 (block 610), the example user interface 406 notifies an administrator of the virtual server rack 206 that a new manifest has been received (block 612). For example, the example user interface 406 may send an electronic message to the administrator, may post a notification, may add a task to a task list, and so on. The example user interface 406 then receives a download schedule from the administrator (block 614). Alternatively, the download schedule may be determined automatically by an analysis of when system resources are available to download the software identified in the manifest, and/or the download may be initiated automatically before, while, and/or after the administrator is notified of the new manifest. According to the schedule (or automatically), the example package manager 402 downloads the software components identified in the manifest (block 616). The software components may be downloaded from the example software manager 114, from a software distributor, from a hardware manufacturer (e.g., a manufacturer may distribute hardware drivers), and so on. The example package manager 402 stores the software package in the example lifecycle repository 404 (block 618). The process of FIG. 6 then terminates.
FIG. 7 is a flow diagram illustrating example machine readable instructions that may be executed to implement the example lifecycle manager 228 to schedule installation of downloaded software (e.g., software downloaded by the process shown in FIG. 6). The example process of FIG. 7 begins when the example user interface 406 receives an instruction to install a software package previously downloaded to the example lifecycle repository 404 (block 702). The example installation coordinator 412 checks the validity of the downloaded software (block 704). The validity check may include verifying a signature, fingerprint, checksum, hash, etc. against known valid values (e.g., known values published by the example software manager 114, by the distributor of the software, etc.). Additionally or alternatively, the software may be scanned for viruses, malware, malicious behavior, and the like. Additionally or alternatively, compatibility of the software with the example virtual server rack 206 may be checked by analyzing the software operating on the example virtual server rack 206 and the configuration installed on the example virtual server rack 206 (e.g., in case the example virtual server rack 206 has been customized by its administrator away from the stock configuration implemented by the example virtual imaging device 112).
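The checksum portion of the validity check (block 704) can be illustrated with a short sketch. The choice of SHA-256 and the function name are assumptions made for illustration, not details taken from the patent:

```python
import hashlib

def verify_package(package_bytes: bytes, expected_sha256: str) -> bool:
    """Block 704 sketch: compare the downloaded package's digest against
    a known-good value (e.g., one published by the software manager or
    by the distributor of the software)."""
    actual = hashlib.sha256(package_bytes).hexdigest()
    return actual == expected_sha256

payload = b"example firmware image"
published = hashlib.sha256(payload).hexdigest()  # the known valid value
print(verify_package(payload, published))            # True
print(verify_package(b"tampered bytes", published))  # False
```

A signature check against a distributor's public key would follow the same pattern, with the digest comparison replaced by signature verification.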
The example installation coordinator 412 then determines the physical devices to be affected based on the installation instructions (block 706). The package may include software for one or more physical devices included in the physical racks 202, 204 of the example virtual server rack 206. For example, the package may include software for upgrading the firmware of the ToR switches A, B 210, 212, 216, 218. In such an example, the example installation coordinator 412 determines that installation of software on the ToR switches A, B 210, 212, 216, 218 will require restarting each of those switches.
The example dependency analyzer 408 additionally determines which software installations in the package may be performed independently (block 708). A software installation may be performed independently when it does not depend on another software installation or on a process that is not part of the installation. The example dependency analyzer 408 also analyzes the software in the package to determine which software installations depend on other software installations or operations (block 710). For example, the dependency analyzer 408 may analyze dependency information included in a software package manifest file generated by the example software manager 114. According to the illustrated example, the dependency analyzer 408 represents the information from blocks 708 and 710 as a sorted software inventory in which the dependencies are identified.
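The grouping performed in blocks 708-710 amounts to a topological ordering of the package's installations. The sketch below uses Kahn's algorithm; the package names and the `order_installs` function are illustrative, not taken from the patent:

```python
from collections import deque

def order_installs(deps):
    """deps maps each installation to the installations it depends on
    (all dependencies must themselves be keys of deps). Returns
    (independent, ordered): installations with no prerequisites, and a
    dependency-respecting install order (Kahn's topological sort)."""
    indegree = {pkg: len(d) for pkg, d in deps.items()}
    dependents = {pkg: [] for pkg in deps}
    for pkg, d in deps.items():
        for dep in d:
            dependents[dep].append(pkg)
    queue = deque(pkg for pkg, n in indegree.items() if n == 0)
    independent = list(queue)
    ordered = []
    while queue:
        pkg = queue.popleft()
        ordered.append(pkg)
        for nxt in dependents[pkg]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if len(ordered) != len(deps):
        raise ValueError("dependency cycle detected")
    return independent, ordered

deps = {"driver": [], "firmware": [], "agent": ["driver"]}
independent, ordered = order_installs(deps)
```

Here `driver` and `firmware` are independent (block 708) and could be scheduled in parallel, while `agent` must wait for `driver` (block 710).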
The example installation coordinator 412 analyzes the information collected in blocks 706-710 to generate an installation schedule for the software to be installed in the example virtual server rack 206 (block 712). According to the illustrated example, the installation coordinator 412 schedules non-dependent installations in parallel when those installations do not affect the same physical device. The installation coordinator 412 additionally orders the installation of software components to ensure that required dependencies are installed before the software that requires them. The example installation coordinator 412 performs the installation of the software according to the generated schedule. Alternatively, the example installation coordinator 412 may schedule other components to perform the installation (e.g., schedule the software installation using an installation agent on the physical device on which the software is to be installed).
FIG. 8 is a flow diagram illustrating example machine readable instructions that may be executed to implement the example lifecycle manager 228 to install downloaded software (e.g., software installations scheduled by the example process illustrated in FIG. 7). The example process of FIG. 8 begins when the example installation coordinator 412 determines whether the requested software installation includes a request for a non-disruptive installation (block 802). For example, the example user interface 406 may receive a user request to perform the installation non-disruptively (e.g., an installation that reduces or eliminates the impact on users of the virtual server rack 206 by ensuring that redundant resources are never all unavailable at the same time). For example, the impact on users may be eliminated by making redundant resources unavailable one at a time (e.g., by updating and restarting the ToR switches A, B 210, 212, 216, 218 one at a time). While a non-disruptive (hitless) installation reduces the impact on users, the installation process may take longer and thus may not be desired by an administrator in all cases (e.g., when a software patch is urgently needed to address a vulnerability or problem with the virtual server rack 206).
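The one-at-a-time treatment of redundant devices (e.g., the ToR switch A/B pairs) can be sketched as splitting redundancy groups into update waves, so that every wave leaves each group's peer device available. The function name and device names are illustrative, not the patent's:

```python
def hitless_waves(redundancy_groups):
    """Sketch of non-disruptive scheduling: each wave takes at most one
    member of any redundancy group offline, so its peer(s) stay up."""
    width = max(len(group) for group in redundancy_groups)
    return [
        [group[i] for group in redundancy_groups if i < len(group)]
        for i in range(width)
    ]

# Two ToR switch pairs: peers land in different waves.
groups = [["tor-a-1", "tor-b-1"], ["tor-a-2", "tor-b-2"]]
print(hitless_waves(groups))
# [['tor-a-1', 'tor-a-2'], ['tor-b-1', 'tor-b-2']]
```

A non-hitless install would instead collapse everything into one wave, finishing faster at the cost of taking both peers of each pair down together.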
When a non-disruptive installation is not requested (block 802), control proceeds to block 816.
When a non-disruptive installation is requested (block 802), the example installation coordinator 412 schedules the affected physical devices to be unavailable independently (e.g., so that redundant devices are not all unavailable at any given time). The example capacity analyzer 410 then determines the computing resource requirements for executing the one or more workloads assigned to the example virtual server rack 206 (block 806). According to the illustrated example, the capacity analyzer 410 determines the workload requirements by analyzing one or more service level agreements associated with the workload(s). Additionally or alternatively, the capacity analyzer 410 may perform an in-situ analysis of the workload demands (e.g., by monitoring peak, average, etc. resource utilization, such as processor utilization, memory utilization, storage utilization, and so on). The example capacity analyzer 410 then determines whether the installation schedule would reduce the available computing resources such that the computing requirements of the workload(s) could not be met (block 808). The example capacity analyzer 410 considers the computing resource requirements as well as any applicable service level agreement requirements, such as required computing resource redundancy (e.g., a requirement that all processing resources have redundant counterparts).
When there are not enough computing resources to perform the installation according to the schedule while meeting the computing resource requirements of the workload(s) (block 808), the example installation coordinator 412 adds additional computing resources to the cluster serving the workload(s) (block 812). For example, if a cluster comprises 8 physical computing resources 224 and all 8 are needed to meet the requirements of the workload(s) executing on the cluster, then the physical computing resources 224 cannot be updated, even if only one computing resource would be taken offline at a time. Accordingly, the installation coordinator 412 temporarily adds another computing resource (e.g., another physical computing resource 224). The computing resource added to the cluster may come from a pool of unused, spare, etc. computing resources, or from another cluster that has more computing resources than are needed to meet the demands of the workload(s) executing on it.
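The 8-resource example above can be sketched as a capacity check (block 808) with spare borrowing (block 812). Capacities are expressed in abstract compute units, and the names are illustrative, not taken from the patent:

```python
def plan_capacity(host_capacity, demand, spare_pool):
    """host_capacity: compute units per cluster host. During a rolling
    update one host is offline at a time, so the worst-case available
    capacity is the total minus the largest host (block 808). Borrow
    spares (block 812) until the worst case still covers the demand."""
    hosts = list(host_capacity)
    spares = list(spare_pool)
    borrowed = []
    while sum(hosts) - max(hosts) < demand:
        if not spares:
            raise RuntimeError("insufficient spare capacity for a hitless update")
        spare = spares.pop(0)
        hosts.append(spare)
        borrowed.append(spare)
    return hosts, borrowed

# 8 hosts of 10 units, fully consumed by a demand of 80: one spare is borrowed
# so that any single host can go offline during the update.
hosts, borrowed = plan_capacity([10] * 8, 80, spare_pool=[10, 10])
print(len(hosts), len(borrowed))  # 9 1
```

The borrowed resource would be returned to its pool or source cluster after the update completes (block 826).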
After determining that sufficient computing resources exist (block 808) or adding additional computing resources (block 812), the example installation coordinator 412 moves the virtual computing elements off the computing resource to be updated (block 814). The virtual computing elements may be moved to execute on another computing resource (e.g., a processing resource, a storage resource, a network resource, etc.) available in the cluster and/or on a computing resource that was added to the cluster.
After moving the virtual computing resources (block 814) or determining that a non-disruptive installation was not requested (block 802), the example installation coordinator 412 installs the software package on the currently selected computing resource (block 816). According to the illustrated example, the installation includes any reboots required to prepare the software for execution.
After the computing resource is updated, if virtual computing resources were moved off the updated physical computing resource in block 814, the example installation coordinator 412 moves the virtual computing resources back to the updated physical computing resource (block 818).
The example installation coordinator 412 determines whether there are additional physical computing resources to be updated (block 820). When there are additional computing resources to update, the example installation coordinator 412 selects the next physical computing resource (block 822) and control returns to block 814 (if a non-disruptive installation was requested) or block 816 (if a non-disruptive installation was not requested).
When there are no further computing resources to update (block 820), the example installation coordinator 412 determines whether a computing resource was added to the computing cluster (e.g., whether block 812 was performed) (block 824). The process of FIG. 8 terminates when no additional computing resources were added to the cluster. When additional computing resources were added to the cluster, the example installation coordinator 412 returns the added computing resources to their previous state (block 826), and the process of FIG. 8 terminates. For example, the installation coordinator 412 may move the computing resources back to an idle state, a standby state, another cluster that includes excess computing resources, and so on.
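The per-resource loop of blocks 814-822 can be summarized in a short sketch. The callback-based structure is an illustrative simplification, not the patent's implementation:

```python
def rolling_update(resources, install, migrate_off=None, migrate_back=None):
    """Sketch of blocks 814-822: for each physical resource, optionally
    evacuate its virtual computing elements (block 814), install the
    update (block 816), then move the elements back (block 818)."""
    for resource in resources:
        moved = migrate_off(resource) if migrate_off else None  # block 814
        install(resource)                                       # block 816
        if moved is not None and migrate_back:
            migrate_back(resource, moved)                       # block 818

# Record the sequence of actions for a two-host non-disruptive update.
log = []
rolling_update(
    ["host-1", "host-2"],
    install=lambda r: log.append(("install", r)),
    migrate_off=lambda r: (log.append(("evacuate", r)), [f"vm@{r}"])[1],
    migrate_back=lambda r, vms: log.append(("restore", r)),
)
print(log[:3])  # [('evacuate', 'host-1'), ('install', 'host-1'), ('restore', 'host-1')]
```

Omitting `migrate_off` and `migrate_back` corresponds to the non-hitless path, where control skips directly to the install step (block 816).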
FIG. 9 is a block diagram of an example processor platform 900 capable of executing the instructions of FIG. 5 to implement the software manager 114 of FIG. 1A, FIG. 1B, and/or FIG. 3. The processor platform 900 may be, for example, a server, a personal computer, a mobile device (e.g., a cell phone, a smart phone, a tablet computer such as an iPad™), a personal digital assistant (PDA), an Internet appliance, a DVD player, a CD player, a digital video recorder, a Blu-ray player, a gaming console, a personal video recorder, a set-top box, or any other type of computing device.
The processor platform 900 of the illustrated example includes a processor 912. The processor 912 of the illustrated example is hardware. For example, the processor 912 may be implemented by one or more integrated circuits, logic circuits, microprocessors or controllers from any desired family or manufacturer. The example processor 912 includes the example software receiver 302, the example package manager 304, the example repository interface 306, and the example request processor 308.
The processor 912 of the illustrated example includes local memory 913 (e.g., a cache). The processor 912 of the illustrated example communicates with a main memory including a volatile memory 914 and a non-volatile memory 916 via a bus 918. The volatile memory 914 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM), and/or any other type of random access memory device. The non-volatile memory 916 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 914, 916 is controlled by a memory controller.
The processor platform 900 of the illustrated example also includes interface circuitry 920. The interface circuit 920 may be implemented by any type of interface standard, such as an Ethernet interface, a Universal Serial Bus (USB), and/or a PCI Express interface.
In the illustrated example, one or more input devices 922 are connected to the interface circuit 920. The input device(s) 922 allow a user to enter data and commands into the processor 912. The input device(s) may be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, an isopoint, and/or a voice recognition system.
One or more output devices 924 are also connected to the interface circuit 920 of the illustrated example. The output devices 924 may be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display, a cathode ray tube (CRT) display, a touchscreen, a tactile output device, a printer, and/or speakers). The interface circuit 920 of the illustrated example thus typically includes a graphics driver card, a graphics driver chip, or a graphics driver processor.
The interface circuit 920 of the illustrated example also includes a communication device (such as a transmitter, receiver, transceiver, modem, and/or network interface card) to facilitate exchange of data with peripheral machines (e.g., any kind of computing device) via a network 926 (e.g., an ethernet connection, a Digital Subscriber Line (DSL), a telephone line, coaxial cable, a cellular telephone system, etc.).
The processor platform 900 of the illustrated example also includes one or more mass storage devices 928 for storing software and/or data. Examples of such mass storage devices 928 include floppy disk drives, hard disk drives, optical disk drives, blu-ray disk drives, RAID systems, and Digital Versatile Disk (DVD) drives.
The encoded instructions 932 of FIG. 5 may be stored in the mass storage device 928, the volatile memory 914, the non-volatile memory 916, and/or on a removable tangible computer-readable storage medium, such as a CD or DVD.
FIG. 10 is a block diagram of an example processor platform 1000 capable of executing the instructions of FIGS. 6-8 to implement the lifecycle manager 228 of FIG. 2 and/or FIG. 4. The processor platform 1000 may be, for example, a server, a personal computer, a mobile device (e.g., a cell phone, a smart phone, a tablet computer such as an iPad™), a personal digital assistant (PDA), an Internet appliance, a DVD player, a CD player, a digital video recorder, a Blu-ray player, a gaming console, a personal video recorder, a set-top box, or any other type of computing device.
The processor platform 1000 of the illustrated example includes a processor 1012. The processor 1012 of the illustrated example is hardware. For example, the processor 1012 may be implemented by one or more integrated circuits, logic circuits, microprocessors or controllers from any desired family or manufacturer. The example processor 1012 includes the example package manager 402, the example user interface 406, the example dependency analyzer 408, the example capacity analyzer 410, and the example installation coordinator 412.
The processor 1012 of the illustrated example includes local memory 1013 (e.g., a cache). The processor 1012 of the illustrated example communicates with a main memory including a volatile memory 1014 and a non-volatile memory 1016 via a bus 1018. The volatile memory 1014 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM), and/or any other type of random access memory device. The non-volatile memory 1016 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 1014, 1016 is controlled by a memory controller.
The processor platform 1000 of the illustrated example also includes an interface circuit 1020. The interface circuit 1020 may be implemented by any type of interface standard, such as an Ethernet interface, a Universal Serial Bus (USB), and/or a PCI Express interface.
In the illustrated example, one or more input devices 1022 are connected to the interface circuit 1020. The input device(s) 1022 allow a user to enter data and commands into the processor 1012. The input device(s) may be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, an isopoint, and/or a voice recognition system.
One or more output devices 1024 are also connected to the interface circuit 1020 of the illustrated example. The output devices 1024 may be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display, a cathode ray tube (CRT) display, a touchscreen, a tactile output device, a printer, and/or speakers). Thus, the interface circuit 1020 of the illustrated example generally includes a graphics driver card, a graphics driver chip, or a graphics driver processor.
The interface circuit 1020 of the illustrated example also includes a communication device (such as a transmitter, receiver, transceiver, modem, and/or network interface card) to facilitate exchange of data with peripheral machines (e.g., any other kind of computing device) via a network 1026 (e.g., an ethernet connection, a Digital Subscriber Line (DSL), a telephone line, coaxial cable, a cellular telephone system, etc.).
The processor platform 1000 of the illustrated example also includes one or more mass storage devices 1028 for storing software and/or data. Examples of such mass storage devices 1028 include floppy disk drives, hard disk drives, compact disk drives, blu-ray disk drives, RAID systems, and Digital Versatile Disk (DVD) drives.
The coded instructions 1032 of FIGS. 6-8 may be stored in the mass storage device 1028, the volatile memory 1014, the non-volatile memory 1016, and/or on a removable tangible computer-readable storage medium such as a CD or DVD.
From the foregoing, it will be appreciated that the above disclosed methods, apparatus and articles of manufacture facilitate updating software, firmware, patches, drivers, etc. of computing resources included in a virtual server rack architecture. In some examples, software updates are deployed to various physical computing resources included in a virtual server chassis while minimizing impact on the operation of those computing resources. In some examples, the lifecycle manager manages the software installation process to schedule software updates for the heterogeneous computing environment to ensure that dependencies and software execution requirements are met.
Although certain example methods, apparatus, and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus, and articles of manufacture fairly falling within the scope of the appended claims either literally or under the doctrine of equivalents.

Claims (14)

1. A method of updating software in a virtual server rack system (206), the method comprising:
determining a plurality of software updates to be installed on physical computing resources (210, 212, 224) in a physical rack (202) of the virtual server rack system (206), the determination based on a manifest file received from a software manager (114) associated with the virtual server rack system (206);
determining dependency requirements for installing the plurality of software updates identified in the manifest file;
identifying the physical computing resources (210, 212, 224) in the physical rack (202) that will be temporarily affected when the software update identified in the manifest file is installed;
comparing the computing resource impact of the affected physical computing resources (224) against the operational requirements of the physical rack (202) during installation of the software updates;
determining, based on the computing resource impact of the impacted physical computing resources (210, 212, 224), an order of installing the software updates that meets the dependency requirements and meets the operational requirements of the physical rack (202); and
arranging installation of the software updates identified in the manifest file according to the determined order of installation.
2. The method of claim 1, further comprising confirming that the manifest file is associated with the virtual server rack system (206).
3. The method of claim 1, wherein the dependency requirements include an indication that installing a software update requires a prior installation of the identified software version.
4. The method of claim 1, wherein determining the order of installation comprises: determining an order that causes first software that is dependent on second software to be installed after installation of the second software.
5. The method of claim 1, further comprising identifying devices (210, 212) in the virtual server rack system (206) on which the software update is to be installed that have redundancy, and arranging for those devices (210, 212) to be unavailable independently.
6. The method of claim 1, wherein determining the order of installation comprises:
determining that a device (210, 212) of the identified type will reboot after installing the software update;
determining that the virtual server rack system (206) includes two or more devices (210, 212) of the identified type; and
the order is determined such that at least one of those devices (210, 212) is available during installation of the software update.
7. The method of claim 1, further comprising:
determining whether sufficient computing resources for installing the software update are available in a virtual environment of the virtual server rack system (206);
identifying spare computing resources available for use during installation of the software update when insufficient computing resources are available in the virtual environment; and
adding the spare computing resource to the virtual environment during installation of the software update.
8. The method of claim 7, further comprising:
migrating workloads executing on current computing resources to the standby computing resources; and
installing the software update on the current computing resource.
9. The method of claim 8, further comprising: after completing installation of the software update, migrating the workload back to the current computing resource.
10. The method of claim 8, wherein the computing resource is a processing unit, and wherein adding the spare computing resource comprises adding an unallocated processing unit to the workload.
11. The method of claim 7, wherein determining whether sufficient computing resources (224) for installing the software update are available in the virtual environment of the virtual server rack system (206) is performed in response to determining that non-disruptive installation is requested.
12. The method of claim 11, further comprising, in response to determining that the non-disruptive installation is requested, scheduling installation of a current computing resource (224) to be completed before installation begins on a second computing resource (224) of the same type as the current computing resource (224).
13. A tangible computer readable storage medium (932) comprising instructions that, when executed, cause a machine (900, 1000) to perform the method of any of claims 1-12.
14. An apparatus (900, 1000) to update software in a virtual server rack system (206), configured to perform operations according to the method of any of claims 1-12.
CN201680038585.3A 2015-06-30 2016-06-29 Method and apparatus for software lifecycle management for virtual computing environments Active CN107810475B (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
IN3344/CHE/2015 2015-06-30
IN3344CH2015 2015-06-30
US15/187,480 2016-06-20
US15/187,480 US10740081B2 (en) 2015-06-30 2016-06-20 Methods and apparatus for software lifecycle management of a virtual computing environment
US15/187,452 US10635423B2 (en) 2015-06-30 2016-06-20 Methods and apparatus for software lifecycle management of a virtual computing environment
US15/187,452 2016-06-20
PCT/US2016/040205 WO2017004269A1 (en) 2015-06-30 2016-06-29 Methods and apparatus for software lifecycle management of a virtual computing environment

Publications (2)

Publication Number Publication Date
CN107810475A CN107810475A (en) 2018-03-16
CN107810475B true CN107810475B (en) 2021-09-10

Family

ID=56413887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201680038585.3A Active CN107810475B (en) 2015-06-30 2016-06-29 Method and apparatus for software lifecycle management for virtual computing environments

Country Status (2)

Country Link
CN (1) CN107810475B (en)
WO (1) WO2017004269A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109343808A (en) * 2018-09-06 2019-02-15 郑州云海信息技术有限公司 A kind of long-range KVM mouse mode adaptation method, device, terminal and storage medium
US11314500B2 (en) 2020-07-09 2022-04-26 Nutanix, Inc. System and method for modularizing update environment in life cycle manager
US11803368B2 (en) 2021-10-01 2023-10-31 Nutanix, Inc. Network learning to control delivery of updates

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1521172A2 (en) * 2003-09-02 2005-04-06 Microsoft Corporation Software decomposition into components
CN104699508A (en) * 2015-03-25 2015-06-10 南京大学 System and method for quickly arranging and updating virtual environment in cloud computing platform

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6381742B2 (en) * 1998-06-19 2002-04-30 Microsoft Corporation Software package management
US20030204838A1 (en) * 2002-04-30 2003-10-30 Eric Caspole Debugging platform-independent software applications and related code components
US7584467B2 (en) * 2003-03-17 2009-09-01 Microsoft Corporation Software updating system and method
US7574707B2 (en) * 2003-07-28 2009-08-11 Sap Ag Install-run-remove mechanism
US8151196B2 (en) * 2005-06-07 2012-04-03 Rockwell Automation Technologies, Inc. Abstracted display building method and system
US7694298B2 (en) * 2004-12-10 2010-04-06 Intel Corporation Method and apparatus for providing virtual server blades
US20080201705A1 (en) * 2007-02-15 2008-08-21 Sun Microsystems, Inc. Apparatus and method for generating a software dependency map
US8171485B2 (en) 2007-03-26 2012-05-01 Credit Suisse Securities (Europe) Limited Method and system for managing virtual and real machines
JP4873423B2 (en) * 2007-12-27 2012-02-08 東芝ソリューション株式会社 Virtualization program, simulation apparatus, and virtualization method
EP2248041B1 (en) * 2008-02-26 2015-04-29 VMWare, Inc. Extending server-based desktop virtual machine architecture to client machines
US8296267B2 (en) * 2010-10-20 2012-10-23 Microsoft Corporation Upgrade of highly available farm server groups
US8640118B2 (en) * 2011-05-24 2014-01-28 International Business Machines Corporation Managing firmware on a system board
US8935375B2 (en) * 2011-12-12 2015-01-13 Microsoft Corporation Increasing availability of stateful applications
CN103257870A (en) * 2012-02-21 2013-08-21 F5网络公司 Service upgrade for management program or hardware manager
US8938730B2 (en) * 2012-12-17 2015-01-20 Itron, Inc. Utilizing a multi-system set configuration to update a utility node system set
CN103176833B (en) * 2013-03-11 2016-12-28 华为技术有限公司 A kind of data transmission method for uplink based on virtual machine, method of reseptance and system
US9098322B2 (en) * 2013-03-15 2015-08-04 Bmc Software, Inc. Managing a server template
US9268592B2 (en) * 2013-06-25 2016-02-23 Vmware, Inc. Methods and apparatus to generate a customized application blueprint
US9576153B2 (en) * 2013-08-23 2017-02-21 Cellco Partnership Device and method for providing information from a backend component to a frontend component by a secure device management abstraction and unification module
US9262220B2 (en) * 2013-11-15 2016-02-16 International Business Machines Corporation Scheduling workloads and making provision decisions of computer resources in a computing environment
US9513962B2 (en) * 2013-12-03 2016-12-06 International Business Machines Corporation Migrating a running, preempted workload in a grid computing system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1521172A2 (en) * 2003-09-02 2005-04-06 Microsoft Corporation Software decomposition into components
CN104699508A (en) * 2015-03-25 2015-06-10 南京大学 System and method for quickly arranging and updating virtual environment in cloud computing platform

Also Published As

Publication number Publication date
CN107810475A (en) 2018-03-16
WO2017004269A1 (en) 2017-01-05

Similar Documents

Publication Publication Date Title
US10635423B2 (en) Methods and apparatus for software lifecycle management of a virtual computing environment
US10901721B2 (en) Methods and apparatus for version aliasing mechanisms and cumulative upgrades for software lifecycle management
US11444765B2 (en) Methods and apparatus to manage credentials in hyper-converged infrastructures
US11675585B2 (en) Methods and apparatus to deploy workload domains in virtual server racks
US10348574B2 (en) Hardware management systems for disaggregated rack architectures in virtual server rack deployments
US11405274B2 (en) Managing virtual network functions
US20220019474A1 (en) Methods and apparatus to manage workload domains in virtual server racks
US10313479B2 (en) Methods and apparatus to manage workload domains in virtual server racks
US10044795B2 (en) Methods and apparatus for rack deployments for virtual computing environments
US10656983B2 (en) Methods and apparatus to generate a shadow setup based on a cloud environment and upgrade the shadow setup to identify upgrade-related errors
CN107810475B (en) Method and apparatus for software lifecycle management for virtual computing environments

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address

Address after: California, USA

Patentee after: Weirui LLC

Country or region after: U.S.A.

Address before: California, USA

Patentee before: VMWARE, Inc.

Country or region before: U.S.A.

CP03 Change of name, title or address