US20140181817A1 - Methods and apparatus to manage execution of virtual machine workflows - Google Patents
- Publication number
- US20140181817A1 (U.S. application Ser. No. 14/105,069)
- Authority
- US
- United States
- Prior art keywords
- blueprint
- skill
- manager
- workflow
- machine
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F9/5022—Mechanisms to release resources
- G06F11/3466—Performance evaluation by tracing or monitoring
- G06F2009/45575—Starting, stopping, suspending or resuming virtual machine instances
- G06F2212/152—Virtualized environment, e.g. logically partitioned system
- G06F9/5077—Logical partitioning of resources; Management or configuration of virtualized resources
- G06F9/542—Event management; Broadcasting; Multicasting; Notifications
Description
- This disclosure relates generally to virtual computing, and, more particularly, to methods and apparatus to manage virtual machines.
- Virtualizing computer systems provides benefits such as the ability to execute multiple computer systems on a single hardware computer, replicating computer systems, moving computer systems among multiple hardware computers, and so forth.
- Example systems for virtualizing computer systems are described in U.S. patent application Ser. No. 11/903,374, entitled “METHOD AND SYSTEM FOR MANAGING VIRTUAL AND REAL MACHINES,” filed Sep. 21, 2007, and granted as U.S. Pat. No. 8,171,485, U.S. Provisional Patent Application No. 60/919,965, entitled “METHOD AND SYSTEM FOR MANAGING VIRTUAL AND REAL MACHINES,” filed Mar. 26, 2007, and U.S. Provisional Patent Application No. 61/736,422, entitled “METHODS AND APPARATUS FOR VIRTUALIZED COMPUTING,” filed Dec. 12, 2012, all three of which are hereby incorporated herein by reference in their entirety.
- Infrastructure-as-a-Service (IaaS) generally describes a suite of technologies provided by a service provider as an integrated solution to allow for elastic creation of a virtualized, networked, and pooled computing platform (sometimes referred to as a “cloud computing platform”).
- Enterprises may use IaaS as a business-internal organizational cloud computing platform (sometimes referred to as a “private cloud”) that gives an application developer access to infrastructure resources, such as virtualized servers, storage, and networking resources.
- FIG. 1 is an illustration of an example system constructed in accordance with the teachings of this disclosure for managing a cloud computing platform.
- FIG. 2 illustrates the generation of an example multi-machine blueprint by the example blueprint manager of FIG. 1 .
- FIG. 3 is a block diagram of example components of an example implementation of the blueprint manager of FIG. 1 .
- FIG. 4 is a block diagram of an example implementation of the resource manager of FIG. 1
- FIGS. 5-9 are flowcharts representative of example machine readable instructions that may be executed to implement the cloud manager, the blueprint manager, and/or the resource manager of FIGS. 1-4 .
- FIGS. 10-20 illustrate example graphical user interfaces that may be provided by the cloud manager 138 to facilitate configuring and operating multi-machine blueprints.
- FIG. 21 is a block diagram of an example processing platform capable of executing the example machine readable instructions of FIGS. 5-9 to implement the example cloud manager of FIGS. 1 , 2 , 3 , and/or 4 .
- Cloud computing platforms may provide many powerful capabilities for performing computing operations. However, taking advantage of these computing capabilities manually may be complex and/or require significant training and/or expertise.
- Methods and apparatus disclosed herein facilitate the management of virtual machine resources in cloud computing platforms. For example, as disclosed in detail herein, methods and apparatus disclosed herein provide for automation of management tasks such as provisioning multiple virtual machines for a multiple-machine computing system (e.g., a group of servers that inter-operate), linking provisioned virtual machines and tasks to desired systems to execute those virtual machines or tasks, and/or reclaiming cloud computing resources that are no longer in use.
- the improvements to cloud management systems (e.g., the vCloud Automation Center (vCAC) from VMware®), interfaces, portals, etc. disclosed herein may be utilized individually and/or in any combination. For example, all or a subset of the described improvements may be utilized.
- FIG. 1 depicts an example system 100 constructed in accordance with the teachings of this disclosure for managing a cloud computing platform.
- the example system 100 includes an application director 106 and a cloud manager 138 to manage a cloud computing platform provider 110 as described in more detail below.
- in the illustrated example, the system 100 facilitates management of the cloud provider 110 and does not include the cloud provider 110 .
- alternatively, the system 100 could be included in the cloud provider 110 .
- the cloud computing platform provider 110 provisions virtual computing resources (e.g., virtual machines, or “VMs,” 114 ) that may be accessed by users of the cloud computing platform 110 (e.g., users associated with an administrator 116 and/or a developer 118 ) and/or other programs, software, devices, etc.
- An example application 102 of FIG. 1 includes multiple VMs 114 .
- the example VMs 114 of FIG. 1 provide different functions within the application 102 (e.g., services, portions of the application 102 , etc.).
- One or more of the VMs 114 of the illustrated example are customized by an administrator 116 and/or a developer 118 of the application 102 relative to a stock or out-of-the-box (e.g., commonly available purchased copy) version of the services and/or application components.
- the services executing on the example VMs 114 may have dependencies on other ones of the VMs 114 .
- the example cloud computing platform provider 110 may provide multiple deployment environments 112 , for example, for development, testing, staging, and/or production of applications.
- the administrator 116 , the developer 118 , other programs, and/or other devices may access services from the cloud computing platform provider 110 , for example, via REST (Representational State Transfer) APIs (Application Programming Interface) and/or via any other client-server communication protocol.
- Example implementations of a REST API for cloud computing services include the vCloud Automation Center (vCAC) API and the vCloud Director API available from VMware, Inc.
- the example cloud computing platform provider 110 provisions virtual computing resources (e.g., the VMs 114 ) to provide the deployment environments 112 in which the administrator 116 and/or developer 118 can deploy multi-tier application(s).
- One particular example implementation of a deployment environment that may be used to implement the deployment environments 112 of FIG. 1 is vCloud DataCenter cloud computing services available from VMware, Inc.
- the example application director 106 of FIG. 1 , which may be running in one or more VMs, orchestrates deployment of multi-tier applications onto one of the example deployment environments 112 .
- the example application director 106 includes a topology generator 120 , a deployment plan generator 122 , and a deployment director 124 .
- the example topology generator 120 generates a basic blueprint 126 that specifies a logical topology of an application to be deployed.
- the example basic blueprint 126 generally captures the structure of an application as a collection of application components executing on virtual computing resources.
- the basic blueprint 126 generated by the example topology generator 120 for an online store application may specify a web application (e.g., in the form of a Java web application archive or “WAR” file comprising dynamic web pages, static web pages, Java servlets, Java classes, and/or other property, configuration and/or resources files that make up a Java web application) executing on an application server (e.g., Apache Tomcat application server) that uses a database (e.g., MongoDB) as a data store.
- the term “application” generally refers to a logical deployment unit, comprised of one or more application packages and their dependent middleware and/or operating systems. Applications may be distributed across multiple VMs. Thus, in the example described above, the term “application” refers to the entire online store application, including application server and database components, rather than just the web application itself. In some instances, the application may include the underlying hardware (e.g., virtual computing hardware) utilized to implement the components.
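- The structure of a basic blueprint such as the online-store example above can be sketched as a small data model. This is a hypothetical illustration only; the field names (`nodes`, `software`, `dependencies`) are assumptions, not the patent's actual format.

```python
# Hypothetical sketch of the online-store basic blueprint described above,
# modeled as plain dictionaries (field names are illustrative assumptions).
basic_blueprint = {
    "name": "online-store",
    "nodes": [
        # a Java web application (WAR) running on an application server
        {"id": "app_server", "resource": "vm", "software": ["tomcat", "store.war"]},
        # a database used as the data store
        {"id": "database", "resource": "vm", "software": ["mongodb"]},
    ],
    # the application server depends on the database being available
    "dependencies": [("app_server", "database")],
}

def components(blueprint):
    """Return the application components captured by the blueprint."""
    return [node["id"] for node in blueprint["nodes"]]
```

In this model, the blueprint captures the application as a collection of components on virtual computing resources, matching the description above.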
- the example basic blueprint 126 of FIG. 1 may be assembled from items (e.g., templates) from a catalog 130 , which is a listing of available virtual computing resources (e.g., VMs, networking, storage) that may be provisioned from the cloud computing platform provider 110 and available application components (e.g., software services, scripts, code components, application-specific packages) that may be installed on the provisioned virtual computing resources.
- the example catalog 130 may be pre-populated and/or customized by an administrator 116 (e.g., an IT or system administrator) who enters specifications, configurations, properties, and/or other details about items in the catalog 130 .
- the example blueprints 126 may define one or more dependencies between application components to indicate an installation order of the application components during deployment. For example, since a load balancer usually cannot be configured until a web application is up and running, the developer 118 may specify a dependency from an Apache service to an application code package.
- the example deployment plan generator 122 of the example application director 106 of FIG. 1 generates a deployment plan 128 based on the basic blueprint 126 that includes deployment settings for the basic blueprint 126 (e.g., virtual computing resources' cluster size, CPU, memory, networks) and an execution plan of tasks having a specified order in which virtual computing resources are provisioned and application components are installed, configured, and started.
- the example deployment plan 128 of FIG. 1 provides an IT administrator with a process-oriented view of the basic blueprint 126 that indicates discrete actions to be performed to deploy the application.
- Different deployment plans 128 may be generated from a single basic blueprint 126 to test prototypes (e.g., new application versions), to scale up and/or scale down deployments, and/or to deploy the application to different deployment environments 112 (e.g., testing, staging, production).
- the deployment plan 128 is separated and distributed as local deployment plans having a series of tasks to be executed by the VMs 114 provisioned from the deployment environment 112 .
- Each VM 114 coordinates execution of each task with a centralized deployment module (e.g., the deployment director 124 ) to ensure that tasks are executed in an order that complies with dependencies specified in the application blueprint 126 .
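- Deriving an execution order that complies with blueprint dependencies, as described above, amounts to a topological sort. The sketch below is a minimal illustration using Python's standard library, not the patent's actual plan format.

```python
from graphlib import TopologicalSorter

def execution_order(dependencies):
    """Order components so each is provisioned only after the components it
    depends on. `dependencies` maps each component to the set of components
    it requires (an illustrative model of the deployment plan's task order)."""
    return list(TopologicalSorter(dependencies).static_order())

# e.g., a load balancer cannot be configured until the web application is
# up, and the web application requires its database
order = execution_order({
    "load_balancer": {"web_app"},
    "web_app": {"database"},
    "database": set(),
})
```

Different deployment plans could be generated from the same dependency graph by varying which components are included or how many instances of each are requested.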
- the example deployment director 124 of FIG. 1 executes the deployment plan 128 by communicating with the cloud computing platform provider 110 via a cloud interface 132 to provision and configure the VMs 114 in the deployment environment 112 .
- the example cloud interface 132 of FIG. 1 provides a communication abstraction layer by which the application director 106 may communicate with a heterogeneous mixture of cloud providers 110 and deployment environments 112 .
- the deployment director 124 provides each VM 114 with a series of tasks specific to the receiving VM 114 (herein referred to as a “local deployment plan”). Tasks are executed by the VMs 114 to install, configure, and/or start one or more application components.
- a task may be a script that, when executed by a VM 114 , causes the VM 114 to retrieve and install particular software packages from a central package repository 134 .
- the example deployment director 124 coordinates with the VMs 114 to execute the tasks in an order that observes installation dependencies between VMs 114 according to deployment plan 128 .
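- The coordination between VMs and a centralized deployment module described above can be sketched as follows. This is a toy model under stated assumptions; the patent does not specify the deployment director 124 at this level of detail, and the class and method names are hypothetical.

```python
class DeploymentDirector:
    """Toy coordinator: a VM may run a task only after the tasks it depends
    on (possibly executing on other VMs) have been reported complete."""

    def __init__(self, dependencies):
        # maps a task to the set of prerequisite tasks it must wait for
        self.dependencies = dependencies
        self.completed = set()

    def can_run(self, task):
        """True when every prerequisite of `task` has completed."""
        return self.dependencies.get(task, set()) <= self.completed

    def report_done(self, task):
        """A VM reports that it finished executing `task`."""
        self.completed.add(task)

# e.g., the application server may not start until the database is installed
director = DeploymentDirector({"start_tomcat": {"install_mongodb"}})
blocked_before = not director.can_run("start_tomcat")
director.report_done("install_mongodb")
allowed_after = director.can_run("start_tomcat")
```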
- the application director 106 may be utilized to monitor and/or modify (e.g., scale) the deployment.
- the example cloud manager 138 of FIG. 1 interacts with the components of the system 100 (e.g., the application director 106 and the cloud provider 110 ) to facilitate the management of the resources of the cloud provider 110 .
- the example cloud manager 138 includes a blueprint manager 140 to facilitate the creation and management of multi-machine blueprints and a resource manager 144 to reclaim unused cloud resources.
- the cloud manager 138 may additionally include other components for managing a cloud environment.
- the example blueprint manager 140 of the illustrated example manages the creation of multi-machine blueprints that define the attributes of multiple virtual machines as a single container that can be provisioned, deployed, managed, etc. as a single unit.
- a multi-machine blueprint may include definitions for multiple basic blueprints that make up a service (e.g., an e-commerce provider that includes web servers, application servers, and database servers).
- a basic blueprint is a definition of policies (e.g., hardware policies, security policies, network policies, etc.) for a single machine (e.g., a single virtual machine such as a web server virtual machine). Accordingly, the blueprint manager 140 facilitates more efficient management of multiple virtual machines than manually managing (e.g., deploying) virtual machine basic blueprints individually. The management of multi-machine blueprints is described in further detail in conjunction with FIG. 2 .
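- The container relationship described above, in which a multi-machine blueprint groups basic blueprints so they can be managed as a single unit, can be sketched as follows. The class and its provisioning behavior are illustrative assumptions, not the patent's data model.

```python
class MultiMachineBlueprint:
    """Groups basic blueprints so they can be provisioned, deployed, and
    managed as a single unit (an illustrative model of the container
    described above)."""

    def __init__(self, name, basic_blueprints):
        self.name = name
        self.basic_blueprints = list(basic_blueprints)

    def provision(self, counts):
        """Provision `counts[bp]` machines for each member blueprint
        (defaulting to one machine per blueprint)."""
        return [f"{bp}-{i}" for bp in self.basic_blueprints
                for i in range(counts.get(bp, 1))]

# an e-commerce service composed of web, application, and database blueprints
store = MultiMachineBlueprint("e-commerce", ["web", "app", "db"])
machines = store.provision({"web": 2, "app": 1, "db": 1})
```

A single provisioning instruction against the container thus yields machines for every member blueprint, rather than requiring each basic blueprint to be deployed individually.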
- the example blueprint manager 140 of FIG. 1 additionally annotates basic blueprints and/or multi-machine blueprints to control how workflows associated with the basic blueprints and/or multi-machine blueprints are executed.
- a workflow is a series of actions and decisions to be executed in a virtual computing platform.
- the example system 100 includes first and second distributed execution manager(s) (DEM(s)) 146 A and 146 B to execute workflows.
- the first DEM 146 A includes a first set of characteristics and is physically located at a first location 148 A.
- the second DEM 146 B includes a second set of characteristics and is physically located at a second location 148 B.
- the location and characteristics of a DEM may make that DEM more suitable for performing certain workflows.
- a DEM may include hardware particularly suited for performance of certain tasks (e.g., high-end calculations), may be located in a desired area (e.g., for compliance with local laws that require certain operations to be physically performed within a country's boundaries), may specify a location or distance to other DEMs for selecting a nearby DEM (e.g., for reducing data transmission latency), etc.
- the example blueprint manager 140 annotates basic blueprints and/or multi-machine blueprints with skills that can be performed by a DEM that is labeled with the same skill.
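- One plausible reading of the skill-matching rule above is that a DEM may execute a workflow only when it carries every skill the workflow is annotated with. The sketch below encodes that reading; the predicate name and skill labels are hypothetical.

```python
def eligible(dem_skills, workflow_skills):
    """A DEM may execute a workflow when the DEM is tagged with every skill
    the workflow is annotated with (an assumed reading of the matching rule)."""
    return set(workflow_skills) <= set(dem_skills)

# e.g., a DEM with high-end compute hardware located in a required region
dem_a = {"gpu", "us-east"}
# a workflow that must be physically performed within that region
workflow = {"us-east"}
```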
- the resource manager 144 of the illustrated example facilitates recovery of cloud computing resources of the cloud provider 110 that are no longer being activity utilized.
- Automated reclamation may include identification, verification and/or reclamation of unused, underutilized, etc. resources to improve the efficiency of the running cloud infrastructure. Resource reclamation is described in further detail in conjunction with FIG. 4 .
- FIG. 2 illustrates the generation of a multi-machine blueprint by the example blueprint manager 140 of FIG. 1 .
- three example basic blueprints (a web server blueprint 202 , an application server blueprint 204 , and a database server blueprint 206 ) have been created (e.g., by the topology generator 120 ).
- the web server blueprint 202 , the application server blueprint 204 , and the database server blueprint 206 may define the components of an e-commerce online store.
- the example blueprint manager 140 provides a user interface for a user of the blueprint manager 140 (e.g., the administrator 116 , the developer 118 , etc.) to specify blueprints (e.g., basic blueprints and/or multi-machine blueprints) to be assigned to an instance of a multi-machine blueprint 208 .
- the user interface may include a list of previously generated basic blueprints (e.g., the web server blueprint 202 , the application server blueprint 204 , the database server blueprint 206 , etc.) to allow selection of desired blueprints.
- the blueprint manager 140 combines the selected blueprints into the definition of the multi-machine blueprint 208 and stores information about the blueprints in a multi-machine blueprint record defining the multi-machine blueprint 208 .
- the blueprint manager 140 may additionally include a user interface to specify other characteristics corresponding to the multi-machine blueprint 208 .
- a creator of the multi-machine blueprint 208 may specify a minimum and maximum number of each blueprint component of the multi-machine blueprint 208 that may be provisioned during provisioning of the multi-machine blueprint 208 .
- any number of virtual machines may be managed collectively.
- the multiple virtual machines corresponding to the multi-machine blueprint 208 may be provisioned based on an instruction to provision the multi-machine blueprint 208 , may be power cycled by an instruction, may be shut down by an instruction, may be booted by an instruction, etc.
- an instruction to provision the multi-machine blueprint 208 may result in the provisioning of a multi-machine service 210 that includes web server(s) 210 A, application server(s) 210 B, and database server 210 C.
- the number of machines provisioned for each blueprint may be specified during the provisioning of the multi-machine blueprint 208 (e.g., subject to the limits specified during creation or management of the multi-machine blueprint 208 ).
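- Enforcing the minimum and maximum counts specified at blueprint-creation time, as described above, might look like the following. The validation logic and field layout are assumptions for illustration.

```python
def validate_counts(limits, requested):
    """Check requested machine counts against the per-component minimum and
    maximum set when the multi-machine blueprint was created (hypothetical
    enforcement logic). Returns a list of violation messages, empty if valid."""
    errors = []
    for component, (lo, hi) in limits.items():
        n = requested.get(component, 0)
        if not lo <= n <= hi:
            errors.append(f"{component}: requested {n}, allowed {lo}-{hi}")
    return errors

# limits chosen when the multi-machine blueprint was created
limits = {"web": (1, 4), "db": (1, 1)}
ok = validate_counts(limits, {"web": 2, "db": 1})
bad = validate_counts(limits, {"web": 6, "db": 1})
```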
- the multi-machine blueprint 208 maintains the reference to the basic blueprints 202 , 204 , and 206 . Accordingly, changes made to the blueprints (e.g., by a manager of the blueprints different than the manager of the multi-machine blueprint 208 ) may be incorporated into future provisionings of the multi-machine blueprint 208 . Thus, an administrator maintaining the source blueprints (e.g., an administrator charged with managing the web server blueprint 202 ) may change or update the source blueprint and the changes may be propagated to the machines provisioned from the multi-machine blueprint 208 .
- for example, if a disk image referenced by a source blueprint is updated, the updated disk image is utilized when deploying the multi-machine blueprint 208 .
- the blueprints may specify that the machines 210 A, 210 B, and 210 C of the multi-machine service 210 provisioned from the multi-machine blueprint 208 operate in different environments.
- some components may be physical machines, some may be on-premise virtual machines, and some may be virtual machines at a cloud service.
- multi-machine blueprints may be generated to provide one or more varied or customized services. For example, if virtual machines deployed in the various States of the United States require different settings, a multi-machine blueprint could be generated for each state.
- the multi-machine blueprints could reference the same build profile and/or disk image, but may include different settings specific to each state.
- the deployment workflow may include an operation to set a locality setting of an operating system to identify a particular State in which a resource is physically located.
- a single disk image may be utilized for multiple multi-machine blueprints, reducing the amount of storage space for storing disk images compared with storing a disk image for each customized setting.
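- The per-state customization described above, where blueprints share a disk image but carry different locality settings, can be sketched as a simple derivation. Field names are illustrative assumptions.

```python
def state_blueprint(base, state):
    """Derive a per-state multi-machine blueprint that references the same
    disk image as the base but overrides locality settings (hypothetical
    sketch of the per-state customization described above)."""
    bp = dict(base)
    bp["settings"] = {**base.get("settings", {}), "locality": state}
    return bp

# one shared disk image, many state-specific blueprints derived from it
base = {"disk_image": "store-v1.img", "settings": {"tz": "UTC"}}
texas = state_blueprint(base, "TX")
```

Because every derived blueprint references `store-v1.img`, only one disk image needs to be stored regardless of how many state-specific blueprints exist.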
- FIG. 3 is a block diagram of an example implementation of the example blueprint manager 140 of FIG. 1 .
- the example blueprint manager 140 of FIG. 3 is structured to manage the execution of blueprint workflows (e.g., basic blueprint and/or multi-machine blueprint workflows) by distributed execution managers (e.g., DEMs 146 A and 146 B).
- the example blueprint manager 140 of FIG. 3 includes a user interface 302 , a workflow manager 304 , and a queue manager 308 .
- the user interface 302 of the illustrated example receives information from a user (e.g., the administrator 116 and/or the developer 118 ) indicating the assignment of skills to workflows and requests to execute workflows by DEMs.
- a skill is a characteristic, pre-requisite, capability, etc. of a DEM that makes it more suitable and/or desirable for executing workflows assigned the same skill.
- a skill may indicate any information that is to be matched between DEMs and workflows during execution of workflows by DEMs (e.g., a physical location, a geographical area, a computing hardware capability, a communication capability, an installed software component, etc.).
- DEMs may be tagged with skills during their initial configuration. Tagging the DEM with the skill indicates that the DEM is capable of executing workflows that are also tagged with the skill.
- the user interface 302 of the illustrated example passes information about skills assigned to workflows to the workflow manager 304 .
- the user interface 302 also receives requests to remove an assignment of skills and passes the removal to the workflow manager 304 .
- the example workflow manager 304 labels, tags, or otherwise assigns (or removes an assignment of) received workflow skills to an identified workflow.
- the workflow manager 304 may store an indication of the skills assignment in the repository 134 .
- the workflow manager 304 passes workflows that have been tagged or otherwise requested for execution to the queue manager 308 .
- the queue manager 308 of the illustrated example stores information about workflows that are awaiting execution and provides the information to DEMs that are ready to execute a workflow. For example, as a DEM has availability to execute a workflow, the DEM contacts the blueprint manager 140 and requests information about available workflows. The DEM of the illustrated example also provides information about skills that have previously been assigned to the workflow. The example queue manager 308 of FIG. 3 retrieves workflows that are awaiting execution and provides a list of workflows to the requesting DEM. The list of workflows may be sorted based on the skills assigned to the workflow and the skills assigned to the DEM, so that the DEM may choose to execute a workflow that is most closely matched with the skills of the DEM.
- the workflow with the most matching skills may be first in the list. Accordingly, the workflows may be executed by the first available DEM that is most capable of executing the workflow. Because the DEMs of the illustrated example contact the example blueprint manager 140 when they are available for executing workflows, a dispatcher may not be needed and the DEMs may be kept busy without human intervention. Alternatively, workflows could be dispatched or assigned to available DEMs by the queue manager 308 . In another alternative, rather than providing a list of workflows, the queue manager 308 could provide a single workflow that has been selected as most desirable (e.g., based on matching skills) for execution by a requesting DEM.
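- The sorting behavior described above, where the workflow sharing the most skills with the requesting DEM appears first, can be sketched as follows. The tie-breaking order is unspecified in the text, and the record layout is an assumption.

```python
def sorted_workflows(pending, dem_skills):
    """Return pending workflows ordered so the workflow sharing the most
    skills with the requesting DEM comes first (a sketch of the queue
    manager's sorting described above)."""
    return sorted(pending,
                  key=lambda wf: len(set(wf["skills"]) & set(dem_skills)),
                  reverse=True)

pending = [
    {"name": "wf-generic", "skills": []},
    {"name": "wf-local", "skills": ["us-east", "gpu"]},
]
# a DEM located in us-east with high-end compute hardware
ranked = sorted_workflows(pending, {"us-east", "gpu"})
```

Because the DEM pulls from this ranked list when it has availability, no separate dispatcher is needed: the first available, most capable DEM claims the best-matched workflow.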
- FIG. 4 is a block diagram of an example implementation of the example resource manager 144 of FIG. 1 .
- the example resource manager 144 of FIG. 4 includes a resource reclaimer 402 , a notifier 404 , a user interface 406 , and an archiver 408 .
- the resource reclaimer 402 of the illustrated example identifies potentially inactive, unused, underused, etc. resources by comparing an activity time to a threshold. For example, the resource reclaimer 402 may identify inactive resources by reviewing logs indicating the last time that a virtual machine was powered on, the last time that a virtual machine was remotely accessed, the amount of system resources consumed, etc. The information from the logs is analyzed (e.g., by comparing the information to a threshold) to determine if the virtual machine appears to be inactive and/or provisioned with excess resources (e.g., where four virtual machines are provisioned but only three are utilized).
- When the resource reclaimer 402 determines that a virtual machine may be inactive, the resource reclaimer 402 communicates the information to the notifier 404 . Additionally or alternatively, the resource reclaimer 402 removes the virtual machine to free the computing resources currently assigned to the virtual machine (e.g., after a number of notifications have been sent and/or a user has confirmed that the virtual machine is no longer needed).
- In some examples, computing resources are assigned to virtual machines that are backups of active virtual machines.
- Such ghost machines are replicas of active/live virtual machines. The ghost machines may be made live if the active/live virtual machine terminates unexpectedly or for other reasons. Because the ghost machines are typically not in use, they might appear as unused resources that should be reclaimed to be used by other cloud customers. However, this may not be desirable where the ghost resources are utilized as backups to be activated when needed.
- the example resource reclaimer 402 detects tags associated with the backup virtual machines that indicate that the backup virtual machines should not be identified as inactive virtual machines. When a virtual machine is detected as a backup virtual machine, the machine is not identified as potentially inactive.
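The inactivity test and the ghost-machine exemption described above might look like the following sketch; the per-VM dictionary fields (`last_activity`, `tags`) and the helper name are assumptions for illustration, not the actual data model.

```python
import time

def is_reclaim_candidate(vm, inactivity_threshold_secs, now=None):
    """Flag a virtual machine as potentially inactive by comparing its
    last recorded activity (power-on, remote access, etc.) against a
    threshold, while exempting backup ("ghost") machines."""
    now = time.time() if now is None else now
    if "backup" in vm.get("tags", ()):
        # Ghost/backup machines are kept for failover and must not be
        # identified as inactive even though they see little activity.
        return False
    return (now - vm["last_activity"]) > inactivity_threshold_secs
```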
- the notifier 404 of the illustrated example notifies an owner of a virtual machine when the resource reclaimer 402 determines that the virtual machine is inactive.
- the notifier 404 may send an email to the identified owner of the virtual machine.
- any other communication may be sent to the owner and/or the inactive machine may be identified on a list without sending a separate communication to the owner.
- the message may include information such as Machine Name, Virtual Machine Status, Reclamation Requestor, Machine Owner, Request Date, Reason for Reclamation Request, Daily Cost, Deadline to respond, etc.
- the message may include a link or other user interface element that allows the user to indicate whether or not the identified virtual machine should remain in use.
- Parties other than the owner of the resource may also be notified (e.g., a group owner, a manager of the owner, a system administrator, etc.).
- the user interface 406 receives instructions from and conveys information to a user (e.g., the administrator 116 and/or the developer 118 ) of the resource manager 144 .
- the user interface 406 may provide an interface by which a user is to request that reclamation processing be performed (e.g., inactive and/or underused virtual machine resources should be identified).
- the user interface 406 may also display a status of a reclamation process including virtual machines identified as potentially inactive.
- the example user interface 406 additionally provides an interface for a user to configure options associated with the reclamation.
- a user may configure the amount of time between successive notifications to the virtual machine owner, the amount of time allowed for an owner to respond before reclaiming resources, the amount of inactivity that will trigger identification of a virtual machine as potentially inactive and/or under-utilized, whether or not virtual machines that are inactivated are archived and for how long, etc.
- the user interface 406 prompts a user with a list of potentially inactive virtual machines and requests that the user select the virtual machines for which the owner should be notified.
- the archiver 408 of the illustrated example archives virtual machines that are reclaimed according to policies configured for the resource manager 144 and/or the virtual machine to be reclaimed (e.g., policies set in a multi-machine blueprint for the virtual machine). Archiving reclaimed virtual machines facilitates the recovery of virtual machines that may later be determined to be active and/or for which the contents are still desired.
- the archiver 408 of the illustrated example stores a log of reclamation operations.
- the log message may contain the following information: Action Date, Machine Name, Machine Owner, Action, User initiating action, Description, Prior Status of Reclamation Request, etc.
- While an example manner of implementing the cloud manager 138, the blueprint manager 140, and the resource manager 144 is illustrated in FIGS. 1-4, one or more of the elements, processes and/or devices illustrated in FIGS. 1-4 may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. Further, the example user interface 302, the example workflow manager 304, and the example queue manager 308 of FIG. 3 and/or, more generally, the example blueprint manager 140, and the example resource reclaimer 402, the example notifier 404, the example user interface 406, and the example archiver 408 of FIG. 4 and/or, more generally, the example resource manager 144 may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware.
- Thus, for example, any of these example elements could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)). None of the apparatus or system claims of this patent are to be construed to cover a purely software and/or firmware implementation. Rather, at least one of the example user interface 302, the example workflow manager 304, the example queue manager 308, the example resource reclaimer 402, the example notifier 404, the example user interface 406, and/or the example archiver 408 is/are hereby expressly defined to include a tangible computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. storing the software and/or firmware to preclude interpreting any claim of this patent as purely software.
- Further still, the example cloud manager 138 , the example blueprint manager 140 , and/or the example resource manager 144 of FIG. 1 may include one or more elements, processes and/or devices in addition to, or instead of, those illustrated in FIGS. 1-4 , and/or may include more than one of any or all of the illustrated elements, processes and devices.
- FIGS. 5-9 Flowcharts representative of example machine readable instructions for implementing the cloud manager 138 , the blueprint manager 140 , and/or the resource manager 144 of FIGS. 1-4 are shown in FIGS. 5-9 .
- the machine readable instructions comprise a program for execution by a processor such as the processor 2112 shown in the example processor platform 2100 discussed below in connection with FIG. 21 .
- the program may be embodied in software stored on a tangible computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a digital versatile disk (DVD), a Blu-ray disk, or a memory associated with the processor 2112 , but the entire program and/or parts thereof could alternatively be executed by a device other than the processor 2112 and/or embodied in firmware or dedicated hardware.
- As mentioned above, the example processes of FIGS. 5-9 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a tangible computer readable storage medium such as a hard disk drive, a flash memory, a read-only memory (ROM), a compact disk (CD), a digital versatile disk (DVD), a cache, a random-access memory (RAM) and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information).
- As used herein, the terms “tangible computer readable storage medium” and “tangible machine readable storage medium” are used interchangeably. Additionally or alternatively, the example processes of FIGS. 5-9 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information).
- As used herein, the term “non-transitory computer readable medium” is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media.
- As used herein, when the phrase “at least” is used as the transition term in a preamble of a claim, it is open-ended in the same manner as the term “comprising” is open ended.
- The example program of FIG. 5 begins at block 502 when the blueprint manager 140 of the cloud manager 138 of FIG. 1 receives an instruction to create a multi-machine blueprint (e.g., the multi-machine blueprint 208 of FIG. 2 ).
- the blueprint manager 140 displays a list of available blueprints (block 504 ).
- For example, the blueprint manager 140 may display a list including the web server blueprint 202 , the application server blueprint 204 , the database server blueprint 206 , and other available blueprints.
- the blueprint manager 140 receives an identification of one or more blueprints selected for inclusion in the multi-machine blueprint (block 506 ).
- the blueprint manager 140 then generates and stores the definition for the multi-machine blueprint that references the selected blueprints in a repository (e.g., the repository 134 ) (block 508 ).
- the program of FIG. 5 then ends.
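Blocks 502-508 amount to storing a definition that references the selected basic blueprints. A minimal sketch, assuming a plain dictionary stands in for the repository 134 and blueprints are identified by name:

```python
def create_multi_machine_blueprint(name, selected_blueprints, repository):
    """Generate a multi-machine blueprint definition that references
    (rather than copies) the selected basic blueprints, and store it."""
    blueprint = {"name": name, "blueprints": list(selected_blueprints)}
    repository[name] = blueprint  # block 508: persist the definition
    return blueprint
```

Storing references rather than copies mirrors block 508, which generates a definition "that references the selected blueprints."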
- the program of FIG. 6 begins at block 602 when the blueprint manager 140 receives an instruction to provision a multi-machine blueprint (e.g., the multi-machine blueprint 208 of FIG. 2 ).
- the instruction may, alternatively, be any other instruction associated with the multi-machine blueprint 208 (e.g., power on, reboot, shutdown, etc.).
- a single user instruction may cause an action to be performed for all of the machines covered by the multi-machine blueprint (e.g., rather than separate user instructions for each machine or basic blueprint).
- the blueprint manager 140 receives an indication of quantities of machines to be provisioned (e.g., via a user interface provided by the blueprint manager 140 ) (block 604 ).
- the blueprint manager 140 then retrieves the first blueprint definition included in the multi-machine blueprint 208 (block 606 ).
- the multi-machine blueprint 208 may include an indication of the order in which the blueprints of the multi-machine blueprint 208 are to be provisioned.
- the blueprint manager 140 then provisions the selected blueprint with a specified number of machines according to the blueprint definition (block 608 ). For example, according to the example of FIG. 2 , the blueprint manager 140 provisions four web servers 210 A based on a specification of four machines in the multi-machine blueprint 208 and based on the web server blueprint 202 .
- the blueprint manager 140 determines if there are additional blueprints to be provisioned (or another action instructed) (block 610 ). When there are additional blueprints to be provisioned, the blueprint manager 140 selects the next blueprint (block 612 ) and control returns to block 608 to provision the next blueprint. When there are no additional blueprints to be provisioned, the program of FIG. 6 ends.
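The provisioning loop of blocks 606-612 can be sketched as follows; `provision_one` is a hypothetical callback standing in for the actual provisioning of a single machine from a basic blueprint.

```python
def provision_multi_machine(mm_blueprint, quantities, provision_one):
    """Blocks 606-612: walk the referenced blueprints in their stated
    order, provisioning the requested quantity of machines for each."""
    machines = []
    for bp_name in mm_blueprint["blueprints"]:       # blocks 606/612
        for index in range(quantities.get(bp_name, 1)):  # block 608
            machines.append(provision_one(bp_name, index))
    return machines
```

With the FIG. 2 example, a quantity of four for the web server blueprint would yield four web servers before the loop moves on to the next referenced blueprint.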
- one or more of the provisioned machines may be encapsulated in an application collection (e.g., a VMWare vApp).
- For example, the machines of the web servers 210 A may be collected into a web servers vApp, the machines of the application servers 210 B may be collected into an application servers vApp, and the database servers 210 C may be collected into a database servers vApp.
- the multiple collections associated with the multi-machine blueprint may be collected into a multi-machine collection (e.g., a multi-machine vApp).
- a multi-machine vApp may be generated based on the web servers vApp, the application servers vApp, and the database servers vApp.
- the multi-machine vApp may then be added to a catalog to allow administrators to deploy the multiple machines from a catalog (e.g., a vApp catalog).
- the collection may be migrated to a different computing type (e.g., from a physical computing type to a cloud environment).
- individual components may be migrated or all components may be migrated.
- the abstraction provided by multi-machine blueprints enables components of a multi-machine blueprint to be provisioned on different types of computing resources (e.g., physical resources, virtual resources, cloud resources, etc.).
- the web servers 210 A may be provisioned on physical computing resources while the application servers 210 B are provisioned in a first cloud service and the database servers 210 C are provisioned in a second cloud service.
- the components of the multi-machine blueprint may be provisioned on different resources at different times. For example, during testing, the components of a multi-machine blueprint may be provisioned on virtual computing resources and, when testing is completed, a production system may be provisioned on physical computer resources.
- the blueprint manager 140 may monitor the provisioned systems to check for compliance with the configuration of the multi-machine blueprint. For example, the blueprint manager 140 may periodically or aperiodically monitor the provisioned systems for changes. When a change is detected, the blueprint manager 140 may automatically revert the change, provide a notification, etc. For example, when the multi-machine blueprint is provisioned utilizing the vCAC, a user may accidently, maliciously, etc. make changes via vCenter (e.g., changes to applications, changes to network configurations, etc.). The blueprint manager 140 may periodically review the provisioned systems to determine if they match the multi-machine blueprint configuration and revert the configuration when a difference is detected (e.g., when the network configuration has been modified outside of the multi-machine blueprint configuration).
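The compliance monitoring described above is, at its core, a compare-and-revert loop. The sketch below assumes flat key/value configurations and a `revert` callback; the blueprint manager's real configuration model is certainly richer.

```python
def enforce_blueprint(blueprint_config, live_config, revert):
    """Compare the live configuration of a provisioned system against
    the multi-machine blueprint and revert any drifted settings."""
    drifted = {}
    for key, expected in blueprint_config.items():
        if live_config.get(key) != expected:
            # A change was made outside the blueprint (e.g., via
            # vCenter); restore the blueprint's value.
            drifted[key] = expected
            revert(key, expected)
    return drifted
```

Instead of calling `revert`, the same loop could merely report the drifted keys, matching the alternative of providing a notification rather than reverting.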
- FIG. 7 is a flowchart of an example program to assign skills to workflows and DEMs.
- the program of FIG. 7 begins at block 702 when the user interface 302 receives an identification of a skill to be assigned to a workflow.
- the skill may be a location, characteristic, specification, requirement, etc. that may be specified by selection from a list of skills, typed input of the name of the skill, etc.
- the skill may be entered by clicking an “Add Skill” button displayed on the user interface 302 .
- the user interface 302 sends the skill to the workflow manager 304 .
- the workflow manager 304 tags the appropriate workflow with the skill (block 706 ).
- Tagging the workflow may be performed by storing an association of the skill with the workflow in a database (e.g., the repository 134 ). Tagging the workflow with the skill indicates that the workflow is to be performed by a DEM that is also tagged with the skill.
- the queue manager 308 then adds the workflow to a queue for execution by an available DEM (block 708 ).
- FIG. 8 is a flowchart of an example program to distribute workflows to DEMs for execution.
- The example program of FIG. 8 begins at block 802 when the queue manager 308 receives a request from a DEM for an available workflow (e.g., a workflow that is ready for execution).
- the queue manager 308 determines if the DEM is tagged with a skill (block 804 ).
- the queue manager 308 retrieves workflows that have been tagged with the skill(s) tagged to the DEM (block 806 ).
- the queue manager 308 transmits a list of the retrieved workflows to the DEM (block 808 ).
- the queue manager 308 transmits a list of workflows that are not tagged with skills to the DEM (block 810 ). While the foregoing example transmits only workflows with matching skills (or no skills) to the requesting DEM, other arrangements may be utilized. For example, a list of all available workflows ordered or ranked by the matching skills may be transmitted to the DEM, a single workflow that has been matched to the DEM based on the skills may be transmitted to the DEM, etc.
- workflows having a mandatory skill may be included in a list of available workflows sent to DEMs matching the mandatory skill and may not be included in a list of available workflows sent to DEMs that do not match the mandatory skill.
- workflows having skills identified as desirable but not mandatory may be included in a list of available workflows sent to DEMs that do not match the desirable skill.
- the list of available workflows may be ranked based on the desirable skill to increase the chances that a DEM having the matching skills will select the workflow for execution.
- The queue manager 308 receives an identification of the workflow selected for execution by the requesting DEM (block 812 ). The queue manager 308 then removes the workflow from the queue to ensure that the workflow is not selected for execution by another DEM (block 814 ).
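Blocks 804-814 reduce to two small operations on the queue: offering a skill-filtered list and then removing whatever the DEM claims. The function names and workflow records below are illustrative assumptions, not the actual queue manager API.

```python
def offer_workflows(queue, dem_skills):
    """Blocks 804-810: a DEM tagged with skills is offered workflows
    sharing at least one skill; an untagged DEM is offered only
    workflows that carry no skill requirements."""
    dem_skills = set(dem_skills)
    if dem_skills:
        return [wf for wf in queue if set(wf["skills"]) & dem_skills]
    return [wf for wf in queue if not wf["skills"]]

def claim_workflow(queue, workflow):
    """Blocks 812-814: remove the selected workflow from the queue so
    it is not selected for execution by another DEM."""
    queue.remove(workflow)
    return workflow
```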
- FIG. 9 is a flowchart of an example program to reclaim virtual machine computing resources from inactive virtual machines.
- the example program of FIG. 9 begins when the user interface 406 receives an instruction to perform a reclamation (block 902 ).
- the reclamation may be a workflow for which execution is requested.
- the resource reclaimer 402 selects a first virtual machine in a provisioned pool of virtual machines (block 904 ).
- the example resource reclaimer 402 determines if characteristics associated with the virtual machine indicate that the virtual machine may be inactive (block 906 ). For example, the resource reclaimer 402 may determine if an action (e.g., power on, reboot, perform operation) has not been performed within a threshold period of time. When the characteristics do not meet the threshold, control proceeds to block 916 , which is described below.
- the notifier 404 determines if a notification has already been sent to the owner of the virtual machine (block 908 ). When a notification has not been sent, the notifier 404 sends a communication to the owner of the virtual machine indicating that the virtual machine is suspected of being inactive and requesting that the owner take action to maintain the virtual machine (block 918 ).
- the notifier 404 determines if a notification period has expired (block 910 ).
- For example, a user (e.g., the user requesting the reclamation) may set parameters indicating the amount of time that the system should wait following notification before determining that no response will be received and de-provisioning the virtual machine computing resources.
- the resource reclaimer 402 reclaims the computing resources assigned to the inactive virtual machine by de-provisioning or uninstalling the inactive virtual machine (block 912 ). For example, resource reclaimer 402 may return the computing resources to a pool of resources available to other existing and new virtual machines (e.g., virtual machines in a cloud).
- the archiver 408 archives the inactive virtual machine in case the owner of the virtual machine or another party determines that the information contained in the virtual machine is wanted (block 914 ).
- the archiving may be performed according to archiving policies identified in a blueprint associated with the virtual machine, according to instructions from a user received via the user interface 406 , and/or according to a policy for the resource manager 144 . Control then proceeds to block 916 .
- The resource reclaimer 402 determines if there are additional virtual machines to be checked for inactivity (block 916 ). When there are additional virtual machines, the next virtual machine is selected and control returns to block 906 to analyze the next virtual machine for inactivity. When there are no additional virtual machines, the program of FIG. 9 ends.
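One pass of the FIG. 9 flow might be sketched as below. The callbacks stand in for the notifier 404, resource reclaimer 402, and archiver 408; the `notified` flag is an illustrative device for blocks 908/918, not the actual bookkeeping.

```python
def reclaim_pass(vms, is_inactive, notify, period_expired,
                 deprovision, archive):
    """One sweep over the provisioned pool (blocks 904-918)."""
    for vm in vms:
        if not is_inactive(vm):       # block 906: characteristics OK
            continue
        if not vm.get("notified"):
            notify(vm)                # block 918: warn the owner first
            vm["notified"] = True
        elif period_expired(vm):      # block 910: owner never replied
            deprovision(vm)           # block 912: reclaim resources
            archive(vm)               # block 914: keep a recoverable copy
```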
- FIGS. 10-12 illustrate example graphical user interfaces that may be provided by the cloud manager 138 to facilitate creation of a multi-machine blueprint.
- An example graphical user interface 1000 illustrated in FIG. 10 includes a user input 1002 for requesting addition of a blueprint to a new multi-machine blueprint. For example, when the user input 1002 is selected, a listing of available blueprints in a catalog may be displayed and a user (e.g., an administrator) may select blueprint(s) for addition to the multi-machine blueprint.
- the example graphical user interface 1000 includes a listing 1004 of the blueprints that have been added to the multi-machine blueprint being generated.
- the listing 1004 additionally includes user interface elements 1006 for allowing a user to specify configuration parameters for each of the added blueprints.
- a user may specify a component name, a minimum number of machines, a maximum number of machines, a startup ordering, and/or a shutdown ordering.
- a user selects an OK button 1008 to proceed to the example user interface 1100 of FIG. 11 .
- the example user interface 1100 includes user interface elements 1102 to allow a user to specify provisioning processing scripts to be performed during provisioning, user interface elements 1104 to allow a user to specify startup processing scripts to be performed upon startup of the multi-machines, and user interface elements 1106 to allow a user to specify shutdown processing scripts to be performed upon shutdown of the multi-machines.
- a user selects an OK button 1108 to proceed to the example user interface 1200 of FIG. 12 .
- the example user interface 1200 includes user interface elements 1202 to allow a user to specify security settings for the multi-machine blueprint that is being generated. While example security settings are illustrated, any number or type(s) of security settings may be provided. According to the illustrated example, after specifying the security settings, a user selects an OK button 1208 to cause the multi-machine blueprint generation to be completed. For example, in response to selection of the OK button 1208 , the multi-machine blueprint may be generated and stored in a catalog to allow a user to select to provision the multi-machine blueprint.
- FIGS. 13-17 illustrate example graphical user interfaces that may be provided by the cloud manager 138 to facilitate provisioning and configuration of a provisioned multi-machine blueprint.
- An example graphical user interface 1300 illustrated in FIG. 13 provides a listing of available resources (including multi-machine blueprints) that may be provisioned. After selecting a resource that is a multi-machine blueprint, the cloud manager 138 displays the user interface 1400 of FIG. 14 to allow configuration of the provisioning.
- the example illustrated in FIG. 14 includes the same components as the multi-machine blueprint 208 illustrated in FIG. 2 .
- the user interface 1400 includes user interface elements 1402 , 1404 , and 1406 for specifying the settings to be used in provisioning the components of the multi-machine blueprint.
- user interface elements 1402 , 1404 , and 1406 allow a user to specify a number of machines to be provisioned, a number of CPUs to be included, an amount of memory to be included, and an amount of storage to be included in each of the components of the multi-machine blueprint.
- the available options may be controlled by the administrator that built the multi-machine blueprint (e.g., by specifying the minimum and maximum number of machines with the user interface elements 1006 ).
- a user selects a NEXT button 1408 to proceed to the example user interface 1500 of FIG. 15 .
- the example user interface 1500 displays a confirmation of the selections made by the user prior to the user selecting a FINISH button 1502 to provision the machines based on the settings for the components of the multi-machine blueprint.
- FIG. 16 illustrates an example graphical user interface 1600 that may be provided by the cloud manager 138 to facilitate configuration of a provisioned multi-machine blueprint.
- The graphical user interface 1600 displays a list of provisioned virtual machines, including machines provisioned from a multi-machine blueprint.
- a user may select a particular provisioned multi-machine blueprint (e.g., MMS2608-33 in the illustrated example that is provisioned from the multi-machine blueprint 208 of FIG. 2 ) and perform operations provided in an operation menu 1602 .
- a user may select to edit the virtual machine, add additional machines, power cycle the virtual machines, reboot the virtual machines, change the terms of a lease, delete/destroy the virtual machines, power off the virtual machines, shutdown the virtual machines, etc.
- A single operation request from the operation menu 1602 (e.g., a selection of the shutdown command) is applied to all of the machines in a service.
- a user may select shutdown from the operations menu 1602 (e.g., with a single (i.e., one) selection) and all corresponding virtual machines referenced by the multi-machine blueprint will be shut down without the user specifying a separate shutdown command for each virtual machine provisioned from the multi-machine blueprint.
- While example operations are identified in the example operation menu 1602 , any other operations may be included.
- the operation menu 1602 may include an operation to perform a backup that, when selected, may cause all of the multiple machines provisioned from the multi-machine blueprint to be backed up.
- the operation menu 1602 may additionally include a network configuration action that enables a user to reconfigure the network operations, change load balancer settings, etc.
- the operation menu 1602 may also include user defined operations (e.g., scripts, tasks, etc.) created by a user for performing operations on the machines provisioned from the multi-machine blueprint.
- the cloud manager 138 displays the example user interface 1700 of FIG. 17 .
- The example user interface 1700 provides user interface elements to specify a desired number of additional machines to be added to the provisioned virtual machines. In examples where a maximum number of allowable machines has been specified, the user interface 1700 may restrict the number of additional machines added to remain within the specified limits (e.g., by not allowing selection of a number of machines that would exceed the maximum, by displaying an error message when too many machines are selected, etc.).
- Adding or removing machines from the provisioned multi-machine blueprint allows for scaling up and/or down of the systems.
- the system configurations are updated. For example, a new web server may be brought online by provisioning the virtual hardware for the web server, configuring the network settings for the new web server, and adding the network information to a load balancer for adding the new web server to a pool of web servers that may be utilized in the application.
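The scale-up sequence in the paragraph above (provision the virtual hardware, configure networking, register with the load balancer) can be sketched as three steps; every name here is a hypothetical stand-in for the actual provisioning and load balancer interfaces.

```python
def scale_up_web_tier(provision, configure_network, lb_pool):
    """Bring one additional web server online and make it eligible to
    receive traffic from the load balancer."""
    server = provision("web server")      # provision virtual hardware
    address = configure_network(server)   # assign network settings
    lb_pool.append(address)               # add to the load balancer pool
    return server
```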
- FIG. 18 illustrates an example graphical user interface 1800 that may be provided by the cloud manager 138 to configure network information for a multi-machine blueprint.
- the multi-machine blueprint includes settings for an internal network (Internal Application Network) and a public network (NAT to Shared Network Template). Assigning network information to a multi-machine blueprint facilitates management of the network configuration for the machines provisioned from the multi-machine blueprint without the need to configure each machine individually. Accordingly, when machines are provisioned, the networks are provisioned and the provisioned machines can communicate with the other provisioned machines, load balancers, etc. In addition, load balancer information may be configured with the network information.
- FIG. 19 illustrates an example graphical user interface 1900 that may be provided by the cloud manager 138 to configure load balancer settings for a particular network configuration (e.g., the NAT to Shared Network Template of FIG. 18 ).
- a load balancer configured for the network enables changes to provisioned machines (e.g., after provisioning a multi-machine blueprint, after adding components, after removing components, etc.) to be managed by the load balancer. For example, after a new component is added to an application, the new component may be utilized as work is distributed by the load balancer (e.g., web requests may be handled by a newly added web server virtual machine as they are distributed by the load balancer).
- FIG. 20 illustrates an example graphical user interface 2000 that may be provided by the cloud manager 138 to configure network information for reservations for a cloud infrastructure.
- Reservations provide a means for dividing the resources of the cloud among different groups of users.
- cloud resources may be divided between a development group, a testing group, and a production group.
- For example, computing resources (e.g., processor time, memory, storage space, network resources, etc.) may be divided among the groups: the development group could be allocated 15% of the resources, the testing group could be allocated 10% of the resources, and the production group could be allocated 75% of the resources.
- multiple network paths may be created and allocated among the groups.
- a first network path may be shared between the development group and the testing group while a second network path is exclusively used by the production group and not available to the development group or the testing group to ensure the integrity of the production system.
- Reservations record the allocation of resources as set by an administrator of the infrastructure.
- The example user interface 2000 allows a user to input network resources that may be utilized by the group for which the reservation is assigned. For example, if the reservation is for the development group and a member of the development group selects to provision a particular multi-machine blueprint, the machines of the multi-machine blueprint will be allowed to utilize the Share Network Application network and, for example, will not be allowed to utilize the Share App Tier network.
- The reservations may override a blueprint where the configurations conflict and may supplement the blueprint where the blueprint does not have a configuration value that is included in the reservation. For example, if a multi-machine blueprint requests a particular network that is not allowed by the reservation, the reservation will override and cause the provisioned machines to utilize an allowed network.
- In another example, the multi-machine blueprint might specify a network that is not available in the system on which the basic blueprints of the multi-machine blueprint are to be provisioned.
- Reservations may override and/or supplement settings other than the network settings.
- For example, a multi-machine blueprint may be generated with a default set of policies (e.g., a database storage policy that does not include encryption of credit card numbers).
- The same multi-machine blueprint may be provisioned in multiple localities (e.g., to avoid the need for developing a separate multi-machine blueprint for each locality).
- Reservations associated with systems at each of the localities may include settings related to governmental policies at the localities (e.g., a policy that requires that credit card information be encrypted before storage in a database).
- In such an example, the credit card encryption policy overrides the default policy of the multi-machine blueprint so that systems provisioned from the multi-machine blueprint in the locality will comply with the local laws. Accordingly, a single multi-machine blueprint could be created and deployed to multiple environments that include overriding or supplemental configurations.
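The override-and-supplement behavior of reservations can be sketched as a settings merge in which the reservation wins on conflicts and fills in values the blueprint does not define. The setting keys below are hypothetical illustrations, not names from the patent:

```python
def apply_reservation(blueprint_settings, reservation_settings):
    """Merge settings: the reservation overrides conflicting values and
    supplements values the blueprint does not define."""
    effective = dict(blueprint_settings)
    effective.update(reservation_settings)  # reservation wins on conflict
    return effective

blueprint = {"network": "Share App Tier",  # requested, but not allowed
             "cpu_count": 2}
reservation = {"network": "Share Network Application",  # override
               "encrypt_credit_cards": True}            # supplement
effective = apply_reservation(blueprint, reservation)
```

The blueprint's non-conflicting values (here, `cpu_count`) pass through unchanged, which is what lets one blueprint deploy to multiple environments.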
- FIG. 21 is a block diagram of an example processor platform 2100 capable of executing the instructions of FIGS. 5-9 to implement the cloud manager 138 of FIGS. 1-4 .
- the processor platform 2100 can be, for example, a server or any other type of computing device.
- the processor platform 2100 of the illustrated example includes a processor 2112 .
- the processor 2112 of the illustrated example is hardware.
- the processor 2112 can be implemented by one or more integrated circuits, logic circuits, microprocessors or controllers from any desired family or manufacturer.
- the processor 2112 of the illustrated example includes a local memory 2113 (e.g., a cache).
- the processor 2112 of the illustrated example is in communication with a main memory including a volatile memory 2114 and a non-volatile memory 2116 via a bus 2118 .
- the volatile memory 2114 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM) and/or any other type of random access memory device.
- the non-volatile memory 2116 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 2114 , 2116 is controlled by a memory controller.
- the processor platform 2100 of the illustrated example also includes an interface circuit 2120 .
- the interface circuit 2120 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), and/or a PCI express interface.
- one or more input devices 2122 are connected to the interface circuit 2120 .
- the input device(s) 2122 permit(s) a user to enter data and commands into the processor 2112 .
- the input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system.
- One or more output devices 2124 are also connected to the interface circuit 2120 of the illustrated example.
- the output devices 2124 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display, a cathode ray tube display (CRT), a touchscreen, a tactile output device, a printer and/or speakers).
- The interface circuit 2120 of the illustrated example thus typically includes a graphics driver card, a graphics driver chip, or a graphics driver processor.
- the interface circuit 2120 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem and/or network interface card to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 2126 (e.g., an Ethernet connection, a digital subscriber line (DSL), a telephone line, coaxial cable, a cellular telephone system, etc.).
- the processor platform 2100 of the illustrated example also includes one or more mass storage devices 2128 for storing software and/or data.
- Examples of mass storage devices 2128 include floppy disk drives, hard disk drives, compact disk drives, Blu-ray disk drives, RAID systems, and digital versatile disk (DVD) drives.
- the coded instructions 2132 of FIGS. 5-9 may be stored in the mass storage device 2128 , in the volatile memory 2114 , in the non-volatile memory 2116 , and/or on a removable tangible computer readable storage medium such as a CD or DVD.
- any other type of user interface and/or control may be provided (e.g., a command line interface, text based interface, slider, text box, etc.). Additionally or alternatively, any of the methods and apparatus described herein may be accessed programmatically (e.g., using an API of the cloud manager 138 (e.g., a vCAC API)) by another program or device.
Abstract
Methods and apparatus to manage execution of virtual machine workflows are described. An example method includes determining that an execution manager that has requested a first workflow for execution is tagged with a skill; selecting, from a queue, a virtual machine workflow that is tagged with the skill and that matches the requested first workflow; and transmitting the virtual machine workflow to the execution manager for execution.
Description
- This patent claims the benefit of U.S. Provisional Patent Application Ser. No. 61/736,422, filed on Dec. 12, 2012, entitled “METHODS AND APPARATUS FOR VIRTUALIZED COMPUTING” and U.S. Provisional Application Ser. No. 61/828,613, filed on May 29, 2013, entitled “METHODS AND APPARATUS FOR VIRTUALIZED COMPUTING.” Both of U.S. Provisional Patent Application Ser. No. 61/736,422 and U.S. Provisional Application Ser. No. 61/828,613 are hereby incorporated herein by reference in their entirety.
- This disclosure relates generally to virtual computing, and, more particularly, to methods and apparatus to manage virtual machines.
- Virtualizing computer systems provides benefits such as the ability to execute multiple computer systems on a single hardware computer, replicating computer systems, moving computer systems among multiple hardware computers, and so forth. Example systems for virtualizing computer systems are described in U.S. patent application Ser. No. 11/903,374, entitled “METHOD AND SYSTEM FOR MANAGING VIRTUAL AND REAL MACHINES,” filed Sep. 21, 2007, and granted as U.S. Pat. No. 8,171,485, U.S. Provisional Patent Application No. 60/919,965, entitled “METHOD AND SYSTEM FOR MANAGING VIRTUAL AND REAL MACHINES,” filed Mar. 26, 2007, and U.S. Provisional Patent Application No. 61/736,422, entitled “METHODS AND APPARATUS FOR VIRTUALIZED COMPUTING,” filed Dec. 12, 2012, all three of which are hereby incorporated herein by reference in their entirety.
- “Infrastructure-as-a-Service” (also commonly referred to as “IaaS”) generally describes a suite of technologies provided by a service provider as an integrated solution to allow for elastic creation of a virtualized, networked, and pooled computing platform (sometimes referred to as a “cloud computing platform”). Enterprises may use IaaS as a business-internal organizational cloud computing platform (sometimes referred to as a “private cloud”) that gives an application developer access to infrastructure resources, such as virtualized servers, storage, and networking resources. By providing ready access to the hardware resources required to run an application, the cloud computing platform enables developers to build, deploy, and manage the lifecycle of a web application (or any other type of networked application) at a greater scale and at a faster pace than ever before.
-
FIG. 1 is an illustration of an example system constructed in accordance with the teachings of this disclosure for managing a cloud computing platform. -
FIG. 2 illustrates the generation of an example multi-machine blueprint by the example blueprint manager of FIG. 1. -
FIG. 3 is a block diagram of example components of an example implementation of the blueprint manager of FIG. 1. -
FIG. 4 is a block diagram of an example implementation of the resource manager of FIG. 1. -
FIGS. 5-9 are flowcharts representative of example machine readable instructions that may be executed to implement the cloud manager, the blueprint manager, and/or the resource manager of FIGS. 1-4. -
FIGS. 10-20 illustrate example graphical user interfaces that may be provided by the cloud manager 138 to facilitate configuring and operating multi-machine blueprints. -
FIG. 21 is a block diagram of an example processing platform capable of executing the example machine readable instructions of FIGS. 5-9 to implement the example cloud manager of FIGS. 1, 2, 3, and/or 4. - Cloud computing platforms may provide many powerful capabilities for performing computing operations. However, taking advantage of these computing capabilities manually may be complex and/or require significant training and/or expertise. Methods and apparatus disclosed herein facilitate the management of virtual machine resources in cloud computing platforms. For example, as disclosed in detail herein, methods and apparatus disclosed herein provide for automation of management tasks such as provisioning multiple virtual machines for a multiple-machine computing system (e.g., a group of servers that inter-operate), linking provisioned virtual machines and tasks to desired systems to execute those virtual machines or tasks, and/or reclaiming cloud computing resources that are no longer in use. The improvements to cloud management systems (e.g., the vCloud Automation Center (vCAC) from VMware®), interfaces, portals, etc. disclosed herein may be utilized individually and/or in any combination. For example, all or a subset of the described improvements may be utilized.
-
FIG. 1 depicts an example system 100 constructed in accordance with the teachings of this disclosure for managing a cloud computing platform. The example system 100 includes an application director 106 and a cloud manager 138 to manage a cloud computing platform provider 110 as described in more detail below. As described herein, the example system 100 facilitates management of the cloud provider 110 and does not include the cloud provider 110. Alternatively, the system 100 could be included in the cloud provider 110. - The cloud
computing platform provider 110 provisions virtual computing resources (e.g., virtual machines, or "VMs," 114) that may be accessed by users of the cloud computing platform 110 (e.g., users associated with an administrator 116 and/or a developer 118) and/or other programs, software, devices, etc. - An
example application 102 of FIG. 1 includes multiple VMs 114. The example VMs 114 of FIG. 1 provide different functions within the application 102 (e.g., services, portions of the application 102, etc.). One or more of the VMs 114 of the illustrated example are customized by an administrator 116 and/or a developer 118 of the application 102 relative to a stock or out-of-the-box (e.g., commonly available purchased copy) version of the services and/or application components. Additionally, the services executing on the example VMs 114 may have dependencies on other ones of the VMs 114. - As illustrated in
FIG. 1, the example cloud computing platform provider 110 may provide multiple deployment environments 112, for example, for development, testing, staging, and/or production of applications. The administrator 116, the developer 118, other programs, and/or other devices may access services from the cloud computing platform provider 110, for example, via REST (Representational State Transfer) APIs (Application Programming Interface) and/or via any other client-server communication protocol. Example implementations of a REST API for cloud computing services include a vCloud Administrator Center (vCAC) API and a vCloud Director API available from VMware, Inc. The example cloud computing platform provider 110 provisions virtual computing resources (e.g., the VMs 114) to provide the deployment environments 112 in which the administrator 116 and/or developer 118 can deploy multi-tier application(s). One particular example implementation of a deployment environment that may be used to implement the deployment environments 112 of FIG. 1 is vCloud DataCenter cloud computing services available from VMware, Inc. - The
example application director 106 of FIG. 1, which may be running in one or more VMs, orchestrates deployment of multi-tier applications onto one of the example deployment environments 112. As illustrated in FIG. 1, the example application director 106 includes a topology generator 120, a deployment plan generator 122, and a deployment director 124. - The example topology generator 120 generates a
basic blueprint 126 that specifies a logical topology of an application to be deployed. The example basic blueprint 126 generally captures the structure of an application as a collection of application components executing on virtual computing resources. For example, the basic blueprint 126 generated by the example topology generator 120 for an online store application may specify a web application (e.g., in the form of a Java web application archive or "WAR" file comprising dynamic web pages, static web pages, Java servlets, Java classes, and/or other property, configuration and/or resources files that make up a Java web application) executing on an application server (e.g., Apache Tomcat application server) that uses a database (e.g., MongoDB) as a data store. As used herein, the term "application" generally refers to a logical deployment unit, comprised of one or more application packages and their dependent middleware and/or operating systems. Applications may be distributed across multiple VMs. Thus, in the example described above, the term "application" refers to the entire online store application, including application server and database components, rather than just the web application itself. In some instances, the application may include the underlying hardware (e.g., virtual computing hardware) utilized to implement the components. - The example
basic blueprint 126 of FIG. 1 may be assembled from items (e.g., templates) from a catalog 130, which is a listing of available virtual computing resources (e.g., VMs, networking, storage) that may be provisioned from the cloud computing platform provider 110 and available application components (e.g., software services, scripts, code components, application-specific packages) that may be installed on the provisioned virtual computing resources. The example catalog 130 may be pre-populated and/or customized by an administrator 116 (e.g., an IT or system administrator) that enters in specifications, configurations, properties, and/or other details about items in the catalog 130. Based on the application, the example blueprints 126 may define one or more dependencies between application components to indicate an installation order of the application components during deployment. For example, since a load balancer usually cannot be configured until a web application is up and running, the developer 118 may specify a dependency from an Apache service to an application code package. - The example
deployment plan generator 122 of the example application director 106 of FIG. 1 generates a deployment plan 128 based on the basic blueprint 126 that includes deployment settings for the basic blueprint 126 (e.g., virtual computing resources' cluster size, CPU, memory, networks) and an execution plan of tasks having a specified order in which virtual computing resources are provisioned and application components are installed, configured, and started. The example deployment plan 128 of FIG. 1 provides an IT administrator with a process-oriented view of the basic blueprint 126 that indicates discrete actions to be performed to deploy the application. Different deployment plans 128 may be generated from a single basic blueprint 126 to test prototypes (e.g., new application versions), to scale up and/or scale down deployments, and/or to deploy the application to different deployment environments 112 (e.g., testing, staging, production). The deployment plan 128 is separated and distributed as local deployment plans having a series of tasks to be executed by the VMs 114 provisioned from the deployment environment 112. Each VM 114 coordinates execution of each task with a centralized deployment module (e.g., the deployment director 124) to ensure that tasks are executed in an order that complies with dependencies specified in the application blueprint 126. - The
example deployment director 124 of FIG. 1 executes the deployment plan 128 by communicating with the cloud computing platform provider 110 via a cloud interface 132 to provision and configure the VMs 114 in the deployment environment 112. The example cloud interface 132 of FIG. 1 provides a communication abstraction layer by which the application director 106 may communicate with a heterogeneous mixture of cloud provider 110 and deployment environments 112. The deployment director 124 provides each VM 114 with a series of tasks specific to the receiving VM 114 (herein referred to as a "local deployment plan"). Tasks are executed by the VMs 114 to install, configure, and/or start one or more application components. For example, a task may be a script that, when executed by a VM 114, causes the VM 114 to retrieve and install particular software packages from a central package repository 134. The example deployment director 124 coordinates with the VMs 114 to execute the tasks in an order that observes installation dependencies between VMs 114 according to the deployment plan 128. After the application has been deployed, the application director 106 may be utilized to monitor and/or modify (e.g., scale) the deployment. - The
example cloud manager 138 of FIG. 1 interacts with the components of the system 100 (e.g., the application director 106 and the cloud provider 110) to facilitate the management of the resources of the cloud provider 110. The example cloud manager 138 includes a blueprint manager 140 to facilitate the creation and management of multi-machine blueprints and a resource manager 144 to reclaim unused cloud resources. The cloud manager 138 may additionally include other components for managing a cloud environment. - The
example blueprint manager 140 of the illustrated example manages the creation of multi-machine blueprints that define the attributes of multiple virtual machines as a single container that can be provisioned, deployed, managed, etc. as a single unit. For example, a multi-machine blueprint may include definitions for multiple basic blueprints that make up a service (e.g., an e-commerce provider that includes web servers, application servers, and database servers). A basic blueprint is a definition of policies (e.g., hardware policies, security policies, network policies, etc.) for a single machine (e.g., a single virtual machine such as a web server virtual machine). Accordingly, the blueprint manager 140 facilitates more efficient management of multiple virtual machines than manually managing (e.g., deploying) virtual machine basic blueprints individually. The management of multi-machine blueprints is described in further detail in conjunction with FIG. 2. - The
example blueprint manager 140 of FIG. 1 additionally annotates basic blueprints and/or multi-machine blueprints to control how workflows associated with the basic blueprints and/or multi-machine blueprints are executed. A workflow is a series of actions and decisions to be executed in a virtual computing platform. The example system 100 includes first and second distributed execution manager(s) (DEM(s)) 146A and 146B to execute workflows. According to the illustrated example, the first DEM 146A includes a first set of characteristics and is physically located at a first location 148A. The second DEM 146B includes a second set of characteristics and is physically located at a second location 148B. The location and characteristics of a DEM may make that DEM more suitable for performing certain workflows. For example, a DEM may include hardware particularly suited for performance of certain tasks (e.g., high-end calculations), may be located in a desired area (e.g., for compliance with local laws that require certain operations to be physically performed within a country's boundaries), may specify a location or distance to other DEMs for selecting a nearby DEM (e.g., for reducing data transmission latency), etc. Thus, as described in further detail in conjunction with FIG. 3, the example blueprint manager 140 annotates basic blueprints and/or multi-machine blueprints with skills that can be performed by a DEM that is labeled with the same skill. - The
resource manager 144 of the illustrated example facilitates recovery of cloud computing resources of the cloud provider 110 that are no longer being actively utilized. Automated reclamation may include identification, verification, and/or reclamation of unused, underutilized, etc. resources to improve the efficiency of the running cloud infrastructure. Resource reclamation is described in further detail in conjunction with FIG. 4. -
FIG. 2 illustrates the generation of a multi-machine blueprint by the example blueprint manager 140 of FIG. 1. In the illustrated example of FIG. 2, three example basic blueprints (a web server blueprint 202, an application server blueprint 204, and a database server blueprint 206) have been created (e.g., by the topology generator 120). For example, the web server blueprint 202, the application server blueprint 204, and the database server blueprint 206 may define the components of an e-commerce online store. - The
example blueprint manager 140 provides a user interface for a user of the blueprint manager 140 (e.g., the administrator 116, the developer 118, etc.) to specify blueprints (e.g., basic blueprints and/or multi-machine blueprints) to be assigned to an instance of a multi-machine blueprint 208. For example, the user interface may include a list of previously generated basic blueprints (e.g., the web server blueprint 202, the application server blueprint 204, the database server blueprint 206, etc.) to allow selection of desired blueprints. The blueprint manager 140 combines the selected blueprints into the definition of the multi-machine blueprint 208 and stores information about the blueprints in a multi-machine blueprint record defining the multi-machine blueprint 208. The blueprint manager 140 may additionally include a user interface to specify other characteristics corresponding to the multi-machine blueprint 208. For example, a creator of the multi-machine blueprint 208 may specify a minimum and maximum number of each blueprint component of the multi-machine blueprint 208 that may be provisioned during provisioning of the multi-machine blueprint 208. - Accordingly, any number of virtual machines (e.g., the virtual machines associated with the blueprints in the multi-machine blueprint 208) may be managed collectively. For example, the multiple virtual machines corresponding to the
multi-machine blueprint 208 may be provisioned based on an instruction to provision the multi-machine blueprint 208, may be power cycled by an instruction, may be shut down by an instruction, may be booted by an instruction, etc. As illustrated in FIG. 2, an instruction to provision the multi-machine blueprint 208 may result in the provisioning of a multi-machine service 210 that includes web server(s) 210A, application server(s) 210B, and database server 210C. The number of machines provisioned for each blueprint may be specified during the provisioning of the multi-machine blueprint 208 (e.g., subject to the limits specified during creation or management of the multi-machine blueprint 208). - The
multi-machine blueprint 208 maintains the reference to the basic blueprints included in the multi-machine blueprint 208. Accordingly, an administrator maintaining the source blueprints (e.g., an administrator charged with managing the web server blueprint 202) may change or update the source blueprint and the changes may be propagated to the machines provisioned from the multi-machine blueprint 208. For example, if an operating system update is applied to a disk image referenced by the web server blueprint 202 (e.g., a disk image embodying the primary disk of the web server blueprint 202), the updated disk image is utilized when deploying the multi-machine blueprint 208. Additionally, the blueprints may specify that the machines of the multi-machine service 210 provisioned from the multi-machine blueprint 208 operate in different environments. For example, some components may be physical machines, some may be on-premise virtual machines, and some may be virtual machines at a cloud service. -
-
FIG. 3 is a block diagram of an example implementation of the example blueprint manager 140 of FIG. 1. The example blueprint manager 140 of FIG. 3 is structured to manage the execution of blueprint (e.g., basic blueprint and/or multi-machine blueprint) workflows by distributed execution managers (e.g., DEMs 146A, 146B of FIG. 1). The example blueprint manager 140 of FIG. 3 includes a user interface 302, a workflow manager 304, and a queue manager 308. - The
user interface 302 of the illustrated example receives information from a user (e.g., the administrator 116 and/or the developer 118) indicating the assignment of skills to workflows and requests to execute workflows by DEMs. A skill is a characteristic, pre-requisite, capability, etc. of a DEM that makes it more suitable and/or desirable for executing workflows assigned the same skill. A skill may indicate any information that is to be matched between DEMs and workflows during execution of workflows by DEMs (e.g., a physical location, a geographical area, a computing hardware capability, a communication capability, an installed software component, etc.). DEMs may be tagged with skills during their initial configuration. Tagging the DEM with the skill indicates that the DEM is capable of executing workflows that are also tagged with the skill. - The
user interface 302 of the illustrated example passes information about skills assigned to workflows to the workflow manager 304. The user interface 302 also receives requests to remove an assignment of skills and passes the removal to the workflow manager 304. - The
example workflow manager 304 labels, tags, or otherwise assigns (or removes an assignment of) received skills to an identified workflow. For example, the workflow manager 304 may store an indication of the skills assignment in the repository 134. The workflow manager 304 passes workflows that have been tagged or otherwise requested for execution to the queue manager 308. - The
queue manager 308 of the illustrated example stores information about workflows that are awaiting execution and provides the information to DEMs that are ready to execute a workflow. For example, as a DEM has availability to execute a workflow, the DEM contacts the blueprint manager 140 and requests information about available workflows. The DEM of the illustrated example also provides information about skills that have previously been assigned to the DEM. The example queue manager 308 of FIG. 3 retrieves workflows that are awaiting execution and provides a list of workflows to the requesting DEM. The list of workflows may be sorted based on the skills assigned to the workflows and the skills assigned to the DEM, so that the DEM may choose to execute a workflow that is most closely matched with the skills of the DEM. For example, if the DEM is to select the first available workflow in the list, the workflow with the most matching skills may be first in the list. Accordingly, the workflows may be executed by the first available DEM that is most capable of executing the workflow. Because the DEMs of the illustrated example contact the example blueprint manager 140 when they are available for executing workflows, a dispatcher may not be needed and the DEMs may be kept busy without human intervention. Alternatively, workflows could be dispatched or assigned to available DEMs by the queue manager 308. In another alternative, rather than providing a list of workflows, the queue manager 308 could provide a single workflow that has been selected as most desirable (e.g., based on matching skills) for execution by a requesting DEM. -
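The skill-based ordering of queued workflows for a requesting DEM can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the eligibility rule (every workflow skill must also be a DEM skill) and the field names are assumptions:

```python
def rank_workflows(queued_workflows, dem_skills):
    """Return executable workflows ordered so the closest skill match is
    first; ties keep their original queue order (sorted() is stable)."""
    dem_skills = set(dem_skills)
    # A workflow is executable only if the DEM has every skill it requires.
    executable = [wf for wf in queued_workflows
                  if set(wf["skills"]) <= dem_skills]
    # Most overlapping skills first, so a first-in-list picker gets the
    # workflow most closely matched to this DEM.
    return sorted(executable,
                  key=lambda wf: -len(set(wf["skills"]) & dem_skills))

queue = [
    {"name": "provision-eu", "skills": {"eu-datacenter"}},
    {"name": "heavy-calc",   "skills": {"gpu", "eu-datacenter"}},
    {"name": "cleanup",      "skills": set()},
]
ranked = rank_workflows(queue, {"gpu", "eu-datacenter"})
```

Because the DEM pulls this list when it has capacity, no central dispatcher is needed; the queue manager only orders the candidates.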
FIG. 4 is a block diagram of an example implementation of theexample resource manager 144 ofFIG. 1 . Theexample resource manager 144 ofFIG. 4 includes a resource reclaimer 402, anotifier 404, a user interface 406, and anarchiver 408. - The resource reclaimer 402 of the illustrated example identifies potentially inactive, unused, underused, etc. resources by comparing an activity time to a threshold. For example, the resource reclaimer 402 may identify inactive resources by reviewing logs indicating the last time that a virtual machine was powered on, the last time that a virtual machine was remotely accessed, the amount of system resources consumed, etc. The information from the logs is analyzed (e.g., by comparing the information to a threshold) to determine if the virtual machine appears to be inactive and/or previsioned with excess resources (e.g., where four virtual machines are provisioned but only three are utilized). When the resource reclaimer 402 determines that a virtual machine may be inactive, the resource reclaimer 402 communicates the information to the
notifier 404. Additionally or alternatively, the resource reclaimer 402 removes the virtual machine to free the computing resources currently assigned to the virtual machine (e.g., after a number of notifications have been sent and/or a user has confirmed that the virtual machine is no longer needed). - In some implementations, computing resources are assigned to virtual machines that are backups of active virtual machines. Ghost machines are replicas of active/live virtual machines. The ghost machines may be made live if the active/live virtual machine terminates unexpectedly or for other reasons. Because the ghost machines are typically not in use, they might appear as unused resources that should be reclaimed to be used by other cloud customers. However, this may not be desirable where the ghost resources are utilized as backups to be activated when needed. The example resource reclaimer 402 detects tags associated with the backup virtual machines that indicate that the backup virtual machines should not be identified as inactive virtual machines. When a virtual machine is detected as a backup virtual machine, the machine is not identified as potentially inactive.
- The
notifier 404 of the illustrated example notifies an owner of a virtual machine when the resource reclaimer 402 determines that the virtual machine is inactive. For example, the notifier 404 may send an email to the identified owner of the virtual machine. Alternatively, any other communication may be sent to the owner and/or the inactive machine may be identified on a list without sending a separate communication to the owner. The message may include information such as Machine Name, Virtual Machine Status, Reclamation Requestor, Machine Owner, Request Date, Reason for Reclamation Request, Daily Cost, Deadline to respond, etc. In examples where an email and/or other message is sent, the message may include a link or other user interface element that allows the user to indicate whether or not the identified virtual machine should remain in use. Parties other than the owner of the resource may also be notified: for example, a group owner, a manager of the owner, a system administrator, etc. - The user interface 406 receives instructions from and conveys information to a user (e.g., the
administrator 116 and/or the developer 118) of the resource manager 144. For example, the user interface 406 may provide an interface by which a user is to request that reclamation processing be performed (e.g., inactive and/or underused virtual machine resources should be identified). The user interface 406 may also display a status of a reclamation process including virtual machines identified as potentially inactive. The example user interface 406 additionally provides an interface for a user to configure options associated with the reclamation. For example, a user may configure the amount of time between successive notifications to the virtual machine owner, the amount of time allowed for an owner to respond before reclaiming resources, the amount of inactivity that will trigger identification of a virtual machine as potentially inactive and/or under-utilized, whether or not virtual machines that are inactivated are archived and for how long, etc. In some examples, the user interface 406 prompts a user with a list of potentially inactive virtual machines and requests that the user select the virtual machines for which the owner should be notified. - The
archiver 408 of the illustrated example archives virtual machines that are reclaimed according to policies configured for the resource manager 144 and/or the virtual machine to be reclaimed (e.g., policies set in a multi-machine blueprint for the virtual machine). Archiving reclaimed virtual machines facilitates the recovery of virtual machines that may later be determined to be active and/or for which the contents are still desired. The archiver 408 of the illustrated example stores a log of reclamation operations. The log message may contain the following information: Action Date, Machine Name, Machine Owner, Action, User initiating action, Description, Prior Status of Reclamation Request, etc. - While an example manner of implementing the
cloud manager 138, the blueprint manager 140, and the resource manager 144 is illustrated in FIGS. 1-4, one or more of the elements, processes and/or devices illustrated in FIGS. 1-4 may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. Further, the example user interface 302, the example workflow manager 304, and the example queue manager 308 of FIG. 3 and/or, more generally, the blueprint manager 140, the example resource reclaimer 402, the example notifier 404, the example user interface 406, and the example archiver 408 of FIG. 4 and/or, more generally, the example resource manager 144 may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware. Thus, for example, any of the example user interface 302, the example workflow manager 304, and the example queue manager 308 of FIG. 3 and/or, more generally, the blueprint manager 140, the example resource reclaimer 402, the example notifier 404, the example user interface 406, and the example archiver 408 of FIG. 4 and/or, more generally, the example resource manager 144 could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)). None of the apparatus or system claims of this patent are to be construed to cover a purely software and/or firmware implementation. Rather, at least one of the example user interface 302, the example workflow manager 304, and the example queue manager 308 of FIG. 3 and/or, more generally, the blueprint manager 140, the example resource reclaimer 402, the example notifier 404, the example user interface 406, and the example archiver 408 of FIG.
4 and/or, more generally, the example resource manager 144 is/are hereby expressly defined to include a tangible computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. storing the software and/or firmware to preclude interpreting any claim of this patent as purely software. Further still, the example cloud manager 138, the example blueprint manager 140, and/or the example resource manager 144 of FIG. 1 may include one or more elements, processes and/or devices in addition to, or instead of, those illustrated in FIGS. 1-4, and/or may include more than one of any or all of the illustrated elements, processes and devices. - Flowcharts representative of example machine readable instructions for implementing the
cloud manager 138, the blueprint manager 140, and/or the resource manager 144 of FIGS. 1-4 are shown in FIGS. 5-9. In these examples, the machine readable instructions comprise a program for execution by a processor such as the processor 2112 shown in the example processor platform 2100 discussed below in connection with FIG. 21. The program may be embodied in software stored on a tangible computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a digital versatile disk (DVD), a Blu-ray disk, or a memory associated with the processor 2112, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor 2112 and/or embodied in firmware or dedicated hardware. Further, although the example program is described with reference to the flowcharts illustrated in FIGS. 5-9, many other methods of implementing the example cloud manager 138, the blueprint manager 140, and/or the resource manager 144 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined. - As mentioned above, the example processes of
FIGS. 5-9 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a tangible computer readable storage medium such as a hard disk drive, a flash memory, a read-only memory (ROM), a compact disk (CD), a digital versatile disk (DVD), a cache, a random-access memory (RAM) and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term tangible computer readable storage medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. As used herein, “tangible computer readable storage medium” and “tangible machine readable storage medium” are used interchangeably. Additionally or alternatively, the example processes of FIGS. 5-9 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. As used herein, when the phrase “at least” is used as the transition term in a preamble of a claim, it is open-ended in the same manner as the term “comprising” is open ended. - The example program of
FIG. 5 begins at block 502 when the blueprint manager 140 of the cloud manager 138 of FIG. 1 receives an instruction to create a multi-machine blueprint (e.g., the multi-machine blueprint 208 of FIG. 2). The blueprint manager 140 displays a list of available blueprints (block 504). For example, the blueprint manager 140 may display a list including the web server blueprint 202, the application server blueprint 204, the database server blueprint 206, and other available blueprints. The blueprint manager 140 then receives an identification of one or more blueprints selected for inclusion in the multi-machine blueprint (block 506). The blueprint manager 140 then generates the definition for the multi-machine blueprint that references the selected blueprints and stores the definition in a repository (e.g., the repository 134) (block 508). The program of FIG. 5 then ends. - The program of
FIG. 6 begins at block 602 when the blueprint manager 140 receives an instruction to provision a multi-machine blueprint (e.g., the multi-machine blueprint 208 of FIG. 2). The instruction may, alternatively, be any other instruction associated with the multi-machine blueprint 208 (e.g., power on, reboot, shutdown, etc.). Thus, a single user instruction may cause an action to be performed for all of the machines covered by the multi-machine blueprint (e.g., rather than separate user instructions for each machine or basic blueprint). The blueprint manager 140 receives an indication of quantities of machines to be provisioned (e.g., via a user interface provided by the blueprint manager 140) (block 604). The blueprint manager 140 then retrieves the first blueprint definition included in the multi-machine blueprint 208 (block 606). For example, the multi-machine blueprint 208 may include an indication of the order in which the blueprints of the multi-machine blueprint 208 are to be provisioned. - The
blueprint manager 140 then provisions the selected blueprint with a specified number of machines according to the blueprint definition (block 608). For example, according to the example of FIG. 2, the blueprint manager 140 provisions four web servers 210A based on a specification of four machines in the multi-machine blueprint 208 and based on the web server blueprint 202. - The
blueprint manager 140 then determines if there are additional blueprints to be provisioned (or another action instructed) (block 610). When there are additional blueprints to be provisioned, the blueprint manager 140 selects the next blueprint (block 612) and control returns to block 608 to provision the next blueprint. When there are no additional blueprints to be provisioned, the program of FIG. 6 ends. - In some examples, after provisioning of the blueprints, one or more of the provisioned machines may be encapsulated in an application collection (e.g., a VMware vApp). For example, according to the example of
FIG. 2, the machines of the web servers 210A may be collected into a web servers vApp, the machines of the application servers 210B may be collected into an application servers vApp, and the database servers 210C may be collected into a database servers vApp. Further, the multiple collections associated with the multi-machine blueprint may be collected into a multi-machine collection (e.g., a multi-machine vApp). For example, according to the example of FIG. 2, a multi-machine vApp may be generated based on the web servers vApp, the application servers vApp, and the database servers vApp. The multi-machine vApp may then be added to a catalog to allow administrators to deploy the multiple machines from a catalog (e.g., a vApp catalog). - In some examples, after a group of machines provisioned from a multi-machine blueprint are collected in a collection (e.g., a vApp), the collection may be migrated to a different computing type (e.g., from a physical computing type to a cloud environment). Because the components provisioned from the multi-machine blueprint are converted individually to collections, individual components may be migrated or all components may be migrated. For example, according to the multi-machine blueprint of
FIG. 2, it may be determined that web traffic will increase greatly during a particular event. Accordingly, prior to the event, a vApp generated for the web servers 210A may be migrated from a local virtual computing platform to a cloud computing service to enable additional machines to be brought online for the web service. - The abstraction provided by multi-machine blueprints enables components of a multi-machine blueprint to be provisioned on different types of computing resources (e.g., physical resources, virtual resources, cloud resources, etc.). For example, according to the example of
FIG. 2, the web servers 210A may be provisioned on physical computing resources while the application servers 210B are provisioned in a first cloud service and the database servers 210C are provisioned in a second cloud service. Furthermore, the components of the multi-machine blueprint may be provisioned on different resources at different times. For example, during testing, the components of a multi-machine blueprint may be provisioned on virtual computing resources and, when testing is completed, a production system may be provisioned on physical computing resources. - After a multi-machine blueprint has been provisioned, the
blueprint manager 140 may monitor the provisioned systems to check for compliance with the configuration of the multi-machine blueprint. For example, the blueprint manager 140 may periodically or aperiodically monitor the provisioned systems for changes. When a change is detected, the blueprint manager 140 may automatically revert the change, provide a notification, etc. For example, when the multi-machine blueprint is provisioned utilizing the vCAC, a user may accidentally, maliciously, etc. make changes via vCenter (e.g., changes to applications, changes to network configurations, etc.). The blueprint manager 140 may periodically review the provisioned systems to determine if they match the multi-machine blueprint configuration and revert the configuration when a difference is detected (e.g., when the network configuration has been modified outside of the multi-machine blueprint configuration). -
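The compliance monitoring described above amounts to comparing each provisioned configuration against the blueprint and reverting drifted values. A minimal sketch follows, assuming for illustration that configurations can be represented as flat key/value mappings:

```python
# Hedged sketch of the drift check: compare a provisioned machine's
# configuration against the blueprint and revert any drift. The flat
# dict representation of the configurations is an assumption.
def enforce_blueprint(blueprint_config, provisioned_config):
    """Return (reverted_keys, new_config) where drifted keys are reset."""
    reverted = [k for k, v in blueprint_config.items()
                if provisioned_config.get(k) != v]
    fixed = dict(provisioned_config)
    for k in reverted:
        fixed[k] = blueprint_config[k]   # revert the out-of-band change
    return reverted, fixed

blueprint = {"network": "Internal Application Network", "cpus": 2}
drifted = {"network": "rogue-net", "cpus": 2}
reverted, fixed = enforce_blueprint(blueprint, drifted)
```

In place of the revert, the same comparison could drive a notification, matching the alternative behavior described in the text.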
FIG. 7 is a flowchart of an example program to assign skills to workflows and DEMs. The program of FIG. 7 begins at block 702 when the user interface 302 receives an identification of a skill to be assigned to a workflow. For example, the skill may be a location, characteristic, specification, requirement, etc. that may be specified by selection from a list of skills, typed input of the name of the skill, etc. For example, the skill may be entered by clicking an “Add Skill” button displayed on the user interface 302. The user interface 302 sends the skill to the workflow manager 304. The workflow manager 304 tags the appropriate workflow with the skill (block 706). Tagging the workflow may be performed by storing an association of the skill with the workflow in a database (e.g., the repository 134). Tagging the workflow with the skill indicates that the workflow is to be performed by a DEM that is also tagged with the skill. The queue manager 308 then adds the workflow to a queue for execution by an available DEM (block 708). -
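The tagging flow of FIG. 7 can be sketched with in-memory structures standing in for the repository 134; the class and method names are illustrative assumptions:

```python
# Sketch of the skill-tagging flow of FIG. 7: store a skill/workflow
# association (block 706) and queue the workflow (block 708). The
# in-memory dict and list stand in for the repository 134.
from collections import defaultdict

class WorkflowManager:
    def __init__(self):
        self.skills = defaultdict(set)   # workflow -> tagged skills
        self.queue = []                  # workflows awaiting a DEM

    def tag(self, workflow, skill):
        self.skills[workflow].add(skill)         # block 706

    def enqueue(self, workflow):
        self.queue.append(workflow)              # block 708

mgr = WorkflowManager()
mgr.tag("provision-db", "san_jose_datacenter")   # a location-type skill
mgr.enqueue("provision-db")
```

Storing the association separately from the queue mirrors the text: the tag constrains *which* DEM may execute the workflow, while the queue only records that it is ready.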
FIG. 8 is a flowchart of an example program to distribute workflows to DEMs for execution. The example program of FIG. 8 begins at block 802 when the queue manager 308 receives a request from a DEM for an available workflow (e.g., a workflow that is ready for execution). The queue manager 308 determines if the DEM is tagged with a skill (block 804). When the DEM is tagged with a skill, the queue manager 308 retrieves workflows that have been tagged with the skill(s) tagged to the DEM (block 806). The queue manager 308 then transmits a list of the retrieved workflows to the DEM (block 808). Returning to block 804, if the queue manager 308 determines that the DEM is not tagged with a skill, the queue manager 308 transmits a list of workflows that are not tagged with skills to the DEM (block 810). While the foregoing example transmits only workflows with matching skills (or no skills) to the requesting DEM, other arrangements may be utilized. For example, a list of all available workflows ordered or ranked by the matching skills may be transmitted to the DEM, a single workflow that has been matched to the DEM based on the skills may be transmitted to the DEM, etc. In other examples, skills may be labeled as mandatory or optional: workflows having a mandatory skill may be included in a list of available workflows sent to DEMs matching the mandatory skill and may not be included in a list of available workflows sent to DEMs that do not match the mandatory skill. In such examples, workflows having skills identified as desirable but not mandatory may be included in a list of available workflows sent to DEMs that do not match the desirable skill. The list of available workflows may be ranked based on the desirable skill to increase the chances that a DEM having the matching skills will select the workflow for execution. - After the workflows have been transmitted to the requesting DEM (block 808 or block 810), the
queue manager 308 receives an identification of a workflow selected for execution by the requesting DEM (block 812). The queue manager 308 then removes the workflow from the queue to ensure that the workflow is not selected for execution by another DEM (block 814). -
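The skill-matching dispatch of FIG. 8, including the mandatory/desirable variant described above, can be sketched as follows. Representing workflows as dicts with `mandatory` and `desirable` skill sets is an assumption for illustration:

```python
# Sketch of skill-matching dispatch: mandatory skills gate inclusion,
# desirable skills rank the returned list. All names are illustrative.
def workflows_for_dem(workflows, dem_skills):
    """Return the workflows this DEM may execute, best matches first."""
    available = [w for w in workflows
                 if w["mandatory"] <= dem_skills]   # mandatory must match
    # rank so workflows whose desirable skills the DEM satisfies come first
    return sorted(available,
                  key=lambda w: -len(w["desirable"] & dem_skills))

wfs = [{"name": "wf-a", "mandatory": {"gpu"}, "desirable": set()},
       {"name": "wf-b", "mandatory": set(), "desirable": {"ssd"}},
       {"name": "wf-c", "mandatory": set(), "desirable": set()}]
plain_dem = workflows_for_dem(wfs, set())          # untagged DEM
ssd_dem = workflows_for_dem(wfs, {"ssd", "gpu"})   # fully tagged DEM
```

An untagged DEM never sees a workflow with a mandatory skill, while desirable-only skills merely reorder the list, as the text describes.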
FIG. 9 is a flowchart of an example program to reclaim virtual machine computing resources from inactive virtual machines. The example program of FIG. 9 begins when the user interface 406 receives an instruction to perform a reclamation (block 902). For example, the reclamation may be a workflow for which execution is requested. - The resource reclaimer 402 selects a first virtual machine in a provisioned pool of virtual machines (block 904). The example resource reclaimer 402 then determines if characteristics associated with the virtual machine indicate that the virtual machine may be inactive (block 906). For example, the resource reclaimer 402 may determine if an action (e.g., power on, reboot, perform operation) has not been performed within a threshold period of time. When the characteristics do not meet the threshold, control proceeds to block 916, which is described below. When the characteristics meet (or exceed) the threshold, the
notifier 404 determines if a notification has already been sent to the owner of the virtual machine (block 908). When a notification has not been sent, the notifier 404 sends a communication to the owner of the virtual machine indicating that the virtual machine is suspected of being inactive and requesting that the owner take action to maintain the virtual machine (block 918). - When a notification has already been sent (block 908), the
notifier 404 determines if a notification period has expired (block 910). For example, a user (e.g., the user requesting the reclamation) may specify parameters indicating the amount of time that the system should wait following notification before determining that no response will be received and de-provisioning the virtual machine computing resources. When the notification period has not expired, control proceeds to block 916, which is described below. - When the notification period has expired (block 910), the resource reclaimer 402 reclaims the computing resources assigned to the inactive virtual machine by de-provisioning or uninstalling the inactive virtual machine (block 912). For example, the resource reclaimer 402 may return the computing resources to a pool of resources available to other existing and new virtual machines (e.g., virtual machines in a cloud). The
archiver 408 archives the inactive virtual machine in case the owner of the virtual machine or another party determines that the information contained in the virtual machine is wanted (block 914). The archiving may be performed according to archiving policies identified in a blueprint associated with the virtual machine, according to instructions from a user received via the user interface 406, and/or according to a policy for the resource manager 144. Control then proceeds to block 916. - After determining that the characteristics of the selected virtual machine do not meet (or exceed) the threshold (block 906), after determining that the notification period has not expired (block 910), and/or after archiving the virtual machine (or reclaiming the virtual machine resources if archiving is not performed), the resource reclaimer 402 determines if there are additional virtual machines to be checked for inactivity (block 916). When there are additional virtual machines, the next virtual machine is selected and control returns to block 906 to analyze the next virtual machine for inactivity. When there are no additional virtual machines, the program of
FIG. 9 ends. -
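The reclamation pass of FIG. 9 can be condensed into a single loop; the callbacks stand in for the notifier 404, resource reclaimer 402, and archiver 408, and the VM record fields are illustrative assumptions:

```python
# Condensed sketch of the reclamation pass of FIG. 9. Each VM dict
# carries the state the flowchart consults; field names are assumptions.
def reclamation_pass(vms, inactive, notify, reclaim, archive, period_expired):
    """Walk the provisioned pool once, mirroring blocks 904-916."""
    for vm in vms:
        if not inactive(vm):                      # block 906
            continue
        if not vm.get("notified"):                # block 908
            notify(vm)                            # block 918
            vm["notified"] = True
        elif period_expired(vm):                  # block 910
            reclaim(vm)                           # block 912
            archive(vm)                           # block 914

events = []
vms = [{"name": "idle-1", "notified": False},
       {"name": "idle-2", "notified": True},
       {"name": "busy-1", "notified": False}]
reclamation_pass(
    vms,
    inactive=lambda vm: vm["name"].startswith("idle"),
    notify=lambda vm: events.append(("notify", vm["name"])),
    reclaim=lambda vm: events.append(("reclaim", vm["name"])),
    archive=lambda vm: events.append(("archive", vm["name"])),
    period_expired=lambda vm: True)
```

A suspect VM is first only notified; reclamation and archiving happen on a later pass, once the notification period has run out, matching the flowchart's two-visit behavior.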
FIGS. 10-12 illustrate example graphical user interfaces that may be provided by the cloud manager 138 to facilitate creation of a multi-machine blueprint. An example graphical user interface 1000 illustrated in FIG. 10 includes a user input 1002 for requesting addition of a blueprint to a new multi-machine blueprint. For example, when the user input 1002 is selected, a listing of available blueprints in a catalog may be displayed and a user (e.g., an administrator) may select blueprint(s) for addition to the multi-machine blueprint. The example graphical user interface 1000 includes a listing 1004 of the blueprints that have been added to the multi-machine blueprint being generated. The listing 1004 additionally includes user interface elements 1006 for allowing a user to specify configuration parameters for each of the added blueprints. According to the example of FIG. 10, a user may specify a component name, a minimum number of machines, a maximum number of machines, a startup ordering, and/or a shutdown ordering. According to the illustrated example, after adding the desired blueprints and configuration parameters, a user selects an OK button 1008 to proceed to the example user interface 1100 of FIG. 11. - The
example user interface 1100 includes user interface elements 1102 to allow a user to specify provisioning processing scripts to be performed during provisioning, user interface elements 1104 to allow a user to specify startup processing scripts to be performed upon startup of the multi-machines, and user interface elements 1106 to allow a user to specify shutdown processing scripts to be performed upon shutdown of the multi-machines. According to the illustrated example, after specifying the scripts, a user selects an OK button 1108 to proceed to the example user interface 1200 of FIG. 12. - The
example user interface 1200 includes user interface elements 1202 to allow a user to specify security settings for the multi-machine blueprint that is being generated. While example security settings are illustrated, any number or type(s) of security settings may be provided. According to the illustrated example, after specifying the security settings, a user selects an OK button 1208 to cause the multi-machine blueprint generation to be completed. For example, in response to selection of the OK button 1208, the multi-machine blueprint may be generated and stored in a catalog to allow a user to select to provision the multi-machine blueprint. -
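The definition assembled by the dialogs of FIGS. 10-12 can be sketched as a plain data structure; the field names, validation, and layout are illustrative assumptions, not the format used by the cloud manager 138:

```python
# Sketch of a multi-machine blueprint definition built from the GUI
# inputs described above: per-component name, machine limits, ordering,
# plus scripts and security settings. All field names are assumptions.
def make_multi_machine_blueprint(components, scripts=None, security=None):
    for c in components:
        if not (1 <= c["min_machines"] <= c["max_machines"]):
            raise ValueError(f"bad machine limits for {c['name']}")
    return {"components": sorted(components, key=lambda c: c["startup_order"]),
            "scripts": scripts or {},   # provisioning/startup/shutdown
            "security": security or {}}

mmb = make_multi_machine_blueprint(
    [{"name": "DB servers", "min_machines": 1, "max_machines": 2,
      "startup_order": 1, "shutdown_order": 3},
     {"name": "web servers", "min_machines": 2, "max_machines": 8,
      "startup_order": 3, "shutdown_order": 1},
     {"name": "app servers", "min_machines": 1, "max_machines": 4,
      "startup_order": 2, "shutdown_order": 2}],
    scripts={"startup": "start_all.sh"})   # script name is hypothetical
```

Sorting the components by their startup ordering reflects the ordered provisioning described for FIG. 6.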
FIGS. 13-17 illustrate example graphical user interfaces that may be provided by the cloud manager 138 to facilitate provisioning and configuration of a provisioned multi-machine blueprint. An example graphical user interface 1300 illustrated in FIG. 13 provides a listing of available resources (including multi-machine blueprints) that may be provisioned. After selecting a resource that is a multi-machine blueprint, the cloud manager 138 displays the user interface 1400 of FIG. 14 to allow configuration of the provisioning. The example illustrated in FIG. 14 includes the same components as the multi-machine blueprint 208 illustrated in FIG. 2. The user interface 1400 includes user interface elements by which a user configures the components to be provisioned. According to the illustrated example, after configuring the components, a user selects a NEXT button 1408 to proceed to the example user interface 1500 of FIG. 15. - The
example user interface 1500 displays a confirmation of the selections made by the user prior to the user selecting a FINISH button 1502 to provision the machines based on the settings for the components of the multi-machine blueprint. -
FIG. 16 illustrates an example graphical user interface 1600 that may be provided by the cloud manager 138 to facilitate configuration of a provisioned multi-machine blueprint. The graphical user interface 1600 displays a list of provisioned virtual machines, including machines provisioned from a multi-machine blueprint. A user may select a particular provisioned multi-machine blueprint (e.g., MMS2608-33 in the illustrated example, which is provisioned from the multi-machine blueprint 208 of FIG. 2) and perform operations provided in an operation menu 1602. For example, a user (e.g., an administrator) may select to edit the virtual machines, add additional machines, power cycle the virtual machines, reboot the virtual machines, change the terms of a lease, delete/destroy the virtual machines, power off the virtual machines, shutdown the virtual machines, etc. Because the virtual machines are linked to a multi-machine blueprint or service, a single operation request from the operation menu 1602 (e.g., a selection of the shutdown command) is applied to all of the machines in a service. Thus, a user may select shutdown from the operation menu 1602 (e.g., with a single (i.e., one) selection) and all corresponding virtual machines referenced by the multi-machine blueprint will be shut down without the user specifying a separate shutdown command for each virtual machine provisioned from the multi-machine blueprint. While example operations are identified in the example operation menu 1602, any other operations may be included. For example, the operation menu 1602 may include an operation to perform a backup that, when selected, may cause all of the multiple machines provisioned from the multi-machine blueprint to be backed up. The operation menu 1602 may additionally include a network configuration action that enables a user to reconfigure the network operations, change load balancer settings, etc.
The operation menu 1602 may also include user defined operations (e.g., scripts, tasks, etc.) created by a user for performing operations on the machines provisioned from the multi-machine blueprint. - When a user selects to add components to virtual machines provisioned from a multi-machine blueprint (e.g., using the
operation menu 1602 in the example user interface 1600 of FIG. 16), the cloud manager 138 displays the example user interface 1700 of FIG. 17. The example user interface 1700 provides user interface elements to specify a desired number of additional machines to be added to the provisioned virtual machines. In examples where a maximum number of allowable machines has been specified, the user interface 1700 may restrict the number of additional machines added to remain within the specified limits (e.g., by not allowing selection of a number of machines that would exceed the maximum, by displaying an error message when too many machines are selected, etc.). Adding or removing machines from the provisioned multi-machine blueprint allows for scaling up and/or down of the systems. When components are added and/or removed, the system configurations are updated. For example, a new web server may be brought online by provisioning the virtual hardware for the web server, configuring the network settings for the new web server, and adding the network information to a load balancer for adding the new web server to a pool of web servers that may be utilized in the application. -
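The scale-up path described above (provision a machine, then register it with the load balancer pool, subject to the blueprint's maximum machine count) can be sketched as follows; the service record and naming scheme are illustrative assumptions:

```python
# Sketch of adding machines to a provisioned service: enforce the
# blueprint's maximum, provision each machine, and update the load
# balancer pool. The dict layout and naming scheme are assumptions.
def add_machines(service, count):
    if len(service["machines"]) + count > service["max_machines"]:
        raise ValueError("would exceed the blueprint's maximum machine count")
    for _ in range(count):
        name = f"web-{len(service['machines'])}"   # illustrative naming
        service["machines"].append(name)           # provision the machine
        service["lb_pool"].append(name)            # register with the LB

svc = {"machines": ["web-0"], "lb_pool": ["web-0"], "max_machines": 4}
add_machines(svc, 2)
```

The limit check up front mirrors the user interface 1700 refusing selections that would exceed the maximum, and the pool update mirrors the load balancer reconfiguration described in the text.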
FIG. 18 illustrates an example graphical user interface 1800 that may be provided by the cloud manager 138 to configure network information for a multi-machine blueprint. According to the illustrated example, the multi-machine blueprint includes settings for an internal network (Internal Application Network) and a public network (NAT to Shared Network Template). Assigning network information to a multi-machine blueprint facilitates management of the network configuration for the machines provisioned from the multi-machine blueprint without the need to configure each machine individually. Accordingly, when machines are provisioned, the networks are provisioned and the provisioned machines can communicate with the other provisioned machines, load balancers, etc. In addition, load balancer information may be configured with the network information. FIG. 19 illustrates an example graphical user interface 1900 that may be provided by the cloud manager 138 to configure load balancer settings for a particular network configuration (e.g., the NAT to Shared Network Template of FIG. 18). Having a load balancer configured for the network enables changes to provisioned machines (e.g., after provisioning a multi-machine blueprint, after adding components, after removing components, etc.) to be managed by the load balancer. For example, after a new component is added to an application, the new component may be utilized as work is distributed by the load balancer (e.g., web requests may be handled by a newly added web server virtual machine as they are distributed by the load balancer). -
FIG. 20 illustrates an example graphical user interface 2000 that may be provided by the cloud manager 138 to configure network information for reservations for a cloud infrastructure. Reservations provide a means for dividing the resources of the cloud among different groups of users. For example, cloud resources may be divided between a development group, a testing group, and a production group. According to such an example, computing resources (e.g., processor time, memory, storage space, network resources, etc.) could be divided such that the development group is allocated 15% of the resources, the testing group is allocated 10% of the resources, and the production group is allocated 75% of the resources. Additionally, multiple network paths may be created and allocated among the groups. For example, a first network path may be shared between the development group and the testing group while a second network path is exclusively used by the production group and not available to the development group or the testing group to ensure the integrity of the production system. Reservations record the allocation of resources as set by an administrator of the infrastructure. - The
example user interface 2000 allows a user to input network resources that may be utilized by the group for which the reservation is assigned. For example, if the reservation is for the development group and a member of the development group selects to provision a particular multi-machine blueprint, the machines of the multi-machine blueprint will be allowed to utilize the Share Network Application network and, for example, will not be allowed to utilize the Share App Tier network. The reservations may override a blueprint where the configurations conflict and may supplement the blueprint where a blueprint does not have a configuration value that is included in the reservation. For example, if a multi-machine blueprint requests a particular network that is not allowed by the reservation, the reservation will override and cause the provisioned machines to utilize an allowed network. In such an example, the multi-machine blueprint might specify a network that is not available in the system on which the basic blueprints of the multi-machine blueprint are to be provisioned. - Reservations may override and/or supplement settings other than the network settings. For example, a multi-machine blueprint may be generated with a default set of policies (e.g., a database storage policy that does not include encryption of credit card numbers). The same multi-machine blueprint may be provisioned in multiple localities (e.g., to avoid the need for developing a separate multi-machine blueprint for each locality). Reservations associated with systems at each of the localities may include settings related to governmental policies at the localities (e.g., a policy that requires that credit card information is encrypted before storage in a database). 
For example, when the multi-machine blueprint having the default policies is provisioned in a locality wherein the reservation specifies a credit card encryption policy, the credit card encryption policy overrides the default policy of the multi-machine blueprint so that systems provisioned from the multi-machine blueprint in the locality will comply with the local laws. Accordingly, a single multi-machine blueprint could be created and deployed to multiple environments that include overriding or supplemental configurations.
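The override-and-supplement behavior described above can be sketched as a simple configuration merge. This is a minimal illustration under assumed names (`apply_reservation`, the dictionary keys, and the example values are all hypothetical, not from the patent): where the blueprint and the reservation conflict, the reservation wins; where the blueprint is silent, the reservation supplements it.

```python
# Hypothetical sketch of reservation override/supplement semantics.
# Names and keys are illustrative, not the patent's implementation.

def apply_reservation(blueprint_config: dict, reservation_config: dict) -> dict:
    """Merge a reservation into a blueprint configuration.

    Where both define a value, the reservation overrides the blueprint;
    where only the reservation defines a value, it supplements the blueprint.
    """
    merged = dict(blueprint_config)    # start from the blueprint defaults
    merged.update(reservation_config)  # reservation overrides and supplements
    return merged

# Example: a blueprint requesting a network that the locality's reservation
# disallows, in a locality whose reservation mandates credit card encryption.
blueprint = {"network": "Share App Tier", "encrypt_credit_cards": False}
reservation = {"network": "Share Network Application", "encrypt_credit_cards": True}

provisioned = apply_reservation(blueprint, reservation)
# provisioned["network"] == "Share Network Application"
# provisioned["encrypt_credit_cards"] == True
```

Because the merge is applied at provisioning time, the same blueprint can be deployed unchanged to every locality, with each locality's reservation supplying its own overrides.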
FIG. 21 is a block diagram of an example processor platform 2100 capable of executing the instructions of FIGS. 5-9 to implement the cloud manager 138 of FIGS. 1-4. The processor platform 2100 can be, for example, a server or any other type of computing device. - The
processor platform 2100 of the illustrated example includes a processor 2112. The processor 2112 of the illustrated example is hardware. For example, the processor 2112 can be implemented by one or more integrated circuits, logic circuits, microprocessors or controllers from any desired family or manufacturer. - The
processor 2112 of the illustrated example includes a local memory 2113 (e.g., a cache). The processor 2112 of the illustrated example is in communication with a main memory including a volatile memory 2114 and a non-volatile memory 2116 via a bus 2118. The volatile memory 2114 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM) and/or any other type of random access memory device. The non-volatile memory 2116 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 2114, 2116 is controlled by a memory controller. - The
processor platform 2100 of the illustrated example also includes an interface circuit 2120. The interface circuit 2120 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), and/or a PCI express interface. - In the illustrated example, one or
more input devices 2122 are connected to the interface circuit 2120. The input device(s) 2122 permit(s) a user to enter data and commands into the processor 2112. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system. - One or
more output devices 2124 are also connected to the interface circuit 2120 of the illustrated example. The output devices 2124 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display, a cathode ray tube (CRT) display, a touchscreen, a tactile output device, a printer and/or speakers). The interface circuit 2120 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip or a graphics driver processor. - The
interface circuit 2120 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem and/or a network interface card to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 2126 (e.g., an Ethernet connection, a digital subscriber line (DSL), a telephone line, coaxial cable, a cellular telephone system, etc.). - The
processor platform 2100 of the illustrated example also includes one or more mass storage devices 2128 for storing software and/or data. Examples of such mass storage devices 2128 include floppy disk drives, hard disk drives, compact disk drives, Blu-ray disk drives, RAID systems, and digital versatile disk (DVD) drives. - The coded
instructions 2132 of FIGS. 5-9 may be stored in the mass storage device 2128, in the volatile memory 2114, in the non-volatile memory 2116, and/or on a removable tangible computer readable storage medium such as a CD or DVD. - While several graphical user interfaces are provided as example interfaces for obtaining user input, any other type of user interface and/or control may be provided (e.g., a command line interface, a text based interface, a slider, a text box, etc.). Additionally or alternatively, any of the methods and apparatus described herein may be accessed programmatically (e.g., using an API of the cloud manager 138 (e.g., a vCAC API)) by another program or device.
- Although certain example methods, apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent.
Claims (21)
1. A method comprising:
determining that an execution manager that has requested a first workflow for execution is tagged with a skill;
selecting, from a queue, a virtual machine workflow that is tagged with the skill and that matches the requested first workflow; and
transmitting the virtual machine workflow to the execution manager for execution.
2. A method as defined in claim 1 , wherein the skill identifies a geographical area and the execution manager is located within the geographical area.
3. A method as defined in claim 1 , wherein the skill is a computing hardware capability.
4. A method as defined in claim 1 , further comprising tagging the virtual machine workflow with the skill by storing an association of the skill and the virtual machine workflow in a database.
5. A method as defined in claim 1 , wherein the queue includes a plurality of virtual machine workflows including the virtual machine workflow.
6. A method as defined in claim 5 , further comprising transmitting a list of the workflows in the queue that have been tagged with the skill.
7. A method as defined in claim 5 , further comprising transmitting a list of the workflows in the queue, wherein the list of workflows is sorted to give priority to the workflows that include the skill tagged to the execution manager.
8. An apparatus comprising:
a user interface to receive a request for a workflow from an execution manager; and
a queue manager to:
determine that the execution manager is tagged with a skill; and
in response to determining that the execution manager is tagged with the skill, transmit a virtual machine workflow tagged with the skill to the execution manager for execution.
9. An apparatus as defined in claim 8 , wherein the skill identifies a geographical area and the execution manager is located within the geographical area.
10. An apparatus as defined in claim 8 , wherein the skill is a computing hardware capability.
11. An apparatus as defined in claim 8 , further comprising a workflow manager to tag the virtual machine workflow with the skill by storing an association of the skill and the virtual machine workflow in a database.
12. An apparatus as defined in claim 8 , wherein the queue includes a plurality of virtual machine workflows including the virtual machine workflow.
13. An apparatus as defined in claim 8 , wherein the queue manager is to transmit a list of workflows that are tagged with the skill to the execution manager in response to a request.
14. An apparatus as defined in claim 8 , wherein the queue manager is to transmit a list of workflows, including the virtual machine workflow, the workflows in the list being sorted to give priority to the workflows that include the skill tagged to the execution manager.
15. A tangible computer readable storage medium including instructions that, when executed, cause a machine to at least:
determine that an execution manager that has requested a first workflow for execution is tagged with a skill;
select, from a queue, a virtual machine workflow that is tagged with the skill and that matches the requested first workflow; and
transmit the virtual machine workflow to the execution manager for execution.
16. A tangible computer readable storage medium as defined in claim 15 , wherein the skill identifies a geographical area and the execution manager is located within the geographical area.
17. A tangible computer readable storage medium as defined in claim 15 , wherein the skill is a computing hardware capability.
18. A tangible computer readable storage medium as defined in claim 15 , wherein the instructions, when executed, cause the machine to tag the virtual machine workflow with the skill by storing an association of the skill and the virtual machine workflow in a database.
19. A tangible computer readable storage medium as defined in claim 15 , wherein the queue includes a plurality of virtual machine workflows including the virtual machine workflow.
20. A tangible computer readable storage medium as defined in claim 19 , wherein the instructions, when executed, cause the machine to transmit a list of the workflows in the queue that have been tagged with the skill.
21. A tangible computer readable storage medium as defined in claim 19 , wherein the instructions, when executed, cause the machine to transmit a list of the workflows in the queue, wherein the list of workflows is sorted to give priority to the workflows that include the skill tagged to the execution manager.
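The skill-based dispatch that the claims recite — selecting from a queue a workflow tagged with the same skill as the requesting execution manager, and optionally returning a list sorted to prioritize skill matches — can be sketched as follows. This is an illustrative sketch only; the class and function names are hypothetical and are not the patent's implementation.

```python
# Hypothetical sketch of skill-tagged workflow selection (cf. claims 1 and 7).
from dataclasses import dataclass, field

@dataclass
class Workflow:
    name: str
    skills: set = field(default_factory=set)  # skills tagged to this workflow

def select_workflow(queue: list, manager_skill: str):
    """Dequeue the first workflow tagged with the execution manager's skill."""
    for wf in queue:
        if manager_skill in wf.skills:
            queue.remove(wf)  # removed from the queue once transmitted
            return wf
    return None  # no queued workflow matches the manager's skill

def list_workflows(queue: list, manager_skill: str) -> list:
    """Return the queue sorted so that skill-matching workflows come first."""
    # False (0) sorts before True (1), so matches lead the list.
    return sorted(queue, key=lambda wf: manager_skill not in wf.skills)
```

A skill here might identify, for example, a geographical area or a computing hardware capability (claims 2 and 3), so a workflow requiring GPU hardware is only handed to an execution manager tagged with that capability.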
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/105,069 US20140181817A1 (en) | 2012-12-12 | 2013-12-12 | Methods and apparatus to manage execution of virtual machine workflows |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261736422P | 2012-12-12 | 2012-12-12 | |
US201361828613P | 2013-05-29 | 2013-05-29 | |
US14/105,069 US20140181817A1 (en) | 2012-12-12 | 2013-12-12 | Methods and apparatus to manage execution of virtual machine workflows |
Publications (1)
Publication Number | Publication Date |
---|---|
US20140181817A1 true US20140181817A1 (en) | 2014-06-26 |
Family
ID=50882499
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/105,066 Active 2035-08-09 US9851989B2 (en) | 2012-12-12 | 2013-12-12 | Methods and apparatus to manage virtual machines |
US14/105,069 Abandoned US20140181817A1 (en) | 2012-12-12 | 2013-12-12 | Methods and apparatus to manage execution of virtual machine workflows |
US14/105,072 Active 2034-08-15 US9529613B2 (en) | 2012-12-12 | 2013-12-12 | Methods and apparatus to reclaim resources in virtual computing environments |
Country Status (1)
Country | Link |
---|---|
US (3) | US9851989B2 (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150271331A1 (en) * | 2014-03-20 | 2015-09-24 | Genesys Telecommunications Laboratories, Inc. | Resource sharing in a peer-to-peer network of contact center nodes |
WO2016007679A1 (en) * | 2014-07-08 | 2016-01-14 | Pneuron Corp. | Virtualized execution across distributed nodes |
US10067758B1 (en) * | 2016-06-21 | 2018-09-04 | Jpmorgan Chase Bank, N.A. | Systems and methods for improved workflow processing |
US10075442B2 (en) | 2015-06-30 | 2018-09-11 | Vmware, Inc. | Methods and apparatus to grant access to cloud computing resources |
US10250539B2 (en) | 2015-08-04 | 2019-04-02 | Vmware, Inc. | Methods and apparatus to manage message delivery in enterprise network environments |
US10257143B2 (en) | 2015-06-30 | 2019-04-09 | Vmware, Inc. | Methods and apparatus to generate knowledge base articles |
US10644934B1 (en) | 2016-06-24 | 2020-05-05 | Jpmorgan Chase Bank, N.A. | Systems and methods for controlling message flow throughout a distributed architecture |
US10841268B2 (en) | 2015-08-04 | 2020-11-17 | Vmware, Inc. | Methods and apparatus to generate virtual war rooms via social media in enterprise network environments |
US10938893B2 (en) | 2017-02-15 | 2021-03-02 | Blue Prism Limited | System for optimizing distribution of processing an automated process |
CN112433702A (en) * | 2020-12-19 | 2021-03-02 | 合肥汉腾信息技术有限公司 | Lightweight process design system and method |
US10951656B2 (en) | 2017-08-16 | 2021-03-16 | Nicira, Inc. | Methods, apparatus and systems to use artificial intelligence to define encryption and security policies in a software defined data center |
CN112732406A (en) * | 2021-01-12 | 2021-04-30 | 华云数据控股集团有限公司 | Cloud platform virtual machine recovery method and computer equipment |
US11675620B2 (en) | 2016-12-09 | 2023-06-13 | Vmware, Inc. | Methods and apparatus to automate deployments of software defined data centers based on automation plan and user-provided parameter values |
US11687545B2 (en) | 2015-06-30 | 2023-06-27 | Vmware, Inc. | Conversation context profiles for use with queries submitted using social media |
Families Citing this family (106)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9189419B2 (en) | 2011-04-14 | 2015-11-17 | Vmware, Inc. | Detecting and suppressing redundant input-output operations |
CN104995615B (en) * | 2012-12-27 | 2018-03-30 | 英特尔公司 | The reservation and execution mirror image write-in of local computing device |
US9235322B1 (en) * | 2013-03-09 | 2016-01-12 | Ca, Inc. | Systems, methods and computer program products for a cloud application editor |
US9092451B1 (en) * | 2013-03-14 | 2015-07-28 | Emc Corporation | Genomic application data storage |
US9201736B1 (en) * | 2013-09-30 | 2015-12-01 | Emc Corporation | Methods and apparatus for recovery of complex assets in distributed information processing systems |
US9083653B2 (en) * | 2013-10-21 | 2015-07-14 | Hewlett-Packard Development Company, L.P. | Automated cloud set up |
WO2015065382A1 (en) * | 2013-10-30 | 2015-05-07 | Hewlett-Packard Development Company, L.P. | Instantiating a topology-based service using a blueprint as input |
US10212051B2 (en) | 2013-10-30 | 2019-02-19 | Hewlett Packard Enterprise Development Lp | Stitching an application model to an infrastructure template |
WO2015065353A1 (en) | 2013-10-30 | 2015-05-07 | Hewlett-Packard Development Company, L.P. | Managing the lifecycle of a cloud service modeled as topology decorated by a number of policies |
US10230568B2 (en) | 2013-10-30 | 2019-03-12 | Hewlett Packard Enterprise Development Lp | Monitoring a cloud service modeled as a topology |
US10177988B2 (en) | 2013-10-30 | 2019-01-08 | Hewlett Packard Enterprise Development Lp | Topology remediation |
WO2015065374A1 (en) | 2013-10-30 | 2015-05-07 | Hewlett-Packard Development Company, L.P. | Management of the lifecycle of a cloud service modeled as a topology |
US10567231B2 (en) | 2013-10-30 | 2020-02-18 | Hewlett Packard Enterprise Development Lp | Execution of a topology |
US9448827B1 (en) * | 2013-12-13 | 2016-09-20 | Amazon Technologies, Inc. | Stub domain for request servicing |
US10027569B1 (en) * | 2014-08-07 | 2018-07-17 | Amdocs Development Limited | System, method, and computer program for testing virtual services |
US10063453B1 (en) | 2014-08-07 | 2018-08-28 | Amdocs Development Limited | System, method, and computer program for tag based testing of virtual services |
US10606718B1 (en) | 2013-12-19 | 2020-03-31 | Amdocs Development Limited | System, method, and computer program for managing fault recovery in network function virtualization (Nfv) based networks |
US10031767B2 (en) * | 2014-02-25 | 2018-07-24 | Dynavisor, Inc. | Dynamic information virtualization |
KR101709121B1 (en) * | 2014-04-09 | 2017-02-22 | 한국전자통신연구원 | Method and system for driving virtual machine |
US9513939B2 (en) | 2014-05-19 | 2016-12-06 | International Business Machines Corporation | Agile VM load balancing through micro-checkpointing and multi-architecture emulation |
US10430219B2 (en) | 2014-06-06 | 2019-10-01 | Yokogawa Electric Corporation | Configuring virtual machines in a cloud computing platform |
US9645805B2 (en) * | 2014-06-26 | 2017-05-09 | Vmware, Inc. | Application blueprints based on service templates to deploy applications in different cloud environments |
US10979279B2 (en) * | 2014-07-03 | 2021-04-13 | International Business Machines Corporation | Clock synchronization in cloud computing |
US20160019317A1 (en) * | 2014-07-16 | 2016-01-21 | Commvault Systems, Inc. | Volume or virtual machine level backup and generating placeholders for virtual machine files |
US9594592B2 (en) * | 2015-01-12 | 2017-03-14 | International Business Machines Corporation | Dynamic sharing of unused bandwidth capacity of virtualized input/output adapters |
US9575795B2 (en) * | 2015-01-26 | 2017-02-21 | Ca, Inc. | Reverting a virtual resource to its base configuration using the snapshot image based on frequency the virtual resource is requested |
US9940150B2 (en) * | 2015-02-27 | 2018-04-10 | International Business Machines Corporation | Policy based virtual resource allocation and allocation adjustment |
US9998978B2 (en) * | 2015-04-16 | 2018-06-12 | Visa International Service Association | Systems and methods for processing dormant virtual access devices |
US10038640B2 (en) | 2015-04-30 | 2018-07-31 | Amazon Technologies, Inc. | Managing state for updates to load balancers of an auto scaling group |
US10412020B2 (en) | 2015-04-30 | 2019-09-10 | Amazon Technologies, Inc. | Background processes in update load balancers of an auto scaling group |
US10341426B2 (en) * | 2015-04-30 | 2019-07-02 | Amazon Technologies, Inc. | Managing load balancers associated with auto-scaling groups |
US9804880B2 (en) * | 2015-06-16 | 2017-10-31 | Vmware, Inc. | Reservation for a multi-machine application |
US10713616B2 (en) | 2015-06-22 | 2020-07-14 | Southwire Company, Llc | Determining a remaining amount of material in a material package |
WO2016209827A1 (en) | 2015-06-22 | 2016-12-29 | Southwire Company, Llc | Determining a remaining amount of material in a material package |
US9875130B2 (en) | 2015-07-13 | 2018-01-23 | International Business Machines Corporation | Customizing mirror virtual machine(s) |
US10397324B2 (en) | 2015-07-22 | 2019-08-27 | Netapp, Inc. | Methods and systems for managing a resource in a networked storage environment |
US9912565B2 (en) | 2015-07-22 | 2018-03-06 | Netapp, Inc. | Methods and systems for determining performance capacity of a resource of a networked storage environment |
US10268493B2 (en) | 2015-09-22 | 2019-04-23 | Amazon Technologies, Inc. | Connection-based resource management for virtual desktop instances |
US10652313B2 (en) | 2015-11-08 | 2020-05-12 | Vmware, Inc. | Deploying an application in a hybrid cloud computing environment |
US10158653B1 (en) | 2015-12-04 | 2018-12-18 | Nautilus Data Technologies, Inc. | Artificial intelligence with cyber security |
US10037221B2 (en) * | 2015-12-28 | 2018-07-31 | Amazon Technologies, Inc. | Management of virtual desktop instance pools |
US10250684B2 (en) | 2016-01-12 | 2019-04-02 | Netapp, Inc. | Methods and systems for determining performance capacity of a resource of a networked storage environment |
US10031822B2 (en) | 2016-01-29 | 2018-07-24 | Netapp, Inc. | Techniques for estimating ability of nodes to support high availability functionality in a storage cluster system |
US10782991B2 (en) | 2016-02-26 | 2020-09-22 | Red Hat, Inc. | Customizable virtual machine retirement in a management platform |
US10419283B1 (en) * | 2016-03-01 | 2019-09-17 | VCE IP Holding Company LLC | Methods, systems, and computer readable mediums for template-based provisioning of distributed computing systems |
US10048896B2 (en) | 2016-03-16 | 2018-08-14 | Netapp, Inc. | Methods and systems for determining performance capacity of a resource of a networked storage environment |
US10210023B2 (en) * | 2016-04-05 | 2019-02-19 | Netapp, Inc. | Methods and systems for managing service level objectives in a networked storage environment |
US10817348B2 (en) | 2016-04-05 | 2020-10-27 | Netapp, Inc. | Methods and systems for managing service level objectives in a networked storage environment |
US10469582B2 (en) | 2016-04-13 | 2019-11-05 | Netapp, Inc. | Methods and systems for managing provisioning requests in a networked storage environment |
US10795706B2 (en) * | 2016-06-06 | 2020-10-06 | Vmware, Inc. | Multitier application blueprint representation in open virtualization format package |
US10505830B2 (en) * | 2016-08-11 | 2019-12-10 | Micro Focus Llc | Container monitoring configuration deployment |
US11223537B1 (en) * | 2016-08-17 | 2022-01-11 | Veritas Technologies Llc | Executing custom scripts from the host during disaster recovery |
CN107870800A (en) * | 2016-09-23 | 2018-04-03 | 超威半导体(上海)有限公司 | Virtual machine activity detects |
KR20190052033A (en) * | 2016-10-03 | 2019-05-15 | 스트라투스 디지털 시스템즈 | Transient transaction server |
US20190114630A1 (en) | 2017-09-29 | 2019-04-18 | Stratus Digital Systems | Transient Transaction Server DNS Strategy |
US10805232B2 (en) * | 2016-11-22 | 2020-10-13 | Vmware, Inc. | Content driven public cloud resource partitioning and governance |
US10581757B2 (en) * | 2016-11-22 | 2020-03-03 | Vmware, Inc. | Pooling public cloud resources from different subscriptions using reservations |
US10362096B2 (en) | 2016-11-23 | 2019-07-23 | Vmware, Inc. | Lifecycle management of custom resources in a cloud computing environment |
US10558449B2 (en) | 2016-12-06 | 2020-02-11 | Vmware, Inc. | Distribution and execution of instructions in a distributed computing environment |
US10235296B2 (en) | 2016-12-06 | 2019-03-19 | Vmware, Inc. | Distribution and execution of instructions in a distributed computing environment |
US10152356B2 (en) | 2016-12-07 | 2018-12-11 | Vmware, Inc. | Methods and apparatus for limiting data transferred over the network by interpreting part of the data as a metaproperty |
US11481239B2 (en) * | 2016-12-07 | 2022-10-25 | Vmware, Inc. | Apparatus and methods to incorporate external system to approve deployment provisioning |
US10552180B2 (en) | 2016-12-07 | 2020-02-04 | Vmware, Inc. | Methods, systems, and apparatus to trigger a workflow in a cloud computing environment |
US10353752B2 (en) | 2016-12-07 | 2019-07-16 | Vmware, Inc. | Methods and apparatus for event-based extensibility of system logic |
US11231910B2 (en) | 2016-12-14 | 2022-01-25 | Vmware, Inc. | Topological lifecycle-blueprint interface for modifying information-technology application |
US11231912B2 (en) * | 2016-12-14 | 2022-01-25 | Vmware, Inc. | Post-deployment modification of information-technology application using lifecycle blueprint |
US10664350B2 (en) | 2016-12-14 | 2020-05-26 | Vmware, Inc. | Failure handling for lifecycle blueprint workflows |
US10505791B2 (en) * | 2016-12-16 | 2019-12-10 | Futurewei Technologies, Inc. | System and method to handle events using historical data in serverless systems |
US10909136B1 (en) | 2017-02-08 | 2021-02-02 | Veritas Technologies Llc | Systems and methods for automatically linking data analytics to storage |
US10587459B2 (en) * | 2017-02-13 | 2020-03-10 | Citrix Systems, Inc. | Computer system providing cloud-based health monitoring features and related methods |
US10685033B1 (en) | 2017-02-14 | 2020-06-16 | Veritas Technologies Llc | Systems and methods for building an extract, transform, load pipeline |
US10360053B1 (en) * | 2017-02-14 | 2019-07-23 | Veritas Technologies Llc | Systems and methods for completing sets of computing tasks |
US10216455B1 (en) | 2017-02-14 | 2019-02-26 | Veritas Technologies Llc | Systems and methods for performing storage location virtualization |
US10394593B2 (en) * | 2017-03-13 | 2019-08-27 | International Business Machines Corporation | Nondisruptive updates in a networked computing environment |
US10606646B1 (en) | 2017-03-13 | 2020-03-31 | Veritas Technologies Llc | Systems and methods for creating a data volume from within a software container and initializing the data volume with data |
US10540191B2 (en) | 2017-03-21 | 2020-01-21 | Veritas Technologies Llc | Systems and methods for using dynamic templates to create application containers |
US10437625B2 (en) * | 2017-06-16 | 2019-10-08 | Microsoft Technology Licensing, Llc | Evaluating configuration requests in a virtual machine |
US10656983B2 (en) | 2017-07-20 | 2020-05-19 | Nicira, Inc. | Methods and apparatus to generate a shadow setup based on a cloud environment and upgrade the shadow setup to identify upgrade-related errors |
US10643002B1 (en) * | 2017-09-28 | 2020-05-05 | Amazon Technologies, Inc. | Provision and execution of customized security assessments of resources in a virtual computing environment |
US10706155B1 (en) | 2017-09-28 | 2020-07-07 | Amazon Technologies, Inc. | Provision and execution of customized security assessments of resources in a computing environment |
US10684881B2 (en) * | 2017-11-07 | 2020-06-16 | International Business Machines Corporation | Batch processing of computing elements to conditionally delete virtual machine(s) |
US10740132B2 (en) | 2018-01-30 | 2020-08-11 | Veritas Technologies Llc | Systems and methods for updating containers |
US11048539B2 (en) | 2018-02-27 | 2021-06-29 | Hewlett Packard Enterprise Development Lp | Transitioning virtual machines to an inactive state |
US10841236B1 (en) * | 2018-03-30 | 2020-11-17 | Electronic Arts Inc. | Distributed computer task management of interrelated network computing tasks |
US11042393B2 (en) * | 2018-07-25 | 2021-06-22 | Vmware, Inc. | Priming virtual machines in advance of user login in virtual desktop environments |
US10534759B1 (en) | 2018-08-23 | 2020-01-14 | Cohesity, Inc. | Incremental virtual machine metadata extraction |
CN109189581B (en) * | 2018-09-20 | 2021-08-31 | 郑州云海信息技术有限公司 | Job scheduling method and device |
CN109508226B (en) * | 2018-11-20 | 2021-10-29 | 郑州云海信息技术有限公司 | Openstack-based virtual machine life cycle management method |
CN112840318A (en) | 2018-12-03 | 2021-05-25 | 易享信息技术有限公司 | Automated operation management for computer systems |
US10810035B2 (en) | 2019-02-27 | 2020-10-20 | Cohesity, Inc. | Deploying a cloud instance of a user virtual machine |
US11573861B2 (en) | 2019-05-10 | 2023-02-07 | Cohesity, Inc. | Continuous data protection using a write filter |
US12001866B2 (en) * | 2019-07-01 | 2024-06-04 | Microsoft Technology Licensing, Llc | Harvest virtual machine for utilizing cloud-computing resources |
US11263037B2 (en) | 2019-08-15 | 2022-03-01 | International Business Machines Corporation | Virtual machine deployment |
CN110489241A (en) * | 2019-08-26 | 2019-11-22 | 北京首都在线科技股份有限公司 | Recovery method as resource, device, equipment and computer readable storage medium |
US11698814B2 (en) * | 2019-08-28 | 2023-07-11 | Vega Cloud, Inc. | Cloud resources management |
US11250136B2 (en) | 2019-10-22 | 2022-02-15 | Cohesity, Inc. | Scanning a backup for vulnerabilities |
US11397649B2 (en) | 2019-10-22 | 2022-07-26 | Cohesity, Inc. | Generating standby cloud versions of a virtual machine |
US11487549B2 (en) | 2019-12-11 | 2022-11-01 | Cohesity, Inc. | Virtual machine boot data prediction |
US11593165B2 (en) | 2020-06-23 | 2023-02-28 | Red Hat, Inc. | Resource-usage notification framework in a distributed computing environment |
US11573837B2 (en) | 2020-07-27 | 2023-02-07 | International Business Machines Corporation | Service retention in a computing environment |
US11914480B2 (en) | 2020-12-08 | 2024-02-27 | Cohesity, Inc. | Standbys for continuous data protection-enabled objects |
US11768745B2 (en) | 2020-12-08 | 2023-09-26 | Cohesity, Inc. | Automatically implementing a specification of a data protection intent |
US11614954B2 (en) * | 2020-12-08 | 2023-03-28 | Cohesity, Inc. | Graphical user interface to specify an intent-based data management plan |
US11481287B2 (en) | 2021-02-22 | 2022-10-25 | Cohesity, Inc. | Using a stream of source system storage changes to update a continuous data protection-enabled hot standby |
US11983572B2 (en) | 2021-06-03 | 2024-05-14 | Hewlett Packard Enterprise Development Lp | Accessing purged workloads |
US20230266979A1 (en) * | 2022-02-23 | 2023-08-24 | Workspot, Inc. | Method and system for maximizing resource utilization and user experience for a pool of virtual desktops |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6711616B1 (en) * | 2000-05-01 | 2004-03-23 | Xilinx, Inc. | Client-server task distribution system and method |
US20060075079A1 (en) * | 2004-10-06 | 2006-04-06 | Digipede Technologies, Llc | Distributed computing system installation |
US7225220B2 (en) * | 2000-07-21 | 2007-05-29 | Hewlett-Packard Development Company, Lp | On-line selection of service providers in distributed provision of services on demand |
US20090276771A1 (en) * | 2005-09-15 | 2009-11-05 | 3Tera, Inc. | Globally Distributed Utility Computing Cloud |
US7996458B2 (en) * | 2004-01-28 | 2011-08-09 | Apple Inc. | Assigning tasks in a distributed system |
US20120324070A1 (en) * | 2011-06-14 | 2012-12-20 | International Business Machines Corporation | Distributed cloud placement software |
US20130145367A1 (en) * | 2011-09-27 | 2013-06-06 | Pneuron Corp. | Virtual machine (vm) realm integration and management |
US20130232498A1 (en) * | 2012-03-02 | 2013-09-05 | Vmware, Inc. | System to generate a deployment plan for a cloud infrastructure according to logical, multi-tier application blueprint |
US20130326510A1 (en) * | 2012-05-31 | 2013-12-05 | International Business Machines Corporation | Virtualization-based environments for problem resolution |
Family Cites Families (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6763384B1 (en) | 2000-07-10 | 2004-07-13 | International Business Machines Corporation | Event-triggered notification over a network |
US20050203921A1 (en) | 2004-03-11 | 2005-09-15 | Newman Aaron C. | System for protecting database applications from unauthorized activity |
US7257811B2 (en) | 2004-05-11 | 2007-08-14 | International Business Machines Corporation | System, method and program to migrate a virtual machine |
US7533170B2 (en) * | 2005-01-06 | 2009-05-12 | International Business Machines Corporation | Coordinating the monitoring, management, and prediction of unintended changes within a grid environment |
US7805419B2 (en) | 2005-07-11 | 2010-09-28 | Application Security, Inc. | System for tracking and analyzing the integrity of an application |
US7774446B2 (en) | 2005-12-30 | 2010-08-10 | Microsoft Corporation | Discovering, defining, and implementing computer application topologies |
US7461223B2 (en) * | 2006-05-29 | 2008-12-02 | Microsoft Corporation | Retaining shadow copy data during replication |
US9015703B2 (en) | 2006-10-17 | 2015-04-21 | Manageiq, Inc. | Enforcement of compliance policies in managed virtual systems |
US9038062B2 (en) | 2006-10-17 | 2015-05-19 | Manageiq, Inc. | Registering and accessing virtual systems for use in a managed system |
US8171485B2 (en) * | 2007-03-26 | 2012-05-01 | Credit Suisse Securities (Europe) Limited | Method and system for managing virtual and real machines |
WO2009004757A1 (en) | 2007-07-05 | 2009-01-08 | Panasonic Corporation | Data processing device, data processing method, data processing program, recording medium, and integrated circuit |
US8762986B2 (en) * | 2008-02-20 | 2014-06-24 | Sap Ag | Advanced packaging and deployment of virtual appliances |
WO2009104400A1 (en) | 2008-02-22 | 2009-08-27 | 日本電気株式会社 | Information processing device, information processing system, setting program transmitting method, and server setting program |
US20090217263A1 (en) * | 2008-02-25 | 2009-08-27 | Alexander Gebhart | Virtual appliance factory |
US8266612B2 (en) * | 2008-10-03 | 2012-09-11 | Microsoft Corporation | Dynamic, customizable and configurable notification mechanism |
WO2010102084A2 (en) | 2009-03-05 | 2010-09-10 | Coach Wei | System and method for performance acceleration, data protection, disaster recovery and on-demand scaling of computer applications |
JP5218985B2 (en) * | 2009-05-25 | 2013-06-26 | Hitachi, Ltd. | Memory management method, computer system, and program |
US20100332629A1 (en) * | 2009-06-04 | 2010-12-30 | Lauren Ann Cotugno | Secure custom application cloud computing architecture |
US8914511B1 (en) | 2009-06-26 | 2014-12-16 | VMTurbo, Inc. | Managing resources in virtualization systems |
CA2674402C (en) * | 2009-07-31 | 2016-07-19 | Ibm Canada Limited - Ibm Canada Limitee | Optimizing on demand allocation of virtual machines using a stateless preallocation pool |
US8479098B2 (en) | 2009-08-12 | 2013-07-02 | Ebay Inc. | Reservation of resources and deployment of applications using an integrated development environment |
US8789041B2 (en) | 2009-12-18 | 2014-07-22 | Verizon Patent And Licensing Inc. | Method and system for bulk automated virtual machine deployment |
US8732310B2 (en) * | 2010-04-22 | 2014-05-20 | International Business Machines Corporation | Policy-driven capacity management in resource provisioning environments |
US8661132B2 (en) * | 2010-05-28 | 2014-02-25 | International Business Machines Corporation | Enabling service virtualization in a cloud |
US8775625B2 (en) | 2010-06-16 | 2014-07-08 | Juniper Networks, Inc. | Virtual machine mobility in data centers |
US8732290B2 (en) | 2010-10-05 | 2014-05-20 | Citrix Systems, Inc. | Virtual workplace software based on organization characteristics |
US8473584B2 (en) * | 2010-12-20 | 2013-06-25 | Sap Ag | Revocable indication of session termination |
CN102594652B (en) | 2011-01-13 | 2015-04-08 | 华为技术有限公司 | Migration method of virtual machine, switch and virtual machine system |
US9280458B2 (en) | 2011-05-12 | 2016-03-08 | Citrix Systems, Inc. | Reclaiming memory pages in a computing system hosting a set of virtual machines |
US9003019B1 (en) | 2011-09-30 | 2015-04-07 | Emc Corporation | Methods and systems for utilization tracking and notification of cloud resources |
US8695060B2 (en) | 2011-10-10 | 2014-04-08 | Openpeak Inc. | System and method for creating secure applications |
US9311159B2 (en) * | 2011-10-31 | 2016-04-12 | At&T Intellectual Property I, L.P. | Systems, methods, and articles of manufacture to provide cloud resource orchestration |
US8881144B1 (en) * | 2011-11-22 | 2014-11-04 | Symantec Corporation | Systems and methods for reclaiming storage space from virtual machine disk images |
US9154556B1 (en) | 2011-12-27 | 2015-10-06 | Emc Corporation | Managing access to a limited number of computerized sessions |
US20130191516A1 (en) * | 2012-01-19 | 2013-07-25 | Sungard Availability Services Lp | Automated configuration error detection and prevention |
US20130227710A1 (en) | 2012-02-27 | 2013-08-29 | Computer Associates Think, Inc. | System and method for securing leased images in a cloud environment |
US8997093B2 (en) * | 2012-04-17 | 2015-03-31 | Sap Se | Application installation management by selectively reuse or terminate virtual machines based on a process status |
US9003406B1 (en) | 2012-06-29 | 2015-04-07 | Emc Corporation | Environment-driven application deployment in a virtual infrastructure |
US9135040B2 (en) * | 2012-08-03 | 2015-09-15 | International Business Machines Corporation | Selecting provisioning targets for new virtual machine instances |
US8825550B2 (en) * | 2012-08-23 | 2014-09-02 | Amazon Technologies, Inc. | Scaling a virtual machine instance |
US9384056B2 (en) * | 2012-09-11 | 2016-07-05 | Red Hat Israel, Ltd. | Virtual resource allocation and resource and consumption management |
US9104463B2 (en) * | 2012-11-07 | 2015-08-11 | International Business Machines Corporation | Automated and optimal deactivation of service to enable effective resource reusability |
- 2013-12-12 US US14/105,066 patent/US9851989B2/en active Active
- 2013-12-12 US US14/105,069 patent/US20140181817A1/en not_active Abandoned
- 2013-12-12 US US14/105,072 patent/US9529613B2/en active Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6711616B1 (en) * | 2000-05-01 | 2004-03-23 | Xilinx, Inc. | Client-server task distribution system and method |
US7225220B2 (en) * | 2000-07-21 | 2007-05-29 | Hewlett-Packard Development Company, Lp | On-line selection of service providers in distributed provision of services on demand |
US7996458B2 (en) * | 2004-01-28 | 2011-08-09 | Apple Inc. | Assigning tasks in a distributed system |
US20060075079A1 (en) * | 2004-10-06 | 2006-04-06 | Digipede Technologies, Llc | Distributed computing system installation |
US20090276771A1 (en) * | 2005-09-15 | 2009-11-05 | 3Tera, Inc. | Globally Distributed Utility Computing Cloud |
US20120324070A1 (en) * | 2011-06-14 | 2012-12-20 | International Business Machines Corporation | Distributed cloud placement software |
US20130145367A1 (en) * | 2011-09-27 | 2013-06-06 | Pneuron Corp. | Virtual machine (vm) realm integration and management |
US20130232498A1 (en) * | 2012-03-02 | 2013-09-05 | Vmware, Inc. | System to generate a deployment plan for a cloud infrastructure according to logical, multi-tier application blueprint |
US20130326510A1 (en) * | 2012-05-31 | 2013-12-05 | International Business Machines Corporation | Virtualization-based environments for problem resolution |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9774739B2 (en) * | 2014-03-20 | 2017-09-26 | Genesys Telecommunications Laboratories, Inc. | Resource sharing in a peer-to-peer network of contact center nodes |
US10567587B2 (en) | 2014-03-20 | 2020-02-18 | Genesys Telecommunications Laboratories, Inc. | Resource sharing in a peer-to-peer network of contact center nodes |
US20150271331A1 (en) * | 2014-03-20 | 2015-09-24 | Genesys Telecommunications Laboratories, Inc. | Resource sharing in a peer-to-peer network of contact center nodes |
WO2016007679A1 (en) * | 2014-07-08 | 2016-01-14 | Pneuron Corp. | Virtualized execution across distributed nodes |
US10747573B2 (en) | 2014-07-08 | 2020-08-18 | UST Global (Singapore) Pte. Ltd. | Virtualized execution across distributed nodes |
US11687545B2 (en) | 2015-06-30 | 2023-06-27 | Vmware, Inc. | Conversation context profiles for use with queries submitted using social media |
US10075442B2 (en) | 2015-06-30 | 2018-09-11 | Vmware, Inc. | Methods and apparatus to grant access to cloud computing resources |
US10257143B2 (en) | 2015-06-30 | 2019-04-09 | Vmware, Inc. | Methods and apparatus to generate knowledge base articles |
US10250539B2 (en) | 2015-08-04 | 2019-04-02 | Vmware, Inc. | Methods and apparatus to manage message delivery in enterprise network environments |
US10841268B2 (en) | 2015-08-04 | 2020-11-17 | Vmware, Inc. | Methods and apparatus to generate virtual war rooms via social media in enterprise network environments |
US10067758B1 (en) * | 2016-06-21 | 2018-09-04 | Jpmorgan Chase Bank, N.A. | Systems and methods for improved workflow processing |
US10644934B1 (en) | 2016-06-24 | 2020-05-05 | Jpmorgan Chase Bank, N.A. | Systems and methods for controlling message flow throughout a distributed architecture |
US11675620B2 (en) | 2016-12-09 | 2023-06-13 | Vmware, Inc. | Methods and apparatus to automate deployments of software defined data centers based on automation plan and user-provided parameter values |
US11290528B2 (en) * | 2017-02-15 | 2022-03-29 | Blue Prism Limited | System for optimizing distribution of processing an automated process |
US10938893B2 (en) | 2017-02-15 | 2021-03-02 | Blue Prism Limited | System for optimizing distribution of processing an automated process |
US10951656B2 (en) | 2017-08-16 | 2021-03-16 | Nicira, Inc. | Methods, apparatus and systems to use artificial intelligence to define encryption and security policies in a software defined data center |
CN112433702A (en) * | 2020-12-19 | 2021-03-02 | 合肥汉腾信息技术有限公司 | Lightweight process design system and method |
CN112732406A (en) * | 2021-01-12 | 2021-04-30 | 华云数据控股集团有限公司 | Cloud platform virtual machine recovery method and computer equipment |
Also Published As
Publication number | Publication date |
---|---|
US9529613B2 (en) | 2016-12-27 |
US20140181816A1 (en) | 2014-06-26 |
US9851989B2 (en) | 2017-12-26 |
US20140165060A1 (en) | 2014-06-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9529613B2 (en) | Methods and apparatus to reclaim resources in virtual computing environments | |
US11307890B2 (en) | Methods and apparatus to manage virtual machines | |
US11755343B2 (en) | Methods, systems and apparatus to trigger a workflow in a cloud computing environment | |
US11507432B2 (en) | Methods, systems and apparatus for client extensibility during provisioning of a composite blueprint | |
US20210111957A1 (en) | Methods, systems and apparatus to propagate node configuration changes to services in a distributed environment | |
US20180359162A1 (en) | Methods, systems, and apparatus to scale in and/or scale out resources managed by a cloud automation system | |
US11099909B2 (en) | Methods and apparatus for adaptive workflow adjustment during resource provisioning using meta-topics | |
US11263058B2 (en) | Methods and apparatus for limiting data transferred over the network by interpreting part of the data as a metaproperty | |
US11586430B2 (en) | Distribution and execution of instructions in a distributed computing environment | |
US20180157560A1 (en) | Methods and apparatus for transparent database switching using master-replica high availability setup in relational databases | |
US11750451B2 (en) | Batch manager for complex workflows | |
Ahn et al. | Mirra: Rule-based resource management for heterogeneous real-time applications running in cloud computing infrastructures |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: VMWARE INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MULLER, LESLIE;REUTOVA, VALENTINA;PESPISA, KEN;AND OTHERS;SIGNING DATES FROM 20140422 TO 20150901;REEL/FRAME:036570/0746 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |