WO2012141573A1 - Method and system for automatic deployment of grid compute nodes - Google Patents


Info

Publication number
WO2012141573A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual machine
image
new
virtual
lrms
Prior art date
Application number
PCT/MY2012/000080
Other languages
French (fr)
Inventor
Mohd Amril Nurman MOHD NAZIR
Mohd Bazli ABD KARIM
Mohd Sidek Salleh
Kwang Ming NG
Original Assignee
Mimos Berhad
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mimos Berhad filed Critical Mimos Berhad
Publication of WO2012141573A1 publication Critical patent/WO2012141573A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5077 Logical partitioning of resources; Management or configuration of virtualized resources


Abstract

A system and method for automatic creation and deployment of new grid compute nodes by deploying virtual machines over physical machines to serve long waiting, high priority, urgent, and/or communication-intensive jobs.

Description

METHOD AND SYSTEM FOR AUTOMATIC DEPLOYMENT OF GRID COMPUTE
NODES
TECHNICAL FIELD
The present invention relates to resource management in cluster and grid environments, and more particularly to a system and method for automatic creation and deployment of new grid compute nodes by deploying virtual machines over physical machines to serve long waiting, high priority, urgent, and/or communication-intensive jobs.
BACKGROUND ART
A computational grid enables the aggregation of multiple clusters for dynamic sharing, selection, and management of resources. The grid middleware provides protocols and functionalities to enable these dynamic resource-sharing mechanisms. Existing cluster management systems such as Condor, SGE, LSF, Torque, and PBS Pro are widely deployed in academic and commercial environments to manage the computational needs of sequential and parallel applications.
However, current grid and resource management systems have limitations. One of these limitations is long waiting time under competition for scarce resources. Arriving jobs that request resources tend to cause fragmentation among other jobs waiting in the queue, leading to long waiting times for most jobs. Hence, the user job is forced to wait in the queue until suitable resources are released by other running jobs that have completed their execution. Current grid and resource management systems cannot immediately accommodate urgent or high priority jobs, nor jobs submitted by high priority users. There is no built-in support for obtaining resources when user jobs need them during bursts or sudden spikes in demand. In most cases, at high load, it is often difficult to find physical resources that exactly match the resource specification requested by the user.
On systems where network connections do not resemble a flat all-to-all topology, resource placement may impact the performance of communication-intensive parallel jobs. If latencies and network bandwidth between any two or more compute nodes vary significantly, the node allocation policy should attempt to allocate the compute nodes of a given job as close to each other as possible to minimize the impact of bandwidth and latency differences. Currently there is no support for resolving these bandwidth and latency issues.
Patents US6353898 and US2010162259 disclose, respectively, resource management in a clustered computer system, and a virtualization-based resource management apparatus, method, and computing system. US6353898 is distinctive from the present invention in that its main focus is to provide a failure-proof system. To achieve this, failures and possible failures of node software, hardware, and interconnections are detected, and compensation is made to prevent failures, such as emergency communications and sharable resource allocation with minimal locking. Furthermore, virtualization technology is not involved. The steps involved are therefore distinct from those of the present invention.
US2010162259 is distinguished from the present invention in that it covers a resource management method comprising a plurality of virtual machines that monitor physical machines, in terms of the resources utilized by each physical machine and the time costs of the virtual machines, in order to perform resource reallocation and reclamation. However, the role and steps of the virtual machines are distinct from those of the present invention, which focuses on how to deploy compute nodes based on queued jobs rather than merely monitoring physical machines.
Effectively, there is a need for a mechanism to deploy resources on demand when they are needed most, meaning those resources must be made available immediately with very little advance notice. This requirement is essential for urgent and/or high priority jobs and for bandwidth- and latency-sensitive jobs.
DISCLOSURE OF THE INVENTION
The present invention aims to provide a system and method for automatic creation and deployment of new grid compute nodes by deploying virtual machines over physical machines to serve long waiting, high priority, urgent, and/or communication-intensive jobs. In a preferred embodiment of the present invention, a method for automatic deployment of grid compute nodes comprises the steps of: determining a list of queued jobs to serve from the queue of the LRMS; extracting information on the number of nodes, number of required CPUs, and minimum memory capacity requirements from all queued jobs; determining the quantity of virtual machines and the physical server nodes on which to deploy them; integrating all new virtual machine hostnames and IP addresses to be deployed in the centralized information controller; sending an updated hostname list from the centralized information controller to each existing grid compute node to notify all grid compute nodes of the existence of the new virtual machines; determining the virtual machine image to be deployed; preparing and configuring the virtual machines; recording the number of deployed virtual machines and their corresponding unique job identifiers in the log file; and assigning the queued jobs to grid compute nodes.
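By way of illustration only, the deployment sequence above can be summarized in a short Python sketch. The queue, controller, and repository interfaces (list_queued_jobs, select_physical_hosts, and so on) are hypothetical names, not part of the disclosure; this is a minimal outline of the claimed steps, assuming such interfaces exist.

```python
# Hypothetical sketch of the deployment pipeline; all interface names
# are assumptions, not part of the disclosed system.
from dataclasses import dataclass

@dataclass
class QueuedJob:
    job_id: str
    nodes: int          # number of nodes requested
    cpus: int           # number of required CPUs
    min_memory_mb: int  # minimum memory capacity requirement

def deploy_for_queued_jobs(lrms, controller, image_repo, log_file):
    jobs = lrms.list_queued_jobs()                          # step 418
    for job in jobs:
        # steps 419-420: extract requirements, pick physical hosts
        hosts = controller.select_physical_hosts(
            job.nodes, job.cpus, job.min_memory_mb)
        vms = [controller.register_vm(h) for h in hosts]    # step 421
        controller.push_hostname_list()                     # step 422
        image = image_repo.select_image(job)                # step 423
        for vm in vms:
            vm.prepare_and_configure(image)                 # step 424
        log_file.record(job.job_id, len(vms))               # step 425
        lrms.assign(job, vms)                               # step 426
```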
In another preferred embodiment of the present invention, the method for preparing and configuring the virtual machine images further comprises the steps of: preparing images and configuration files for the new virtual machines, mounting the virtual machine images to the physical server nodes, configuring host files for the virtual machine images, configuring the network interface for each virtual machine image, configuring the name server for each virtual machine image, sending updated hostname lists from the Centralized Information Controller to the new image, unmounting the virtual machine image from the physical server node, notifying the LRMS of the addition of the new virtual machine image, registering the new virtual machine image as a Grid Compute Node at the LRMS, and deploying the new Grid Compute Node.
In another preferred embodiment of the present invention, the configuration files include information on the MAC address, VM name, and number of virtual CPUs.
In another preferred embodiment of the present invention, preparing an image includes duplicating a pre-defined virtual image for the creation of new images.
In another preferred embodiment of the present invention, the new images are updated with the latest hostname list retrieved from the centralized information controller.
In another preferred embodiment of the present invention, a system for automatic deployment of grid compute nodes comprises a user interface, a virtual image repository, a local resource management system, and grid compute nodes.
The present invention consists of features and a combination of parts hereinafter fully described and illustrated in the accompanying drawings, it being understood that various changes in the details may be made without departing from the scope of the invention or sacrificing any of the advantages of the present invention.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
To further clarify various aspects of some embodiments of the present invention, a more particular description of the invention will be rendered by references to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail through the accompanying drawings, in which:
FIG.1 illustrates flow diagram of job submission to Local Resource Management System (LRMS) of a prior art.
FIG.2 illustrates flow of job execution based on priority on the Local Resource Management System (LRMS) of a prior art.
FIG.3 illustrates a prior art in which Job 4, Job 5 and Job 6 are queued by the LRMS due to insufficient resources.
FIG.4 illustrates the utilization of centralized information controller for synchronization of all network information.
FIG.5 illustrates the method of the present invention for serving jobs.
FIG.6 illustrates steps to configure a virtual machine image for deployment.
FIG. 7 illustrates steps to prepare pre-configured virtual machine images to support automatic deployment and connection with the LRMS server at runtime.
FIG. 8 illustrates preparing a pre-configured virtual machine image for a grid compute node.
DETAILED DESCRIPTION OF THE ACCOMPANYING DRAWINGS
Now referring to FIG. 1, the figure illustrates a flow diagram of job submission to the Local Resource Management System (LRMS) of one of the prior arts (100). The job is submitted to the LRMS (10), which determines whether sufficient nodes are available (11) for submission to the compute nodes (14). When there are no free compute nodes available to execute jobs (13), the jobs are queued until sufficient compute nodes are available, and the process ends (15).

Meanwhile, FIG. 2 describes the flow of job execution based on priority on the Local Resource Management System (LRMS). A job is submitted to the LRMS (210). The LRMS checks whether there are sufficient nodes available (211). If the priority of the job is low (113), the system waits until nodes are available (117). When high priority jobs are placed in the queue, running low priority jobs are suspended (114) and the queued jobs are released (115) and executed (116). While determining whether sufficient nodes are available (112), if there are sufficient nodes, the system directly schedules the job on the LRMS (118) and concludes the job (119).

Now referring to FIG. 3, the figure illustrates a prior art in which Job 4, Job 5 and Job 6 are queued by the LRMS (210) due to insufficient resources. Meanwhile, FIG. 3b illustrates the present invention through automatic deployment of grid compute nodes. The solution employs modern virtualization technology to configure and deploy virtual machines to accommodate and satisfy the requirements of queued and/or pending jobs. The solution presents steps and methods to configure and deploy virtual machines on physical server nodes registered in a grid system. New methods and a new system are disclosed for provisioning and deploying grid compute nodes and for managing jobs on a pool of them.

FIG. 4 shows that the present invention makes use of the centralized information controller (314) to synchronize the network information of each existing and new grid compute node. The centralized information controller (314) compiles all the jobs from the LRMS (313) prior to distribution to the physical servers (311, 312). Without the centralized information controller (314), it would not be possible for the grid compute nodes to recognize one another. The image repository (310) stores the images for the system.
According to FIG. 5, the method integrates virtualization technology to support the execution requirements of queued, pending, urgent, and/or high priority jobs. It provides a method and system allowing critical execution of queued and/or pending, urgent, and high priority jobs at times when the currently available resources lack the capability to match the requirements of the job.
The system starts by receiving a job submission directly from the user (410), from which the resource broker performs a match-making process (413) to determine the Local Resource Management System (LRMS) to which the job is assigned (414). If the job requirements can be met, the job is allocated to the LRMS (415) and will be scheduled immediately (417) when there are sufficient nodes (416). However, if the job requirements cannot be met exactly, the present invention provides a method to serve the job by determining a list of queued jobs to serve (418) from the queue of the LRMS. Thereafter, it extracts information on the number of nodes, number of required CPUs, and minimum memory capacity requirements from all queued jobs (419). For the list of current queued jobs, the system determines how many virtual machines to deploy and on which physical server nodes (420). It includes all new virtual machine hostnames and IP addresses to be deployed in the centralized information controller (421). Thereafter, the updated hostname list from the centralized information controller is sent to each existing grid compute node to notify all grid compute nodes of the existence of the new virtual machines (422). Later, the virtual machine image to be deployed is determined (423). The preparation of the virtual machine is shown in FIG. 6.
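Step 420 above leaves the placement policy open. A minimal first-fit sketch in Python follows, assuming per-server free CPU and memory counters; the capacity model and the first-fit policy are illustrative assumptions, not prescribed by the disclosure.

```python
# First-fit sketch of step 420: choosing how many and which physical
# server nodes host the new virtual machines. Illustrative only.
from dataclasses import dataclass

@dataclass
class PhysicalServer:
    name: str
    free_cpus: int
    free_memory_mb: int

def place_virtual_machines(job_nodes, cpus_per_node, min_memory_mb, servers):
    """Return one server name per requested node, or None if capacity is short."""
    placement = []
    for _ in range(job_nodes):
        for server in servers:
            if (server.free_cpus >= cpus_per_node
                    and server.free_memory_mb >= min_memory_mb):
                server.free_cpus -= cpus_per_node
                server.free_memory_mb -= min_memory_mb
                placement.append(server.name)
                break
        else:
            return None  # not enough physical capacity for this job
    return placement

# Example: a queued job needing 2 nodes, 2 CPUs and 2 GB each.
servers = [PhysicalServer("pserv1", 4, 8192), PhysicalServer("pserv2", 2, 4096)]
print(place_virtual_machines(2, 2, 2048, servers))  # ['pserv1', 'pserv1']
```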
According to FIG. 6, the virtual machines are then prepared and configured. The virtual machines are configured by preparing images and the configuration files for the new virtual machines. Configuration files include information on the MAC address, VM name, and number of virtual CPUs; preparing an image involves duplicating a predefined virtual image to create the new images (510). The new images are updated with the latest hostname list retrieved from the centralized information controller.
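As an illustration of such a configuration file, a libvirt-style domain definition carrying the MAC address, VM name, and virtual CPU count could be generated as below. The XML schema, bridge name, and paths are assumptions; the patent does not name a specific hypervisor or file format.

```python
# Hedged sketch: writing a per-VM configuration file (libvirt-style
# domain XML assumed) with the MAC address, VM name, and vCPU count.
def write_vm_config(path, vm_name, mac_address, vcpus, memory_mb, image_path):
    xml = f"""<domain type='kvm'>
  <name>{vm_name}</name>
  <vcpu>{vcpus}</vcpu>
  <memory unit='MiB'>{memory_mb}</memory>
  <devices>
    <disk type='file' device='disk'>
      <source file='{image_path}'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <mac address='{mac_address}'/>
      <source bridge='br0'/>
    </interface>
  </devices>
</domain>
"""
    with open(path, "w") as f:
        f.write(xml)

write_vm_config("gridnode01.xml", "gridnode01", "52:54:00:12:34:56",
                vcpus=2, memory_mb=2048,
                image_path="/var/lib/images/gridnode01.img")
```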
The VM image is then mounted on the physical server nodes, and the host files, network interface, and name server for the VM image are configured (511, 512, 513 and 514). Thereafter, updated hostname lists from the Centralized Information Controller are delivered to the new image (515) and the VM image is unmounted from the physical server node (516). The LRMS is notified of the addition of the new VM image (517). The new VM image is registered as a Grid Compute Node at the LRMS (518). Finally, the new Grid Compute Node is deployed (519).
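A hedged sketch of the mount-and-configure sequence (steps 511 to 516) follows, assuming a raw disk image mounted through a loop device and Debian-style configuration paths inside the guest. The mountpoint, nameserver address, and controller-supplied hostname list are illustrative assumptions, and the script must run as root on the physical server node.

```python
# Sketch of steps 511-516; paths and addresses are assumptions.
import subprocess

def configure_vm_image(image_path, hostname_lines, nameserver="192.168.0.1"):
    mountpoint = "/mnt/vmimage"
    subprocess.run(["mount", "-o", "loop", image_path, mountpoint],
                   check=True)                                      # 511
    try:
        # 512: host file for the image, using the hostname list pushed
        # by the Centralized Information Controller (515)
        with open(f"{mountpoint}/etc/hosts", "a") as hosts:
            hosts.write("\n".join(hostname_lines) + "\n")
        # 513: network interface of the image (DHCP assumed)
        with open(f"{mountpoint}/etc/network/interfaces", "w") as net:
            net.write("auto eth0\niface eth0 inet dhcp\n")
        # 514: name server of the image
        with open(f"{mountpoint}/etc/resolv.conf", "w") as resolv:
            resolv.write(f"nameserver {nameserver}\n")
    finally:
        subprocess.run(["umount", mountpoint], check=True)          # 516
```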
According to FIG. 7, it is also necessary to prepare the pre-configured VM images to support automatic deployment and connection with the LRMS server at runtime. The method starts (610) with configuring the LRMS server to support auto-deployment of and connection with grid compute nodes (611). At this stage, it is assumed that the LRMS has been installed successfully. The LRMS is configured with two network interfaces, for the internal and external networks. The internal network allows communication between the LRMS and the grid compute nodes. The external network allows communication between the LRMS and the public network. A Network Address Translator (NAT) is configured on the LRMS (612); the NAT configuration allows the grid compute nodes to communicate with the public network. IP forwarding is enabled at the LRMS (613) for the same purpose. Routing from the internal IP address (gateway) to the external IP address is enabled (614) to allow communication of the grid compute nodes with the public network. The method ends (615) once the steps are completed.
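On a Linux-based LRMS server, steps 612 and 613 commonly map to iptables masquerading and the ip_forward sysctl. The sketch below assumes eth0 as the external interface and requires root privileges; step 614 is then completed on each compute node by pointing its default route at the LRMS internal address.

```python
# Sketch of steps 612-613 on a Linux LRMS server; interface name and
# tool choice are assumptions, not part of the disclosure.
import subprocess

def enable_gateway(external_if: str = "eth0") -> None:
    # 612: masquerade traffic from the internal grid network so compute
    # nodes can reach the public network through the LRMS
    subprocess.run(["iptables", "-t", "nat", "-A", "POSTROUTING",
                    "-o", external_if, "-j", "MASQUERADE"], check=True)
    # 613: enable IP forwarding at the LRMS
    subprocess.run(["sysctl", "-w", "net.ipv4.ip_forward=1"], check=True)
    # 614: each compute node then sets its default route to the
    # LRMS internal address (done on the nodes, not shown here).

if __name__ == "__main__":
    enable_gateway()
```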
According to FIG. 8, the pre-configured virtual machine image is prepared for the grid compute nodes. The method is executed (710) by creating a new disk image (711) and deploying and installing an operating system on the newly created disk image (712).
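One plausible realization of steps 711 and 712, assuming a qemu/libvirt toolchain (qemu-img and virt-install); the tool choice, image size, and install-tree URL are assumptions rather than part of the disclosure.

```python
# Hedged sketch of steps 711-712: create a disk image and install an
# operating system onto it. All names and sizes are illustrative.
import subprocess

image = "/var/lib/images/gridnode-base.img"
subprocess.run(["qemu-img", "create", "-f", "raw", image, "8G"],
               check=True)                                         # 711
subprocess.run(["virt-install", "--name", "gridnode-base",        # 712
                "--ram", "1024", "--disk", f"path={image}",
                "--location", "http://example.com/os-install-tree",
                "--graphics", "none"], check=True)
```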
Thereafter, the LRMS client is configured to establish communication with the LRMS server (713). The LRMS server information is included in the LRMS client configuration files to allow automatic deployment of the grid compute node. A user account from the LRMS server is mapped to the LRMS client (714) to enable job execution at the LRMS client. Passwordless communication between the LRMS server and the LRMS client is then configured for all user accounts (715). The steps end (716) once the sequence is completed.

In the foregoing specification, specific embodiments of the present invention have been described. However, one of ordinary skill in the art will appreciate that various modifications and changes can be made without departing from the scope of the present invention as set forth in the various embodiments discussed above and the claims that follow. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present invention. The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements as described herein.

Claims

1. A method for automatic deployment of grid compute nodes comprising the steps of:
determining a list of queued jobs to serve from queue of LRMS (418);
extracting information on number of nodes, number of required CPUs, and minimum memory capacity requirements from all queued jobs (419);
determining quantity and physical server nodes to deploy virtual machine (420);
integrating all new virtual machine hostnames and IP addresses to be deployed in centralized information controller (421);
sending updated hostname list from centralized information controller to each existing grid compute node to notify existence of new virtual machines to all grid compute nodes (422);
determining virtual machine image to be deployed (423);
preparing and configuring virtual machine (424);
recording number of deployed virtual machines and their corresponding job unique identifiers in the log file (425); and
assigning queued jobs to grid compute nodes (426).
2. The method for preparing and configuring the virtual machine images according to claim 1 further comprising the steps of:
preparing images and configuration files for new virtual machines (510);
mounting virtual machine images to physical server nodes (511);
configuring host files for the virtual machine images (512);
configuring network interface for virtual machine image (513);
configuring name server for virtual machine image (514);
sending updated host name lists from Centralized Information Controller to new image (515);
unmounting virtual machine image from physical server node (516);
notifying LRMS of the addition of new virtual machine image (517);
registering new virtual machine image as Grid Compute Node at LRMS (518); and
deploying new Grid Compute Node (519).
3. The method according to claim 2 wherein configuration files include information on MAC address, VM name, and number of virtual CPUs.
4. The method according to claim 2 wherein preparing image (510) includes duplicating a pre-defined virtual image for the creation of new images.
5. The method according to claim 4 wherein the new images are updated with latest hostname list retrieved from centralized information controller.
6. The method according to claim 1 includes a system for automatic deployment of grid compute nodes comprising a user interface, a virtual image repository, a local resource management system and grid compute nodes.
PCT/MY2012/000080 2011-04-12 2012-04-12 Method and system for automatic deployment of grid compute nodes WO2012141573A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
MYPI2011001637 2011-04-12
MYPI2011001637A MY181851A (en) 2011-04-12 2011-04-12 Method and system for automatic deployment of grid compute nodes

Publications (1)

Publication Number Publication Date
WO2012141573A1 (en) 2012-10-18

Family

ID=47009557

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/MY2012/000080 WO2012141573A1 (en) 2011-04-12 2012-04-12 Method and system for automatic deployment of grid compute nodes

Country Status (2)

Country Link
MY (1) MY181851A (en)
WO (1) WO2012141573A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015032430A1 (en) * 2013-09-04 2015-03-12 Telefonaktiebolaget L M Ericsson (Publ) Scheduling of virtual machines

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050060704A1 (en) * 2003-09-17 2005-03-17 International Business Machines Corporation Managing processing within computing environments including initiation of virtual machines

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MOORE J. ET AL.: "Managing Mixed-Use Clusters with Cluster-on-Demand", TECHNICAL REPORT, November 2002 (2002-11-01), pages 1 - 12, Retrieved from the Internet <URL:http://www.cs.duke.edu/nicl/pub/papers/cod.pdt> [retrieved on 20120807] *

Also Published As

Publication number Publication date
MY181851A (en) 2021-01-11

Similar Documents

Publication Publication Date Title
US11687422B2 (en) Server clustering in a computing-on-demand system
US9999030B2 (en) Resource provisioning method
US9003411B2 (en) Automated provisioning and configuration of virtual and physical servers
EP3049927B1 (en) Client-premise resource control via provider-defined interfaces
CN107924383B (en) System and method for network function virtualized resource management
US8370481B2 (en) Inventory management in a computing-on-demand system
EP1763749B1 (en) Facilitating access to input/output resources via an i/o partition shared by multiple consumer partitions
EP2659381B1 (en) Integrated software and hardware system that enables automated provisioning and configuration of a blade based on its physical location
US9104461B2 (en) Hypervisor-based management and migration of services executing within virtual environments based on service dependencies and hardware requirements
US8463882B2 (en) Server cloning in a computing-on-demand system
WO2014169870A1 (en) Virtual network element automatic loading and virtual machine ip address acquisition method and system, and storage medium
CN110661647A (en) Life cycle management method and device
JP2009075718A (en) Method of managing virtual i/o path, information processing system, and program
EP3794807A1 (en) Apparatuses and methods for zero touch computing node initialization
US8819200B2 (en) Automated cluster node configuration
US8995424B2 (en) Network infrastructure provisioning with automated channel assignment
US20170322788A1 (en) Parallel distribution of application services to virtual nodes
CN114827177B (en) Deployment method and device of distributed file system and electronic equipment
WO2012141573A1 (en) Method and system for automatic deployment of grid compute nodes
CN111124593A (en) Information processing method and device, network element and storage medium
EP4302464A1 (en) Pre-provisioning server hardware for deployment on an edge network
US9547455B1 (en) Allocating mass storage to a logical server
Margaris Local Area Multicomputer (LAM-MPI)

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12771821

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12771821

Country of ref document: EP

Kind code of ref document: A1