CN106789198B - Computing node management method and system - Google Patents


Info

Publication number
CN106789198B
CN106789198B (Application CN201611116226.3A)
Authority
CN
China
Prior art keywords
computing node
node
information
central server
computing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611116226.3A
Other languages
Chinese (zh)
Other versions
CN106789198A (en)
Inventor
林楷填
李文杰
范日明
毛亮
黄仝宇
宋一兵
汪刚
侯玉清
刘双广
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gosuncn Technology Group Co Ltd
Original Assignee
Gosuncn Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gosuncn Technology Group Co Ltd
Priority to CN201611116226.3A
Publication of CN106789198A
Application granted
Publication of CN106789198B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/02 Standardisation; Integration
    • H04L41/0246 Exchanging or transporting network management information using the Internet; Embedding network management web servers in network elements; Web-services-based protocols
    • H04L41/0253 Web-services-based protocols using browsers or web-pages for accessing management information
    • H04L41/0273 Web-services-based protocols using web services for network management, e.g. simple object access protocol [SOAP]
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0803 Configuration setting
    • H04L41/0823 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
    • H04L41/0826 Configuration setting for reduction of network costs
    • H04L41/0889 Techniques to speed-up the configuration process
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention relates to the field of cloud computing, and in particular to a computing node management method and system. The method comprises the following steps: S1, a central server receives computing node information sent by a web management interface; S2, the central server queries, according to the received computing node information, whether the computing node actually exists and is available; if so, it queries a central database for record information related to the host on which the computing node resides; if such a record exists, the physical information of the computing node is acquired and updated through the libvirt interface; if not, step S3 is executed; S3, the central server mounts the storage directory on the new computing node, initializes the configuration of the computing node, and calls the libvirt interface to acquire and update the physical information of the computing node. The method and the system shield the differences between virtualization-layer hypervisors, allow the computing nodes to be managed in a unified manner, are simple and fast, greatly reduce management costs, and improve working efficiency.

Description

Computing node management method and system
Technical Field
The invention relates to the field of cloud computing, in particular to a computing node management method and system.
Background
At present, the underlying virtualization layer has various implementations, and many virtualization systems have been created to manage virtual machines in a unified way. There are currently two main types of virtualization systems with respect to computing node management:
One is the single-machine virtualization system, represented by virt-manager. Because it is confined to a single machine, such a system cannot pool multiple computing nodes for shared use and cannot form a resource pool, so the achievable virtualization scale is severely limited; moreover, most such systems target only one hypervisor and are weak in generality.
The other is the multi-computing-node virtualization system, represented by OpenStack. The OpenStack project is an open-source cloud computing platform created jointly by cloud computing developers and technicians around the world. OpenStack provides an infrastructure-as-a-service (IaaS) solution through a set of related services, each of which exposes an application programming interface (API) to facilitate integration. However, virtualization systems represented by OpenStack suffer from complex deployment and difficulty in expanding computing nodes, so convenient and rapid deployment cannot be achieved.
Disclosure of Invention
In order to overcome at least one of the above-mentioned drawbacks of the prior art, the present invention provides a computing node management method capable of quickly adding computing nodes.
The invention also provides a computing node management system capable of rapidly adding computing nodes.
In order to solve the technical problems, the technical scheme of the invention is as follows:
a computing node management method comprises the following steps:
S1, a central server receives computing node information sent by a web management interface;
S2, the central server queries, according to the received computing node information, whether the computing node actually exists and is available; if so, it queries a central database for record information related to the host corresponding to the computing node; if such a record exists, the physical information of the computing node is acquired and updated through the libvirt interface and the computing node information is marked as available; if not, step S3 is executed;
S3, the central server mounts the storage directory on the new computing node, initializes the configuration of the computing node, and calls the libvirt interface to acquire and update the physical information of the computing node.
The invention manages the computing nodes through the central server: the central server can query the record information of a computing node in the database and update different computing nodes through the libvirt interface, and when a new computing node is added it can acquire and store the information of the new computing node through the same libvirt interface.
In the above solution, the computing node information sent by the web management interface in step S1 is entered into the web management interface by external input. External requirements are thus collected through the web management interface, which then notifies the central server to execute them, so that cloud-based operation is achieved.
In the above scheme, the computing node information includes an IP address and a login password of the computing node.
In the above scheme, in step S2, the central server checks whether the computing node actually exists and is available by pinging the IP address of the computing node.
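A minimal sketch of this reachability check is given below, assuming a Linux environment where the standard ping command is available; the function name and the timeout value are illustrative only:

    import subprocess

    def node_reachable(ip: str, timeout_s: int = 2) -> bool:
        # Send a single ICMP echo request; a zero exit code means the node replied.
        result = subprocess.run(
            ["ping", "-c", "1", "-W", str(timeout_s), ip],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0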
In the above scheme, in step S3, the central server mounts the storage directory onto the new computing node by remotely invoking shell commands, using glusterfs or nfs.
In the above solution, in step S3, the configuration initialization of the computing node is implemented by executing a script remotely.
In the above scheme, the method further comprises:
and S4, when the computing node needs to be deleted, the central server side marks the computing node information recorded in the central database as unavailable.
Deleting a node only requires marking the computing node information in the central database; the computing node itself does not need to be deleted, which makes it convenient to add the computing node again later.
A computing node management system comprises a central server, wherein the central server comprises a receiving module, a query updating module and a node adding module:
the receiving module is used for receiving the computing node information sent by the web management interface;
the query updating module is used for querying whether the computing node really exists and is available according to the received computing node information, if so, querying whether record information related to a host where the computing node is located exists in a central database, if so, acquiring and updating physical information of the computing node through an interface of livirt, and if not, notifying the node adding module;
and the node adding module is used for mounting the storage directory on a new computing node, initializing the configuration of the computing node, and calling the libvirt interface to acquire and update the physical information of the computing node.
By providing the query updating module, the system of the invention uses the libvirt interface to manage different computing nodes in a unified way; the differences between virtualization-layer hypervisors do not need to be considered when adding computing nodes, the operation is simple and fast, management costs are greatly reduced, and working efficiency is improved.
In the above solution, the central server further includes:
and the node deleting module is used for marking the computing node information recorded by the central database as unavailable when the computing node is required to be deleted.
In the above scheme, when the node adding module mounts the storage directory onto the new computing node, it does so by remotely invoking shell commands, using glusterfs or nfs.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the invention utilizes libvirt interface for managing the computing nodes, shields the hypervisor difference of the virtualization layer, can realize flexible management on the computing nodes, is convenient and quick, greatly reduces the management cost and improves the working efficiency.
Drawings
Fig. 1 is a flowchart of a method for managing a compute node according to an embodiment of the present invention.
Fig. 2 is an architecture diagram of a computing node management system according to an embodiment of the present invention.
Fig. 3 is an architecture diagram of a cloud computing resource virtualization system according to embodiment 3 of the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
In the description of the present invention, it is to be understood that the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features referred to. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted" and "connected" are to be interpreted broadly, for example, as fixedly connected, detachably connected, or integrally connected; mechanically or electrically connected; connected directly or indirectly through intervening media; or in internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
Fig. 1 is a flowchart of a method for managing a compute node according to an embodiment of the present invention. Referring to fig. 1, a method for managing a compute node according to this embodiment includes:
s101, a central server receives computing node information sent by a web management interface; the computing node information sent by the web management interface is input into the web management interface in an external input mode. After the information required by the computing node is input through the web management interface, the web management interface is transmitted to the central server through an http protocol. The information of the computing node comprises but is not limited to an IP address and a login password of the computing node, and the setting of the login password is mainly used for secret-free initialization of the computing node, so that the central shared storage area can be initialized conveniently through a remote command and the initialization of a node virtualization environment can be facilitated.
S102, the central server queries, according to the received computing node information, whether the computing node actually exists and is available; if so, it queries whether record information related to the host corresponding to the computing node exists in a central database; if such a record exists, the physical information of the computing node is acquired and updated through the libvirt interface and the computing node information is marked as available; if not, step S103 is executed. Whether the computing node actually exists and is available can be determined by ping: after receiving the computing node information, the central server pings the node's IP address. The reason the central server makes this determination is that information about previously deleted computing nodes may still exist in the central database; if such a computing node needs to be added again, the central server only needs to update the physical information of the computing node in the central database and mark it as available to complete the addition, whereas if the computing node to be added has no record information in the central database, it must be initialized by executing step S103. The physical information of the computing node mentioned in this step includes, but is not limited to, the disk size, CPU, memory and network card information of the computing node, and is acquired and updated automatically by the central server.
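As a minimal sketch of acquiring such physical information through libvirt, assuming the computing node runs a QEMU/KVM hypervisor reachable over SSH (the connection URI is an assumption, and disk and network card details would require further libvirt calls that are not shown here):

    import libvirt

    def fetch_node_physical_info(ip: str) -> dict:
        # Open a remote libvirt connection to the compute node (QEMU/KVM over SSH assumed).
        conn = libvirt.open(f"qemu+ssh://root@{ip}/system")
        try:
            # getInfo() returns CPU model, memory (MB), CPU count, MHz,
            # NUMA nodes, sockets, cores per socket and threads per core.
            model, mem_mb, cpus, mhz, numa, sockets, cores, threads = conn.getInfo()
            return {
                "hostname": conn.getHostname(),
                "cpu_model": model,
                "memory_mb": mem_mb,
                "cpu_count": cpus,
            }
        finally:
            conn.close()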
In step S102, the libvirt interface is additionally made to support live migration in the backend mirroring mode by modifying the implementation of the libvirt live migration interface.
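The patent does not detail how the migration interface is modified; the sketch below only shows a standard libvirt live migration call that copies non-shared disk images incrementally, which is one way to obtain a backend-mirroring style migration (the URIs and flags are assumptions, not the patent's own implementation):

    import libvirt

    def live_migrate(domain_name: str, src_uri: str, dst_uri: str) -> None:
        # Live-migrate a running virtual machine between two compute nodes,
        # copying its non-shared disk images incrementally while it keeps running.
        src = libvirt.open(src_uri)
        dst = libvirt.open(dst_uri)
        try:
            dom = src.lookupByName(domain_name)
            flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_NON_SHARED_INC
            dom.migrate(dst, flags, None, None, 0)
        finally:
            src.close()
            dst.close()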
S103, the central server mounts the storage directory on the new computing node, initializes the configuration of the computing node, and calls the libvirt interface to acquire and update the physical information of the computing node. The storage directory is located in the central shared storage and its contents include: the script information needed to initialize a computing node, the image templates and related configuration file templates needed to create virtual machines, the management files of the virtual machine cluster network, and the images and snapshot files of other virtual machines. Implementing central storage through the central shared storage both makes data management convenient and allows a virtual machine to be quickly migrated from one node to another. In the mounting process, the central server mounts the storage directory onto the new computing node by remotely invoking shell commands, using glusterfs or nfs, and the configuration initialization of the computing node is implemented by remotely executing a script.
S104, when a computing node needs to be deleted, the central server marks the computing node information recorded in the central database as unavailable. The deletion step does not delete the computing node directly but marks the computing node information in the central database; this is done so that the computing node can conveniently be added again later. The deletion requirement for a computing node can be obtained through the web management interface, which then notifies the central server to perform the computing node deletion step. When a deleted computing node needs to be brought back later, only the computing node addition step has to be executed. The central server can determine whether to add or delete a computing node according to the information flag bit carried in the computing node information sent by the web management interface.
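A minimal sketch of the mounting and initialization described in step S103, assuming passwordless SSH to the new node has already been set up and using illustrative volume names, mount points and script paths (glusterfs is shown; an nfs mount would be analogous):

    import subprocess

    def init_new_compute_node(ip: str,
                              volume: str = "central:/vmstore",
                              mountpoint: str = "/var/lib/vmstore") -> None:
        ssh = ["ssh", f"root@{ip}"]
        # Mount the central shared storage directory on the new compute node.
        subprocess.run(ssh + [f"mkdir -p {mountpoint}"], check=True)
        subprocess.run(ssh + [f"mount -t glusterfs {volume} {mountpoint}"], check=True)
        # Remotely execute the initialization script shipped in the shared directory.
        subprocess.run(ssh + [f"bash {mountpoint}/scripts/init_compute_node.sh"], check=True)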
In a specific implementation, after the central shared storage network is unexpectedly disconnected for a period of time and then restored, virtual machines may no longer be recognized; for this reason, a glusterfs + nfs mounting mode is adopted in step S103. The glusterfs replicated volume alone is unstable and prone to problems in production; using a replicated volume combined with RAID 0 instead solves well the problem that virtual machines cannot be recognized after the central shared storage network is disconnected and restored.
In the method of the present invention, the information stored in the central database includes, but is not limited to, host information, virtual machine information, template information, and cluster network related information.
The computing node management method of the invention enables rapid addition and deletion of computing nodes, and the addition and deletion process shields the differences between virtualization-layer hypervisors, thereby greatly reducing management costs and improving working efficiency.
Example 2
On the basis of embodiment 1, the invention further provides a computing node management system. Referring to fig. 2, the computing node management system in this embodiment includes a central server 200, where the central server 200 includes a receiving module 201, a query updating module 202, a node adding module 203, and a node deleting module 204.
The receiving module 201 is configured to receive the computing node information sent from the web management interface, where the computing node information includes, but is not limited to, the IP address and login password of the computing node.
The query updating module 202 is configured to query, according to the received computing node information, whether the computing node actually exists and is available; if so, to query whether record information related to the host corresponding to the computing node exists in the central database; if such a record exists, to acquire and update the physical information of the computing node through the libvirt interface and mark the computing node information as available, where the physical information includes, but is not limited to, the disk size, CPU, memory and network card information of the computing node; and if no record exists, to notify the node adding module 203. To determine whether a computing node actually exists and is available, the query updating module 202 may use ping: after receiving the computing node information, it pings the node's IP address.
The node adding module 203 is configured to mount the storage directory on a new computing node, initialize the configuration of the computing node, and call the libvirt interface to acquire and update the physical information of the computing node. When mounting the storage directory onto the new computing node, the node adding module 203 does so by remotely invoking shell commands, using glusterfs or nfs. The storage directory is located in the central shared storage and its contents include: the script information needed to initialize a computing node, the image templates and related configuration file templates needed to create virtual machines, the management files of the virtual machine cluster network, and the images and snapshot files of other virtual machines. Implementing central storage through the central shared storage both makes data management convenient and allows a virtual machine to be quickly migrated from one node to another.
The node deleting module 204 is configured to mark the computing node information recorded in the central database as unavailable when a computing node needs to be deleted. The system of the invention does not delete the computing node directly; instead, the node deleting module 204 marks the computing node information in the central database, which makes it convenient to add the computing node again later. The deletion requirement for a computing node can be obtained through the web management interface, which then informs the node deleting module 204 of the central server to perform the computing node deletion step. When a deleted computing node needs to be brought back later, this is achieved through the node adding module 203 alone. The central server 200 can determine whether to add or delete a computing node according to the information flag bit carried in the computing node information sent by the web management interface.
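A minimal sketch of this soft delete, using SQLite as a stand-in for the central database (the table and column names are illustrative assumptions; the patent does not specify a database schema):

    import sqlite3

    def mark_node_unavailable(db_path: str, node_ip: str) -> None:
        # Soft delete: flip the availability flag instead of removing the row, so the
        # node can later be re-added simply by updating it and marking it available again.
        with sqlite3.connect(db_path) as conn:
            conn.execute(
                "UPDATE compute_nodes SET available = 0 WHERE ip = ?",
                (node_ip,),
            )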
By providing the query updating module, the system of the invention uses the libvirt interface to manage different computing nodes in a unified way; the differences between virtualization-layer hypervisors do not need to be considered when adding computing nodes, the operation is simple and fast, management costs are greatly reduced, and working efficiency is improved.
Example 3
On the basis of embodiment 2, this embodiment further describes embodiment 2 in combination with a cloud computing resource virtualization system where a central server is located.
The cloud computing resource virtualization system comprises a management end, a central service layer and a computing node end, where the management end is provided with a web management interface 100, the central service layer is provided with a central server 200, a central database 300 and a central shared storage area 400, and the computing node end is provided with at least one computing node 500;
the Web management interface 100 is used for a manager to input information required by the computing node, including an IP address, a login password and the like of the computing node, and the Web management interface 100 transmits the input information to the central server 200 through an http protocol;
the central server 200 is responsible for processing incoming and outgoing information from the web management interface 100. The central server 200 firstly analyzes the received information, extracts and stores the required information, and the information storage is completed through the central database 300; the central server 200 then determines whether the computing node is actually present and available by pinging the computing node's IP address. If the physical information is available, the central database 300 is searched first, whether the relevant records of the host corresponding to the computing node exist is checked, if the relevant records of the host corresponding to the computing node exist, the relevant interfaces of livirt acquire and update the physical information of the computing node, wherein the physical information comprises but is not limited to the information of the disk size, the cpu, the memory and the network card of the computing node; if no relevant record is found in the central database 300, the central server 200 mounts the directory of the central shared storage area 400 to the new computing node 500 in a manner of remotely calling a shell command and in a manner of glusterfs or nfs, initializes relevant configuration of the computing node 500 by remotely executing a script, and then calls a relevant interface to acquire and update relevant information of the computing node.
When a computing node 500 is to be deleted, the central server 200 sets the flag bit of the records related to that computing node 500 in the central database 300 to unavailable, which makes it convenient to re-add the computing node 500 later.
The same or similar reference numerals correspond to the same or similar parts;
the positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (5)

1. A method for managing a compute node, comprising the steps of:
s1, a central server receives computing node information sent by a web management interface;
s2, the central server side inquires whether the computing node really exists and is available according to the received computing node information, if yes, the central server side inquires whether record information related to a host corresponding to the computing node exists in a central database, if yes, physical information of the computing node is obtained and updated through an interface of libvirt, the computing node information is marked as available, and if not, the step S3 is executed;
s3, the central server side mounts the stored directory on a new computing node, initializes the configuration of the computing node, and calls an interface of libvirt to acquire and update physical information of the computing node;
s4, when the computing node needs to be deleted, the central server side marks the computing node information recorded by the central database as unavailable;
the computing node information sent by the web management interface in the step S1 is input to the web management interface by means of external input; the computing node information comprises an IP address and a login password of the computing node; in the step S2, the central server side inquires whether the computing node really exists and is available or not in a ping mode according to the IP address of the computing node;
in step S2, the central server determines whether the compute node exists and is available, where the purpose of the determination is that information about deleted compute nodes may exist in the central database, and if the compute node needs to be added, the central server only needs to update the physical information of the compute node in the central database and mark the update as available, so as to complete the addition of the compute node, and if the compute node that needs to be added does not have record information in the central database, the central server needs to execute step S3 to initialize the compute node.
2. The method as claimed in claim 1, wherein in step S3, the central server mounts the stored directory onto the new compute node by means of a remote call shell command and by means of glusterfs or nfs.
3. The method for managing computing nodes according to claim 1, wherein in step S3, the initialization of configuration of the computing nodes is realized by executing scripts remotely.
4. A computing node management system is characterized by comprising a central server, wherein the central server comprises a receiving module, an inquiry updating module and a node adding module:
the receiving module is used for receiving the computing node information sent by the web management interface;
the query updating module is used for querying whether the computing node really exists and is available according to the received computing node information, if so, querying whether record information related to a host where the computing node is located exists in a central database, if so, acquiring and updating physical information of the computing node through an interface of libvirt, and if not, informing the node adding module;
the node adding module is used for mounting the stored directory on a new computing node, initializing the configuration of the computing node, and calling an interface of libvirt to acquire and update physical information of the computing node;
the center server also comprises:
the node deleting module is used for marking the computing node information recorded by the central database as unavailable when the computing node is required to be deleted;
the computing node information sent by the web management interface is input into the web management interface in an external input mode; the computing node information comprises an IP address and a login password of the computing node; the central server side inquires whether the computing node really exists and is available or not in a ping mode according to the IP address of the computing node;
the central server side judges whether the computing node really exists and is available, and aims to solve the problem that deleted computing node information possibly exists in a central database, if the computing node needs to be added, the central server side only needs to update physical information of the computing node in the central database and mark the updated physical information as available to complete the addition of the computing node, if the computing node needing to be added does not have record information in the central database, the central server side needs to mount a stored directory on a new computing node, initialize the configuration of the computing node, and call an interface of libvirt to acquire and update the physical information of the computing node to initialize the computing node.
5. The system according to claim 4, wherein the node adding module mounts the stored directory to the new computing node by calling a shell command remotely and by using glusterfs or nfs.
CN201611116226.3A 2016-12-07 2016-12-07 Computing node management method and system Active CN106789198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611116226.3A CN106789198B (en) 2016-12-07 2016-12-07 Computing node management method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611116226.3A CN106789198B (en) 2016-12-07 2016-12-07 Computing node management method and system

Publications (2)

Publication Number    Publication Date
CN106789198A    2017-05-31
CN106789198B    2020-10-02

Family

ID=58882177

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611116226.3A Active CN106789198B (en) 2016-12-07 2016-12-07 Computing node management method and system

Country Status (1)

Country Link
CN (1) CN106789198B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107391640B (en) * 2017-07-11 2020-12-08 浪潮云信息技术股份公司 Method for realizing automatic deployment of SQL Server database mirror mode
CN111240895A (en) * 2019-12-31 2020-06-05 深圳证券通信有限公司 OpenStack-oriented node batch backup system method

Citations (2)

Publication number Priority date Publication date Assignee Title
CN103699430A (en) * 2014-01-06 2014-04-02 山东大学 Working method of remote KVM (Kernel-based Virtual Machine) management system based on J2EE (Java 2 Platform Enterprise Edition) framework
CN104657215A (en) * 2013-11-19 2015-05-27 南京鼎盟科技有限公司 Virtualization energy-saving system in Cloud computing

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10467036B2 (en) * 2014-09-30 2019-11-05 International Business Machines Corporation Dynamic metering adjustment for service management of computing platform

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN104657215A (en) * 2013-11-19 2015-05-27 南京鼎盟科技有限公司 Virtualization energy-saving system in Cloud computing
CN103699430A (en) * 2014-01-06 2014-04-02 山东大学 Working method of remote KVM (Kernel-based Virtual Machine) management system based on J2EE (Java 2 Platform Enterprise Edition) framework

Non-Patent Citations (2)

Title
"Detailed Explanation of libvirt" (libvirt详解); chenyulancn; https://blog.csdn.net/chenyulancn/article/details/8916452; 2013-05-12; full text *
"Research on an HD Interactive TV Cloud Service Platform and Its Key Technologies" (高清互动电视云服务平台及其关键技术研究); Sun Liang et al.; China Cable Television (中国有线电视); 2013-06-20; full text *

Also Published As

Publication number Publication date
CN106789198A (en) 2017-05-31

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant