CN115242598A - Cloud operating system deployment method and device


Info

Publication number
CN115242598A
CN115242598A
Authority
CN
China
Prior art keywords
management
preset
nodes
resource information
operating system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210835020.5A
Other languages
Chinese (zh)
Inventor
杨帆 (Yang Fan)
何玥 (He Yue)
刘磊 (Liu Lei)
闫海娜 (Yan Haina)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Cloud Technology Co Ltd
Original Assignee
Tianyi Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Cloud Technology Co Ltd
Priority application: CN202210835020.5A
Publication: CN115242598A
Related PCT application: PCT/CN2022/141608 (WO2024011860A1)
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/04 Network management architectures or arrangements
    • H04L 41/042 Network management architectures or arrangements comprising distributed management centres cooperatively managing the network
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 Server selection for load balancing
    • H04L 67/1008 Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H04L 67/1012 Server selection for load balancing based on compliance of requirements or conditions with available server resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer And Data Communications (AREA)

Abstract

The embodiment of the application provides a cloud operating system deployment method and device, which are used for deploying a plurality of cloud operating systems on one server cluster and saving cost. The method comprises the following steps: receiving a first request; the first request is used for requesting to deploy a first management service corresponding to a first cloud operating system, and the first request comprises the number of management nodes required by the first management service; acquiring residual resource information of a plurality of preset management nodes; the plurality of preset management nodes belong to the first server cluster, and the remaining resource information is the remaining available resources of the plurality of preset management nodes after the second management service corresponding to the second cloud operating system is deployed; and determining a target management node corresponding to the first management service based on the residual resource information and the number of the management nodes.

Description

Cloud operating system deployment method and device
Technical Field
The invention relates to the technical field of cloud computing, in particular to a method and a device for deploying a cloud operating system.
Background
Cloud computing is a popular technical field in recent years. It allows required resources to be acquired conveniently and rapidly from a configurable shared resource pool at any time and any place as needed, and the resources can be rapidly provisioned and released, so that the workload of managing the resources and the interaction with service providers are reduced to a minimum. In the DevOps (i.e., the combination of Development and Operations) practice of a cloud operating system, tests for different features cannot all be completed on the same cloud operating system, so multiple cloud operating systems are required for testing different features.
Currently, when a cloud operating system is deployed, one cloud operating system is generally deployed in one server cluster, so that when a plurality of cloud operating systems need to be deployed, a plurality of server clusters are needed, and the cost is high.
Disclosure of Invention
The embodiment of the application provides a cloud operating system deployment method and device, which are used for deploying a plurality of cloud operating systems on one server cluster and saving cost.
In a first aspect, a cloud operating system deployment method is provided, which is applied to a first server cluster, and the method includes:
receiving a first request; the first request is used for requesting to deploy a first management service corresponding to a first cloud operating system, and the first request comprises the number of management nodes required by the first management service;
acquiring residual resource information of a plurality of preset management nodes; the plurality of preset management nodes belong to the first server cluster, and the remaining resource information is available resources remaining after the plurality of preset management nodes deploy second management services corresponding to a second cloud operating system;
and determining a target management node corresponding to the first management service based on the residual resource information and the number of the management nodes.
Optionally, the determining a target management node based on the remaining resource information and the number of management nodes includes:
determining a first preset management node of the plurality of preset management nodes, wherein the residual resource information of the first preset management node does not meet a preset condition;
determining the target management node based on the remaining resource information of a second preset management node and the number of the management nodes; the second preset management node is other preset management nodes except the first preset management node in the plurality of preset management nodes.
Optionally, the remaining resource information includes remaining central processing unit information, remaining memory capacity information, and remaining network resource information, and the determining the first preset management node, of the plurality of preset management nodes, whose remaining resource information does not satisfy the preset condition includes:
determining the ratio of the residual central processing units, the ratio of the residual memory capacity and the ratio of the residual network resources of each preset management node based on the residual resource information;
determining whether the ratio of the remaining central processing units, the ratio of the remaining memory capacity and the ratio of the remaining network resources are all greater than or equal to corresponding thresholds;
and if any ratio of a first preset management node is smaller than the corresponding threshold, determining that the remaining resource information of the first preset management node does not satisfy the preset condition.
Optionally, the determining a target management node based on the remaining resource information and the number of management nodes includes:
performing normalization processing on the residual resource information of each preset management node to obtain a normalized vector;
determining a correlation coefficient between the normalized vector and the resource vector required by the first management service; wherein the correlation coefficient is used for indicating the matching degree of the residual resource and the resource required by the first management service;
determining the target management node based on the remaining resource information, the correlation coefficient and the number of management nodes.
Optionally, the determining the target management node based on the remaining resource information, the correlation coefficient, and the number of management nodes includes:
scoring each preset management node based on the remaining resource information and the correlation coefficient; the score value is used for indicating the degree that each preset management node accords with the deployment of the first management service;
sorting the preset management nodes in descending order of the score;
and determining the target management node based on the sequencing result and the number of the management nodes.
Optionally, the method further includes:
receiving a second request; the second request is used for requesting to deploy computing service corresponding to the first cloud operating system;
determining a plurality of first computing nodes which do not deploy computing services from a plurality of preset computing nodes;
and randomly determining a second computing node from the plurality of first computing nodes as a target computing node corresponding to the computing service.
Optionally, the second cloud operating system and the first cloud operating system use different ports.
In a second aspect, a cloud operating system deployment apparatus is provided, which is applied to a first server cluster, and includes:
a communication module for receiving a first request; the first request is used for requesting to deploy a first management service corresponding to a first cloud operating system, and the first request comprises the number of management nodes required by the first management service;
the processing module is used for acquiring the residual resource information of a plurality of preset management nodes; the plurality of preset management nodes belong to the first server cluster, and the remaining resource information is available resources remaining after the plurality of preset management nodes deploy second management services corresponding to a second cloud operating system;
the processing module is further configured to determine a target management node corresponding to the first management service based on the remaining resource information and the number of management nodes.
Optionally, the processing module is specifically configured to:
determining a first preset management node of the plurality of preset management nodes, wherein the residual resource information of the first preset management node does not meet a preset condition;
determining the target management node based on the remaining resource information of a second preset management node and the number of the management nodes; the second preset management node is other preset management nodes except the first preset management node in the plurality of preset management nodes.
Optionally, the remaining resource information includes remaining central processing unit information, remaining memory capacity information, and remaining network resource information, and the processing module is specifically configured to:
determining the ratio of the residual central processing units, the ratio of the residual memory capacity and the ratio of the residual network resources of each preset management node based on the residual resource information;
determining whether the ratio of the remaining central processing units, the ratio of the remaining memory capacity and the ratio of the remaining network resources are all greater than or equal to corresponding thresholds;
and if any ratio of a first preset management node is smaller than the corresponding threshold, determining that the remaining resource information of the first preset management node does not satisfy the preset condition.
Optionally, the processing module is specifically configured to:
performing normalization processing on the residual resource information of each preset management node to obtain a normalized vector;
determining a correlation coefficient between the normalized vector and the resource vector required by the first management service; wherein the correlation coefficient is used for indicating the matching degree of the residual resource and the resource required by the first management service;
determining the target management node based on the remaining resource information, the correlation coefficient, and the number of management nodes.
Optionally, the processing module is specifically configured to:
scoring each preset management node based on the remaining resource information and the correlation coefficient; the score value is used for indicating the degree that each preset management node accords with the deployment of the first management service;
sorting the preset management nodes in descending order of the score values;
and determining the target management node based on the sequencing result and the number of the management nodes.
Optionally, the communication module is further configured to:
receiving a second request; the second request is used for requesting to deploy computing service corresponding to the first cloud operating system;
determining a plurality of first computing nodes which do not deploy computing services from a plurality of preset computing nodes;
and randomly determining a second computing node from the plurality of first computing nodes as a target computing node corresponding to the computing service.
Optionally, the second cloud operating system and the first cloud operating system use different ports.
In a third aspect, an electronic device is provided, which includes:
a memory for storing program instructions;
a processor for calling the program instructions stored in said memory and for executing the steps comprised in any of the methods of the first aspect in accordance with the obtained program instructions.
In a fourth aspect, there is provided a computer-readable storage medium having stored thereon computer-executable instructions for causing a computer to perform the steps included in the method of any one of the first aspects.
In a fifth aspect, a computer program product containing instructions is provided, which when run on a computer causes the computer to perform the cloud operating system deployment method described in the above various possible implementations.
In the embodiment of the application, a first request sent by a user is received, where the first request is used for requesting deployment of a first management service corresponding to a first cloud operating system, the first request includes the number of management nodes required by the first management service, remaining resource information of a plurality of preset management nodes is obtained, the plurality of preset management nodes belong to a first server cluster, the remaining resource information is available resources left after deployment of a second management service corresponding to a second cloud operating system by the plurality of preset management nodes, and a target management node corresponding to the first management service is determined based on the remaining resource information and the number of management nodes.
That is to say, after the second management service has been deployed on the plurality of preset management nodes of the first server cluster, if a request from a user to deploy the first management service is received, the first management service may be deployed based on the remaining resource information obtained from the plurality of preset management nodes. By having the management services corresponding to multiple cloud operating systems multiplex the management nodes in the first server cluster, the deployment of multiple cloud operating systems in one server cluster is achieved, thereby saving cost.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application.
Fig. 1 is a structural diagram of a server cluster provided in an embodiment of the present application;
fig. 2 is a structural diagram of another server cluster provided in the embodiment of the present application;
fig. 3 is a block diagram of a control node according to an embodiment of the present disclosure;
fig. 4 is a flowchart of a method for deploying a cloud operating system according to an embodiment of the present application;
fig. 5 is an interaction diagram of each module in a control node according to an embodiment of the present application;
fig. 6 is a block diagram illustrating a configuration of a cloud operating system deployment device according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a computer device in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the embodiments of the present application will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application. In the present application, the embodiments and features of the embodiments may be arbitrarily combined with each other without conflict. Also, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
The terms "first" and "second" in the description and claims of the present application and the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the term "comprises" and any variations thereof are intended to cover non-exclusive protection. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The "plurality" in the present application may mean at least two, for example, two, three or more, and the embodiments of the present application are not limited.
In addition, the term "and/or" herein is only one kind of association relationship describing the association object, and means that there may be three kinds of relationships, for example, a and/or B, and may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" in this document generally indicates that the preceding and following related objects are in an "or" relationship unless otherwise specified.
Before introducing the cloud operating system deployment method provided by the embodiment of the present application, some brief descriptions are first made on the application scenarios of the technical solution of the embodiment of the present application.
As shown in fig. 1, the first server cluster 10 includes a control node 101, a management node 102, and a computing node 103, where the control node 101 is configured to deploy a management service corresponding to a cloud operating system in the management node 102, and deploy a computing service corresponding to the cloud operating system in the computing node 103, that is, the cloud operating system deployment method provided in this embodiment of the present application is mainly implemented by the control node 101. It should be noted that, in this embodiment of the present application, the number of servers included in the control node 101, the management node 102, and the computing node 103 included in the first server cluster 10 may be determined according to actual needs, and in this embodiment of the present application, the number of servers included in each node is not limited.
In this embodiment of the application, the control node 101 receives a first request sent by a user, acquires the remaining resource information of the management nodes 102, and selects corresponding management nodes from the management nodes 102 based on the remaining resource information and the number of management nodes included in the first request, as the target management nodes for deploying the first management service corresponding to the first cloud operating system. Taking fig. 2 as an example, the first server cluster includes 10 servers, where servers 1 to 3 are preset as control nodes, servers 4 to 7 as management nodes, and servers 8 to 10 as computing nodes. When any one of servers 1 to 3 receives a first request and determines that a three-copy deployment is required for the first management service corresponding to the first cloud operating system (that is, the number of management nodes required by the first management service is 3), the remaining resource information of servers 4 to 7 is obtained, and 3 servers are selected from servers 4 to 7 as the target management nodes for deploying the first management service based on that remaining resource information, for example, servers 4, 5, and 6.
As a possible implementation manner, as shown in fig. 3, the control node 103 provided in this embodiment of the present application includes an Application Programming Interface (API) module 103-1, a scheduling module 103-2, an acquisition module 103-3, a deployment module 103-4, and a database 103-5. Specifically:
the API module 103-1 is configured to provide an API Interface of Restful style to the outside, and a user performs data interaction through a Command Line Interface (CLI) or in a form of sending a hypertext Transfer Protocol (HTTP) message. The API module provides functions of adding (deploying), deleting, modifying and inquiring of the cloud operating system, functions of adding (deploying), deleting and inquiring of computing nodes of the cloud operating system, and functions of updating and inquiring of resources (CPU, memory and residual ports) of the first server cluster; and performing work such as parameter verification, parameter format processing, configuration file checking, mirror image checking and the like.
The scheduling module 103-2 is configured to select a management node and a computing node from the first server cluster, and deploy a management service and a computing service corresponding to the cloud operating system.
The acquisition module 103-3 is in network communication with each management node and is configured to acquire the hardware resource information of each management node and the service state information of the management service corresponding to the cloud operating system. The hardware resource information of a management node includes information such as CPU, memory, disk, and port occupancy. The management node service state information includes the states of the containers running the cloud operating system, such as nova, cinder, etc. As a possible implementation manner, a timing task may be built into the acquisition module 103-3 to trigger, at fixed intervals, the task of acquiring the hardware resource information and the service state information of the management service corresponding to the cloud operating system; specifically, the acquisition module 103-3 may acquire this information via commands such as docker stats.
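As an illustrative sketch only, such a periodic collection task could look as follows in Python. The use of ssh plus docker stats, the node names, and the output parsing are assumptions made for this sketch, not details given in the present application.

```python
# Sketch of a timer-driven collection task; ssh + "docker stats" and the node
# names are assumptions for illustration.
import json
import subprocess
import threading

MANAGEMENT_NODES = ["server4", "server5", "server6", "server7"]  # hypothetical names

def collect_node_usage(node: str) -> dict:
    """Collect per-container CPU/memory usage on one node via `docker stats`."""
    fmt = "{{.Name}},{{.CPUPerc}},{{.MemPerc}}"
    out = subprocess.run(
        ["ssh", node, "docker", "stats", "--no-stream", "--format", fmt],
        capture_output=True, text=True, check=True,
    ).stdout
    containers = []
    for line in out.splitlines():
        name, cpu, mem = line.split(",")
        containers.append({"name": name,
                           "cpu_percent": float(cpu.rstrip("%")),
                           "mem_percent": float(mem.rstrip("%"))})
    return {"node": node, "containers": containers}

def collect_all(interval_s: float = 60.0) -> None:
    """Built-in timing task: collect from every node, then re-arm the timer."""
    results = [collect_node_usage(n) for n in MANAGEMENT_NODES]
    print(json.dumps(results, indent=2))  # in practice: hand the result to the scheduler
    threading.Timer(interval_s, collect_all, args=(interval_s,)).start()
```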
The deployment module 103-4 is configured to deploy and clear the cloud operating system on designated hosts (a host refers to a server on which a container of the cloud operating system runs, including the servers running containers of the management service and the servers running containers of the computing service corresponding to the cloud operating system, for example, the management nodes and computing nodes described in the embodiments of the present application). Before the management service of a cloud operating system is deployed, port numbers need to be allocated for its different services (the same type of service of different cloud operating systems is distinguished by the combination of IP address and port number). In the embodiment of the present application, the control node also needs to be in network communication with the code hosting platform, so that the deployment module 103-4 downloads the deployment configuration from the code hosting platform, allocates ports according to the port occupancy of the hosts, writes the deployment configuration file, and finally performs the deployment on the hosts; after deployment is completed, the cloud operating system is initialized, and the endpoint information and administrator credential information of the cloud operating system are returned.
The database 103-5 is used for storing the basic information of the cloud operating systems and the resource occupancy of the management nodes. The database may be MySQL, for example; to improve reliability, the database may be deployed as a cluster and periodically backed up.
In a possible implementation manner, in order to achieve greater throughput for communication among the different service processes of the API module, the scheduling module, the acquisition module, and other modules, a message queue is used in the embodiment of the present application to transfer inter-module messages asynchronously.
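The following minimal Python sketch illustrates the idea of asynchronous inter-module message passing, with an in-process queue standing in for a real message broker; the module functions and message fields are hypothetical.

```python
# In-process stand-in for the inter-module message queue; a real deployment would
# use a broker (e.g., RabbitMQ), which is an assumption, not stated in the text.
import queue
import threading

scheduler_inbox: "queue.Queue[dict]" = queue.Queue()

def api_module_publish(msg: dict) -> None:
    """The API module pushes a scheduling message and returns immediately."""
    scheduler_inbox.put(msg)

def scheduler_worker() -> None:
    """The scheduling module consumes messages asynchronously."""
    while True:
        msg = scheduler_inbox.get()
        print("scheduling", msg)  # target management nodes would be picked here
        scheduler_inbox.task_done()

threading.Thread(target=scheduler_worker, daemon=True).start()
api_module_publish({"service": "management", "cloud_os": "OS1", "node_count": 3})
scheduler_inbox.join()  # wait until the scheduler has handled the message
```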
The method for deploying the cloud operating system provided by the embodiment of the application is described below with reference to the drawings of the specification. Referring to fig. 4, a flow of a cloud operating system deployment method in the embodiment of the present application is described as follows:
step 401: receiving a first request;
the first request is used for requesting to deploy a first management service corresponding to the first cloud operating system, and the first request comprises the number of management nodes required by the first management service.
As a possible implementation manner, when a user needs to deploy the first cloud operating system, the user first submits a deployment configuration to the code hosting platform. In this embodiment, the code hosting platform may be, for example, Gerrit; an administrator reviews the deployment configuration submitted by the user on the Gerrit platform and merges the configuration after the review is passed. After the configuration is merged, the user may initiate the first request for deploying the first management service corresponding to the first cloud operating system.
As a possible implementation manner, the first request further includes the deployment parameters submitted by the user. Therefore, when the first request is received, the aforementioned API module may also check whether the parameters submitted by the user are legal, whether the image exists, and the like. After the check is passed, step 402 is executed; if the check is not passed, an error is returned to instruct the user to resubmit the deployment parameters.
Step 402: acquiring residual resource information of a plurality of preset management nodes;
the plurality of preset management nodes belong to a first server cluster, and as described above, the first server cluster includes a control node for deploying the cloud operating system, a management node for deploying the management service corresponding to the cloud operating system, and a computing node for deploying the computing service corresponding to the cloud operating system, where the management node for deploying the management service corresponding to the cloud operating system is the plurality of preset management nodes in the embodiment of the present application.
The remaining resource information is the available resources remaining after the plurality of preset management nodes deploy the second management service corresponding to the second cloud operating system. In a possible implementation manner, the remaining resource information includes remaining central processing unit information, remaining memory capacity information, and remaining network resource information. It should be noted that the specific content of the remaining resource information listed here is exemplary; in a specific implementation process, other information (such as remaining disk capacity information and remaining port number information) may also be included, and such other information also falls within the protection scope of the embodiments of the present application.
Specifically, for example, each server in the first server cluster is configured with a 40-core CPU, 384 GB of memory, and 6 network cards (including 4 gigabit network cards and 2 ten-gigabit network cards), where two gigabit network cards are bonded for the cloud operating system management network, the other two gigabit network cards are bonded for the cloud operating system service network, and the two ten-gigabit network cards are bonded for the cloud operating system storage network (for example, distributed storage). In this embodiment of the application, when obtaining the remaining resource information of each preset management node among the plurality of preset management nodes, the remaining resource information may be determined based on the resource information used by the second management service corresponding to the second cloud operating system (that is, the usage information of each preset management node) and the configuration information of each preset management node (that is, the available resources of each preset management node when no cloud operating system is deployed), specifically including determining the remaining central processing unit information, the remaining memory capacity information, the remaining disk capacity information, and the remaining port number information.
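A minimal sketch of deriving the remaining resource information as configured resources minus the resources used by the deployed second management service is given below; the disk, port, and usage figures are made-up values for illustration.

```python
# Sketch: remaining resources of one preset management node = configured resources
# minus resources used by the already-deployed second management service; the disk,
# port, and usage figures below are made up for illustration.
from dataclasses import dataclass

@dataclass
class NodeResources:
    cpu_cores: float
    mem_gb: float
    net_gbps: float
    disk_gb: float
    free_ports: int

CONFIGURED = NodeResources(cpu_cores=40, mem_gb=384, net_gbps=2.0,
                           disk_gb=2000, free_ports=1000)  # disk/port figures assumed

def remaining(configured: NodeResources, used: NodeResources) -> NodeResources:
    return NodeResources(
        cpu_cores=configured.cpu_cores - used.cpu_cores,
        mem_gb=configured.mem_gb - used.mem_gb,
        net_gbps=configured.net_gbps - used.net_gbps,
        disk_gb=configured.disk_gb - used.disk_gb,
        free_ports=configured.free_ports - used.free_ports,
    )

used_by_os2 = NodeResources(cpu_cores=4, mem_gb=38.4, net_gbps=0.2,
                            disk_gb=100, free_ports=12)  # hypothetical usage
print(remaining(CONFIGURED, used_by_os2))
```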
Step 403: and determining a target management node corresponding to the first management service based on the residual resource information and the number of the management nodes.
In the embodiment of the present application, the following specific implementations are provided for determining the target management node corresponding to the first management service based on the remaining resource information and the number of management nodes.
A first possible implementation: determining a first preset management node, of the plurality of preset management nodes, whose remaining resource information does not satisfy a preset condition, and determining the target management node based on the remaining resource information of the second preset management nodes and the number of management nodes required for deploying the first management service, where the second preset management nodes are the preset management nodes other than the first preset management node among the plurality of preset management nodes.
As a possible implementation manner, when determining the first preset management node whose remaining resource information does not satisfy the preset condition, the ratio of the remaining central processing units, the ratio of the remaining memory capacity, and the ratio of the remaining network resources of each preset management node may be determined based on the remaining resource information of each preset management node among the plurality of preset management nodes, and it is determined whether the ratio of the remaining central processing units, the ratio of the remaining memory capacity, and the ratio of the remaining network resources are all greater than or equal to the corresponding thresholds; if any ratio of a first preset management node is smaller than the corresponding threshold, it is determined that the remaining resource information of the first preset management node does not satisfy the preset condition. In addition, before determining these ratios, the remaining disk capacity and the remaining port number of each preset management node may be obtained, and the servers whose remaining disk capacity and remaining port number do not meet the requirements for deploying the first management service may be filtered out.
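For illustration, a possible form of this screening step is sketched below in Python; the threshold values and node figures are placeholders rather than values given in the present application.

```python
# Sketch of the screening step: drop nodes whose remaining-resource ratios fall
# below configured thresholds; thresholds and node figures are placeholders.
THRESHOLDS = {"cpu": 0.2, "mem": 0.2, "net": 0.2}  # assumed values

def satisfies_preset_condition(remaining_ratio: dict) -> bool:
    """True only if every remaining ratio is greater than or equal to its threshold."""
    return all(remaining_ratio[k] >= THRESHOLDS[k] for k in THRESHOLDS)

nodes = {
    "server4": {"cpu": 0.90, "mem": 0.85, "net": 0.80},
    "server5": {"cpu": 0.85, "mem": 0.80, "net": 0.75},
    "server6": {"cpu": 0.80, "mem": 0.85, "net": 0.70},
    "server7": {"cpu": 0.10, "mem": 0.90, "net": 0.90},  # CPU ratio below threshold
}
second_preset_nodes = {n: r for n, r in nodes.items() if satisfies_preset_condition(r)}
print(sorted(second_preset_nodes))  # server7 is excluded as a "first preset management node"
```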
As another possible implementation, the plurality of preset management nodes may be screened based on a Kalman filtering load smoothing model. The Kalman filtering load smoothing model comprises a state equation, an observation equation, time update equations, and observation update equations.
The state equation is as follows:

x_k = A·x_(k-1) + w_(k-1)  (1)

where the state vector is x = [L_CPU, L_MEM, L_NET, ΔL_CPU, ΔL_MEM, ΔL_NET]^T, L_CPU is the CPU load, L_MEM is the memory load, L_NET is the network load, ΔL_CPU is the CPU load change rate, ΔL_MEM is the memory load change rate, ΔL_NET is the network load change rate, A is the coefficient matrix of the state equation, and w_(k-1) is the process noise.

The observation equation is as follows:

Y_k = H·x_k + v_k  (2)

where H is the coefficient matrix of the observation equation, Y_k is the k-th observation (namely, the collection result of the acquisition module 103-3), and v_k is the observation noise.

The time update equations are as follows:

x_k^- = A·x_(k-1)  (3)
P_k^- = A·P_(k-1)·A^T + Q  (4)

where x_k^- is the time update (prediction) of the state, A is the coefficient matrix in the state equation, x_(k-1) is the estimate from the last iteration, P_k^- is the time update of the state covariance matrix, P_(k-1) is the state covariance matrix from the last iteration, and Q is the process noise covariance matrix. The initial values of P and Q are both saved in the configuration file.

The observation update equations are as follows:

K_g = P_k^-·H^T·(H·P_k^-·H^T + R)^(-1)
x_k = x_k^- + K_g·(Y_k - H·x_k^-)  (5)
P_k = (I - K_g·H)·P_k^-

where K_g is the Kalman gain, H is the coefficient matrix in the observation equation, R is the observation noise covariance matrix, and Y_k is the k-th observation, i.e., the acquisition result of the acquisition module 103-3. R is also stored in the configuration file.
In a specific implementation process, the load condition of each preset management node (the load condition is used to determine the remaining resource information) at a single moment is not stable and cannot reflect the changes of each preset management node over a period of time. The Kalman filtering load smoothing model can effectively cope with load bursts: the single load observation is corrected using the accumulated history of CPU, memory, and other resources, so the smoothed load observation is more conducive to balanced scheduling of the cloud operating system control services, and the performance of each cloud operating system on these devices is more balanced. Target management nodes with enough remaining resources for the container of the first management service corresponding to the first cloud operating system to run are screened out according to the remaining resource information of the preset management nodes, thereby avoiding the situation where a container of the cloud operating system cannot run normally due to insufficient remaining resources.
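A minimal numpy sketch of the time update and observation update described above is given below; the concrete A, H, Q, and R values are assumptions (the present application only states that the initial P, Q, and R are read from a configuration file).

```python
# Minimal numpy sketch of the Kalman-filter load smoothing; the concrete A, H, Q,
# and R below are assumptions (the text only says P, Q, R come from a config file).
import numpy as np

n, dt = 6, 1.0                       # state: [L_cpu, L_mem, L_net, dL_cpu, dL_mem, dL_net]
A = np.eye(n)
A[:3, 3:] = dt * np.eye(3)           # load_k = load_(k-1) + dt * rate_(k-1)
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # only the three loads are observed
Q = 1e-4 * np.eye(n)                 # process noise covariance (assumed)
R = 1e-2 * np.eye(3)                 # observation noise covariance (assumed)

x = np.zeros(n)                      # smoothed state
P = np.eye(n)                        # state covariance

def kalman_step(y_k: np.ndarray) -> np.ndarray:
    """One time update (formulas (3)-(4)) plus one observation update (formula (5))."""
    global x, P
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x = x_pred + K @ (y_k - H @ x_pred)
    P = (np.eye(n) - K @ H) @ P_pred
    return x[:3]                     # smoothed [cpu, mem, net] loads

print(kalman_step(np.array([0.10, 0.12, 0.08])))  # one collected load observation
```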
A second possible implementation: performing normalization processing on the remaining resource information of each preset management node to obtain a normalized vector, determining a correlation coefficient between the normalized vector and the resource vector required by the first management service, and then determining the target management node based on the remaining resource information, the correlation coefficient, and the number of management nodes required for deploying the first management service. The correlation coefficient between the normalized vector (e.g., vector H) and the resource vector required by the first management service (e.g., vector D) may, for example, be calculated as a Pearson correlation coefficient:
r(D, H) = Σ_i (D_i - mean(D)) (H_i - mean(H)) / sqrt( Σ_i (D_i - mean(D))^2 · Σ_i (H_i - mean(H))^2 )  (6)
specifically, when a target management node is determined based on the remaining resource information, the correlation coefficient, and the number of management nodes corresponding to the first management service, each preset management node may be scored based on the remaining resource information and the correlation coefficient, where the score is used to indicate a degree that each preset management node conforms to the first management service deployment, each preset management node is sorted according to a sequence of scores from high to low, and the target management node is determined based on a sorting result and the number of management nodes corresponding to the first management service. The formula for scoring each preset management node based on the residual resource information and the correlation coefficient is as follows:
score = α + β + γ + r(D, H)  (7)
where α is the ratio of the remaining central processing units, β is the ratio of the remaining memory capacity, and γ is the ratio of the remaining network resources.
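The scoring and selection of this second implementation could be sketched as follows; a Pearson correlation is assumed for r(D, H), and the node vectors, the required-resource vector, and the number of required nodes are illustrative values.

```python
# Sketch of the second implementation: normalize, correlate (a Pearson correlation
# is assumed for r(D, H)), score with formula (7), and keep the top-N nodes; all
# numeric values are illustrative.
import numpy as np

def score_node(remaining_ratio: np.ndarray, required: np.ndarray) -> float:
    alpha, beta, gamma = remaining_ratio                   # CPU, memory, network ratios
    h = remaining_ratio / np.linalg.norm(remaining_ratio)  # normalized vector H
    r = float(np.corrcoef(required, h)[0, 1])              # correlation coefficient, formula (6)
    return float(alpha + beta + gamma + r)                 # formula (7)

required = np.array([4.0, 16.0, 0.5])                      # hypothetical CPU/mem/net demand D
nodes = {
    "server4": np.array([0.90, 0.85, 0.80]),
    "server5": np.array([0.85, 0.80, 0.75]),
    "server6": np.array([0.80, 0.85, 0.70]),
    "server7": np.array([1.00, 0.95, 0.90]),
}
n_required = 3                                             # management nodes in the first request
ranked = sorted(nodes, key=lambda name: score_node(nodes[name], required), reverse=True)
print(ranked[:n_required])                                 # target management nodes
```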
The first possible implementation and the second possible implementation may be used alone or in combination; when used in combination, their execution order is not limited in the embodiments of the present application.
In the embodiment of the application, after determining the target management nodes for deploying the first management service, the control node selects ports different from those of the second cloud operating system to deploy the first management service; that is, the first cloud operating system and the previously deployed second cloud operating system use different ports, and the target management nodes synchronize the images from the image platform.
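A simple sketch of per-cloud-operating-system port allocation is shown below; the port range and service names are assumptions made for the sketch.

```python
# Sketch of per-cloud-OS port allocation so that the same type of service of two
# cloud operating systems differs in (IP, port); port range and service names assumed.
import random

PORT_RANGE = range(20000, 30000)  # assumed allocatable range

def allocate_ports(services: list, used_ports: set) -> dict:
    """Pick one unused port per service, avoiding ports taken by other cloud OSes."""
    free = [p for p in PORT_RANGE if p not in used_ports]
    random.shuffle(free)
    allocation = {}
    for service in services:
        allocation[service] = free.pop()
        used_ports.add(allocation[service])
    return allocation

os2_ports = {20010, 20011, 20012}  # ports already used by the second cloud OS (made up)
print(allocate_ports(["api", "scheduler", "conductor"], set(os2_ports)))
```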
In some other embodiments, the user may further initiate a second request for deploying the computing service corresponding to the first cloud operating system. In this case, the control node may determine, from a plurality of preset computing nodes, a plurality of first computing nodes on which no computing service is deployed, and randomly determine, from the plurality of first computing nodes, a second computing node as the target computing node corresponding to the computing service. In a possible implementation manner, the second request may further include the number of computing nodes required for deploying the computing service, and the control node determines, from the plurality of first computing nodes, a corresponding number of second computing nodes as the target computing nodes according to this number. It should be noted that the computing nodes used by each cloud operating system are independent of each other; that is, when determining the computing nodes for deploying the computing service of the first cloud operating system, the control node cannot reuse a computing node on which the computing service corresponding to the second cloud operating system has been deployed. For example, if there are 3 servers (server a, server b, and server c) in the first server cluster for deploying computing services, and server a has already deployed the computing service corresponding to the second cloud operating system, then when deploying the computing service corresponding to the first cloud operating system, the control node can only randomly select the target computing node from server b and server c.
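A minimal sketch of the target computing node selection is given below; it follows the server naming of the example in fig. 2, and the helper names are hypothetical.

```python
# Sketch of target computing node selection: computing nodes are not shared between
# cloud operating systems, so pick randomly among nodes with no computing service yet.
import random

preset_compute_nodes = ["server8", "server9", "server10"]
deployed = {"server8": "OS2"}  # server 8 already runs the computing service of OS2

def pick_target_compute_nodes(count: int = 1) -> list:
    candidates = [n for n in preset_compute_nodes if n not in deployed]
    if len(candidates) < count:
        raise RuntimeError("not enough free computing nodes")
    return random.sample(candidates, count)

print(pick_target_compute_nodes(1))  # e.g. ['server9'] or ['server10']
```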
In order to better understand the technical solution of the present application, the following explains the cloud operating system deployment method provided by the present application with reference to specific embodiments.
Example 1
Fig. 5 shows the interaction process of the modules in the control node according to the embodiment of the present application; the specific interaction steps are as follows:
Step 1: The API module receives a request, initiated by a user, for deploying the management service corresponding to a cloud operating system;
Step 2: The API module performs parameter verification and parameter processing based on the parameters in the request, detects whether the configuration file exists on the code hosting platform, and detects whether the image exists on the image platform;
Step 3: After the parameters are successfully checked and it is detected that the configuration file exists on the code hosting platform and the image exists on the image platform, the API module sends scheduling information to the scheduling module;
Step 4: The scheduling module sends indication information to the acquisition module to instruct the acquisition module to acquire the usage information of the servers;
Step 5: The acquisition module acquires the usage information of the servers and sends the acquisition result to the scheduling module;
Step 6: The scheduling module screens the servers that can be used for deploying the management service according to the usage information of the servers;
Step 7: The scheduling module predicts the load of each server based on the historical acquisition information using the Kalman filtering load prediction model;
Step 8: The scheduling module determines the remaining resource information of each server according to the load prediction result and scores the servers based on the remaining resource information;
Step 9: The scheduling module picks N servers as the target servers for deploying the management service based on the scoring result;
Step 10: The scheduling module returns the scheduling result to the API module;
Step 11: The API module triggers the deployment module to deploy the management service corresponding to the cloud operating system;
Step 12: The deployment module acquires the configuration file from the code hosting platform, controls the target servers to pull the image from the image platform, and deploys the management service on the target servers;
Step 13: The deployment module controls the target servers to start the containers running the management service on the specified ports;
Step 14: The deployment module initializes the cloud operating system and returns the administrator credential information.
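Purely as an illustration, the fourteen steps above can be condensed into the following Python sketch, in which the four modules are modeled as plain functions; all function names, data shapes, and values are illustrative and not taken from the present application.

```python
# Condensed illustration of Steps 1-14 above, with the four modules modeled as plain
# functions; all function names, values, and data shapes are illustrative only.
def api_check(req: dict) -> dict:
    assert req["node_count"] > 0                           # Steps 1-2: receive and verify
    return req

def collector_collect(nodes: list) -> dict:                # Steps 4-5: collect usage info (made-up values)
    return {node: {"cpu": 1.0 - 0.05 * i, "mem": 0.9 - 0.05 * i, "net": 0.8 - 0.05 * i}
            for i, node in enumerate(nodes)}

def scheduler_pick(usage: dict, count: int) -> list:       # Steps 6-9: screen, score, pick N servers
    ranked = sorted(usage, key=lambda n: sum(usage[n].values()), reverse=True)
    return ranked[:count]

def deployer_deploy(targets: list, cloud_os: str) -> dict: # Steps 11-14: deploy and initialize
    return {"cloud_os": cloud_os, "targets": targets, "status": "deployed"}

req = api_check({"cloud_os": "OS1", "node_count": 3})
usage = collector_collect(["server4", "server5", "server6", "server7"])
print(deployer_deploy(scheduler_pick(usage, req["node_count"]), req["cloud_os"]))
```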
The above embodiment only lists the interaction process of the modules in the control node when deploying the management service corresponding to a cloud operating system; the interaction process of the modules in the control node when deploying the computing service corresponding to the cloud operating system can refer to the above interaction process and is not described here again.
Example 2
Taking fig. 2 as an example, the process of the cloud operating system deployment method provided in the embodiment of the present application is explained below. In this embodiment, a user creates two cloud Operating Systems (OS), i.e., OS1 and OS2, where OS2 corresponds to the second cloud operating system and OS1 corresponds to the first cloud operating system. The process of deploying OS2 is described first, followed by the process of deploying OS1.
Step 1: The API module receives a request, sent by a user, for deploying the second management service corresponding to the second cloud operating system (OS2);
Step 2: The API module checks the parameters (both the configuration file and the image for deployment exist at this time);
Step 3: The acquisition module acquires the usage information of servers 4-7, including the remaining CPU, remaining memory capacity, remaining network resources, remaining disk capacity, remaining port number, and the CPU and memory utilization of the containers (docker) of the management service corresponding to the cloud operating system running on each server. Since no cloud operating system has been deployed on servers 4-7 before (that is, there is no existing cloud operating system), no container of a management service corresponding to a cloud operating system is running on servers 4-7, so the CPU and memory utilization of such containers on the hosts is 0;
Step 6: The scheduling module screens servers 4-7 for scheduling according to the usage information of servers 4-7 obtained in Step 3. Since servers 4-7 have no existing cloud operating system, servers 4-7 are all determined to be available for scheduling;
Step 7: The scheduling module predicts the CPU and memory loads of the containers on servers 4-7 using the Kalman filtering model. Assuming only the acquisition result of Step 3 is available so far, the predicted load of the available servers is calculated to be 0 according to formulas (3) and (4);
Step 8: The scheduling module normalizes the remaining-resource information (total resources minus load) to obtain a normalized vector, and calculates the correlation coefficient between the specification demand vector of the container running the second management service (namely, the resource vector required by the second management service) and the normalized vector;
Step 9: The scheduling module scores the servers according to the remaining resource information, the correlation coefficient, and other information, and selects, for example, servers 4-6 as the servers for deploying the second management service (it is assumed here that the user requires a three-copy deployment of the second cloud operating system, that is, the number of management nodes required by the second management service is 3);
Step 10: The deployment module deploys the second management service to servers 4-6, randomly selecting ports when deploying the second management service;
Step 11: The API module receives a request, sent by a user, for deploying the computing service corresponding to OS2;
Step 12: The API module checks the parameters (both the configuration and the image for deployment exist at this time);
Step 13: The acquisition module acquires the usage information of servers 8-10. Because there is no existing cloud operating system, no container running a computing service corresponding to a cloud operating system exists on servers 8-10, so the CPU and memory utilization of such containers on the hosts is 0;
Step 14: The scheduling module screens servers 8-10 for scheduling based on the usage information of Step 13. Because no cloud operating system has been deployed before, servers 8-10 may all be used for scheduling. For example, the scheduling module randomly picks server 8 for deploying the computing service corresponding to OS2.
Step 15: The deployment module deploys the computing service corresponding to OS2 to server 8.
The above describes the process of deploying OS2; the process of deploying OS1 is described below.
Step 1: The API module receives a request, sent by a user, for deploying the first management service corresponding to the first cloud operating system (OS1);
Step 2: The API module checks the parameters (both the configuration and the image for deployment exist at this time);
Step 3: The acquisition module acquires the usage information of servers 4-7. Since OS2 has previously been deployed on servers 4-6 of the first server cluster, servers 4-6 carry a certain CPU, memory, and network load (for example, CPU load ratios of 0.1 and 0.2, memory load ratios of 0.1 and 0.2, and network loads of 0.1 and 0.3), while server 7 carries no cloud operating system load;
Step 4: The scheduling module screens servers 4-7 that can be used for scheduling according to the information in Step 3. This step still does not filter out any server, as the resources of servers 4-7 are still abundant;
Step 5: The scheduling module predicts the CPU and memory loads of the containers running the second management service on the servers using the Kalman filtering model. At this time, according to formulas (3) and (4), the CPU loads of servers 4 to 7 are calculated to be 0.1, 0.15, 0.2, and 0, the memory loads 0.1, 0.15, and 0, and the network loads 0.1, 0.25, and 0;
Step 6: The scheduling module scores servers 4-7 as 2.84, 2.74, 2.68, and 3.15 according to formula (7). Sorted in descending order of score, servers 4-7 rank as servers 7, 4, 5, and 6; therefore, servers 4, 5, and 7 are selected as the servers for deploying the first management service;
Step 7: The deployment module deploys the first management service to servers 4, 5, and 7. When deploying the first management service, the deployment module randomly selects from the remaining ports and does not reuse the ports used for deploying the second management service, so that the two cloud operating systems share one physical network but are isolated from each other through their ports;
Step 8: The API module receives a request, sent by a user, for deploying the computing service corresponding to OS1;
Step 9: The API module checks the parameters (both the configuration and the image for deployment exist at this time);
Step 10: The acquisition module acquires the usage information of servers 8-10. Server 8 is already used for deploying the computing service corresponding to OS2, and no container of a cloud operating system exists on servers 9-10;
Step 11: The scheduling module randomly selects a server that can be used for scheduling from servers 9-10 according to the usage information in Step 10; for example, the scheduling module randomly selects server 9 for deploying the computing service corresponding to OS1;
Step 12: The deployment module deploys the computing service corresponding to OS1 to server 9.
Based on the same inventive concept, embodiments of the present application provide a cloud operating system deployment device, which can implement functions corresponding to the foregoing cloud operating system deployment method. The cloud operating system deployment device may be a hardware structure, a software module, or a hardware structure plus a software module. The cloud operating system deployment device can be realized by a chip system, and the chip system can be formed by a chip and can also comprise the chip and other discrete devices. Referring to fig. 6, the cloud operating system deployment device includes a communication module 601 and a processing module 602. Wherein:
a communication module 601, configured to receive a first request; the first request is used for requesting the deployment of a first management service corresponding to a first cloud operating system, and the first request comprises the number of management nodes required by the first management service;
a processing module 602, configured to obtain remaining resource information of multiple preset management nodes; the plurality of preset management nodes belong to the first server cluster, and the remaining resource information is available resources remaining after the plurality of preset management nodes deploy second management services corresponding to a second cloud operating system;
the processing module 602 is further configured to determine a target management node corresponding to the first management service based on the remaining resource information and the number of management nodes.
Optionally, the processing module 602 is specifically configured to:
determining a first preset management node of the plurality of preset management nodes, wherein the residual resource information of the first preset management node does not meet a preset condition;
determining the target management node based on the remaining resource information of a second preset management node and the number of the management nodes; the second preset management node is other preset management nodes except the first preset management node in the plurality of preset management nodes.
Optionally, the remaining resource information includes information about a remaining central processing unit, information about a remaining memory capacity, and information about a remaining network resource, and the processing module 602 is specifically configured to:
determining the ratio of the residual central processing units, the ratio of the residual memory capacity and the ratio of the residual network resources of each preset management node based on the residual resource information;
determining whether the ratio of the residual central processing units, the ratio of the residual memory capacity and the ratio of the residual network resources are all larger than or equal to corresponding thresholds;
and if any ratio in the first preset management nodes is smaller than a corresponding threshold value, determining that the residual resource information of the first preset management nodes does not meet the preset condition.
Optionally, the processing module 602 is specifically configured to:
performing normalization processing on the residual resource information of each preset management node to obtain a normalization vector;
determining a correlation coefficient between the normalized vector and the resource vector required by the first management service; wherein the correlation coefficient is used for indicating the matching degree of the residual resource and the resource required by the first management service;
determining the target management node based on the remaining resource information, the correlation coefficient, and the number of management nodes.
Optionally, the processing module 602 is specifically configured to:
score each preset management node based on the remaining resource information and the correlation coefficient, where the score indicates how suitable each preset management node is for deploying the first management service;
rank the preset management nodes in descending order of score; and
determine the target management node based on the ranking result and the number of management nodes (see the sketch below).
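A minimal sketch of the ranking step, assuming a score has already been computed for each node from its remaining resources and correlation coefficient; the node names and score values are illustrative only.

from typing import List, Tuple

def rank_and_pick(scored_nodes: List[Tuple[str, float]], count: int) -> List[str]:
    """scored_nodes pairs each preset management node with its score (higher means a
    better fit for the first management service); nodes are ranked in descending
    order of score and the names of the top count nodes are returned as targets."""
    ranked = sorted(scored_nodes, key=lambda item: item[1], reverse=True)
    return [name for name, _ in ranked[:count]]

# Illustrative only.
print(rank_and_pick([("node-a", 0.82), ("node-b", 0.67), ("node-c", 0.91)], count=2))
# ['node-c', 'node-a']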
Optionally, the communication module 601 is further configured to:
receive a second request, where the second request requests deployment of a computing service corresponding to the first cloud operating system;
determine, from a plurality of preset computing nodes, a plurality of first computing nodes on which no computing service is deployed; and
randomly select a second computing node from the plurality of first computing nodes as the target computing node for the computing service (see the sketch below).
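A minimal sketch of this compute-node selection, with hypothetical node names and a hypothetical set recording which preset computing nodes already run a computing service.

import random

def pick_compute_node(preset_compute_nodes, nodes_with_compute_service):
    """Keep the preset computing nodes on which no computing service is deployed
    (the 'first computing nodes') and randomly choose one of them as the target."""
    idle = [n for n in preset_compute_nodes if n not in nodes_with_compute_service]
    if not idle:
        raise RuntimeError("no idle computing node available")
    return random.choice(idle)

print(pick_compute_node(["cn-1", "cn-2", "cn-3"], nodes_with_compute_service={"cn-2"}))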
Optionally, the second cloud operating system and the first cloud operating system use different ports.
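As a small illustration of this constraint, the sketch below simply checks that the port sets assigned to the two cloud operating systems do not overlap; the port numbers are placeholders, not actual service assignments from the patent.

def ports_disjoint(first_os_ports, second_os_ports) -> bool:
    """Return True if the services of the two cloud operating systems listen on
    entirely different ports, as required when they share management nodes."""
    return set(first_os_ports).isdisjoint(second_os_ports)

print(ports_disjoint({8001, 8002, 8003}, {9001, 9002, 9003}))  # True
print(ports_disjoint({8001, 8002}, {8002, 9001}))              # False: port 8002 collides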
For details of every step of the foregoing cloud operating system deployment method embodiment, reference may be made to the functional description of the corresponding functional module of the cloud operating system deployment device in this embodiment of the application; they are not repeated here.
The division into modules in the embodiments of the present application is schematic and represents only one way of dividing logical functions; other divisions are possible in actual implementations. In addition, the functional modules in the embodiments of the present application may each be integrated into one processor, may exist alone physically, or two or more modules may be integrated into a single module. An integrated module may be implemented in hardware or as a software functional module.
Based on the same inventive concept, an embodiment of the present application provides an electronic device. Referring to fig. 7, the electronic device includes at least one processor 701 and a memory 702 connected to the at least one processor. The specific medium connecting the processor 701 and the memory 702 is not limited in this embodiment; in fig. 7 they are connected through a bus 700, drawn as a thick line, and the connections between other components are shown only schematically and are not limiting. The bus 700 may be divided into an address bus, a data bus, a control bus, and so on; it is drawn as a single thick line in fig. 7 for ease of illustration, but this does not mean that there is only one bus or only one type of bus.
In this embodiment of the present application, the memory 702 stores instructions executable by the at least one processor 701, and by executing the instructions stored in the memory 702 the at least one processor 701 can perform the steps of the foregoing cloud operating system deployment method.
The processor 701 is the control center of the electronic device. It may connect the various parts of the electronic device through various interfaces and lines, and it carries out the functions of the device and processes its data by running or executing the instructions stored in the memory 702 and calling the data stored in the memory 702, thereby monitoring the device as a whole. Optionally, the processor 701 may include one or more processing units and may integrate an application processor, which mainly handles the operating system, application programs, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor need not be integrated into the processor 701. In some embodiments, the processor 701 and the memory 702 may be implemented on the same chip; in other embodiments, they may be implemented on separate chips.
The processor 701 may be a general-purpose processor such as a central processing unit (CPU), or a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, discrete gate or transistor logic, or discrete hardware components, and it may implement or perform the methods, steps, and logical blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the cloud operating system deployment method disclosed in the embodiments of the present application may be performed directly by a hardware processor, or by a combination of hardware and software modules within the processor.
The memory 702, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 702 may include at least one type of storage medium, for example a flash memory, a hard disk, a multimedia card, a card-type memory, a random access memory (RAM), a static random access memory (SRAM), a programmable read-only memory (PROM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a magnetic memory, a magnetic disk, or an optical disc. The memory 702 may be, but is not limited to, any other medium that can carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 702 in the embodiments of the present application may also be a circuit or any other device capable of performing a storage function, used to store program instructions and/or data.
By programming the processor 701, the code corresponding to the cloud operating system deployment method described in the foregoing embodiments may be solidified into a chip, so that the chip can perform the steps of the method when it runs.
Based on the same inventive concept, the present application further provides a computer-readable storage medium storing computer instructions which, when run on a computer, cause the computer to perform the steps of the cloud operating system deployment method described above.
In some possible embodiments, the aspects of the cloud operating system deployment method provided by the present application may also be implemented as a program product that includes program code; when the program product runs on an electronic device, the program code causes the electronic device to perform the steps of the cloud operating system deployment method according to the various exemplary embodiments described above in this specification.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method for deploying a cloud operating system, the method being applied to a first server cluster, the method comprising:
receiving a first request; the first request is used for requesting to deploy a first management service corresponding to a first cloud operating system, and the first request comprises the number of management nodes required by the first management service;
acquiring residual resource information of a plurality of preset management nodes; the plurality of preset management nodes belong to the first server cluster, and the remaining resource information is available resources remaining after the plurality of preset management nodes deploy second management services corresponding to a second cloud operating system;
and determining a target management node corresponding to the first management service based on the remaining resource information and the number of management nodes.
2. The method of claim 1, wherein said determining a target management node based on the remaining resource information and the number of management nodes comprises:
determining a first preset management node, of the plurality of preset management nodes, whose remaining resource information does not satisfy a preset condition; and
determining the target management node based on the remaining resource information of a second preset management node and the number of management nodes, wherein the second preset management node is a preset management node, of the plurality of preset management nodes, other than the first preset management node.
3. The method of claim 2, wherein the remaining resource information includes remaining central processing unit information, remaining memory capacity information, and remaining network resource information, and the determining a first preset management node of the plurality of preset management nodes whose remaining resource information does not satisfy a preset condition comprises:
determining, based on the remaining resource information, the remaining central processing unit ratio, the remaining memory capacity ratio, and the remaining network resource ratio of each preset management node;
determining whether the remaining central processing unit ratio, the remaining memory capacity ratio, and the remaining network resource ratio are all greater than or equal to their corresponding thresholds; and
if any of these ratios for the first preset management node is smaller than its corresponding threshold, determining that the remaining resource information of the first preset management node does not satisfy the preset condition.
4. The method of claim 1, wherein the determining a target management node based on the remaining resource information and the number of management nodes comprises:
normalizing the remaining resource information of each preset management node to obtain a normalized vector;
determining a correlation coefficient between the normalized vector and the resource vector required by the first management service, wherein the correlation coefficient indicates how well the remaining resources match the resources required by the first management service; and
determining the target management node based on the remaining resource information, the correlation coefficient, and the number of management nodes.
5. The method of claim 4, wherein the determining the target management node based on the remaining resource information, the correlation coefficient, and the number of management nodes comprises:
scoring each preset management node based on the remaining resource information and the correlation coefficient, wherein the score indicates how suitable each preset management node is for deploying the first management service;
ranking the preset management nodes in descending order of score; and
determining the target management node based on the ranking result and the number of management nodes.
6. The method of claim 1, wherein the method further comprises:
receiving a second request; the second request is used for requesting to deploy computing service corresponding to the first cloud operating system;
determining, from a plurality of preset computing nodes, a plurality of first computing nodes on which no computing service is deployed;
and randomly determining a second computing node from the plurality of first computing nodes as a target computing node corresponding to the computing service.
7. The method of any of claims 1-6, wherein a port used by the second cloud operating system is different from a port used by the first cloud operating system.
8. A cloud operating system deployment apparatus applied to a first server cluster, the apparatus comprising:
a communication module for receiving a first request; the first request is used for requesting the deployment of a first management service corresponding to a first cloud operating system, and the first request comprises the number of management nodes required by the first management service;
the processing module is configured to obtain remaining resource information of a plurality of preset management nodes, wherein the plurality of preset management nodes belong to the first server cluster and the remaining resource information indicates the available resources that remain after the plurality of preset management nodes have deployed second management services corresponding to a second cloud operating system;
the processing module is further configured to determine a target management node corresponding to the first management service based on the remaining resource information and the number of management nodes.
9. An electronic device, comprising:
a memory for storing program instructions;
a processor, configured to call the program instructions stored in the memory and to perform, according to the obtained program instructions, the steps included in the method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a computer, cause the computer to perform the method according to any one of claims 1-7.

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210835020.5A CN115242598A (en) 2022-07-15 2022-07-15 Cloud operating system deployment method and device
PCT/CN2022/141608 WO2024011860A1 (en) 2022-07-15 2022-12-23 Cloud operating system deployment method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210835020.5A CN115242598A (en) 2022-07-15 2022-07-15 Cloud operating system deployment method and device

Publications (1)

CN115242598A, published 2022-10-25

Family

ID=83673778

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210835020.5A Pending CN115242598A (en) 2022-07-15 2022-07-15 Cloud operating system deployment method and device

Country Status (2)

Country Link
CN (1) CN115242598A (en)
WO (1) WO2024011860A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104104720B (en) * 2014-07-10 2017-11-03 浪潮(北京)电子信息产业有限公司 A kind of server set group managing means and system
CN109298868B (en) * 2018-08-22 2024-01-09 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Intelligent dynamic deployment and uninstallation method for mapping image data processing software
US11194566B1 (en) * 2020-03-16 2021-12-07 Amazon Technologies, Inc. Decentralized, cluster-managed deployment of software updates in a multi-cluster environment
CN111405055A (en) * 2020-03-23 2020-07-10 北京达佳互联信息技术有限公司 Multi-cluster management method, system, server and storage medium
CN112311886B (en) * 2020-10-30 2022-03-01 新华三大数据技术有限公司 Multi-cluster deployment method, device and management node
CN115242598A (en) * 2022-07-15 2022-10-25 天翼云科技有限公司 Cloud operating system deployment method and device

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130103838A1 (en) * 2011-10-19 2013-04-25 Guang-Jian Wang System and method for transferring guest operating system
US20140053149A1 (en) * 2012-08-17 2014-02-20 Systex Software & Service Corporation Fast and automatic deployment method for cluster system
CN104536832A (en) * 2015-01-21 2015-04-22 北京邮电大学 Virtual machine deployment method
CN108737463A (en) * 2017-04-17 2018-11-02 北京神州泰岳软件股份有限公司 A kind of software deployment method, server and system
WO2019076369A1 (en) * 2017-10-19 2019-04-25 北京金山云网络技术有限公司 Cloud platform deployment method, device, electronic device, and readable storage medium
CN110535894A (en) * 2018-05-25 2019-12-03 深圳先进技术研究院 A kind of container resource dynamic distributing method and its system based on load feedback
CN109032618A (en) * 2018-07-11 2018-12-18 郑州云海信息技术有限公司 A kind of deployment and interconnection method and system with OpenStack cloud management platform
CN109213555A (en) * 2018-08-16 2019-01-15 北京交通大学 A kind of resource dynamic dispatching method of Virtual desktop cloud
CN109376006A (en) * 2018-09-04 2019-02-22 西安电子科技大学 Resource integrated method based on user demand time-varying characteristics under a kind of cloud computing environment
CN109634915A (en) * 2018-11-28 2019-04-16 深圳市网心科技有限公司 File dispositions method, Cloud Server, system and storage medium
CN109947616A (en) * 2019-02-11 2019-06-28 北京国电通网络技术有限公司 A kind of automatically-monitored operational system of the cloud operating system based on OpenStack technology
WO2021129733A1 (en) * 2019-12-24 2021-07-01 中兴通讯股份有限公司 Cloud operating system management method and apparatus, server, management system, and medium
CN111176697A (en) * 2020-01-02 2020-05-19 广州虎牙科技有限公司 Service instance deployment method, data processing method and cluster federation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SEUNGHYUNG LEE: "Refining Micro Services Placement over Multiple Kubernetes-orchestrated Clusters employing Resource Monitoring", 2020 IEEE 40th International Conference on Distributed Computing Systems *
WANG Bin: "Design and Implementation of an OpenStack-based Cloud Platform Management System", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024011860A1 (en) * 2022-07-15 2024-01-18 天翼云科技有限公司 Cloud operating system deployment method and device
CN116074555A (en) * 2022-12-23 2023-05-05 天翼云科技有限公司 Full-link performance test method and system for cloud edge architecture video monitoring platform
CN116074555B (en) * 2022-12-23 2024-06-07 天翼云科技有限公司 Full-link performance test method and system for cloud edge architecture video monitoring platform
CN117472548A (en) * 2023-12-11 2024-01-30 北京火山引擎科技有限公司 Resource scheduling method, device, equipment and storage medium

Also Published As

WO2024011860A1 (en), published 2024-01-18

Similar Documents

Publication Publication Date Title
CN115242598A (en) Cloud operating system deployment method and device
CN110096336B (en) Data monitoring method, device, equipment and medium
CN115328663B (en) Method, device, equipment and storage medium for scheduling resources based on PaaS platform
CN114840304B (en) Container scheduling method, electronic equipment and storage medium
CN110389843B (en) Service scheduling method, device, equipment and readable storage medium
CN110389903B (en) Test environment deployment method and device, electronic equipment and readable storage medium
CN110213128B (en) Service port detection method, electronic device and computer storage medium
WO2024120205A1 (en) Method and apparatus for optimizing application performance, electronic device, and storage medium
CN111736957A (en) Multi-type service mixed deployment method, device, equipment and storage medium
CN109062580B (en) Virtual environment deployment method and deployment device
CN114721824A (en) Resource allocation method, medium and electronic device
CN112214321B (en) Node selection method and device for newly added micro service and micro service management platform
CN114296909A (en) Automatic node capacity expansion and reduction method and system according to kubernets event
CN113626173A (en) Scheduling method, device and storage medium
CN116089477B (en) Distributed training method and system
CN116483546B (en) Distributed training task scheduling method, device, equipment and storage medium
CN111831408A (en) Asynchronous task processing method and device, electronic equipment and medium
CN115964176B (en) Cloud computing cluster scheduling method, electronic equipment and storage medium
CN111431951B (en) Data processing method, node equipment, system and storage medium
CN111143033A (en) Operation execution method and device based on scalable operating system
US20110066754A1 (en) Intelligent Device and Media Server Selection for Optimized Backup Image Duplication
CN108874798B (en) Big data sorting method and system
CN116260876A (en) AI application scheduling method and device based on K8s and electronic equipment
CN115941758A (en) Cloud service console deployment method, system and storage medium based on dynamic programming
CN113760446A (en) Resource scheduling method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination