CN109213493B - Container deployment method, special resource management terminal and readable storage medium
- Publication number
- CN109213493B (application number CN201710550300.0A)
- Authority
- CN
- China
- Prior art keywords
- information
- container
- computing
- node
- resource
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/60—Software deployment
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses a container deployment method, which comprises the following steps: when a container deployment instruction is received, acquiring resource demand information of a container to be deployed according to configuration information included in the container deployment instruction; acquiring available universal resource information of each computing node in a cluster from a preset database to determine a computing node set capable of meeting the universal resource requirement; sending special resource application information to each computing node in the computing node set; and when application success information returned by a computing node is received, determining the computing node that returned the application success information as a target node and deploying the container in the target node. The invention also discloses a special resource management terminal and a readable storage medium. The method can deploy containers with special resource requirements while remaining compatible with container deployment that supports only common resource requirements.
Description
Technical Field
The present invention relates to the field of container technologies, and in particular, to a container deployment method, a terminal, and a readable storage medium.
Background
Container technology developed along with the development of thin client systems. When developing a thin client system, a developer spends a great deal of effort on details such as thread safety, transactions, networking, and resources, which reduces development efficiency. Because the solutions to these details are generally fixed, or differ only in their parameters, these low-level details can, from the perspective of code reuse and design patterns, be extracted, built into a platform, and exposed through defined interfaces; developers then no longer need to spend excessive time and effort on implementing these underlying details and can focus on implementing the business logic. Such a platform may be referred to as a "container".
Container technology is an important part of cloud technology. In a cloud service, if an application is to be deployed or a service is to be provided through a container, a target node (a single computer) needs to be determined in a cluster (a group of computers) for deploying the container. The selection of the target node has certain requirements: the resources (CPU performance, memory capacity, etc.) that the target node can provide for deploying the container must meet the container's resource requirements. Therefore, before deploying the container, resource data of all nodes in the cluster needs to be collected, and a target node satisfying the container's resource requirements is then determined among those nodes to complete the deployment of the container.
However, the resource data that can be collected by the conventional container deployment method only includes general resource data (e.g., CPU performance, memory size, etc.) of the nodes; special resource data (e.g., codec chips, encryption chips, etc.) cannot be collected. It is therefore impossible to determine whether the special resources of each node can meet the special resource requirements of the container, and containers that require special resources cannot be deployed normally.
Disclosure of Invention
The invention mainly aims to provide a container deployment method, a special resource management terminal and a readable storage medium, and aims to solve the technical problem that containers with special resource requirements cannot be deployed.
To achieve the above object, the present invention provides a container deployment method, including the steps of:
when a container deployment instruction is received, acquiring resource demand information of a container to be deployed according to configuration information included in the container deployment instruction, wherein the resource demand information includes general resource demand and special resource demand;
acquiring current available universal resource information of each computing node in a cluster from a preset database, and determining a computing node set capable of meeting the universal resource requirement in each computing node according to the available universal resource information;
sending special resource application information to each computing node in the computing node set, wherein the special resource application information comprises the detail information of the special resource requirements;
and when receiving application success information returned by the computing nodes in the computing node set, determining the computing nodes returning the application success information as target nodes, and deploying the container to be deployed in the target nodes.
Optionally, the special resources include at least one resource other than a CPU, memory, and a hard disk.
Optionally, the step of sending the special resource application information to each computing node in the computing node set includes:
and starting threads with the same number as the number of the computing nodes in the computing node set, and simultaneously sending special resource application information to each computing node in the computing node set.
Optionally, the step of sending the special resource application information to each computing node in the computing node set further includes:
dividing each computing node in the computing node set into more than two subsets according to a preset rule;
and starting threads with preset quantity, and sequentially sending special resource application information to the computing nodes in each subset.
Optionally, when receiving application success information returned by the computing nodes in the computing node set, the step of determining the computing nodes returning the application success information as target nodes, and deploying the container to be deployed in the target nodes includes:
when application success information returned by the computing nodes in the computing node set is received, determining the computing nodes returning the application success information as target nodes, and locking the target nodes and container deployment instructions;
sending confirmation information to the target node, so that the target node reserves special resources in the target node according to the confirmation information;
deploying the container to be deployed in the target node according to the special resource in the target node.
Optionally, the step of determining, when receiving application success information returned by the computing nodes in the computing node set, the computing node returning the application success information as a target node, and deploying the container to be deployed in the target node further includes:
when application success information returned by more than two computing nodes in the computing node set is received, determining a target node in the multiple computing nodes returning the application success information according to the receiving sequence of the application success information, and deploying the container to be deployed in the target node.
Optionally, the step of determining, when receiving application success information returned by the computing nodes in the computing node set, the computing node returning the application success information as a target node, and deploying the container to be deployed in the target node further includes:
when application success information returned by more than two computing nodes in the computing node set is received, determining a target node in the computing nodes returning the application success information according to available special resource information included in the application success information, and deploying the container to be deployed in the target node.
Optionally, after the step of sending the special resource application information to each computing node in the computing node set, the method further includes:
and if the information returned by each computing node in the computing node set is application failure information, outputting information indicating that no deployable computing node exists.
In addition, in order to achieve the above object, the present invention further provides a special resource management terminal, where the special resources include at least one resource other than a CPU, memory, and a hard disk, and the special resource management terminal includes a processor, a memory, and a special resource management program stored in the memory and executable by the processor, where the special resource management program, when executed by the processor, implements the steps of the container deployment method described above.
In addition, to achieve the above object, the present invention further provides a readable storage medium, on which a special resource management program is stored, where the special resource management program, when executed by a processor, implements the steps of the container deployment method as described above.
When a container deployment instruction is received, resource demand information of a container to be deployed is acquired according to configuration information included in the container deployment instruction, where the resource demand information comprises general resource demands and special resource demands; the currently available universal resource information of each computing node in a cluster is acquired from a preset database, and a computing node set capable of meeting the universal resource requirement is determined among the computing nodes according to the available universal resource information; special resource application information, which includes the detail information of the special resource requirements, is sent to each computing node in the computing node set; and when application success information returned by a computing node in the computing node set is received, the computing node returning the application success information is determined as a target node, and the container to be deployed is deployed in the target node. In this way, when a container with special resource requirements needs to be deployed, the computing nodes capable of meeting the general resource requirements of the container are first determined from the information in the database, a target node capable of meeting the special resource requirements of the container is then found among those computing nodes, and the container deployment is completed on the target node. In the whole process, only the local computing nodes need to be queried and the overall scheduling logic of the cluster does not need to be modified, so containers with special resource requirements can be supported and deployed while remaining compatible with container deployment that supports only common resource requirements.
Drawings
Fig. 1 is a schematic diagram of a hardware structure of a special resource management terminal according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart diagram of a first embodiment of a container deployment method of the present invention;
FIG. 3 is a detailed diagram of the step of sending a special resource application message to each compute node in the set of compute nodes shown in FIG. 2;
FIG. 4 is a detailed diagram of the step of sending a special resource application message to each compute node in the set of compute nodes shown in FIG. 2;
fig. 5 is a schematic diagram of a refining process of determining, when receiving application success information returned by the computing nodes in the computing node set in fig. 2, the computing nodes returning the application success information as target nodes and deploying the container to be deployed in the target nodes;
fig. 6 is a schematic diagram of a detailed flow of determining, when application success information returned by the computing nodes in the computing node set in fig. 2 is received, the computing nodes returning the application success information as target nodes, and deploying the container to be deployed in the target nodes;
fig. 7 is a schematic diagram of a detailed flow of determining, when application success information returned by the computing nodes in the computing node set in fig. 2 is received, the computing nodes returning the application success information as target nodes, and deploying the container to be deployed in the target nodes.
The implementation, functional features and advantages of the present invention will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The main idea of the scheme of the embodiment of the invention is as follows: when a container deployment instruction is received, acquiring resource demand information of a container to be deployed according to configuration information included in the container deployment instruction, wherein the resource demand information includes general resource demand and special resource demand; acquiring current available universal resource information of each computing node in a cluster from a preset database, and determining a computing node set capable of meeting the universal resource requirement in each computing node according to the available universal resource information; sending special resource application information to each computing node in the computing node set, wherein the special resource application information comprises the detail information of the special resource requirements; and when receiving application success information returned by the computing nodes in the computing node set, determining the computing nodes returning the application success information as target nodes, and deploying the container to be deployed in the target nodes.
The container deployment method related to the embodiment of the invention is mainly applied to a special resource management terminal, and the terminal can be realized in various ways. For example, the special resource management terminal may include a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palm computer, and the like, and may further include a fixed terminal such as a smart television, a PC terminal, and the like.
In the following description, a PC terminal is taken as an example of the special resource management terminal; those skilled in the art will appreciate that, except for elements specifically intended for mobile use, the configuration according to the embodiments of the present invention can also be applied to other types of terminals.
Referring to fig. 1, fig. 1 is a schematic diagram of a hardware structure of a special resource management terminal according to an embodiment of the present invention. In the embodiment of the present invention, the special resource management terminal may include a processor 1001 (e.g., a CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used for implementing connection communication among the components; the user interface 1003 may include a Display screen (Display) and an input unit such as a Keyboard (Keyboard); the network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface); the memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as a disk memory, and the memory 1005 may optionally be a storage device separate from the processor 1001.
Optionally, the special resource management terminal may further include a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WiFi module, and the like. Such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust brightness of the display screen according to brightness of ambient light, and the proximity sensor may turn off the display screen and/or the backlight according to a distance between the photosensitive device and the reference object. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the terminal is stationary, and can be used for identifying applications of special resource management terminal gestures (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration identification related functions (such as pedometer and tapping) and the like; of course, the special resource management terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described herein again.
It will be appreciated by those skilled in the art that the hardware structure of the special resource management terminal shown in fig. 1 does not constitute a limitation of the special resource management terminal, which may include more or fewer components than those shown, or combine some components, or use a different arrangement of components.
With continued reference to FIG. 1, memory 1005, which is one type of computer-readable storage medium in FIG. 1, may include an operating system, a network communication module, and a special resource management program.
In fig. 1, the network communication module is mainly used for connecting to a preset database and performing data communication with the preset database; and the processor 1001 may call the special resource management program stored in the memory 1005 and perform the following operations:
when a container deployment instruction is received, acquiring resource demand information of a container to be deployed according to configuration information included in the container deployment instruction, wherein the resource demand information includes general resource demand and special resource demand;
acquiring current available universal resource information of each computing node in a cluster from a preset database, and determining a computing node set capable of meeting the universal resource requirement in each computing node according to the available universal resource information;
sending special resource application information to each computing node in the computing node set, wherein the special resource application information comprises the detail information of the special resource requirements;
and when receiving application success information returned by the computing nodes in the computing node set, determining the computing nodes returning the application success information as target nodes, and deploying the container to be deployed in the target nodes.
Further, the special resources include at least one resource other than the CPU, the memory, and the hard disk.
Further, the processor 1001 may also call a special resource management program stored in the memory 1005 and perform the following operations:
and starting threads with the same number as the number of the computing nodes in the computing node set, and simultaneously sending special resource application information to each computing node in the computing node set.
Further, the processor 1001 may also call a special resource management program stored in the memory 1005 and perform the following operations:
dividing each computing node in the computing node set into more than two subsets according to a preset rule;
and starting threads with preset quantity, and sequentially sending special resource application information to the computing nodes in each subset.
Further, the processor 1001 may also call a special resource management program stored in the memory 1005 and perform the following operations:
when application success information returned by the computing nodes in the computing node set is received, determining the computing nodes returning the application success information as target nodes, and locking the target nodes and container deployment instructions;
sending confirmation information to the target node, so that the target node reserves special resources in the target node according to the confirmation information;
and deploying the container to be deployed in the target node according to the special resource in the target node.
Further, the processor 1001 may also call a special resource management program stored in the memory 1005 and perform the following operations:
when application success information returned by more than two computing nodes in the computing node set is received, determining a target node in the multiple computing nodes returning the application success information according to the receiving sequence of the application success information, and deploying the container to be deployed in the target node.
Further, the processor 1001 may also call a special resource management program stored in the memory 1005 and perform the following operations:
when application success information returned by more than two computing nodes in the computing node set is received, determining a target node in the computing nodes returning the application success information according to the available special resource information included in the application success information, and deploying the container to be deployed in the target node.
Further, the processor 1001 may also call a special resource management program stored in the memory 1005 and perform the following operations:
and if the information returned by each computing node in the computing node set is application failure information, outputting information indicating that no deployable computing node exists.
Based on the hardware structure of the special resource management terminal, the invention provides various embodiments of the container deployment method.
The invention provides a container deployment method.
Referring to fig. 2, fig. 2 is a flowchart illustrating a first embodiment of a container deployment method according to the present invention.
In this embodiment, the container deployment method includes the following steps:
step S10, when a container deployment instruction is received, acquiring resource demand information of a container to be deployed according to configuration information included in the container deployment instruction, wherein the resource demand information includes general resource demand and special resource demand;
container technology is an important element in cloud technology. In a cloud service, if it is desired to deploy an application or provide a service in a container, a target node (a single computer) needs to be determined in a cluster (a group of computers) for deploying a container. The selection of the target node has certain requirements, and resources (CPU performance, memory volume, etc.) that can be provided by the target node for deploying the container must meet container resource requirements; therefore, before deploying the container, it is necessary to collect resource data of all the computing nodes in the cluster, and then determine a target node satisfying the resource requirement of the container among the computing nodes, so as to complete the deployment of the container. However, the resource data that can be collected by the conventional container deployment method only includes general resource data (e.g., CPU performance, memory size, etc.) of all the computing nodes, and special resource data (e.g., codec chips, encryption chips, etc.) cannot be collected, so that it is impossible to determine whether special resources in each computing node can meet special resource requirements of the container, and thus the container having requirements for the special resources cannot be deployed normally.
Based on the foregoing problems, the present embodiment provides a container deployment method, which aims to solve the technical problem that a container having a requirement on a special resource cannot be deployed.
Specifically, in this embodiment, a container with special resource requirements is deployed in a computer cluster. The cluster includes a group of computers (at least two), each of which may be referred to as a node in the cluster. The nodes in the cluster can be divided into computing nodes and management nodes according to their main role in the cluster: computing nodes are mainly used for processing computing tasks, while management nodes are mainly used for sending corresponding operation instructions to the corresponding nodes according to task instructions triggered by users. It should be noted that this division of node types does not limit this embodiment; the nodes may also be divided according to other rules.
In this embodiment, the user performs the container deployment operation through the management node, so the management node may be called the special resource management terminal. A user performs a container deployment operation on the special resource management terminal to trigger a corresponding container deployment instruction (in this embodiment, the management node is mainly used for control and management and is not considered for deploying containers itself); when performing the container deployment operation, the user also provides a corresponding configuration file according to the resource demand information of the container. The resource requirements of the container in this embodiment include general resource requirements and special resource requirements. General resources are resources held by every computing node, including CPU performance, memory size, hard disk capacity, network transmission rate, and the like; special resources are resources held by only some of the computing nodes, including codec chips, encryption chips, coprocessors, and the like. In this embodiment, when receiving a container deployment instruction triggered by the user, the special resource management terminal acquires the resource demand information of the container to be deployed according to the configuration information of the configuration file, where the resource demand information includes a general resource demand and a special resource demand, and the special resources required by the container include at least one resource other than a CPU, memory, and a hard disk.
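By way of example and not limitation, the following Python sketch illustrates one possible realization of step S10: a hypothetical parse_resource_demands() helper splits the configuration carried by the deployment instruction into general and special resource demands. All field names (cpu_cores, memory_gb, disk_gb, codec_chips) are assumptions made for the illustration, not part of the described embodiment.

```python
# Illustrative sketch of step S10: extract general and special resource
# demands from the configuration carried by a container deployment
# instruction. All field names are hypothetical.
from dataclasses import dataclass, field
from typing import Dict

GENERAL_KEYS = {"cpu_cores", "memory_gb", "disk_gb"}   # assumed general resources

@dataclass
class ResourceDemand:
    general: Dict[str, int] = field(default_factory=dict)   # e.g. {"cpu_cores": 2}
    special: Dict[str, int] = field(default_factory=dict)   # e.g. {"codec_chips": 10}

def parse_resource_demands(config: Dict[str, int]) -> ResourceDemand:
    """Split the configuration into general and special resource demands."""
    demand = ResourceDemand()
    for key, amount in config.items():
        if key in GENERAL_KEYS:
            demand.general[key] = amount
        else:
            demand.special[key] = amount   # anything else is treated as special
    return demand

if __name__ == "__main__":
    cfg = {"cpu_cores": 2, "memory_gb": 4, "disk_gb": 40, "codec_chips": 10}
    print(parse_resource_demands(cfg))
```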
Step S20, acquiring current available universal resource information of each computing node in a cluster from a preset database, and determining a computing node set capable of meeting the universal resource requirement in each computing node according to the available universal resource information;
in this embodiment, when acquiring the resource demand information of the container, the special resource management terminal queries the preset database to determine the computing nodes capable of meeting the container's demand for general resources. Specifically, a database is preset in the cluster, and the preset database includes the currently available universal resource information of each node of the cluster. When the special resource management terminal acquires the resource demand information of the container, it queries the preset database to find the computing nodes meeting the general resource demand; these computing nodes form a set, which may be called the computing node set. For example, the cluster includes seven computing nodes C1, C2, S1, S2, S3, S4, and S5. C1 has general resources of a 2-core CPU, 2G memory, and a 20G hard disk, and no special resources; C2 has an 8-core CPU, 8G memory, and an 80G hard disk, and no special resources; S1 has a 2-core CPU, 4G memory, a 40G hard disk, and 4 encryption chips as special resources; S2 has a 4-core CPU, 4G memory, a 40G hard disk, and 8 codec chips; S3 has a 4-core CPU, 8G memory, a 40G hard disk, and 32 codec chips; S4 has an 8-core CPU, 8G memory, a 40G hard disk, and 64 codec chips; S5 has a 2-core CPU, 2G memory, a 40G hard disk, and 64 codec chips. The resource information of the seven computing nodes is shown in the following table:
| Node code | Node universal resources | Node special resources |
| --- | --- | --- |
| C1 | 2-core CPU, 2G memory, 20G hard disk | None |
| C2 | 8-core CPU, 8G memory, 80G hard disk | None |
| S1 | 2-core CPU, 4G memory, 40G hard disk | 4 encryption chips |
| S2 | 4-core CPU, 4G memory, 40G hard disk | 8 codec chips |
| S3 | 4-core CPU, 8G memory, 40G hard disk | 32 codec chips |
| S4 | 8-core CPU, 8G memory, 40G hard disk | 64 codec chips |
| S5 | 2-core CPU, 2G memory, 40G hard disk | 64 codec chips |
Suppose the general resource requirements of the container to be deployed are a 2-core CPU, 4G memory, and a 40G hard disk, and the special resource requirement is 10 codec chips. When the special resource management terminal searches the preset database for computing nodes meeting the conditions, it mainly searches for computing nodes capable of meeting the container's universal resource requirements; after the search, the computing nodes meeting the universal resource requirements are determined to be C2, S1, S2, S3, and S4, and these five computing nodes make up the computing node set.
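By way of example and not limitation, the following sketch illustrates the filtering of step S20, with a plain in-memory dictionary standing in for the preset database; the node names and figures follow the example table above, and the function name select_candidates() is an assumption.

```python
# Illustrative sketch of step S20: filter the cluster's computing nodes by
# their currently available general resources. A plain dict stands in for
# the preset database; node data follows the example table above.
from typing import Dict, List

PRESET_DB: Dict[str, Dict[str, int]] = {          # available general resources
    "C1": {"cpu_cores": 2, "memory_gb": 2, "disk_gb": 20},
    "C2": {"cpu_cores": 8, "memory_gb": 8, "disk_gb": 80},
    "S1": {"cpu_cores": 2, "memory_gb": 4, "disk_gb": 40},
    "S2": {"cpu_cores": 4, "memory_gb": 4, "disk_gb": 40},
    "S3": {"cpu_cores": 4, "memory_gb": 8, "disk_gb": 40},
    "S4": {"cpu_cores": 8, "memory_gb": 8, "disk_gb": 40},
    "S5": {"cpu_cores": 2, "memory_gb": 2, "disk_gb": 40},
}

def select_candidates(general_demand: Dict[str, int]) -> List[str]:
    """Return the computing node set that satisfies every general demand."""
    return [
        node for node, avail in PRESET_DB.items()
        if all(avail.get(k, 0) >= v for k, v in general_demand.items())
    ]

print(select_candidates({"cpu_cores": 2, "memory_gb": 4, "disk_gb": 40}))
# -> ['C2', 'S1', 'S2', 'S3', 'S4']
```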
Further, in this embodiment, the universal resource information of each computing node in the preset database may be collected and updated by the special resource management terminal itself: the special resource management terminal creates a corresponding thread, which automatically and periodically polls each computing node, collects the available universal resource information of each computing node in the cluster, and updates the preset database accordingly.
Of course, each computing node may also actively report its own state information, including its universal resource information, to the special resource management terminal at regular intervals; when the special resource management terminal receives the periodically reported state information, it updates the preset database accordingly.
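By way of example and not limitation, the polling variant can be sketched as follows; poll_node() is a placeholder for whatever query or RPC mechanism the cluster actually uses, and the 30-second interval is an arbitrary assumption.

```python
# Illustrative sketch of the polling approach: a background thread
# periodically queries every computing node for its available general
# resources and writes the result into the preset database.
import threading
import time
from typing import Dict

def poll_node(node: str) -> Dict[str, int]:
    """Placeholder: fetch a node's available general resources over RPC."""
    return {"cpu_cores": 2, "memory_gb": 4, "disk_gb": 40}

def start_resource_poller(nodes, preset_db: Dict[str, Dict[str, int]],
                          interval_s: float = 30.0) -> threading.Thread:
    def loop():
        while True:
            for node in nodes:
                preset_db[node] = poll_node(node)   # refresh general resources
            time.sleep(interval_s)
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t
```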
Step S30, sending special resource application information to each computing node in the computing node set, wherein the special resource application information comprises the detail information of the special resource requirements;
in this embodiment, when the computing node set satisfying the container's universal resource requirements is obtained, a target node satisfying the container's special resource requirements is searched for within that set. Specifically, since a common cluster database does not record or update the special resource information of the computing nodes, and in consideration of factors such as compatibility with the database and the container scheduling engine and the workload of customizing the scheduling algorithm, in this embodiment the special resource management terminal does not collect the available special resource information of each computing node in order to determine the target node; instead, it sends special resource application information to each computing node in the computing node set, where the special resource application information includes the requirement details of the container's special resource requirements, that is, the type and quantity of the required special resources. When receiving the application information, a computing node compares the special resource requirements with its own special resources to determine whether it meets the conditions for container deployment; if the special resource requirements are not met, the computing node returns application failure information to the special resource management terminal, and if they are met, it returns application success information.
Further, the requirement for special resources may fail to be satisfied in various ways. Take the resource information table of the computing nodes and the container requirements in step S20: the general resource requirements of the container are a 2-core CPU, 4G memory, and a 40G hard disk, and the special resource requirement is 10 codec chips. After querying the preset database, the special resource management terminal obtains a computing node set containing the five computing nodes C2, S1, S2, S3, and S4, and may simultaneously send special resource application information, with a special resource requirement of 10 codec chips, to these five computing nodes. When the five computing nodes receive the application information, each compares it with its own special resources to determine whether it meets the special resource requirement. After the comparison, S3 and S4 meet the special resource requirement and return application success information to the special resource management terminal, while C2, S1, and S2 cannot meet it and return application failure information. The reasons why C2, S1, and S2 fail differ: C2 and S1 do not hold the required kind of special resource, so they do not support the corresponding special resource application interface and the application fails, whereas S2 holds the special resource but in a quantity that does not satisfy the required amount; the failure information returned by the three computing nodes therefore also differs.
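By way of example and not limitation, the node-side check can be sketched as follows; the distinct failure reasons mirror the cases described above, and all names are assumptions.

```python
# Illustrative sketch of the node-side check in step S30: the computing
# node compares the requested special resources with what it actually
# holds and answers with success or a failure reason.
from typing import Dict, Tuple

def handle_special_resource_application(
        node_special: Dict[str, int],          # e.g. {"codec_chips": 8}
        requested: Dict[str, int],             # e.g. {"codec_chips": 10}
) -> Tuple[bool, str]:
    for resource, amount in requested.items():
        held = node_special.get(resource)
        if held is None:
            # the node has no such resource / no application interface for it
            return False, f"unsupported special resource: {resource}"
        if held < amount:
            return False, f"insufficient {resource}: have {held}, need {amount}"
    return True, "application success"

print(handle_special_resource_application({}, {"codec_chips": 10}))
print(handle_special_resource_application({"codec_chips": 8}, {"codec_chips": 10}))
print(handle_special_resource_application({"codec_chips": 32}, {"codec_chips": 10}))
```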
And S40, when application success information returned by the computing nodes in the computing node set is received, determining the computing nodes returning the application success information as target nodes, and deploying the containers to be deployed in the target nodes.
In this embodiment, when receiving application success information returned by a computing node, the special resource management terminal can determine that there is a computing node in the cluster that can simultaneously satisfy the container's general resource requirements and special resource requirements; this node may be referred to as the target node, and the special resource management terminal can then perform the deployment operation on the target node to complete the deployment of the container.
Further, if the returned information received by the special resource management terminal is all application failure information, it indicates that there is no computing node in the cluster that can simultaneously meet the container's universal resource and special resource requirements; in this case, the special resource management terminal outputs information indicating that no deployable computing node exists, so as to notify the user to modify the deployment scheme or perform other processing.
Furthermore, when the deployment is completed, the special resource management terminal updates the available universal resource information of the target node in the preset database according to the universal resource requirements of the container, thereby ensuring the timeliness and accuracy of the information in the preset database.
In a specific implementation, if the deployed container only includes a general resource requirement and does not require a special resource, when the computing node set is obtained in step S20, one computing node may be directly selected from the computing node set to perform container deployment without performing step S30 and step S40.
In this embodiment, when a container deployment instruction is received, resource demand information of a container to be deployed is obtained according to configuration information included in the container deployment instruction, where the resource demand information includes general resource demands and special resource demands; the currently available universal resource information of each computing node in the cluster is acquired from a preset database, and a computing node set capable of meeting the universal resource requirement is determined among the computing nodes according to the available universal resource information; special resource application information, including the detail information of the special resource requirements, is sent to each computing node in the computing node set; and when application success information returned by a computing node in the computing node set is received, the computing node returning the application success information is determined as the target node, and the container to be deployed is deployed in the target node. In this way, when a container with special resource requirements needs to be deployed, this embodiment determines the computing nodes capable of meeting the container's general resource requirements from the information in the database, then finds among them a target node capable of meeting the container's special resource requirements, and completes the container deployment on that target node. In the whole process, only the local computing nodes need to be queried and the overall scheduling logic of the cluster does not need to be modified, so containers with special resource requirements can be supported and deployed while remaining compatible with container deployment that supports only common resource requirements.
Referring to fig. 3, fig. 3 is a detailed schematic diagram of the step of sending the special resource application information to each computing node in the computing node set shown in fig. 2.
Based on the embodiment shown in fig. 2, step S30 includes:
and step S31, starting threads with the same number as the number of the computing nodes in the computing node set, and simultaneously sending special resource application information to each computing node in the computing node set.
In this embodiment, when the computing node set satisfying the container's universal resource requirements is obtained, the special resource management terminal searches for a target node satisfying the container's special resource requirements within that set. Specifically, the special resource management terminal counts the number of computing nodes in the computing node set to determine how many nodes meet the universal resource requirements; it then starts the same number of threads, assigns an information sending task to each thread, and the threads send the special resource application information to all the computing nodes at the same time. For example, if there are 5 computing nodes in the computing node set, the special resource management terminal will start 5 threads to send the special resource application information to the 5 computing nodes simultaneously. By sending the application information simultaneously, the time spent applying for special resources can be reduced and container deployment efficiency improved.
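By way of example and not limitation, step S31 can be sketched with a thread pool whose size equals the number of candidate nodes; send_application() is a placeholder for the actual application RPC.

```python
# Illustrative sketch of step S31: start as many threads as there are
# candidate nodes and send the special resource application to all of
# them at the same time.
from concurrent.futures import ThreadPoolExecutor
from typing import Dict, List, Tuple

def send_application(node: str, requested: Dict[str, int]) -> Tuple[str, bool]:
    """Placeholder: ask `node` whether it can grant the special resources."""
    return node, True

def apply_to_all(nodes: List[str], requested: Dict[str, int]):
    # one thread per computing node in the set
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        futures = [pool.submit(send_application, n, requested) for n in nodes]
        return [f.result() for f in futures]

print(apply_to_all(["C2", "S1", "S2", "S3", "S4"], {"codec_chips": 10}))
```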
Referring to fig. 4, fig. 4 is a detailed schematic diagram of the step of sending the special resource application information to each computing node in the computing node set shown in fig. 2.
Based on the above embodiment shown in fig. 2, step S30 further includes:
step S32, dividing each computing node in the computing node set into more than two subsets according to a preset rule;
and step S33, starting threads with preset quantity, and sequentially sending special resource application information to the computing nodes in each subset.
In this embodiment, when the computing node set satisfying the container's universal resource requirements is obtained, the special resource management terminal searches for a target node satisfying the container's special resource requirements within that set. Specifically, the special resource management terminal divides the computing nodes in the computing node set into different subsets, and then applies for special resources from the computing nodes of each subset in a certain order. The dividing rule may be preset according to the actual situation. For example, suppose the special resource management terminal is configured with a maximum task thread count of 5 and there are 10 nodes in the computing node set; the special resource management terminal may divide the computing nodes in the computing node set into two subsets a and b, each containing 5 different computing nodes. When the subset division is completed, the special resource management terminal starts the maximum number of task threads (5 threads) and sends special resource application information to the 5 computing nodes of one subset simultaneously; after the application information has been sent to the computing nodes of that subset, it sends special resource application information to the computing nodes of the other subset. As another example, the computing nodes may be divided, according to the area where each computing node's physical device is located, into an A subset containing x computing nodes and a B subset containing y computing nodes; when the subset division is completed, the special resource management terminal starts a certain number of task threads, first sends special resource application information to the computing nodes of one subset, and then sends special resource application information to the computing nodes of the other subset. The number of task threads in the two sending rounds can differ: when sending to the A subset, the number of task threads is x, and when sending to the B subset, the number of task threads is y. Of course, the subsets may also be divided according to other rules.
In this embodiment, when sending application information to a compute node in a compute node set, the compute node set may be first divided into a plurality of subsets, and the subsets are sequentially applied for; by the mode, when the number of the computing nodes is large, serious occupation of system resources caused by starting of excessive task threads can be avoided.
Further, the two sending methods of step S31 and of steps S32 and S33 may be used in combination. Specifically, suppose the special resource management terminal is configured with a maximum task thread count of 5. When the computing node set is determined, the number of computing nodes in the set is counted; if the number of computing nodes is less than or equal to the maximum task thread count, task threads equal in number to the computing nodes are started and the application information is sent to the computing nodes directly. If the number of computing nodes is greater than the maximum task thread count, the computing node set is first divided into several subsets, with the number of computing nodes in each subset not exceeding the maximum task thread count, and then a certain number of task threads are started and special resource application information is sent to the computing nodes of each subset in turn.
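By way of example and not limitation, the combined strategy can be sketched as follows; MAX_TASK_THREADS and send_application() are assumptions standing in for the terminal's configured maximum task thread count and its application RPC.

```python
# Illustrative sketch of combining steps S31 and S32/S33: if the candidate
# set fits within the maximum task-thread count, apply to all nodes at
# once; otherwise split the set into subsets no larger than that count and
# apply to the subsets one after another.
from concurrent.futures import ThreadPoolExecutor
from typing import Dict, List

MAX_TASK_THREADS = 5

def send_application(node: str, requested: Dict[str, int]) -> bool:
    return True   # placeholder RPC

def apply_in_batches(nodes: List[str], requested: Dict[str, int]) -> Dict[str, bool]:
    results: Dict[str, bool] = {}
    # divide the computing node set into subsets of at most MAX_TASK_THREADS nodes
    subsets = [nodes[i:i + MAX_TASK_THREADS]
               for i in range(0, len(nodes), MAX_TASK_THREADS)]
    for subset in subsets:                      # apply to one subset at a time
        with ThreadPoolExecutor(max_workers=len(subset)) as pool:
            for node, ok in zip(subset, pool.map(
                    lambda n: send_application(n, requested), subset)):
                results[node] = ok
    return results
```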
Furthermore, since the preset database holds the specific available universal resource information of each computing node, a node application sequence can be set according to this information when the computing node set is determined, the sequence being sorted mainly by the amount of resources available on each computing node. When the sorting is finished, a task thread is started and application information is sent to each computing node in turn according to the node application sequence, until a target node capable of meeting the special resource requirements is found, or the application information has been sent to all the computing nodes without finding such a target node. For example, the cluster includes seven computing nodes C1, C2, S1, S2, S3, S4, and S5, whose resource information is shown in the following table:
| Node code | Node universal resources | Node special resources |
| --- | --- | --- |
| C1 | 2-core CPU, 2G memory, 20G hard disk | None |
| C2 | 8-core CPU, 8G memory, 80G hard disk | None |
| S1 | 2-core CPU, 4G memory, 40G hard disk | 4 encryption chips |
| S2 | 4-core CPU, 4G memory, 40G hard disk | 8 codec chips |
| S3 | 4-core CPU, 8G memory, 40G hard disk | 32 codec chips |
| S4 | 8-core CPU, 8G memory, 40G hard disk | 64 codec chips |
| S5 | 2-core CPU, 2G memory, 40G hard disk | 64 codec chips |
The general resource requirements of the container to be deployed are a 2-core CPU, 4G memory, and a 40G hard disk, and the special resource requirement is 10 codec chips. According to the node universal resource information in the preset database, the special resource management terminal determines the computing nodes meeting the universal resource requirements to be C2, S1, S2, S3, and S4; these five computing nodes form the computing node set. When the computing node set is obtained, the computing nodes are sorted by available resources from most to least, giving a node application sequence of C2 → S4 → S3 → S2 → S1. When the sorting is finished, a task thread is started and application information is sent to each computing node in that order, until a target node capable of meeting the special resource requirements is found, or the application information has been sent to all the computing nodes without finding such a target node. In a specific implementation, priorities between different types of universal resources may also be set for sorting. For example, suppose the CPU performance priority is set higher than the memory priority, the universal resource requirements are a 2-core CPU and 2G memory, node A's universal resources are a 4-core CPU and 2G memory, and node B's universal resources are a 2-core CPU and 4G memory; since the CPU performance priority is higher than the memory priority, the node application sequence between A and B is A → B. In addition, the sorting described above orders the computing nodes from more available resources to fewer; with this ordering, the computing node with the best universal resources is applied to first, which ensures to a certain extent that the target node on which the container is deployed has optimal universal resources. Of course, the available resources of each computing node can also be sorted from least to most; with that ordering, on the premise of meeting the container's universal resource requirements, small-resource nodes with fewer universal resources are allocated first and large-resource nodes with more universal resources are reserved, so that the deployment of other containers with higher requirements can be supported.
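By way of example and not limitation, the sequential strategy can be sketched as follows; the sort key assumes CPU is ranked above memory and memory above disk, which is only one possible priority setting.

```python
# Illustrative sketch of the sequential strategy: order the candidate
# nodes by their available general resources (more resources first) and
# apply to them one by one until a node grants the special resources.
from typing import Dict, List, Optional

def sort_key(avail: Dict[str, int]):
    # CPU ranked before memory, memory before disk (assumed priorities)
    return (avail.get("cpu_cores", 0), avail.get("memory_gb", 0), avail.get("disk_gb", 0))

def apply_sequentially(candidates: Dict[str, Dict[str, int]],
                       requested: Dict[str, int],
                       send_application) -> Optional[str]:
    ordered: List[str] = sorted(candidates, key=lambda n: sort_key(candidates[n]),
                                reverse=True)            # more resources first
    for node in ordered:
        if send_application(node, requested):            # placeholder RPC
            return node                                   # target node found
    return None                                           # no node could satisfy
```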
Referring to fig. 5, fig. 5 is a detailed flowchart illustrating that, when application success information returned by the computing nodes in the computing node set is received, the computing nodes returning the application success information are determined as target nodes, and the containers to be deployed are deployed in the target nodes, as shown in fig. 2.
Based on the embodiment shown in fig. 2, step S40 includes:
step S41, when application success information returned by the computing nodes in the computing node set is received, determining the computing nodes returning the application success information as target nodes, and locking the target nodes and container deployment instructions;
step S42, sending confirmation information to the target node, so that the target node reserves special resources in the target node according to the confirmation information;
step S43, deploying the container to be deployed in the target node according to the special resource in the target node.
In this embodiment, when receiving the application information of the special resource, the computing node in the computing node set compares the special resource requirement in the application information with the available special resource requirement of the computing node; if the resource condition of the computing node can meet the special resource requirement of the container, the computing node locks the corresponding number of special resources of the computing node, and the resources are prevented from being occupied by other programs or tasks; meanwhile, the computing node returns corresponding application success information to the special resource management terminal so as to report the condition that the self resource meets the requirement of the special resource to the special resource management terminal. When receiving the application success information, the special resource management terminal can determine that a computing node capable of meeting the requirements of the container general resource and the special resource exists in the cluster, the computing node can be called a target node, and sends confirmation information to the target node; preparation for container deployment is performed simultaneously. When the target node receives the confirmation information, the target node determines that the special resource management terminal has received the application success information sent by the target node, and at the moment, the target node reserves the previously locked special resource for the special resource management terminal to be used by the special resource management terminal deployment container. And when the special resource management terminal finishes the preparation work of deployment, the container can be deployed in the target node according to the special resources reserved by the target node.
Further, for a computing node, if its resource condition can meet the container's special resource requirements, it locks the corresponding amount of special resources and returns application success information to the special resource management terminal; if it then does not receive confirmation information from the special resource management terminal within a certain time, the computing node releases the previously locked special resources so that they can be used by other programs or tasks, thereby avoiding the resource waste caused by resources being held but left unused.
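By way of example and not limitation, the node-side locking, confirmation, and timeout release can be sketched as follows; the class name, timeout value, and application identifiers are assumptions.

```python
# Illustrative sketch of the node-side locking in steps S41-S43: when an
# application succeeds the node locks the resources; if no confirmation
# arrives within a timeout the lock is released so other tasks can use them.
import threading

class SpecialResourcePool:
    def __init__(self, total: int, confirm_timeout_s: float = 30.0):
        self.free = total
        self.timeout = confirm_timeout_s
        self.lock = threading.Lock()
        self.pending = {}            # application id -> (amount, timer)

    def try_lock(self, app_id: str, amount: int) -> bool:
        with self.lock:
            if self.free < amount:
                return False                      # application failure
            self.free -= amount                   # lock the resources
            timer = threading.Timer(self.timeout, self._expire, args=(app_id,))
            self.pending[app_id] = (amount, timer)
            timer.start()
            return True                           # application success

    def confirm(self, app_id: str) -> None:
        with self.lock:                           # keep resources reserved for deployment
            if app_id in self.pending:
                _, timer = self.pending.pop(app_id)
                timer.cancel()

    def _expire(self, app_id: str) -> None:
        with self.lock:                           # no confirmation arrived: release
            if app_id in self.pending:
                amount, _ = self.pending.pop(app_id)
                self.free += amount
```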
Referring to fig. 6, fig. 6 is a detailed flowchart illustrating a process of determining, when application success information returned by the compute nodes in the compute node set in fig. 2 is received, the compute node returning the application success information as a target node and deploying the container to be deployed in the target node.
Based on the above embodiment shown in fig. 2, step S40 further includes:
and step S44, when application success information returned by more than two computing nodes in the computing node set is received, determining a target node in the multiple computing nodes returning the application success information according to the receiving sequence of the application success information, and deploying the container to be deployed in the target node.
In this embodiment, when receiving the special resource application information, the computing nodes in the computing node set compare the special resource requirements in the application information with their own available special resources; if a computing node's resource condition can meet the container's special resource requirements, it returns application success information to the special resource management terminal. Since the resource conditions of more than two computing nodes in the computing node set may satisfy the container's special resource requirements, the special resource management terminal may receive application success information returned by multiple computing nodes. Because the container only needs to be deployed on one of them, the special resource management terminal then needs to determine a target node among the computing nodes that returned application success information. In this embodiment, the special resource management terminal may determine the target node according to the order in which the application success information is received, taking the sender of the first application success information received as the target node. For example, if the special resource management terminal receives the application success information returned by node a at 10:01 and the application success information returned by node b at 10:02, it will determine node a as the target node. When the target node is determined, the special resource management terminal can perform the deployment operation on the target node to complete the deployment of the container.
In this embodiment, when application success information returned by multiple computing nodes is received, the special resource management terminal may take the sender of the first application success information received as the target node, which shortens the search for a target node and improves the efficiency of container deployment.
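For illustration only, this receive-order policy of step S44 can be sketched in Go, assuming the application success messages arrive on a channel of node identifiers (an assumption made for the example):

```go
package scheduler

import "time"

// pickFirstResponder returns the identifier of the first computing node whose
// application success information arrives on successCh, matching the
// receive-order policy of step S44. The channel of node IDs is an assumption.
func pickFirstResponder(successCh <-chan string, timeout time.Duration) (string, bool) {
	select {
	case nodeID := <-successCh: // e.g. node a at 10:01 beats node b at 10:02
		return nodeID, true
	case <-time.After(timeout):
		return "", false // no node in the set satisfied the special resource requirement
	}
}
```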
Referring to fig. 7, fig. 7 is another detailed flowchart of the step in fig. 2 of determining, when application success information returned by a computing node in the computing node set is received, the computing node returning the application success information as the target node, and of deploying the container to be deployed in the target node.
Based on the above embodiment shown in fig. 2, step S40 further includes:
Step S45: when application success information returned by two or more computing nodes in the computing node set is received, determining a target node among the computing nodes returning the application success information according to the available special resource information included in the application success information, and deploying the container to be deployed in the target node.
In this embodiment, when a computing node in the computing node set receives the special resource application information, it compares the special resource requirement in the application information with its own available special resources; if its resources can satisfy the special resource requirement of the container, the computing node returns corresponding application success information to the special resource management terminal. Because the resources of two or more computing nodes in the computing node set may satisfy the special resource requirement of the container, the special resource management terminal may receive application success information returned by multiple computing nodes. Since the container only needs to be deployed in one node, the special resource management terminal must determine a target node among the computing nodes returning the application success information. In this embodiment, the special resource management terminal may determine the target node according to the quantity of available special resources on each computing node. For example, suppose the special resource requirement of the container is 10 codec chips and the computing node set includes two nodes, a and b, where node a has 10 codec chips and node b has 12 codec chips. Because both a and b satisfy the requirement, both return application success information to the special resource management terminal, and the application success information carries their own special resource information. When the special resource management terminal receives the application success information returned by a and b, it determines the quantity of special resources of a and b from that special resource information; it can then determine node b, which has more special resources, as the target node and deploy the container in node b.
In this embodiment, when application success information returned by multiple computing nodes is received, the special resource management terminal determines the computing node with the larger quantity of special resources as the target node, so that the container has ample special resources to use. Alternatively, the special resource management terminal may determine the computing node with the smaller quantity of special resources as the target node; provided the special resource requirement of the container is met, this keeps the nodes with large quantities of special resources free to support the deployment of other, more demanding containers.
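A minimal Go sketch of this capacity-based selection, covering both policies, is given below; the successInfo type is an illustrative stand-in for the application success information and is not defined by the patent:

```go
package scheduler

// successInfo is an illustrative stand-in for application success information
// that carries the sending node's available special resource quantity.
type successInfo struct {
	NodeID    string
	Available int // e.g. the number of free codec chips reported by the node
}

// pickByCapacity chooses the target node among the responders. With
// preferLargest true it picks the node reporting the most available special
// resources (the step S45 example); with false it picks the smallest
// responder, keeping the larger nodes free for more demanding containers.
func pickByCapacity(candidates []successInfo, preferLargest bool) (successInfo, bool) {
	if len(candidates) == 0 {
		return successInfo{}, false
	}
	best := candidates[0]
	for _, c := range candidates[1:] {
		if (preferLargest && c.Available > best.Available) ||
			(!preferLargest && c.Available < best.Available) {
			best = c
		}
	}
	return best, true
}
```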
In addition, the present invention also provides a special resource management apparatus, including:
a demand acquisition module, configured to, when a container deployment instruction is received, acquire resource demand information of a container to be deployed according to configuration information included in the container deployment instruction, where the resource demand information includes a general resource demand and a special resource demand;
a set determining module, configured to acquire currently available universal resource information of each computing node in a cluster from a preset database and determine, among the computing nodes, a computing node set capable of meeting the universal resource requirement according to the available universal resource information;
an application sending module, configured to send special resource application information to each computing node in the computing node set, where the special resource application information includes requirement detail information of the special resource requirement;
a container deployment module, configured to, when receiving application success information returned by a computing node in the computing node set, determine the computing node returning the application success information as a target node, and deploy the container to be deployed in the target node.
Further, the special resource includes at least one resource other than the CPU, the memory, and the hard disk.
Further, the application sending module includes:
a first sending unit, configured to start threads equal in number to the computing nodes in the computing node set and send the special resource application information to each computing node in the computing node set simultaneously.
Further, the application sending module further includes:
a set dividing unit, configured to divide the computing nodes in the computing node set into two or more subsets according to a preset rule;
a second sending unit, configured to start a preset number of threads and send the special resource application information to the computing nodes in each subset in turn.
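As an informal illustration of these two sending strategies, the Go sketch below assumes a caller-supplied sendApplication helper (not defined by the patent) and treats the subset size as the "preset rule":

```go
package scheduler

import "sync"

// sendToAll starts one goroutine per computing node in the set and sends the
// special resource application concurrently, mirroring the first sending unit.
// sendApplication is an assumed caller-supplied helper.
func sendToAll(nodes []string, sendApplication func(node string)) {
	var wg sync.WaitGroup
	for _, n := range nodes {
		wg.Add(1)
		go func(node string) {
			defer wg.Done()
			sendApplication(node)
		}(n)
	}
	wg.Wait()
}

// sendInBatches divides the computing node set into subsets of size batch (one
// possible "preset rule") and dispatches each subset in turn, mirroring the
// set dividing unit and the second sending unit with a bounded thread count.
func sendInBatches(nodes []string, batch int, sendApplication func(node string)) {
	if batch < 1 {
		batch = 1
	}
	for start := 0; start < len(nodes); start += batch {
		end := start + batch
		if end > len(nodes) {
			end = len(nodes)
		}
		sendToAll(nodes[start:end], sendApplication) // one subset at a time
	}
}
```

The subset-based variant bounds the number of concurrent threads at the cost of slower overall dispatch, which mirrors the trade-off between the first and second sending units.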
Further, the container deployment module comprises:
a node determining unit, configured to, when application success information returned by a computing node in the computing node set is received, determine the computing node returning the application success information as the target node and lock the target node and the container deployment instruction;
an information sending unit, configured to send confirmation information to the target node, so that the target node reserves the special resources in the target node according to the confirmation information;
a container deployment unit, configured to deploy the container to be deployed in the target node according to the special resources in the target node.
Further, the container deployment module is further configured to, when application success information returned by two or more computing nodes in the computing node set is received, determine a target node among the multiple computing nodes returning the application success information according to a receiving sequence of the application success information, and deploy the container to be deployed in the target node.
Further, the container deployment module is further configured to, when receiving application success information returned by two or more computing nodes in the computing node set, determine a target node among the computing nodes returning the application success information according to available special resource information included in the application success information, and deploy the container to be deployed in the target node.
The modules of the special resource management apparatus correspond to the steps of the method embodiments described above; their functions and implementation processes are not described in detail again here.
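For orientation only, the following Go sketch strings the modules together end to end; all type names and the listNodes, apply, and deploy helpers are assumptions made for illustration, and the concurrent application step is simplified to a sequential loop:

```go
package scheduler

import "fmt"

// The types and helper functions below are illustrative assumptions; the
// patent describes the modules' responsibilities, not their interfaces.
type ResourceRequirement struct {
	GeneralCPU   int
	GeneralMemMB int
	SpecialKind  string // e.g. "codec-chip" or "GPU"
	SpecialCount int
}

type Node struct {
	ID        string
	FreeCPU   int
	FreeMemMB int
}

// DeployContainer sketches the flow of the apparatus: take the resource
// demand, filter nodes by general resources read from the preset database,
// apply for special resources, and deploy on the first node that answers
// with application success information.
func DeployContainer(
	req ResourceRequirement,
	listNodes func() []Node, // set determining module: reads the preset database
	apply func(nodeID string, req ResourceRequirement) bool, // application sending module
	deploy func(nodeID string) error, // container deployment module
) error {
	// Keep only the nodes whose general resources satisfy the demand.
	var candidates []string
	for _, n := range listNodes() {
		if n.FreeCPU >= req.GeneralCPU && n.FreeMemMB >= req.GeneralMemMB {
			candidates = append(candidates, n.ID)
		}
	}
	// Apply node by node; the first successful application becomes the target.
	for _, id := range candidates {
		if apply(id, req) {
			return deploy(id)
		}
	}
	return fmt.Errorf("no deployable computing node for %d %s", req.SpecialCount, req.SpecialKind)
}
```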
In addition, the invention also provides a computer readable storage medium.
The computer readable storage medium of the present invention stores a special resource management program, wherein the special resource management program, when executed by a processor, implements the steps of the container deployment method as described above.
For the implementation of the special resource management program when executed, reference may be made to the embodiments of the container deployment method of the present invention, and details are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or system. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of another identical element in the process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present invention, or the portion thereof contributing to the prior art, may be embodied in the form of a software product that is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes several instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods of the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention and is not intended to limit the scope of the present invention; all equivalent structures or equivalent process transformations made on the basis of the present invention, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of protection of the present invention.
Claims (8)
1. A method of deploying a container, the method comprising the steps of:
when a container deployment instruction is received, acquiring resource demand information of a container to be deployed according to configuration information included in the container deployment instruction, wherein the resource demand information includes general resource demand and special resource demand;
acquiring current available universal resource information of each computing node in a cluster from a preset database, and determining a computing node set capable of meeting the universal resource requirement in each computing node according to the available universal resource information;
sending special resource application information to each computing node in the computing node set, wherein the special resource application information comprises demand detail information of special resource demands, and the demand detail information of the special resource demands comprises the types and the demand quantity of the special resources;
when application success information returned by the computing nodes in the computing node set is received, determining the computing nodes returning the application success information as target nodes, and deploying the container to be deployed in the target nodes;
the special resource comprises at least one resource other than a CPU, a memory and a hard disk;
when receiving application success information returned by the computing nodes in the computing node set, determining the computing nodes returning the application success information as target nodes, and deploying the container to be deployed in the target nodes includes:
when application success information returned by the computing nodes in the computing node set is received, determining the computing nodes returning the application success information as target nodes, and locking the target nodes and container deployment instructions;
sending confirmation information to the target node, so that the target node reserves special resources in the target node according to the confirmation information;
deploying the container to be deployed in the target node according to the special resource in the target node.
2. The container deployment method of claim 1, wherein the step of sending special resource application information to each compute node in the set of compute nodes comprises:
and starting threads with the same number as the number of the computing nodes in the computing node set, and simultaneously sending special resource application information to each computing node in the computing node set.
3. The container deployment method of claim 1, wherein the step of sending special resource application information to each compute node in the set of compute nodes further comprises:
dividing each computing node in the computing node set into more than two subsets according to a preset rule;
and starting threads with a preset number, and sequentially sending special resource application information to the computing nodes in each subset.
4. The container deployment method according to claim 1, wherein the step of determining, when receiving application success information returned by the compute nodes in the compute node set, the compute node returning the application success information as a target node and deploying the container to be deployed in the target node further comprises:
when application success information returned by more than two computing nodes in the computing node set is received, determining a target node in the multiple computing nodes returning the application success information according to the receiving sequence of the application success information, and deploying the container to be deployed in the target node.
5. The container deployment method according to claim 1, wherein the step of determining, when receiving application success information returned by the compute nodes in the compute node set, the compute node returning the application success information as a target node and deploying the container to be deployed in the target node further comprises:
when application success information returned by more than two computing nodes in the computing node set is received, determining a target node in the computing nodes returning the application success information according to available special resource information included in the application success information, and deploying the container to be deployed in the target node.
6. The container deployment method according to any one of claims 1 to 5, wherein after the step of sending the special resource application information to each computing node in the set of computing nodes, further comprising:
and if the information returned by each computing node in the computing node set is application failure information, outputting information indicating that no deployable computing node exists.
7. A special resource management terminal, characterized in that the special resource comprises at least one resource other than a CPU, a memory and a hard disk, and the special resource management terminal comprises a processor, a memory and a special resource management program stored on the memory and executable by the processor, wherein the special resource management program, when executed by the processor, implements the steps of the container deployment method according to any one of claims 1 to 6.
8. A readable storage medium, having stored thereon a special resource management program, wherein the special resource management program, when executed by a processor, implements the steps of the container deployment method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710550300.0A CN109213493B (en) | 2017-07-06 | 2017-07-06 | Container deployment method, special resource management terminal and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109213493A CN109213493A (en) | 2019-01-15 |
CN109213493B true CN109213493B (en) | 2023-04-14 |
Family
ID=64991113
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710550300.0A Active CN109213493B (en) | 2017-07-06 | 2017-07-06 | Container deployment method, special resource management terminal and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109213493B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110990024B (en) * | 2019-11-28 | 2024-02-09 | 合肥讯飞数码科技有限公司 | Application deployment method, device, equipment and storage medium |
CN111190696A (en) * | 2019-12-28 | 2020-05-22 | 浪潮电子信息产业股份有限公司 | Docker container deployment method, system, device and storage medium |
CN112130931B (en) * | 2020-09-27 | 2023-01-06 | 联想(北京)有限公司 | Application deployment method, node, system and storage medium |
CN112214321B (en) * | 2020-10-10 | 2023-06-16 | 中国联合网络通信集团有限公司 | Node selection method and device for newly added micro service and micro service management platform |
CN113360164B (en) * | 2021-05-27 | 2022-09-23 | 上海信宝博通电子商务有限公司 | Method, device and storage medium for rapidly deploying application |
CN113473488B (en) * | 2021-07-02 | 2024-01-30 | 福建晶一科技有限公司 | Container-based CU and MEC common platform deployment method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102244903A (en) * | 2010-05-10 | 2011-11-16 | 华为技术有限公司 | Admission control method and device of relay network |
CN103220285A (en) * | 2013-04-10 | 2013-07-24 | 中国科学技术大学苏州研究院 | Access system based on RESTful interface in ubiquitous service environment |
CN105468362A (en) * | 2015-11-17 | 2016-04-06 | 广州杰赛科技股份有限公司 | Application deployment method and cloud computing system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9396031B2 (en) * | 2013-09-27 | 2016-07-19 | International Business Machines Corporation | Distributed UIMA cluster computing (DUCC) facility |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |