WO2021104096A1 - Task scheduling method and apparatus in a container cloud environment, server and storage apparatus - Google Patents

Task scheduling method and apparatus in a container cloud environment, server and storage apparatus

Info

Publication number
WO2021104096A1
WO2021104096A1 · PCT/CN2020/129208 · CN2020129208W
Authority
WO
WIPO (PCT)
Prior art keywords
server
resource utilization
utilization rate
task
model
Prior art date
Application number
PCT/CN2020/129208
Other languages
English (en)
Chinese (zh)
Inventor
叶可江
孙永仲
须成忠
Original Assignee
中国科学院深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院深圳先进技术研究院 filed Critical 中国科学院深圳先进技术研究院
Publication of WO2021104096A1 publication Critical patent/WO2021104096A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/50 Indexing scheme relating to G06F9/50
    • G06F 2209/5019 Workload prediction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/50 Indexing scheme relating to G06F9/50
    • G06F 2209/5022 Workload threshold
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/50 Indexing scheme relating to G06F9/50
    • G06F 2209/504 Resource capping

Definitions

  • the present invention relates to the technical field of cloud platforms, in particular to a task scheduling method, device, server and storage device in a container cloud environment.
  • Compared with virtual machine technology, container technology has lower overhead and faster startup, so it is gradually replacing virtual machine technology in cloud data centers. Users order services on demand through the container cloud platform, which reduces infrastructure investment costs and the difficulty of maintaining hardware equipment.
  • the container cloud distributes computing tasks on a resource pool composed of a large number of servers, enabling various application systems to obtain computing resources, data resources, and storage resources as needed. Due to the heterogeneity of container cloud servers and the complexity of application tasks, how to effectively schedule application tasks through scheduling strategies and reasonably allocate computing resources has become a key issue in container clouds.
  • the existing technology is generally based on traditional task scheduling algorithms such as first-come-first-served, weighted round-robin, Min-Min and Max-Min, which suffer from disadvantages such as uneven node load distribution and job starvation.
  • the task scheduling process must consider multi-resource constraints such as CPU, memory, and disk capacity; when the problem is abstracted as a multi-objective optimization problem under multi-resource constraints, traditional algorithms do not solve it satisfactorily.
  • most existing technologies determine task assignment based only on the server resource utilization rate at scheduling time. This can lead to a situation where a large number of tasks are submitted to a server whose current resource utilization rate is low but will continue to grow, so that the server becomes overloaded at the next moment.
  • the present invention provides a task scheduling method, device, server and storage device in a container cloud environment to solve the problem that the existing task scheduling method does not consider whether the server will be overloaded in the future.
  • the present invention discloses a task scheduling method in a container cloud environment, including:
  • one or more sets of containers are generated on the target server.
  • the step of obtaining the historical resource utilization rate of each server in the cloud data center and predicting the predicted resource utilization rate of each server at the next moment based on the historical resource utilization rate includes:
  • a time series, composed of multiple historical resource utilization rates, is preset
  • the step of updating the time series according to the historical resource utilization rate includes:
  • the new historical resource utilization rate is added to the end of the time series, and the earliest historical resource utilization rate in the time series is deleted to update the time series.
  • as a further improvement of the present invention, the method also includes constructing a Prophet model.
  • the steps of constructing a Prophet model include:
  • the trend model is:
  • C(t) is the resource capacity of the server at time t;
  • the season model is:
  • P is the expected period of the preset time series
  • a_n and b_n are generated by a normal distribution N(0, σ)
  • N and σ are preset;
  • the holiday model is:
  • D i indicates a period of time before and after the holiday
  • k = (k_1, …, k_L)^T
  • k i represents the corresponding change of the predicted holiday
  • k is generated by a normal distribution N(0, v)
  • v is preset
  • the Prophet model is:
  • ε_t is the error term and obeys a normal distribution.
  • the step of obtaining the current resource utilization rate of each server and performing optimized calculation in combination with the predicted resource utilization rate, the required occupied resources and the resource threshold of each server to obtain the deployment matrix between the application task and the server includes:
  • each particle selects a guiding particle from the optimal solution set according to the crowding distance
  • the mutation scheme includes no mutation, uniform mutation and non-uniform mutation
  • the deployment matrix is obtained.
  • the objective function is:
  • the first constraint function is:
  • the second constraint function is:
  • the second constraint function indicates that when the application task is deployed to the server, the current resource utilization rate and the predicted resource utilization rate are less than or equal to the resource threshold.
  • the present invention also provides a task scheduling device in a container cloud environment, including:
  • the obtaining module is used to obtain the historical resource utilization rate of each server in the cloud data center, and predict the predicted resource utilization rate of each server at the next moment based on the historical resource utilization rate;
  • the confirmation module is used to confirm the resources to be occupied by the application task when the application task is received
  • the optimization module is used to obtain the current resource utilization rate of each server when scheduling application tasks, and to perform optimization calculations in combination with the predicted resource utilization rate, the required occupied resources and the resource threshold of each server to obtain the deployment matrix between the application task and the server
  • the deployment module is used to generate one or more sets of containers on the target server according to the deployment matrix and application tasks.
  • the present invention also provides a server.
  • the server includes a processor and a memory coupled with the processor, wherein:
  • the memory stores program instructions used to implement any one of the above-mentioned task scheduling methods in a container cloud environment
  • the processor is used to execute program instructions stored in the memory to balance the load of the container cloud server cluster.
  • the present invention also provides a storage device that stores program files that can implement any one of the above-mentioned task scheduling methods in a container cloud environment.
  • the present invention derives the predicted future resource utilization rate of the server by collecting its historical resource utilization rate, and then combines the server's current resource utilization rate, the predicted resource utilization rate, the server's resource threshold and the resources required by the application tasks in an optimization calculation, so as to obtain the optimal plan for deploying tasks to servers. It comprehensively considers the impact of both the current and the future load of the server on task scheduling, and models the load balancing problem as a multi-objective optimization problem under multi-resource constraints, achieving server cluster load balancing while avoiding overload of the server hosting the application task at some point in the future.
  • FIG. 1 is a schematic flowchart of a task scheduling method in a container cloud environment according to a first embodiment of the present invention
  • FIG. 2 is a schematic flowchart of a task scheduling method in a container cloud environment according to a second embodiment of the present invention
  • FIG. 3 is a schematic flowchart of a task scheduling method in a container cloud environment according to a third embodiment of the present invention.
  • FIG. 4 is a schematic flowchart of a task scheduling method in a container cloud environment according to a fourth embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a task scheduling apparatus in a container cloud environment according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a server according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a storage device according to an embodiment of the present invention.
  • “first”, “second”, and “third” in the present invention are used for descriptive purposes only, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined with “first”, “second”, or “third” may explicitly or implicitly include at least one such feature.
  • “plurality” means at least two, such as two, three, etc., unless otherwise specifically defined. All directional indicators in the embodiments of the present invention (such as up, down, left, right, front, back, etc.) are only used to explain the relative positional relationship or movement state of the components in a certain posture (as shown in the drawings); if the specific posture changes, the directional indication changes accordingly.
  • FIG. 1 is a schematic flowchart of a task scheduling method in a container cloud environment according to a first embodiment of the present invention. It should be noted that, if there is substantially the same result, the method of the present invention is not limited to the sequence of the process shown in FIG. 1. As shown in Figure 1, the method includes steps:
  • Step S1 Obtain the historical resource utilization rate of each server in the cloud data center, and predict the predicted resource utilization rate of each server at the next moment based on the historical resource utilization rate.
  • step S101 the historical resource utilization rate of the server is periodically collected, and the resource utilization rate of the server at a future time is predicted based on the historical resource utilization rate to obtain the predicted resource utilization rate.
  • step S1 includes the following steps:
  • step S10 a time sequence is preset, and the time sequence is composed of multiple historical resource utilization rates.
  • a time sequence of length L is preset.
  • Step S11 Collect historical resource utilization rates at predetermined intervals.
  • forecast period is preset.
  • Step S12 Update the time series according to the historical resource utilization.
  • the time series is updated according to the historical resource utilization rate and maintained promptly, so that its time span does not become so large that the series is no longer representative.
  • step S12 specifically includes: adding a new historical resource utilization rate to the end of the time series, and deleting the earliest historical resource utilization rate in the time series to update the time series.
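The sliding-window update of step S12 maps directly onto a bounded deque: appending to a full fixed-length deque drops the oldest element automatically. The window length below is an illustrative parameter, not a value from the text.

```python
from collections import deque

def make_window(length):
    """Fixed-length time series of historical resource utilization rates."""
    return deque(maxlen=length)

def update_window(window, new_rate):
    """Append the newest sample to the end; once the window is full, the
    earliest sample is discarded, exactly as step S12 describes."""
    window.append(new_rate)
    return window
```

For example, with a window of length 3, feeding the samples 0.2, 0.4, 0.6, 0.8 leaves the window holding the three most recent values.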
  • Step S13 Input the time series into the built Prophet model, and predict the predicted resource utilization rate of each server at the next moment.
  • the steps to construct a Prophet model include:
  • Step S20 Set the position of the change point on the time series to divide the time series into multiple segments.
  • the change point is the trend change point.
  • the position of the change point is set in the time series.
  • the change points can be set manually or automatically.
  • step S21, the change trend of each segment of the time series is detected.
  • step S22 a trend model is constructed using the change trend.
  • the trend model is:
  • C(t) is the resource capacity of the server at time t;
  • step S23 a season model is constructed using the preset period.
  • the seasonal model is:
  • the larger N is, the more complex the seasonality that can be fitted, but the greater the risk of over-fitting.
  • the larger σ is, the more obvious the seasonal effect.
  • Step S24 Obtain the number of holidays included in the time series, and use the number of holidays to construct a holiday model.
  • the time series contains X holidays
  • D i represents a period of time before and after the holiday
  • the holiday model is:
  • k (k 1 ,...,k L ) T
  • k_i represents the corresponding change caused by the predicted holiday, and is generated by a normal distribution N(0, v)
  • v is preset
  • the larger v is, the greater the impact of holidays on the holiday model.
  • step S25 a Prophet model is constructed using a trend model, a seasonal model, and a holiday model.
  • ε_t is the error term and obeys a normal distribution.
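The component equations were rendered as images and did not survive into this text. The variables named in steps S20 to S25 (capacity C(t), change points, period P, Fourier order N, σ, holiday windows D_i, effects k, error ε_t) match the published Prophet formulation; for reference, that standard formulation (not the patent's verbatim equations, and with k doing double duty here as both the logistic growth rate and the holiday-effect vector, as in the Prophet literature) is:

```latex
% Trend: piecewise logistic growth with capacity C(t); the changepoints of
% step S20 enter through the adjustment vectors \delta and \gamma:
g(t) = \frac{C(t)}{1 + \exp\bigl(-(k + \mathbf{a}(t)^{\top}\boldsymbol{\delta})\,
        (t - (m + \mathbf{a}(t)^{\top}\boldsymbol{\gamma}))\bigr)}

% Seasonality: Fourier series of order N with period P, a_n, b_n \sim N(0,\sigma):
s(t) = \sum_{n=1}^{N}\left(a_n\cos\frac{2\pi n t}{P} + b_n\sin\frac{2\pi n t}{P}\right)

% Holidays: indicator matrix over the holiday windows D_i, effects k=(k_1,\dots,k_L)^T:
h(t) = Z(t)\,k,\qquad Z(t) = \bigl[\mathbf{1}(t\in D_1),\ldots,\mathbf{1}(t\in D_L)\bigr]

% Full model (step S25):
y(t) = g(t) + s(t) + h(t) + \epsilon_t,\qquad \epsilon_t \sim \mathcal{N}(0,\sigma_{\epsilon}^{2})
```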
  • Step S2 When the application task is received, the resources that the application task needs to occupy are confirmed.
  • Step S3 When scheduling application tasks, obtain the current resource utilization rate of each server, and perform optimization calculations based on the predicted resource utilization rate, required occupied resources, and resource thresholds of each server to obtain the deployment matrix between application tasks and servers .
  • optimization calculations are performed based on current resource utilization, predicted resource utilization, required occupied resources, and resource thresholds of each server, so as to obtain a deployment matrix between application tasks and servers.
  • step S3 specifically includes:
  • Step S30 Obtain the current resource utilization rate of each server.
  • step S31 an objective function and a constraint function are constructed based on the current resource utilization rate, the predicted resource utilization rate, the required occupied resources and the resource threshold, and the particle swarm is initialized, and the population size and the number of iterations are set.
  • parameters such as objective function, constraint function, population size, and number of iterations are set, and the particle swarm is initialized.
  • S_i represents the i-th server. For server S_i, the estimated current resource utilization is the sum of its current resource utilization and the resources required by the application tasks assigned to it; the estimated predicted resource utilization is the sum of its predicted resource utilization and the resources required by those application tasks.
  • the objective function is:
  • the terms in the objective are the estimated value of the current resource utilization and the standard deviation of the estimated values of the current resource utilization
  • Res represents the resource type
  • the resource type includes CPU, Mem (memory), and Disk (disk);
  • the first constraint function is:
  • the second constraint function is:
  • the second constraint function indicates that when the application task is deployed to the server, the current resource utilization rate and the predicted resource utilization rate are less than or equal to the resource threshold.
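The objective and constraint equations were lost with the images, so the sketch below reconstructs them from the surrounding prose: the load-balance objective sums, over the resource types (CPU, memory, disk), the standard deviation of the estimated utilization across servers, and the second constraint caps both the estimated current and the estimated predicted utilization at the resource threshold. The data layout and function names are illustrative assumptions, not the patent's notation.

```python
import statistics

RESOURCES = ("cpu", "mem", "disk")  # resource types named in the text

def estimated_utilization(current, demand, assign):
    """Estimated per-server utilization after deploying tasks.
    current[s][r]: utilization rate of server s for resource r;
    demand[t][r]: resources task t requires; assign[t]: chosen server."""
    est = {s: dict(res) for s, res in current.items()}
    for task, server in assign.items():
        for r in RESOURCES:
            est[server][r] += demand[task][r]
    return est

def objective(est):
    """Load balance: sum over resource types of the standard deviation of
    estimated utilization across servers (to be minimized)."""
    return sum(statistics.pstdev(est[s][r] for s in est) for r in RESOURCES)

def feasible(est_current, est_predicted, threshold):
    """Second constraint: both estimated current and estimated predicted
    utilization stay at or below the per-resource threshold on every server."""
    return all(
        est[s][r] <= threshold[r]
        for est in (est_current, est_predicted)
        for s in est
        for r in RESOURCES
    )
```

A deployment is then scored by `objective` among the assignments that pass `feasible`, which is the role the two constraint functions play in the particle swarm search that follows.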
  • step S32 the non-dominant particles in the particle swarm are stored as guiding particles in the optimal solution set and the final solution set, and the crowding distance of each guiding particle is calculated.
  • Step S33 Iteration is performed, and each particle selects a guiding particle from the optimal solution set according to the crowding distance.
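The text does not define the crowding distance it uses to pick guiding particles; assuming the standard NSGA-II measure (the usual choice in OMOPSO implementations), a sketch:

```python
def crowding_distance(points):
    """Crowding distance for a list of objective vectors (NSGA-II style).
    Boundary points along any objective get infinite distance; interior
    points accumulate the normalized gap between their two neighbours."""
    n = len(points)
    if n == 0:
        return []
    m = len(points[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: points[i][k])
        lo, hi = points[order[0]][k], points[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue  # all equal along this objective: no interior spread
        for j in range(1, n - 1):
            dist[order[j]] += (points[order[j + 1]][k]
                               - points[order[j - 1]][k]) / (hi - lo)
    return dist
```

Particles in sparse regions of the optimal solution set get larger distances and are therefore preferred as guides, which spreads the search across the Pareto front.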
  • step S34 the flight process is performed for each particle.
  • the position of each particle represents a candidate solution of the application task assignment problem.
  • P_id is the individual optimal position of particle i
  • P_gd is the global optimal position of the particle swarm. During flight, the velocity and position of particle i are updated as follows:
  • v_id(t+1) = w v_id(t) + c_1 r_1 (P_id - x_id(t-1)) + c_2 r_2 (P_gd - x_id(t-1));
  • x_id(t+1) = x_id(t) + v_id(t);
  • w is the inertia weight
  • c 1 and c 2 are acceleration constants
  • r 1 and r 2 are random numbers uniformly distributed in [0,1].
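In code, the update rule above for one particle could look like the sketch below. The values of w, c_1 and c_2 are illustrative assumptions (the text leaves them unspecified), and the position update applies the freshly computed velocity, the common PSO convention.

```python
import random

def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5, rng=random):
    """One velocity/position update for a single particle.
    x, v, p_best (P_id), g_best (P_gd) are equal-length vectors."""
    r1, r2 = rng.random(), rng.random()  # uniform in [0, 1]
    v_new = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
             for vi, xi, pb, gb in zip(v, x, p_best, g_best)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new
```

Each position vector encodes a candidate task-to-server assignment; a full iteration applies this step to every particle in the swarm.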
  • Step S35 Divide the particle swarm into three parts of the same size, and use different mutation schemes for each part.
  • the mutation scheme includes no mutation, uniform mutation and non-uniform mutation.
  • the first part of the sub-particle swarm will not be mutated, and the second part of the sub-particle swarm will be uniformly mutated.
  • the third part of the sub-particle group undergoes non-uniform mutation.
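One way to realize the three-way split of step S35; the exact forms of the uniform and non-uniform operators are not given in the text, so the ones below are assumptions (the non-uniform operator shrinks its step as the iterations progress, the usual motivation for that scheme).

```python
import random

def mutate_swarm(swarm, bounds, progress, rng=random):
    """Split the swarm into three equal parts: the first is left unchanged,
    the second gets uniform mutation, the third gets non-uniform mutation
    whose step shrinks as `progress` (0..1 through the iterations) grows."""
    third = len(swarm) // 3
    lo, hi = bounds
    out = [list(p) for p in swarm]          # do not modify the input swarm
    for p in out[third:2 * third]:          # uniform mutation
        d = rng.randrange(len(p))
        p[d] = rng.uniform(lo, hi)
    for p in out[2 * third:]:               # non-uniform mutation
        d = rng.randrange(len(p))
        span = (hi - lo) * (1.0 - progress)  # shrinking step size
        p[d] = min(hi, max(lo, p[d] + rng.uniform(-span, span)))
    return out
```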
  • Step S36 when the current position of the particle dominates the individual optimal position of the particle or the two do not dominate each other, the individual optimal position is updated to the current position, and the particle is added to the optimal solution set and the final solution set.
  • step S37, when the positions of all particles have been updated, the crowding distance of the particles is updated.
  • Step S38 when the maximum number of iterations is reached, the deployment matrix is obtained.
  • the OMOPSO algorithm is used to solve the optimal deployment matrix.
  • Step S4 Generate one or more sets of containers on the target server according to the deployment matrix and application tasks.
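Reading task-to-server assignments out of a binary deployment matrix and triggering container creation could look like the sketch below; `create_container` is a placeholder, since the text does not name a container runtime API.

```python
def assignments(matrix, tasks, servers):
    """matrix[i][j] == 1 iff task i is deployed on server j."""
    return [(tasks[i], servers[j])
            for i, row in enumerate(matrix)
            for j, cell in enumerate(row) if cell]

def deploy(matrix, tasks, servers, create_container):
    """Generate one container (or set of containers) per assignment."""
    for task, server in assignments(matrix, tasks, servers):
        create_container(server, task)  # placeholder for the real runtime call
```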
  • This embodiment collects the historical resource utilization of the server to derive its predicted future resource utilization, and then combines the current resource utilization, the predicted resource utilization, the server's resource threshold, and the resources required by application tasks in an optimization calculation to obtain the optimal plan for deploying tasks to servers. It comprehensively considers the impact of both the current and future load of the server on task scheduling, and models the load balancing problem as a multi-objective optimization problem under multi-resource constraints, balancing the load of the server cluster while avoiding overload of the server hosting the application task at some future moment.
  • Fig. 5 is a schematic structural diagram of a task scheduling apparatus in a container cloud environment according to an embodiment of the present invention.
  • the device 50 includes an acquisition module 51, a confirmation module 52, an optimization module 53 and a deployment module 54.
  • the obtaining module 51 is configured to obtain the historical resource utilization rate of each server in the cloud data center, and predict the predicted resource utilization rate of each server at the next moment based on the historical resource utilization rate.
  • the confirmation module 52 is used for confirming the resource occupied by the application task when the application task is received.
  • the optimization module 53 is used to obtain the current resource utilization rate of each server when scheduling application tasks, and to perform optimization calculations in combination with the predicted resource utilization rate, the required occupied resources and the resource threshold of each server to obtain the deployment matrix between the application task and the server.
  • the deployment module 54 is used to generate one or more sets of containers on the target server according to the deployment matrix and application tasks.
  • the obtaining module 51 obtains the historical resource utilization rate of each server in the cloud data center, and predicts the predicted resource utilization rate of each server at the next moment based on the historical resource utilization rate.
  • the operation may be: preset a time series composed of multiple historical resource utilization rates; collect the historical resource utilization rate at a preset interval; update the time series according to the historical resource utilization rate; and input the time series into the built Prophet model to predict the predicted resource utilization rate of each server at the next moment.
  • the operation of the obtaining module 51 to update the time series according to the historical resource utilization rate may also be: adding the new historical resource utilization rate to the end of the time series and deleting the earliest historical resource utilization rate in the time series, so as to update the time series.
  • the operation of constructing the Prophet model may be: set the positions of change points on the time series to divide it into multiple segments; detect the change trend of each segment; use the change trend to construct a trend model; use a preset period to construct a seasonal model; obtain the number of holidays included in the time series and use it to construct a holiday model; and construct the Prophet model from the trend model, the seasonal model and the holiday model.
  • the trend model is:
  • C(t) is the resource capacity of the server at time t;
  • the season model is:
  • P is the expected period of the preset time series
  • a_n and b_n are generated by a normal distribution N(0, σ)
  • N and σ are preset;
  • the holiday model is:
  • D i indicates a period of time before and after the holiday
  • k = (k_1, …, k_L)^T
  • k i represents the corresponding change of the predicted holiday
  • k is generated by a normal distribution N(0, v)
  • v is preset
  • the Prophet model is:
  • ε_t is the error term and obeys the normal distribution.
  • the optimization module 53 obtains the current resource utilization rate of each server, and performs optimization calculations in combination with the predicted resource utilization rate, the required occupied resources, and the resource threshold of each server to obtain the deployment matrix between the application task and the server.
  • the operation can be: obtain the current resource utilization rate of each server; construct the objective function and constraint functions based on the current resource utilization rate, the predicted resource utilization rate, the required occupied resources and the resource threshold; initialize the particle swarm and set the population size and the number of iterations; save the non-dominated particles in the particle swarm as guiding particles in the optimal solution set and the final solution set, and calculate the crowding distance of each guiding particle; iterate, each particle selecting a guiding particle from the optimal solution set according to the crowding distance; perform the flight process for each particle; divide the particle swarm into three parts of the same size and apply a different mutation scheme to each part.
  • the mutation schemes are no mutation, uniform mutation, and non-uniform mutation. When the current position of a particle dominates its individual optimal position, or the two do not dominate each other, the individual optimal position is updated to the current position and the particle is added to the optimal solution set and the final solution set; when the positions of all particles have been updated, the crowding distance of the particles is updated; when the maximum number of iterations is reached, the deployment matrix is obtained.
  • the objective function is:
  • the first constraint function is:
  • the second constraint function is:
  • the second constraint function indicates that when the application task is deployed to the server, the current resource utilization rate and the predicted resource utilization rate are less than or equal to the resource threshold.
  • FIG. 6 is a schematic structural diagram of a server according to an embodiment of the present invention.
  • the server 60 includes a processor 61 and a memory 62 coupled to the processor 61.
  • the memory 62 stores program instructions for implementing the task scheduling method in the container cloud environment described in any of the foregoing embodiments.
  • the processor 61 is configured to execute program instructions stored in the memory 62 to balance the load of the container cloud server cluster.
  • the processor 61 may also be referred to as a CPU (Central Processing Unit, central processing unit).
  • the processor 61 may be an integrated circuit chip with signal processing capability.
  • the processor 61 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • FIG. 7 is a schematic structural diagram of a storage device according to an embodiment of the present invention.
  • the storage device in the embodiment of the present invention stores a program file 71 that can implement all the above methods.
  • the program file 71 can be stored in the above storage device in the form of a software product, and includes a number of instructions to enable a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present invention.
  • the aforementioned storage devices include media that can store program code, such as USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks or optical disks, or server equipment such as computers, servers, mobile phones and tablets.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are only illustrative; for example, the division into units is only a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or of a software functional unit. The above are only embodiments of the present invention and do not limit the patent scope of the present invention. Any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included in the patent protection scope of the present invention.


Abstract

Disclosed are a task scheduling method and apparatus in a container cloud environment, as well as a server and a storage apparatus. The method comprises: acquiring the historical resource utilization rate of each server in a cloud data center, and predicting the predicted resource utilization rate of each server at the next moment on the basis of the historical resource utilization rate; when an application task is received, confirming the resources to be occupied by the application task; acquiring the current resource utilization rate of each server, and combining the predicted resource utilization rate, the resources to be occupied and a resource threshold of each server to perform an optimization calculation, so as to obtain a deployment matrix between the application task and the servers; and generating one or more sets of containers on a target server according to the deployment matrix and the application task. By predicting the future load of a container cloud server and rationally deploying an application task in combination with the current load state, overload of the container cloud server at a future moment after task scheduling is avoided.
PCT/CN2020/129208 2019-11-29 2020-11-17 Task scheduling method and apparatus in a container cloud environment, server and storage apparatus WO2021104096A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911198319.9A CN111026550A (zh) 2019-11-29 2019-11-29 Task scheduling method and apparatus in a container cloud environment, server, and storage apparatus
CN201911198319.9 2019-11-29

Publications (1)

Publication Number Publication Date
WO2021104096A1 true WO2021104096A1 (fr) 2021-06-03

Family

ID=70203282

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/129208 WO2021104096A1 (fr) 2019-11-29 2020-11-17 Task scheduling method and apparatus in a container cloud environment, server, and storage apparatus

Country Status (2)

Country Link
CN (1) CN111026550A (fr)
WO (1) WO2021104096A1 (fr)


Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111026550A (zh) 2019-11-29 2020-04-17 中国科学院深圳先进技术研究院 Task scheduling method and apparatus in a container cloud environment, server, and storage apparatus
CN111610994B (zh) * 2020-05-20 2023-10-20 山东汇贸电子口岸有限公司 Deployment method, apparatus and device for a cloud data center, and storage medium
CN111488200A (zh) * 2020-06-28 2020-08-04 四川新网银行股份有限公司 Virtual machine resource utilization analysis method based on a dynamic analysis model
CN112001116A (zh) * 2020-07-17 2020-11-27 新华三大数据技术有限公司 Cloud resource capacity prediction method and apparatus
CN112087504A (zh) * 2020-08-31 2020-12-15 浪潮通用软件有限公司 Dynamic load balancing method and apparatus based on workload characteristics
CN112187894B (zh) * 2020-09-17 2022-06-10 杭州谐云科技有限公司 Dynamic container scheduling method based on load-correlation prediction
CN112631750B (zh) * 2020-12-21 2024-04-09 中山大学 Compressed-sensing-based predictive online scheduling and hybrid task deployment method for cloud data centers
CN113553180B (zh) * 2021-07-20 2023-10-13 唯品会(广州)软件有限公司 Container scheduling method and apparatus, and electronic device
CN113792971B (zh) * 2021-08-11 2023-12-29 邹平市供电有限公司 Regional power dispatching networking method and system
CN113992525A (zh) * 2021-10-12 2022-01-28 支付宝(杭州)信息技术有限公司 Method and apparatus for adjusting the number of containers of an application
CN114428666A (zh) * 2022-01-27 2022-05-03 中国铁道科学研究院集团有限公司电子计算技术研究所 Intelligent elastic scaling method and system based on CPU and memory occupancy rates
CN115756823B (zh) * 2022-10-20 2024-04-16 广州汽车集团股份有限公司 Service publishing method and apparatus, vehicle, and storage medium
CN117472589B (zh) * 2023-12-27 2024-03-12 山东合能科技有限责任公司 Campus network service management method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104065745A (zh) * 2014-07-07 2014-09-24 电子科技大学 Dynamic resource scheduling system and method for cloud computing
CN104765642A (zh) * 2015-03-24 2015-07-08 长沙理工大学 Virtual machine deployment method and system based on a dynamic prediction model in a cloud environment
CN105320559A (zh) * 2014-07-30 2016-02-10 中国移动通信集团广东有限公司 Scheduling method and apparatus for a cloud computing system
CN110086650A (zh) * 2019-03-20 2019-08-02 武汉大学 Online cloud resource scheduling method and apparatus for distributed machine learning tasks
CN111026550A (zh) * 2019-11-29 2020-04-17 中国科学院深圳先进技术研究院 Task scheduling method and apparatus in a container cloud environment, server, and storage apparatus

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109992392B (zh) * 2017-12-29 2021-07-06 中移(杭州)信息技术有限公司 Resource deployment method and apparatus, and resource server


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220405077A1 (en) * 2021-06-21 2022-12-22 Microsoft Technology Licensing, Llc Computer-implemented exposomic classifier
US11775277B2 (en) * 2021-06-21 2023-10-03 Microsoft Technology Licensing, Llc Computer-implemented exposomic classifier

Also Published As

Publication number Publication date
CN111026550A (zh) 2020-04-17

Similar Documents

Publication Publication Date Title
WO2021104096A1 (fr) 2021-06-03 Task scheduling method and apparatus in a container cloud environment, server, and storage apparatus
CN104317658B (zh) Load-adaptive task scheduling method based on MapReduce
CN111045828B (zh) Distributed edge computing method based on distribution-network station area terminals, and related apparatus
CN108009016B (zh) Resource load balancing control method and cluster scheduler
US8856797B1 (en) Reactive auto-scaling of capacity
CN109617826B (zh) Storm dynamic load balancing method based on cuckoo search
CN108845878A (zh) Big data processing method and apparatus based on serverless computing
CN103927229A (zh) Scheduling MapReduce jobs in a dynamically available server cluster
CN113434253B (zh) Cluster resource scheduling method, apparatus, device and storage medium
CN109976901A (zh) Resource scheduling method and apparatus, server and readable storage medium
CN111143036A (zh) Virtual machine resource scheduling method based on reinforcement learning
CN110414569A (zh) Clustering implementation method and apparatus
CN109740870A (zh) Dynamic resource scheduling method for web applications in a cloud computing environment
US20230229487A1 (en) Virtual machine deployment method, virtual machine management method having the same and virtual machine management system implementing the same
CN116501711A (zh) Computing-power network task scheduling method based on a storage-compute separation architecture
CN108845886A (zh) Phase-space-based cloud computing energy consumption optimization method and system
CN107203256B (zh) Energy-saving allocation method and apparatus in a network function virtualization scenario
CN110490316A (zh) Training processing method and training system based on a neural network model training system
Ma et al. NSGA-II with local search for multi-objective application deployment in multi-cloud
CN113190342A (zh) Method and system architecture for fine-grained multi-application offloading in cloud-edge collaborative networks
Moallem et al. Using artificial life techniques for distributed grid job scheduling
Naik et al. Developing a cloud computing data center virtual machine consolidation based on multi-objective hybrid fruit-fly cuckoo search algorithm
Duong et al. Virtual machine placement via q-learning with function approximation
Prado et al. On providing quality of service in grid computing through multi-objective swarm-based knowledge acquisition in fuzzy schedulers
CN116339932A (zh) Resource scheduling method and apparatus, and server

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20891521

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20891521

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 110123)
