CN113760496A - Container scheduling method and scheduler - Google Patents

Container scheduling method and scheduler

Info

Publication number
CN113760496A
Authority
CN
China
Prior art keywords
container
busy
container group
containers
idle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011594780.9A
Other languages
Chinese (zh)
Other versions
CN113760496B (en)
Inventor
尤凤凯
李品
李帅
赵辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Wodong Tianjun Information Technology Co Ltd
Priority to CN202011594780.9A
Publication of CN113760496A
Application granted
Publication of CN113760496B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching
    • G06F 9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 - Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/243 - Classification techniques relating to the number of classes
    • G06F 18/24323 - Tree-organised classifiers
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5038 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/505 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 - Indexing scheme relating to G06F 9/00
    • G06F 2209/50 - Indexing scheme relating to G06F 9/50
    • G06F 2209/5021 - Priority

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a container scheduling method and a scheduler, and relates to the field of computer technology. One embodiment of the method comprises: obtaining current data of containers within a sliding time window from a service provider; determining the working state of each container according to its current data and a trained detection model, where the working state measures how busy the container is; adjusting the containers in the container groups according to the working states of the containers; and providing the grouping result to the service provider so that the service provider schedules the containers according to the grouping result. This embodiment can allocate the resources of a service cluster reasonably, so that the load of the service cluster is balanced.

Description

Container scheduling method and scheduler
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a container scheduling method and a scheduler.
Background
To meet the increasing demands of service callers, service providers deploy services in service clusters formed from a plurality of containers. Meanwhile, in order to make reasonable use of each container and improve service quality, the service provider divides the containers into different container groups and schedules the containers according to the grouping result.
In the prior art, service providers typically group and schedule containers according to the business being served, for example business line one and business line two.
However, scheduling containers by business line may lead to improper resource allocation within the service cluster, resulting in wasted resources.
Disclosure of Invention
In view of this, embodiments of the present invention provide a container scheduling method and a scheduler, which can reasonably allocate resources of a service cluster.
In a first aspect, an embodiment of the present invention provides a container scheduling method, including:
obtaining current data of the container within the sliding time window from a service provider;
determining the working state of the container according to the current data of the container and the trained detection model; wherein the working state of the container is used for measuring the busy degree of the container;
adjusting the containers in the container group according to the working state of the containers;
providing the grouping result to the service provider so that the service provider schedules the container according to the grouping result.
Alternatively,
further comprising:
acquiring historical data of the container;
counting the historical data of the container according to a specified period to obtain metadata;
extracting service features and hardware features from the metadata;
generating a training sample according to the service characteristic and the hardware characteristic;
and training the detection model according to the training sample.
Alternatively,
the service features include: any one or more of the average item query quantity, the average request query depth and the total number of requests; wherein the average request query depth is determined by the number and type of input parameters.
Alternatively,
the hardware features include: the number of times that the CPU utilization rate exceeds the first threshold, the number of times that the memory utilization rate exceeds the second threshold, the number of times that the disk utilization rate exceeds the third threshold, the number of times that the network inflow rate exceeds the fourth threshold, and the number of times that the load value exceeds the fifth threshold.
Alternatively,
the adjusting of the containers in the container group according to the working state of the containers comprises:
determining the busy degree of the container group according to the working state of the container;
determining the working state of the container group according to the busy degree of the container group;
and adjusting the containers in the container group according to the working state of the container group.
Alternatively,
the working state of the container comprises: busy, normal and idle;
the determining the busy degree of the container group according to the working state of the container comprises the following steps:
and calculating the busy degree of the container group according to the total number of the containers of the container group and the number of the busy containers.
Alternatively,
the working state of the container group comprises: busy, normal and idle;
the adjusting the containers in the container group according to the working state of the container group comprises:
calculating the allocation limit of the busy container group according to the number of the busy containers in the busy container group;
calculating the allocation limit of the idle container group according to the number of idle containers in the idle container group;
and adjusting the containers in the busy container group and the idle container group according to the allocation limit of the busy container group and the allocation limit of the idle container group.
Alternatively,
the adjusting the busy container group and the containers in the idle container group according to the allocation limit of the busy container group and the allocation limit of the idle container group comprises the following steps:
calculating the sum of allocation limits of each idle container group;
calculating the number of containers distributed by each busy container group according to the number of the busy container groups and the sum of the allocation quota of each idle container group;
and allocating the containers in each idle container group to each busy container group according to the number of the containers allocated by each busy container group.
Alternatively,
the adjusting the busy container group and the containers in the idle container group according to the allocation limit of the busy container group and the allocation limit of the idle container group comprises the following steps:
arranging the busy container groups in descending order of busy degree to obtain the priority of each busy container group;
arranging the idle container groups in descending order of allocation quota to obtain the priority of each idle container group;
and adjusting the containers in the busy container groups and the idle container groups according to the priority and allocation quota of each busy container group and the priority and allocation quota of each idle container group.
Alternatively,
the current data includes: any one or more of request timestamp, container name, item query quantity, request query depth, millisecond-level request quantity, CPU utilization, memory utilization, network inflow rate, disk utilization and load value; wherein the request query depth is determined by the number and type of input parameters.
In a second aspect, an embodiment of the present invention provides a scheduler, including:
an acquisition module configured to acquire current data of the container within the sliding time window from a service provider;
the determining module is configured to determine the working state of the container according to the current data of the container and the trained detection model; wherein the working state of the container is used for measuring the busy degree of the container;
the adjusting module is configured to adjust the containers in the container group according to the working states of the containers; and providing the grouping result to a service provider so that the service provider schedules the container according to the grouping result.
In a third aspect, an embodiment of the present invention provides an electronic device, including:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement a method as in any one of the embodiments described above.
In a fourth aspect, the present invention provides a computer readable medium, on which a computer program is stored, and when the program is executed by a processor, the computer program implements the method according to any one of the above embodiments.
One embodiment of the above invention has the following advantages or benefits: the busy degree of each container is determined from its current data, and the containers in each container group are adjusted based on that busy degree. Because the busy degree of the containers is taken into account, this embodiment of the invention can allocate the resources of the service cluster more reasonably and avoid a container crashing under excessive pressure. At the same time, because the working state of the container is determined by model prediction, the busy degree of the container can be measured more accurately and the containers scheduled more reasonably.
Further effects of the above-mentioned non-conventional alternatives will be described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a flow chart of a method for scheduling containers according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for scheduling containers according to another embodiment of the present invention;
FIG. 3 is a schematic diagram of a grouping result provided by an embodiment of the present invention;
fig. 4 is a schematic diagram of a scheduler according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a dynamic adjustment system for service clusters according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a method for dynamically adjusting a service cluster according to an embodiment of the present invention;
fig. 7 is a schematic block diagram of a computer system suitable for use in implementing a terminal device or server of an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Grouping containers by business line may result in uneven allocation of container resources in the service cluster, resulting in wasted resources. For example, a business with less traffic may be given more containers, while a business with more traffic is given fewer containers.
In view of this, as shown in fig. 1, an embodiment of the present invention provides a container scheduling method, including:
step 101: the current data of the container within the sliding time window is obtained from the service provider.
The method is applied to a scheduler. The service provider collects the current data of the containers through preset data-collection (tracking) points. Specifically, the service provider may collect the various data indexes of the containers within the sliding time window based on Flink. Flink is an open-source stream processing framework whose core is a distributed streaming dataflow engine written in Java and Scala. Flink executes arbitrary dataflow programs in a data-parallel and pipelined manner, and its pipelined runtime system can execute both batch processing and stream processing programs.
The width of the sliding time window can be set for different businesses, and the current data collected change as the sliding time window moves. To distinguish it from the sliding time window corresponding to the historical data, this window is denoted in the embodiment of the present invention as the first sliding time window.
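As a hedged sketch of the first sliding time window (plain Python standing in for the Flink-based collection described above; the field names are assumptions):

```python
import time
from collections import deque

class SlidingTimeWindow:
    """Keeps only the container samples that fall inside the last `width_seconds` seconds."""

    def __init__(self, width_seconds=300):
        self.width = width_seconds
        self.samples = deque()  # (timestamp, metrics dict) pairs, oldest first

    def add(self, timestamp, metrics):
        self.samples.append((timestamp, metrics))
        self._evict(timestamp)

    def _evict(self, now):
        # Drop samples that have slid out of the window.
        while self.samples and self.samples[0][0] < now - self.width:
            self.samples.popleft()

    def current_data(self):
        return [m for _, m in self.samples]

# Example: collect one CPU sample for one container (names are illustrative).
window = SlidingTimeWindow(width_seconds=300)
window.add(time.time(), {"container": "S1-1.1.1.1", "cpu_util": 0.82})
```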
The scheduler obtains the current data collected from the service provider.
The current data may include:
(1) Data entered by the service caller, e.g., the item query quantity.
During the interaction, the service provider responds to the request of the service caller based on the caller's input parameters, so the processing performed by the service provider is strongly influenced by those input parameters. For example, when the number of queried items is 1 the complexity is O(1), and when the number of queried items is 100 the complexity is O(100); the two kinds of input parameters place different pressure on the service provider.
(2) Data returned by the service provider, e.g., the request query depth.
The service provider performs queries according to the input parameters of the service caller, and the query depth can differ from request to request. For example, service caller A queries only the basic information of an item, while service caller B queries the basic information, picture information, related extended attributes and other data of the item. Compared with service caller A, service caller B corresponds to a deeper request query depth, and the service provider must return deeper data to service caller B. In a practical application scenario, the request query depth may be determined by the number and type of input parameters; for example, the more input parameters there are, the deeper the request query depth.
(3) Traffic data, e.g., the millisecond-level request quantity.
In this embodiment of the invention, daily traffic data and peak traffic data can be counted.
(4) Hardware information, such as CPU utilization, memory utilization, network inflow rate, disk utilization and load value.
Step 102: determining the working state of the container according to the current data of the container and the trained detection model; wherein the working state of the container is used for measuring the busy degree of the container.
In this embodiment of the present invention, the working states of the containers may be divided in different ways. For example, in one scenario the working states of a container include busy, idle and normal; in another scenario they include first-level busy, second-level busy and fourth-level busy.
This embodiment of the present invention determines the working state of a container more accurately through machine learning, so that container allocation is more reasonable. In a practical application scenario, a container score may instead be calculated from the current data of the container, and the working state of the container determined from that score. For example, the sum of the scores of metrics such as CPU utilization and memory utilization is taken as the container score, and the working state is determined by the value interval in which the container score falls.
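As a hedged sketch of the score-based alternative mentioned above (the weights and interval boundaries are assumptions, not values given by the patent):

```python
def container_score(cpu_util, mem_util, disk_util, net_in_rate, load_value):
    # Sum of per-metric scores; each metric is normalized to [0, 1].
    # The equal weights and the interval boundaries below are hypothetical.
    return cpu_util + mem_util + disk_util + net_in_rate + load_value

def working_state(score):
    if score >= 3.5:
        return "busy"
    if score >= 1.5:
        return "normal"
    return "idle"

print(working_state(container_score(0.9, 0.8, 0.7, 0.6, 0.9)))  # -> busy
```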
Step 103: and adjusting the containers in the container group according to the working state of the containers.
This embodiment of the invention adjusts the containers in the container groups to avoid having too many highly busy containers in any one group. Adjusting the containers in the container groups yields the grouping result.
Step 104: and providing the grouping result to a service provider so that the service provider schedules the container according to the grouping result.
The busy degree of each container is determined from its current data, and the containers in each container group are adjusted based on that busy degree. Because the busy degree of the containers is taken into account, this embodiment of the invention can allocate the resources of the service cluster more reasonably and avoid a container crashing under excessive pressure. At the same time, because the working state of the container is determined by model prediction, the busy degree of the container can be measured more accurately and the containers scheduled more reasonably.
In one embodiment of the invention, in order to avoid occupying resources of the service provider and improve the service quality, the grouping result is sent to the configuration center, so that the service provider obtains the grouping result from the configuration center.
In one embodiment of the invention, the method further comprises:
acquiring historical data of a container;
counting the historical data of the container according to a specified period to obtain metadata;
extracting service features and hardware features from the metadata;
generating a training sample according to the service characteristics and the hardware characteristics;
and training the detection model according to the training sample.
Similar to the current data, embodiments of the present invention may obtain historical data of the container within the second sliding time window. In order to ensure the consistency of the training process and the prediction process and improve the prediction accuracy, the width of the second sliding time window is equal to that of the first sliding time window.
The historical data are collected before the current data. Features can be extracted directly from the historical data and training samples generated from the extracted features; alternatively, the historical data can first be aggregated and features extracted from the resulting metadata. Considering that the volume of historical data is large and includes millisecond-level records, the historical data may be aggregated over a specified period to speed up training-sample generation. For example, millisecond-level request quantities are merged into second-level request quantities with a period of 1 second.
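A minimal sketch of this per-period aggregation, assuming a 1-second period and illustrative field names: millisecond-level request quantities are summed within each bucket, while other indexes are averaged:

```python
from collections import defaultdict
from statistics import mean

def aggregate_to_seconds(records):
    """records: list of dicts with a millisecond timestamp 'ts' plus per-request metrics."""
    buckets = defaultdict(list)
    for r in records:
        buckets[(r["container"], r["ts"] // 1000)].append(r)

    metadata = []
    for (container, second), rows in buckets.items():
        metadata.append({
            "container": container,
            "second": second,
            "request_count": sum(r["requests_ms"] for r in rows),  # summed
            "query_depth": mean(r["query_depth"] for r in rows),   # averaged
            "cpu_util": mean(r["cpu_util"] for r in rows),         # averaged
        })
    return metadata

rows = [
    {"container": "S1-1.1.1.1", "ts": 1000, "requests_ms": 3, "query_depth": 2, "cpu_util": 0.7},
    {"container": "S1-1.1.1.1", "ts": 1500, "requests_ms": 5, "query_depth": 4, "cpu_util": 0.9},
]
print(aggregate_to_seconds(rows))
```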
In an embodiment of the present invention, the label of the training sample may be determined by human experience, for example, 0 indicates busy, 1 indicates normal, and 2 indicates idle.
In the embodiment of the present invention, the service features may include: any one or more of an average number of queries for an item, an average depth of queries for a request, and a total number of requests.
Wherein the request average query depth is an average of the request query depths.
The hardware features may include: the number of times that the CPU utilization rate exceeds the first threshold, the number of times that the memory utilization rate exceeds the second threshold, the number of times that the disk utilization rate exceeds the third threshold, the number of times that the network inflow rate exceeds the fourth threshold, and the number of times that the load value exceeds the fifth threshold.
In different application scenarios, the first to fifth thresholds may vary. Taking CPU utilization as an example: if the CPU utilization exceeds the first threshold, the CPU utilization is too high, and the more times the CPU utilization exceeds the first threshold, the greater the pressure on the container in terms of CPU. This hardware feature therefore reflects the operating state of the container along the CPU-utilization dimension.
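As a hedged sketch (the field names and threshold values are assumptions, not part of the patent), the service and hardware features described above could be computed from the per-second metadata of one container within the window as follows:

```python
def extract_features(metadata, thresholds):
    """metadata: per-second records of one container inside the sliding window (non-empty)."""
    n = len(metadata)
    return {
        # Service features
        "avg_item_queries": sum(m["item_queries"] for m in metadata) / n,
        "avg_query_depth": sum(m["query_depth"] for m in metadata) / n,
        "total_requests": sum(m["request_count"] for m in metadata),
        # Hardware features: number of seconds each metric exceeded its threshold
        "cpu_high_count": sum(m["cpu_util"] > thresholds["cpu"] for m in metadata),
        "mem_high_count": sum(m["mem_util"] > thresholds["mem"] for m in metadata),
        "disk_high_count": sum(m["disk_util"] > thresholds["disk"] for m in metadata),
        "net_in_high_count": sum(m["net_in"] > thresholds["net_in"] for m in metadata),
        "load_high_count": sum(m["load"] > thresholds["load"] for m in metadata),
    }

# Hypothetical thresholds A-E from the description.
thresholds = {"cpu": 0.8, "mem": 0.8, "disk": 0.9, "net_in": 100.0, "load": 4.0}
```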
The detection model may be a decision tree model. In an actual application scenario, the training samples may also be generated only according to the service features or the hardware features, which is not described herein again.
The embodiment of the invention trains the detection model based on the service characteristics and the hardware characteristics, so that the trained detection model can consider the service characteristics and the hardware characteristics in the prediction process, the determined working state of the container is more accurate, and the resource allocation is more balanced.
In the process of training the detection model, a cross validation method can be further adopted to optimize the hyper-parameters of the detection model, for example, ten-fold cross validation and the like.
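A hedged sketch of training the detection model with ten-fold cross-validation over its hyper-parameters, using scikit-learn's decision tree as a stand-in for the decision tree model mentioned above (the parameter grid is hypothetical; labels follow the 0 = busy, 1 = normal, 2 = idle convention given earlier):

```python
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

def train_detection_model(X, y):
    """X: one feature vector per container sample; y: 0 = busy, 1 = normal, 2 = idle."""
    param_grid = {"max_depth": [3, 5, 7], "min_samples_leaf": [1, 5, 10]}
    search = GridSearchCV(
        DecisionTreeClassifier(random_state=0),
        param_grid,
        cv=10,               # ten-fold cross-validation
        scoring="accuracy",
    )
    search.fit(X, y)
    return search.best_estimator_
```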
In one embodiment of the invention, the adjustment of the containers in the group of containers according to their operating conditions comprises:
determining the busy degree of the container group according to the working states of the containers;
determining the working state of the container group according to the busy degree of the container group;
and adjusting the containers in the container group according to the working state of the container group.
Similar to individual containers, the working state of a container group may be divided in different ways; for example, the working states of a container group include busy, normal and idle. Container groups whose working state is busy or idle need to be adjusted, while container groups whose working state is normal do not.
According to the embodiment of the invention, the containers in the container group are adjusted according to the working state of the container group, so that the container group can be prevented from being too idle or busy, and resources can be more reasonably distributed.
In a practical application scenario, the containers in a container group can also be adjusted directly according to the busy degree of the container group, without first determining the working state of the container group.
In one embodiment of the present invention, if the working status of the container includes busy, normal and idle, determining the busy degree of the container group according to the working status of the container includes:
and calculating the busy degree of the container group according to the total number of the containers of the container group and the number of the busy containers.
The busy degree of a container group may be the ratio of the number of busy containers in the group to the total number of containers in the group. This ratio measures the busy degree of the container group more accurately, so that resource allocation is more balanced.
In one embodiment of the invention, the working state of the container group comprises: busy, normal and idle;
adjusting the containers in the container group according to the working state of the container group, comprising:
calculating the allocation limit of the busy container group according to the number of the busy containers in the busy container group;
calculating the allocation limit of the idle container group according to the number of the idle containers in the idle container group;
and adjusting the containers in the busy container group and the idle container group according to the allocation limit of the busy container group and the allocation limit of the idle container group.
The allocation quota of a busy container group can be the number of busy containers in that group, or the product of that number and a busy coefficient. Similarly, the allocation quota of an idle container group can be the number of idle containers in that group, or the product of that number and an idle coefficient. The allocation quotas are not limited to these two calculation methods.
In the container allocation process, the number of idle containers in the idle container group and the number of busy containers in the busy container group are considered, so that allocation is more reasonable, and the load of the service cluster is more balanced.
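A minimal sketch of the busy degree and allocation quota calculations described above (the coefficient values and the rounding behaviour are assumptions, since the text leaves them configurable):

```python
def group_busy_degree(states):
    """states: working state of each container in the group ('busy', 'normal' or 'idle')."""
    return sum(s == "busy" for s in states) / len(states)

def busy_group_quota(states, busy_coefficient=0.5):
    # Containers the busy group asks for; the coefficient value is an assumption.
    return round(sum(s == "busy" for s in states) * busy_coefficient)

def idle_group_quota(states, idle_coefficient=0.5):
    # Containers the idle group can release; the coefficient value is an assumption.
    return round(sum(s == "idle" for s in states) * idle_coefficient)
```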
The embodiment of the invention provides at least two adjusting modes:
mode one, fair scheduling
Adjusting the containers in the busy container groups and the idle container groups according to the allocation quota of the busy container group and the allocation quota of the idle container group comprises the following steps:
calculating the sum of allocation limits of each idle container group;
calculating the number of containers distributed by each busy container group according to the number of the busy container groups and the sum of the allocation limit of each idle container group;
and allocating the containers in each free container group to each busy container group according to the number of the containers allocated by each busy container group.
In this adjustment mode, each busy container group obtains the same allocation quota, i.e. the same number of containers. This mode is more efficient and is suitable for daily adjustment in non-promotion scenarios.
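A hedged sketch of this fair-scheduling mode (how a remainder that does not divide evenly is handled is an assumption; here it is simply left unassigned):

```python
def fair_schedule(busy_groups, idle_groups):
    """busy_groups: list of group names; idle_groups: {name: allocation quota}."""
    total_quota = sum(idle_groups.values())
    per_busy_group = total_quota // len(busy_groups)  # each busy group gets the same count

    # Flatten the containers offered by idle groups (placeholders, not real container IDs),
    # then hand them out in equal slices.
    offered = [(g, i) for g, quota in idle_groups.items() for i in range(quota)]
    plan, cursor = {}, 0
    for g in busy_groups:
        plan[g] = offered[cursor:cursor + per_busy_group]
        cursor += per_busy_group
    return plan

plan = fair_schedule(["A", "C"], {"B": 40, "D": 36})
print({g: len(containers) for g, containers in plan.items()})  # {'A': 38, 'C': 38}
```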
Mode two, priority scheduling
Adjusting the containers in the busy container groups and the idle container groups according to the allocation quota of the busy container group and the allocation quota of the idle container group comprises the following steps:
arranging the busy container groups in descending order of busy degree to obtain the priority of each busy container group;
arranging the idle container groups in descending order of allocation quota to obtain the priority of each idle container group;
and adjusting the containers in the busy container groups and the idle container groups according to the priority and allocation quota of each busy container group and the priority and allocation quota of each idle container group.
Busy container groups and idle container groups ranked earlier have higher priority, and the demand of the busy container group with the highest priority is satisfied first. The allocation quota of one busy container group can be satisfied by the allocation quotas of several different idle container groups, and the allocation quota of one idle container group can contribute to the allocation quotas of several different busy container groups.
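A hedged sketch of this priority-scheduling mode as a greedy matching (tie-breaking and the data structures are assumptions):

```python
def priority_schedule(busy_groups, idle_groups):
    """busy_groups: {name: (busy_degree, quota)}; idle_groups: {name: quota}."""
    # Priorities: busy groups by busy degree, idle groups by quota, both descending.
    busy_order = sorted(busy_groups, key=lambda g: busy_groups[g][0], reverse=True)
    idle_order = sorted(idle_groups, key=idle_groups.get, reverse=True)

    remaining = dict(idle_groups)
    moves = []  # (from_idle_group, to_busy_group, container_count)
    for bg in busy_order:
        need = busy_groups[bg][1]
        for ig in idle_order:
            if need == 0:
                break
            give = min(need, remaining[ig])
            if give > 0:
                moves.append((ig, bg, give))
                remaining[ig] -= give
                need -= give
    return moves

# Hypothetical example: group G1 is busier than G2, so G1 is served first.
print(priority_schedule({"G1": (0.9, 5), "G2": (0.8, 4)}, {"G3": 6, "G4": 3}))
# -> [('G3', 'G1', 5), ('G3', 'G2', 1), ('G4', 'G2', 3)]
```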
As shown in fig. 2, an embodiment of the present invention provides a container scheduling method, including:
step 201: current data for each container within the sliding time window is obtained from the service provider.
The current data includes: the system comprises a request timestamp, a container name, an article query quantity, a request query depth, a millisecond-level request quantity, a CPU utilization rate, a memory utilization rate, a network inflow rate, a disk utilization rate and a load value.
Wherein, the request time stamp is millisecond level, and the container name is composed of a container group name and a container IP address. The width of the sliding time window can be set according to different services, and 300 seconds is adopted in the embodiment of the invention. As the sliding time window moves, the collected current data is different, and the embodiment of the present invention is described by taking only one sliding time window as an example.
A piece of data collected in the embodiment of the present invention is shown in table 1.
TABLE 1
[Table 1 is provided as an image in the original publication; it shows one collected data record with the fields listed above.]
Step 202: and counting the current data of the container according to a specified period to obtain metadata.
The current data collected within the sliding time window are aggregated to form metadata. Specifically, the millisecond-level data are merged into second-level metadata keyed by container name. The millisecond-level request quantities are merged by summation, while other indexes, such as the request query depth, are merged by taking their average (expected) value.
For example, another piece of data collected by an embodiment of the present invention is shown in table 2.
TABLE 2
[Table 2 is provided as an image in the original publication; it shows two records, T1 and T2, collected at different times within the same second.]
T1 and T2 are data collected at different times within the same second. The current data of that second are aggregated, and the resulting metadata of the container (S1-1.1.1.1) are shown in Table 3.
TABLE 3
[Table 3 is provided as an image in the original publication; it shows the aggregated per-second metadata of container S1-1.1.1.1.]
Step 203: and extracting the service features and the hardware features from the metadata.
The extracted service features include: average number of queries for an item, average depth of queries for a request, and total number of requests.
Average item query quantity: the number of items queried by the service caller in a single request affects the throughput and response time of the container per unit time and reflects the busy degree of the container.
Calculation method: count the average item query quantity of the container within the sliding time window, i.e. average the per-second item query quantity.
Average request query depth: the service provider offers related queries of different types and quantities for different business lines, which is reflected in the input parameters and their types. For example, service caller A needs to query only the basic information of a commodity, while service caller B needs to query the basic information, picture information, related extended attributes and so on. Different forms of input parameters correspond to different query depths, and the query depth affects the busy degree of the container.
Calculation method: count the average request query depth of the container within the sliding time window, i.e. average the per-second request query depth.
Total number of requests: the number of requests a cluster container can handle per unit time is limited, so the total number of requests also reflects how busy the container is.
Calculation method: count the total number of requests received by the container within the sliding time window.
The extracted hardware features include:
number of times of high utilization of CPU: counting the times that the CPU utilization rate of the container exceeds a threshold A in the sliding time window;
number of times of high memory utilization: counting the times that the memory utilization rate of the container exceeds a threshold value B in the sliding time window;
number of times of high utilization of magnetic disk: counting the times that the disk utilization rate of the container exceeds a threshold value C in the sliding time window;
network inflow high rate times: counting the times that the network inflow rate of the container exceeds a threshold value D in the sliding time window;
high load times: counting the times that the load value of the container exceeds the threshold value E in the sliding time window.
Step 204: and generating a prediction sample according to the service characteristics and the hardware characteristics.
Step 205: and inputting the prediction sample into the trained detection model to obtain the working state of the container.
The detection model adopted by the embodiment of the invention is a decision tree model based on Xgboost, and in other scenes, a decision tree model based on LightGBM and the like can be adopted.
Through a trained detection model, whether the working state of the container is busy, normal or idle can be determined.
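As a hedged illustration of this prediction step (the feature names are the ones sketched earlier and are assumptions; any classifier exposing the same predict interface would work), an XGBoost-based model could be applied as follows:

```python
import numpy as np
from xgboost import XGBClassifier

# Label convention from the training step: 0 = busy, 1 = normal, 2 = idle.
STATE_NAMES = {0: "busy", 1: "normal", 2: "idle"}

def predict_working_state(model: XGBClassifier, features: dict) -> str:
    # Feature order must match the order used during training (names are illustrative).
    order = ["avg_item_queries", "avg_query_depth", "total_requests",
             "cpu_high_count", "mem_high_count", "disk_high_count",
             "net_in_high_count", "load_high_count"]
    x = np.array([[features[k] for k in order]], dtype=float)
    return STATE_NAMES[int(model.predict(x)[0])]
```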
Step 206: and calculating the busy degree of the container group according to the total number of the containers of the container group and the number of the busy containers.
Busy degree of a container group = (number of busy containers in the container group) / (total number of containers in the container group)
Step 207: and determining the working state of the container group according to the busy degree of the container group.
When the busy degree of the container group is more than 70%, the working state of the container group is busy; when the busy degree of the container group is 30% -70%, the working state of the container group is normal; when the busy level of the container group is below 30%, the operating state of the container group is idle.
The busy state indicates that support is needed, and containers are allocated to the busy container group from other groups; in the normal state, no scheduling is performed; the idle state indicates that part of the group's resources can be allocated to busy container groups.
Step 208: and calculating the allocation amount of the busy container group according to the number of the busy containers in the busy container group.
The allocation amount of the busy container group is the number of the busy containers in the busy container group multiplied by a busy coefficient; the busy coefficient can be dynamically configured according to the service requirement and is located between 0 and 1.
Step 209: and calculating the allocation limit of the idle container group according to the number of the idle containers in the idle container group.
The allocation limit of the idle container group is equal to the number of idle containers in the idle container group multiplied by an idle coefficient; the idle coefficient can be dynamically configured according to business requirements and is located between 0 and 1.
TABLE 4
Container group | Number of containers | Busy degree | Working state | Allocation quota
A | 100 | 75% | Busy | 38
B | 150 | 20% | Idle | 40
C | 80 | 86% | Busy | 43
D | 110 | 25% | Idle | 36
The scheduling quota obtained by the embodiment of the invention is shown in table 4.
Step 210: and arranging the busy degree of each busy container group from high to low to obtain the priority of each busy container group.
The priority order of the busy container groups is C, then A.
Step 211: and arranging the idle container groups according to the sequence of the allocated quota from high to low to obtain the priority of each idle container group.
The priority order of the idle container groups is B, then D. Steps 210 and 211 may be executed in parallel, or step 211 may be executed first. Step 208 is similar to step 209 and is not described again here.
Step 212: and adjusting the containers in the busy container group and the idle container group according to the priority and allocation limit of each busy container group and the priority and allocation limit of each idle container group.
The demand of C is satisfied first: the allocation quota of C is 43 and the allocation quota of B is 40, so 40 containers are allocated from B to C and another 3 containers are allocated from D to C, fully satisfying C's demand. The remaining 33 containers of D's quota are then allocated to A, partially satisfying A's demand. The grouping results before and after adjustment are shown in fig. 3; after adjustment, the numbers of containers in A, B, C and D are 133, 110, 123 and 74, respectively.
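The arithmetic of this worked example can be checked with a few lines of Python, using the group sizes from Table 4 and the moves described above:

```python
# Apply the moves of the worked example to the original group sizes from Table 4.
sizes = {"A": 100, "B": 150, "C": 80, "D": 110}
moves = [("B", "C", 40), ("D", "C", 3), ("D", "A", 33)]  # (source, destination, count)
for src, dst, n in moves:
    sizes[src] -= n
    sizes[dst] += n
print(sizes)  # {'A': 133, 'B': 110, 'C': 123, 'D': 74}
```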
Step 213: and sending the grouping result to a configuration center so that the service provider acquires the grouping result from the configuration center and schedules the container according to the grouping result.
This embodiment of the invention analyzes the current data and allocates resources reasonably, so that traffic is spread as evenly as possible across the instances of the service provider, which reduces the possibility of service downtime caused by grouping problems and provides high service availability. At the same time, server resources are evaluated reasonably, avoiding wasted server cost.
As shown in fig. 4, an embodiment of the present invention provides a scheduler, including:
an acquisition module 401 configured to acquire current data of the container within the sliding time window from the service provider;
a determining module 402 configured to determine a working state of the container according to the current data of the container and the trained detection model; wherein the working state of the container is used for measuring the busy degree of the container;
an adjusting module 403 configured to adjust the containers in the container group according to the working state of the containers; and providing the grouping result to a service provider so that the service provider schedules the container according to the grouping result.
In one embodiment of the invention, the determining module 402 is configured to obtain historical data of the container; counting the historical data of the container according to a specified period to obtain metadata; extracting service features and hardware features from the metadata; generating a training sample according to the service characteristic and the hardware characteristic; and training the detection model according to the training sample.
In one embodiment of the present invention, the service features include: any one or more of the average query quantity of the items, the average query depth of the requests and the total amount of the requests; wherein the request average query depth is determined by the number of entries and the type of entries.
In one embodiment of the invention, the hardware features include: the number of times that the CPU utilization rate exceeds the first threshold, the number of times that the memory utilization rate exceeds the second threshold, the number of times that the disk utilization rate exceeds the third threshold, the number of times that the network inflow rate exceeds the fourth threshold, and the number of times that the load value exceeds the fifth threshold.
In an embodiment of the present invention, the adjusting module 403 is configured to determine a busy level of the group of containers according to an operating status of the container; determining the working state of the container group according to the busy degree of the container group; and adjusting the containers in the container group according to the working state of the container group.
In one embodiment of the invention, the operating state of the container comprises: busy, normal and idle; an adjusting module 403 configured to calculate a busy degree of the container group according to the total number of containers and the number of busy containers of the container group.
In an embodiment of the present invention, the operating state of the container group includes: busy, normal and idle; an adjusting module 403, configured to calculate a deployment amount of the busy container group according to the number of busy containers in the busy container group; calculating the allocation limit of the idle container group according to the number of idle containers in the idle container group; and adjusting the containers in the busy container group and the idle container group according to the allocation limit of the busy container group and the allocation limit of the idle container group.
In an embodiment of the present invention, the adjusting module 403 is configured to calculate a sum of allocation credits of the idle container groups; calculating the number of containers distributed by each busy container group according to the number of the busy container groups and the sum of the allocation quota of each idle container group; and allocating the containers in each idle container group to each busy container group according to the number of the containers allocated by each busy container group.
In an embodiment of the present invention, the adjusting module 403 is configured to arrange the busy container groups in descending order of busy degree to obtain the priority of each busy container group; arrange the idle container groups in descending order of allocation quota to obtain the priority of each idle container group; and adjust the containers in the busy container groups and the idle container groups according to the priority and allocation quota of each busy container group and the priority and allocation quota of each idle container group.
In one embodiment of the present invention, the current data includes: any one or more of request timestamp, container name, article query quantity, request query depth, millisecond request quantity, CPU utilization rate, memory utilization rate, network inflow rate, disk utilization rate and load value; wherein the request query depth is determined by the number of entries and the type of entries.
As shown in fig. 5, an embodiment of the present invention provides a system for dynamically adjusting a service cluster, including: a service provider 501, a scheduler 502 and a configuration center 503.
As shown in fig. 6, based on the above dynamic adjustment system for a service cluster, an embodiment of the present invention provides a dynamic adjustment method for a service cluster, including:
step 601: the scheduler obtains the current data of the container from the service provider.
The process of determining the grouping result by the scheduler has been described in detail in the foregoing embodiments, and is not described herein again.
Step 602: the dispatcher determines the working state of the container according to the current data of the container; wherein the working state of the container is used for measuring the busy degree of the container.
Step 603: the dispatcher adjusts the containers in the container group according to the working states of the containers.
Step 604: the scheduler sends the grouping result to the configuration center.
The scheduler converts the grouping result into JSON or XML format and pushes the grouping result to a configuration center, for example:
[The example pushed payload is provided as an image in the original publication.]
wherein, the configuration center can be ducc, tpconfig, dbconfig, etc.
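The exact payload format is shown only as an image in the published text; as a hedged illustration with hypothetical field names, the grouping result might be serialized to JSON like this before being pushed to the configuration center:

```python
import json

# Hypothetical structure of a grouping result (group names and container names are illustrative).
grouping_result = {
    "groups": [
        {"group": "A", "containers": ["S1-1.1.1.1", "S1-1.1.1.2"]},
        {"group": "B", "containers": ["S1-1.1.2.1"]},
    ]
}
payload = json.dumps(grouping_result, indent=2)  # pushed to a configuration center such as ducc
print(payload)
```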
Step 605: and the service provider acquires the grouping result from the configuration center and modifies the currently issued container grouping according to the grouping result.
For example, there are 200 containers in the online cluster. 50 of them carry a larger share of the traffic and have a higher overall utilization; they form group A. The other 150 containers carry relatively flat traffic and form group B. The overall CPU utilization of group A is 67% with a traffic of 30w/s; the overall CPU utilization of group B is 20% with a traffic of 9w/s. After dynamic adjustment, 90 containers of group B support group A through regrouping, and the overall CPU utilization of groups A and B reaches 35%.
In the embodiment of the invention, the service provider can acquire the current data of the online container, the scheduler can adjust the container grouping according to the current data and send the grouping result to the configuration center, and the service provider can dynamically adjust the service cluster according to the grouping result of the configuration center. The embodiment of the invention can dynamically adjust the service cluster according to the current data of the container, balance the load of each container group and avoid the breakdown of the container due to overlarge pressure.
An embodiment of the present invention provides an electronic device, which is characterized by including:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement a method as in any one of the embodiments described above.
An embodiment of the present invention provides a computer-readable medium, on which a computer program is stored, where the computer program is configured to, when executed by a processor, implement the method according to any one of the above embodiments.
Referring now to FIG. 7, shown is a block diagram of a computer system 700 suitable for use with a terminal device implementing an embodiment of the present invention. The terminal device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU)701, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the system 700 are also stored. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program performs the above-described functions defined in the system of the present invention when executed by the Central Processing Unit (CPU) 701.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor includes a sending module, an obtaining module, a determining module, and a first processing module. The names of these modules do not form a limitation on the modules themselves in some cases, and for example, the sending module may also be described as a "module sending a picture acquisition request to a connected server".
As another aspect, the present invention also provides a computer-readable medium that may be included in the device described in the above embodiments, or that may exist separately without being incorporated into the device. The computer readable medium carries one or more programs which, when executed by the device, cause the device to perform operations comprising:
acquiring current data of a container;
determining the working state of the container according to the current data of the container; wherein the working state of the container is used for measuring the busy degree of the container;
and adjusting the containers in the container group according to the working state of the containers.
According to the technical solutions of the embodiments of the present invention, the busy degree of a container is determined according to the current data of the container, and the containers in each container group are adjusted based on that busy degree. Because the busy degree of the container is taken into account, the embodiments of the present invention can allocate the resources of the service cluster more reasonably and avoid the situation in which an individual container is overloaded and crashes.
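Read as a whole, these three operations form a simple control loop. The sketch below is only an illustration of that loop under assumed interfaces; `get_current_data`, `predict_state` and `adjust_groups` are hypothetical callables, not the patented implementation.

```python
from typing import Callable, Dict, List

def scheduling_cycle(
    containers: List[str],
    groups: Dict[str, List[str]],
    get_current_data: Callable[[str], dict],   # assumed metrics source
    predict_state: Callable[[dict], str],      # assumed model: "busy" / "normal" / "idle"
    adjust_groups: Callable[[Dict[str, List[str]], Dict[str, str]], Dict[str, List[str]]],
) -> Dict[str, List[str]]:
    """One round: acquire data, classify every container, then rebalance the groups."""
    # Acquire current data of each container and determine its working state.
    states = {c: predict_state(get_current_data(c)) for c in containers}
    # Adjust the containers in the container groups based on those states.
    return adjust_groups(groups, states)
```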
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (12)

1. A method for scheduling containers, comprising:
obtaining current data of a container within a sliding time window from a service provider;
determining the working state of the container according to the current data of the container and the trained detection model; wherein the working state of the container is used for measuring the busy degree of the container;
adjusting the containers in the container group according to the working state of the containers;
and providing the grouping result to the service provider, so that the service provider schedules the container according to the grouping result.
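For the sliding time window of the first step, one possible arrangement is a bounded buffer of timestamped samples from which the current data is read. The class below is a minimal sketch under that assumption; the names are illustrative only.

```python
import time
from collections import deque
from typing import Optional

class SlidingWindow:
    """Keep only the metric samples that fall inside the last `window_seconds`."""

    def __init__(self, window_seconds: float = 60.0):
        self.window_seconds = window_seconds
        self._samples = deque()                       # (timestamp, metrics dict)

    def push(self, metrics: dict, now: Optional[float] = None) -> None:
        now = time.time() if now is None else now
        self._samples.append((now, metrics))
        self._evict(now)

    def current_data(self, now: Optional[float] = None) -> list:
        now = time.time() if now is None else now
        self._evict(now)
        return [m for _, m in self._samples]

    def _evict(self, now: float) -> None:
        while self._samples and now - self._samples[0][0] > self.window_seconds:
            self._samples.popleft()
```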
2. The method of claim 1, further comprising:
acquiring historical data of the container;
counting the historical data of the container according to a specified period to obtain metadata;
extracting service features and hardware features from the metadata;
generating a training sample according to the service characteristic and the hardware characteristic;
and training the detection model according to the training sample.
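As a sketch of this training pipeline, assuming scikit-learn is available and that a decision tree is an acceptable detection model (the claim does not fix the model type), and assuming hypothetical `extract_features` and `label_of` helpers (a possible `extract_features` is sketched after claim 3):

```python
from sklearn.tree import DecisionTreeClassifier   # assumed model choice

def train_detection_model(history, period_seconds, extract_features, label_of):
    """history: iterable of (timestamp, metrics dict) records for one container."""
    # Aggregate the historical data per specified period to obtain the metadata.
    buckets = {}
    for ts, metrics in history:
        buckets.setdefault(int(ts // period_seconds), []).append(metrics)
    # One training sample per period: service + hardware features, plus a label.
    X, y = [], []
    for records in buckets.values():
        X.append(extract_features(records))
        y.append(label_of(records))               # e.g. hand-labelled busy/normal/idle
    model = DecisionTreeClassifier(max_depth=5)
    model.fit(X, y)
    return model
```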
3. The method of claim 2, wherein
the service features include any one or more of: the average item query quantity, the average request query depth, and the total number of requests; wherein the average request query depth is determined by the number of entries and the types of the entries;
and/or,
the hardware features include: the number of times the CPU utilization rate exceeds a first threshold, the number of times the memory utilization rate exceeds a second threshold, the number of times the disk utilization rate exceeds a third threshold, the number of times the network inflow rate exceeds a fourth threshold, and the number of times the load value exceeds a fifth threshold.
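A possible `extract_features`, assuming the service metrics are averaged and the hardware metrics are counted against placeholder thresholds (the first to fifth thresholds of the claim are not specified here), could look like this:

```python
def extract_features(records, thresholds=None):
    """records: list of per-sample metric dicts collected during one period."""
    t = thresholds or {"cpu": 0.8, "mem": 0.8, "disk": 0.8, "net_in": 0.7, "load": 0.7}
    n = max(len(records), 1)

    def avg(key):
        return sum(r.get(key, 0) for r in records) / n

    def exceed(key, threshold):
        return sum(1 for r in records if r.get(key, 0) > threshold)

    return [
        # service features
        avg("item_query_quantity"),
        avg("request_query_depth"),
        sum(r.get("request_count", 0) for r in records),
        # hardware features: how often each metric crossed its threshold
        exceed("cpu_utilization", t["cpu"]),
        exceed("memory_utilization", t["mem"]),
        exceed("disk_utilization", t["disk"]),
        exceed("network_inflow_rate", t["net_in"]),
        exceed("load_value", t["load"]),
    ]
```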
4. The method of claim 1, wherein
the adjusting of the containers in the container group according to the working state of the containers comprises:
determining the busy degree of the container group according to the working state of the container;
determining the working state of the container group according to the busy degree of the container group;
and adjusting the containers in the container group according to the working state of the container group.
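A minimal illustration of these three sub-steps, with hypothetical thresholds for mapping a group's busy degree onto a busy, normal or idle state:

```python
def group_busy_degree(container_states):
    """container_states: list of 'busy' / 'normal' / 'idle' labels for one group."""
    return container_states.count("busy") / len(container_states) if container_states else 0.0

def group_state(busy_degree, busy_threshold=0.6, idle_threshold=0.2):
    if busy_degree >= busy_threshold:
        return "busy"
    if busy_degree <= idle_threshold:
        return "idle"
    return "normal"

def classify_groups(groups):
    """groups: {group name: list of container states}; returns the busy and idle groups."""
    states = {name: group_state(group_busy_degree(s)) for name, s in groups.items()}
    busy = [n for n, st in states.items() if st == "busy"]
    idle = [n for n, st in states.items() if st == "idle"]
    # Containers would then be moved from the idle groups to the busy groups;
    # possible movement rules are sketched after claims 6 to 8.
    return busy, idle
```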
5. The method of claim 4, wherein
the working state of the container comprises: busy, normal and idle;
the determining the busy degree of the container group according to the working state of the container comprises the following steps:
and calculating the busy degree of the container group according to the total number of the containers of the container group and the number of the busy containers.
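One natural reading of this calculation, assuming the busy degree is simply the busy fraction (the claim only requires that it be derived from the group's total container count and its busy container count), is:

```latex
\text{busy\_degree}(g) = \frac{N_{\text{busy}}(g)}{N_{\text{total}}(g)}
```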
6. The method of claim 4, wherein
the working state of the container group comprises: busy, normal and idle;
the adjusting the containers in the container group according to the working state of the container group comprises:
calculating the allocation limit of the busy container group according to the number of the busy containers in the busy container group;
calculating the allocation limit of the idle container group according to the number of idle containers in the idle container group;
and adjusting the containers in the busy container group and the idle container group according to the allocation limit of the busy container group and the allocation limit of the idle container group.
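The claim leaves the exact limit functions open; one plausible pair of rules, using an assumed demand ratio for busy groups and an assumed reserve for idle groups, is:

```python
import math

def busy_group_allocation_limit(num_busy_containers, demand_ratio=0.5):
    """How many extra containers a busy group may receive (assumed rule)."""
    return math.ceil(num_busy_containers * demand_ratio)

def idle_group_allocation_limit(num_idle_containers, reserve=1):
    """How many containers an idle group may give away (assumed rule)."""
    return max(num_idle_containers - reserve, 0)
```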
7. The method of claim 6, wherein
the adjusting the busy container group and the containers in the idle container group according to the allocation limit of the busy container group and the allocation limit of the idle container group comprises the following steps:
calculating the sum of allocation limits of each idle container group;
calculating the number of containers distributed by each busy container group according to the number of the busy container groups and the sum of the allocation quota of each idle container group;
and allocating the containers in each idle container group to each busy container group according to the number of the containers allocated by each busy container group.
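These three steps amount to spreading the donatable capacity of the idle groups evenly over the busy groups. The sketch below, with hypothetical names, returns a list of (idle group, busy group, container count) moves:

```python
def distribute_evenly(busy_groups, idle_limits):
    """busy_groups: list of busy group names; idle_limits: {idle group name: allocation limit}."""
    if not busy_groups:
        return []
    # Number of containers allocated to each busy group.
    per_busy_group = sum(idle_limits.values()) // len(busy_groups)
    donors = [(name, quota) for name, quota in idle_limits.items() if quota > 0]
    moves = []
    for target in busy_groups:
        need = per_busy_group
        while need > 0 and donors:
            donor, quota = donors[0]
            take = min(need, quota)
            moves.append((donor, target, take))       # move `take` containers
            need -= take
            if quota == take:
                donors.pop(0)
            else:
                donors[0] = (donor, quota - take)
    return moves
```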
8. The method of claim 6, wherein
the adjusting the busy container group and the containers in the idle container group according to the allocation limit of the busy container group and the allocation limit of the idle container group comprises the following steps:
arranging the busy container groups in order from high to low to obtain the priority of each busy container group;
arranging the idle container groups in descending order of allocation limit to obtain the priority of each idle container group;
and adjusting the containers in the busy container groups and the idle container groups according to the priority and allocation limit of each busy container group and the priority and allocation limit of each idle container group.
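Alternatively, as this claim describes, the groups can be paired by priority, matching the highest-ranked busy group with the highest-ranked idle groups first; the sorting key and the matching rule below are assumptions:

```python
def adjust_by_priority(busy_limits, idle_limits):
    """busy_limits / idle_limits: {group name: allocation limit}."""
    # Priority = rank after sorting the allocation limits from high to low.
    busy = sorted(busy_limits.items(), key=lambda kv: kv[1], reverse=True)
    idle = sorted(idle_limits.items(), key=lambda kv: kv[1], reverse=True)
    moves, i = [], 0
    for busy_name, need in busy:
        while need > 0 and i < len(idle):
            idle_name, available = idle[i]
            take = min(need, available)
            if take:
                moves.append((idle_name, busy_name, take))
            need -= take
            if available - take == 0:
                i += 1
            else:
                idle[i] = (idle_name, available - take)
    return moves
```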
9. The method of any one of claims 1-8, wherein
the current data includes: any one or more of request timestamp, container name, article query quantity, request query depth, millisecond request quantity, CPU utilization rate, memory utilization rate, network inflow rate, disk utilization rate and load value; wherein the request query depth is determined by the number of entries and the type of entries.
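For illustration, one such per-request sample could be carried in a small record like the one below; the field names are hypothetical renderings of the items listed in this claim:

```python
from dataclasses import dataclass

@dataclass
class ContainerSample:
    request_timestamp: float
    container_name: str
    item_query_quantity: int
    request_query_depth: int          # derived from the number and type of entries
    requests_per_millisecond: float
    cpu_utilization: float
    memory_utilization: float
    network_inflow_rate: float
    disk_utilization: float
    load_value: float
```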
10. A scheduler, comprising:
an acquisition module configured to acquire current data of the container within the sliding time window from a service provider;
a determining module configured to determine the working state of the container according to the current data of the container and the trained detection model; wherein the working state of the container is used for measuring the busy degree of the container; and
an adjusting module configured to adjust the containers in the container group according to the working states of the containers, and to provide the grouping result to the service provider so that the service provider schedules the container according to the grouping result.
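A structural sketch of the same steps along the module boundaries of this claim; the collaborator objects and the `rebalance_fn` callable are assumptions, not the patented implementation:

```python
class Scheduler:
    """Acquisition, determining and adjusting responsibilities, as in claim 10."""

    def __init__(self, service_provider, detection_model, rebalance_fn):
        self.service_provider = service_provider   # assumed source of current data
        self.detection_model = detection_model     # assumed mapping: features -> state
        self.rebalance_fn = rebalance_fn           # assumed group-adjustment policy

    def acquire(self, container):
        return self.service_provider.current_data(container)

    def determine(self, current_data):
        return self.detection_model(current_data)  # "busy" / "normal" / "idle"

    def adjust(self, groups, states):
        grouping_result = self.rebalance_fn(groups, states)
        self.service_provider.apply(grouping_result)
        return grouping_result
```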
11. An electronic device, comprising:
one or more processors;
a storage device storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-9.
12. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-9.
CN202011594780.9A 2020-12-29 2020-12-29 Container dispatching method and dispatcher Active CN113760496B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011594780.9A CN113760496B (en) 2020-12-29 2020-12-29 Container dispatching method and dispatcher

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011594780.9A CN113760496B (en) 2020-12-29 2020-12-29 Container dispatching method and dispatcher

Publications (2)

Publication Number Publication Date
CN113760496A true CN113760496A (en) 2021-12-07
CN113760496B CN113760496B (en) 2024-10-18

Family

ID=78786228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011594780.9A Active CN113760496B (en) 2020-12-29 2020-12-29 Container dispatching method and dispatcher

Country Status (1)

Country Link
CN (1) CN113760496B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115562889A (en) * 2022-10-12 2023-01-03 中航信移动科技有限公司 Application control method, electronic device and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102130938A (en) * 2010-12-03 2011-07-20 中国科学院软件研究所 Resource supply method oriented to Web application host platform
US20140108474A1 (en) * 2012-10-16 2014-04-17 Rackspace Us, Inc. System and Method for Exposing Cloud Stored Data to a Content Delivery Network
US8990290B1 (en) * 2009-09-03 2015-03-24 Rao V. Mikkilineni Network model for distributed computing networks
CN105868222A (en) * 2015-09-17 2016-08-17 乐视网信息技术(北京)股份有限公司 Task scheduling method and device
US20170126506A1 (en) * 2015-10-29 2017-05-04 Cisco Technology, Inc. Container management and application ingestion engine
CN106790726A (en) * 2017-03-30 2017-05-31 电子科技大学 A kind of priority query's dynamic feedback of load equilibrium resource regulating method based on Docker cloud platforms
CN107193652A (en) * 2017-04-27 2017-09-22 华中科技大学 The flexible resource dispatching method and system of flow data processing system in container cloud environment
CN108667873A (en) * 2017-03-31 2018-10-16 北京京东尚科信息技术有限公司 A kind of shunt method, part flow arrangement, electronic equipment and readable storage medium storing program for executing
CN108683720A (en) * 2018-04-28 2018-10-19 金蝶软件(中国)有限公司 A kind of container cluster service configuration method and device
CN108762914A (en) * 2018-04-17 2018-11-06 广东智媒云图科技股份有限公司 A kind of Intelligent telescopic method, apparatus, electronic equipment and the storage medium of system architecture
US20200220926A1 (en) * 2019-01-04 2020-07-09 eCIFM Solutions Inc. State container synchronization system and method

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8990290B1 (en) * 2009-09-03 2015-03-24 Rao V. Mikkilineni Network model for distributed computing networks
CN102130938A (en) * 2010-12-03 2011-07-20 中国科学院软件研究所 Resource supply method oriented to Web application host platform
US20140108474A1 (en) * 2012-10-16 2014-04-17 Rackspace Us, Inc. System and Method for Exposing Cloud Stored Data to a Content Delivery Network
CN105868222A (en) * 2015-09-17 2016-08-17 乐视网信息技术(北京)股份有限公司 Task scheduling method and device
US20170126506A1 (en) * 2015-10-29 2017-05-04 Cisco Technology, Inc. Container management and application ingestion engine
CN106790726A (en) * 2017-03-30 2017-05-31 电子科技大学 A kind of priority query's dynamic feedback of load equilibrium resource regulating method based on Docker cloud platforms
CN108667873A (en) * 2017-03-31 2018-10-16 北京京东尚科信息技术有限公司 A kind of shunt method, part flow arrangement, electronic equipment and readable storage medium storing program for executing
CN107193652A (en) * 2017-04-27 2017-09-22 华中科技大学 The flexible resource dispatching method and system of flow data processing system in container cloud environment
CN108762914A (en) * 2018-04-17 2018-11-06 广东智媒云图科技股份有限公司 A kind of Intelligent telescopic method, apparatus, electronic equipment and the storage medium of system architecture
CN108683720A (en) * 2018-04-28 2018-10-19 金蝶软件(中国)有限公司 A kind of container cluster service configuration method and device
US20200220926A1 (en) * 2019-01-04 2020-07-09 eCIFM Solutions Inc. State container synchronization system and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU CE; HUO LIMIN: "Steady-state coordinated control strategy for multiple STATCOMs based on WAMS", Electric Power Science and Engineering, no. 06, 28 June 2017 (2017-06-28) *
CHEN GUOFA; ZHANG WENQING; LIU FENG; FAN SHANGMING; ZHANG YONGNIAN; WU LIZHEN: "Multi-objective reactive power optimization of distribution networks with DG based on an improved bee colony algorithm", Smart Power, no. 03, 20 March 2019 (2019-03-20) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115562889A (en) * 2022-10-12 2023-01-03 中航信移动科技有限公司 Application control method, electronic device and storage medium
CN115562889B (en) * 2022-10-12 2024-01-23 中航信移动科技有限公司 Application control method, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113760496B (en) 2024-10-18

Similar Documents

Publication Publication Date Title
CN105900064B (en) The method and apparatus for dispatching data flow task
US11556541B2 (en) Data query method, apparatus and device
US11171849B2 (en) Collecting samples hierarchically in a datacenter
CN108776934A (en) Distributed data computational methods, device, computer equipment and readable storage medium storing program for executing
CN103699445A (en) Task scheduling method, device and system
WO2019062000A1 (en) Incoming call assignment method for attendants, electronic device and computer readable storage medium
CN110716800B (en) Task scheduling method and device, storage medium and electronic equipment
CN116909751B (en) Resource allocation method in cloud computing system
CN109697637A (en) Object type determination method and device, electronic equipment and computer storage medium
CN113760640A (en) Monitoring log processing method, device, equipment and storage medium
CN114625523A (en) Resource allocation method, device and computer readable storage medium
CN113760496A (en) Container scheduling method and scheduler
CN105872082B (en) Fine granularity resource response system based on container cluster load-balancing algorithm
CN113422808B (en) Internet of things platform HTTP information pushing method, system, device and medium
CN110413393A (en) Cluster resource management method, device, computer cluster and readable storage medium storing program for executing
CN113886086A (en) Cloud platform computing resource allocation method, system, terminal and storage medium
WO2016206441A1 (en) Method and device for allocating virtual resource, and computer storage medium
CN105930216A (en) Automatic scheduling method and system for electronic signature system and server
CN114338696B (en) Method and device for distributed system
CN110502339A (en) Data service resource allocation methods, device, system and storage medium
CN112637793B (en) Scene charging method, system, electronic equipment and storage medium based on 5G
CN115114005A (en) Service scheduling control method, device, equipment and computer readable storage medium
CN111061697B (en) Log data processing method and device, electronic equipment and storage medium
CN114520773A (en) Service request response method, device, server and storage medium
CN115048284A (en) Method, computing device and storage medium for testing applications of a system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant