CN110362411B - CPU resource scheduling method based on Xen system - Google Patents

CPU resource scheduling method based on Xen system

Info

Publication number
CN110362411B
CN110362411B (application CN201910680641.9A)
Authority
CN
China
Prior art keywords
cpu
rnn
vcpu
task
cpu resource
Prior art date
Legal status
Active
Application number
CN201910680641.9A
Other languages
Chinese (zh)
Other versions
CN110362411A (en)
Inventor
张伟哲
方滨兴
何慧
刘川意
陈煌
王德胜
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology
Priority to CN201910680641.9A
Publication of CN110362411A
Application granted
Publication of CN110362411B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/049: Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A CPU resource scheduling method based on the Xen system, relating to the technical field of CPU resource scheduling. The invention aims to solve the problem that existing CPU resource scheduling methods cannot distinguish the specific tasks that are running and cannot adjust CPU resources according to the real-time running environment. The vcpu queue is taken as input data to train an RNN classification model; the global queue information credit[] and pri[] of the Xen system is obtained and classified with the trained RNN classification model; whether the arrays are empty is judged: if so, the procedure ends, otherwise the classification result is used to update the Q-table through the Q-learning algorithm; the time slice is then adjusted with the currently updated Q-table to complete the scheduling of CPU resources. The effective utilization rate of resources is improved and the energy consumption of the cloud data center is reduced.

Description

CPU resource scheduling method based on Xen system
Technical Field
The invention relates to a CPU resource scheduling method based on the Xen system, and belongs to the technical field of CPU resource scheduling.
Background
In recent years, with the development and rise of cloud computing, virtualization technology has returned to prominence. Virtualization technology aims to virtualize physical resources and, through a Virtual Machine Monitor (VMM), allocate the virtualized physical resources reasonably among multiple virtual machines, while ensuring that the virtual machines remain independent of one another and do not affect each other's tasks. The mainstream virtual machine architectures currently include VMware's ESX, Microsoft's Hyper-V, the open-source Xen, and KVM. As cloud computing environments grow in scale, energy consumption keeps rising, and resource scheduling, as the core of resource usage, becomes more and more important.
In existing CPU resource scheduling methods, most optimizations at home and abroad adjust the allocation of the credit value: (1) the credit value is increased or decreased according to the task completion degree of a virtual machine and the time it actually spends running on the CPU; (2) a portion of the credit value is reserved for I/O-type tasks, so that I/O-type tasks have sufficient CPU resources to complete their work; (3) the total credit value is adjusted according to the ratio of I/O domains to CPU domains, so that when there are more I/O tasks the credit allocation amount is reduced, the overall allocation period is shortened, resources are allocated to tasks more quickly, and in practice the latency of I/O tasks is reduced. In addition, in some credit adjustments, the CPU queues of the multiple cores are merged into a single shared state to form a resource pool that is allocated centrally. Existing CPU resource scheduling methods cannot distinguish the specific tasks running in the Xen system and cannot adjust CPU resources according to the real-time running environment, so IO-intensive tasks in the Xen system run with low efficiency.
Disclosure of Invention
The technical problem to be solved by the invention is as follows:
the invention aims to solve the problem that the existing CPU resource scheduling method cannot distinguish the specifically running tasks in the Xen system and cannot be combined with a real-time running environment to adjust the CPU resource, so that the IO intensive task running efficiency in the Xen system is low.
The technical scheme adopted by the invention for solving the technical problems is as follows:
a CPU resource scheduling method based on Xen system, the realization process of the method is:
step one, acquiring a vcpu queue in a Xen system;
step two, taking the vcpu queue as input data, and training an RNN classification model;
step three, obtaining global queue information credit [ ] and pri [ ] of the Xen system, classifying the global queue information credit [ ] and pri [ ] of the Xen system by using a trained RNN classification model, and adding a label to a classification result;
step four, judging whether the arrays credit [ ] and pri [ ] are empty, if yes, ending, otherwise, turning to step five,
step five, updating the Q-table according to the classification result through a Q-learning algorithm; the RNN classification result is obtained and used as input data of a Q-learning algorithm;
and step six, utilizing the currently updated q-table to adjust the time slice to complete the scheduling of the CPU resource.
Further, in step two, the specific process of training the RNN classification model with the vcpu queue as input data is as follows:
for the design of the RNN classification model, basic LSTM units are used as the basic building block of the network and a static RNN is used as the neural network model; the number of neurons in the hidden layer is set to 128, the length of each input sequence is 16, and the batch size is defined as 50; the activation function selected for classification is softmax and the selected optimizer is the Adam optimizer (AdamOptimizer); the loss function of the RNN classification model is defined by whether the predicted task type is consistent with the type in the label; the data input of the RNN is extracted from the pri state and the remaining credit value in each vcpu.
Further, in step six, the scheduling of CPU resources is completed by adjusting the time slice with the currently updated q-table, specifically:
the size of the time slice is adjusted, the time slice is bounded between 10 and 50, and the floating amount of each time slice adjustment is set between -5 and 5;
the q-table update procedure is defined by equations (1) to (4):

t_cpu = Σ_i T_i    (1)

where i is the index of a running CPU task and t_cpu is the sum of the running times of the CPU-intensive tasks;

t_io = Σ_i T_i    (2)

where i is the index of a running IO task and t_io is the sum of the running times of the IO-intensive tasks;

r_t = (t'_cpu / num'_CPU + t'_io / num'_IO) - (t_cpu / num_CPU + t_io / num_IO)    (3)

where r_t is the Reward designed in reinforcement learning, t'_cpu is the sum of the CPU-intensive task running times in the last cycle, t'_io is the sum of the IO-intensive task running times in the last cycle, num'_CPU is the number of CPU-intensive tasks run in the last cycle, num'_IO is the number of IO-intensive tasks run in the last cycle, and num_CPU and num_IO are the corresponding counts in the current cycle;

Q(s_t, a_t) = Q(s', a') + α [ r_t + γ max_a Q(s_t, a) - Q(s', a') ]    (4)

where Q(s_t, a_t) is the Q value obtained by taking action a_t in state s_t, Q(s', a') is the Q value obtained by taking action a' in the last-cycle state s', α is the parameter that adjusts the update step size, and γ is the discount parameter applied to the reward of future tasks; equation (4) is the state update equation of the q-table.
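A minimal Python sketch of this update, assuming the reconstruction of equations (1) to (4) above, follows; the state encoding, the learning rate, the discount factor, the exploration strategy and the helper names are assumptions for illustration only.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1          # assumed hyperparameters
ACTIONS = list(range(-5, 6))                   # time-slice delta per cycle
SLICE_MIN, SLICE_MAX = 10, 50                  # time-slice bounds (ms)

q_table = defaultdict(float)                   # (state, action) -> Q value

def reward(prev, cur):
    """Previous-cycle average task time minus current-cycle average:
    positive when the average running time has been shortened."""
    prev_avg = prev["t_cpu"] / max(prev["n_cpu"], 1) + prev["t_io"] / max(prev["n_io"], 1)
    cur_avg = cur["t_cpu"] / max(cur["n_cpu"], 1) + cur["t_io"] / max(cur["n_io"], 1)
    return prev_avg - cur_avg

def choose_action(state):
    """Epsilon-greedy selection over the time-slice deltas."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def q_update(prev_state, prev_action, r, state):
    """Equation (4): one-step Q-learning update of the q-table."""
    best_next = max(q_table[(state, a)] for a in ACTIONS)
    q_table[(prev_state, prev_action)] += ALPHA * (
        r + GAMMA * best_next - q_table[(prev_state, prev_action)]
    )

def apply_action(time_slice_ms, action):
    """Clip the adjusted time slice into the [10, 50] ms threshold."""
    return min(SLICE_MAX, max(SLICE_MIN, time_slice_ms + action))
```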
Further, in step one, the vcpu queue obtained from the Xen system consists of the remaining credit value of each vcpu after each running period and the state value of each vcpu.
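As an illustration only, the following sketch shows one way such (credit, pri) samples could be cut into the length-16 sequences that feed the RNN, with the task-type label attached manually during preprocessing as described later in the detailed description; the sample layout, the label encoding and the function name are assumptions.

```python
import torch

SEQ_LEN = 16

def build_dataset(samples, label):
    """samples: chronological list of (remaining_credit, pri_state) pairs
    collected from one vcpu's run queue; label: 0 = CPU-intensive,
    1 = IO-intensive (assumed encoding), added manually during preprocessing.
    Returns an (N, 16, 2) float tensor and an (N,) label tensor."""
    windows = [
        samples[i:i + SEQ_LEN]
        for i in range(0, len(samples) - SEQ_LEN + 1, SEQ_LEN)
    ]
    x = torch.tensor(windows, dtype=torch.float32)
    y = torch.full((len(windows),), label, dtype=torch.long)
    return x, y
```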
The invention has the following beneficial effects: for problems such as long task response latency and poor resource allocation caused by the low running efficiency of IO-intensive tasks in the Xen system, optimizing the CPU resources ensures their efficient use while keeping the running efficiency of the whole system within the requirements of the Service Level Agreement (SLA). Through this scheduling of CPU resources, the applications in the user domain DomainU are well supplied with resources while they run, the response latency of tasks can be guaranteed to meet user requirements, the user quality-of-service level is reached on the basis of a stable and efficient cloud data center, the effective utilization rate of resources is improved, and the energy consumption of the cloud data center is reduced as far as possible.
When the method is implemented, the tasks run by DomainU are first classified, because CPU-intensive and IO-intensive tasks have completely different demands on CPU resources. If the two types of tasks are treated identically, as in the Credit algorithm, IO-intensive tasks often cannot promptly obtain the small amount of CPU resources they need, so their response latency is high and can basically reach twice the time slice size. The invention therefore adjusts the time slice size.
The invention obtains the remaining credit and the state of every vcpu run queue on every physical core, and from them judges the global number and proportion of CPU-intensive and IO-intensive tasks among the tasks running in all domains. The time slice size is then adjusted according to the ratio of CPU-intensive to IO-intensive tasks. The default time slice of the Credit algorithm is 30 ms. When the time slice is made smaller, tasks are scheduled more frequently and the number of context switches increases, but the probability that each vcpu is scheduled also increases. Therefore, when there are many CPU-intensive tasks, a long time slice reduces the number of context switches and lets CPU-intensive tasks run through more quickly, that is, tasks execute almost sequentially, which reduces the total task running time. For IO-intensive tasks, however, IO events must be answered continuously to keep the latency within the Service Level Agreement (SLA); in that case the time slice must be shortened so that each vcpu is more likely to be scheduled within each cycle, IO events are answered faster, and the user experience is improved.
Drawings
FIG. 1 is a flow chart of the time slice adjustment algorithm, FIG. 2 is a test result chart of the effect of time slice size on a task, FIG. 3 is a screenshot of CPU usage when an IO-intensive task is running, FIG. 4 is a screenshot of a result of the small-scale environment algorithm performance test, FIG. 5 is a screenshot of a result of the large-scale environment algorithm performance test, and FIG. 6 is a CPU resource usage rate variation graph.
Detailed Description
With reference to FIGs. 1 to 6, the implementation of the CPU resource scheduling method based on the Xen system according to the present invention is described as follows:
when the algorithm is implemented, it is mainly considered to classify tasks operated by DomainU, and because the demands of CPU-intensive and IO-intensive tasks on CPU resources are completely different, if the two types of tasks are treated as the Credit algorithm, the IO-intensive task often cannot acquire a small amount of CPU resources, which results in high response delay of the task, and the delay may reach 2 times the size of a time slice basically.
The algorithm obtains the remaining credit and the state of every vcpu run queue on every physical core, and from them judges the global number and proportion of CPU-intensive and IO-intensive tasks among the tasks running in all domains. The time slice size is adjusted according to the ratio of CPU-intensive to IO-intensive tasks. The default time slice of the Credit algorithm is 30 ms; when the time slice is made smaller, tasks are scheduled more frequently and the number of context switches increases, but the probability that each vcpu is scheduled also increases. Therefore, when there are many CPU-intensive tasks, a long time slice reduces the number of context switches and lets CPU-intensive tasks run through more quickly, that is, tasks execute almost sequentially, which reduces the total task running time. For IO-intensive tasks, however, IO events must be answered continuously to keep the latency within the Service Level Agreement (SLA); in that case the time slice must be shortened so that each vcpu is more likely to be scheduled within each cycle, IO events are answered faster, and the user experience is improved.
The overall algorithm uses an RNN combined with Q-learning to schedule CPU resources. For the design of the RNN, basic LSTM units are used as the basic building block of the network and a static RNN is used as the neural network model; the number of neurons in the hidden layer is set to 128, the length of each input sequence is 16, the batch size is defined as 50, the activation function selected for classification is softmax, the selected optimizer is the Adam optimizer (AdamOptimizer), and the loss function is defined by whether the predicted task type is consistent with the type in the label. The type in the label is added manually during data preprocessing, and the pri state and the remaining credit value of each vcpu are extracted as the data input of the RNN. The RNN classification result is then used as the input data of the Q-learning algorithm. In the Q-learning algorithm, the action taken is to adjust the size of the time slice; the time slice is bounded between 10 and 50, and because the ratio of task-type counts does not change abruptly, the action is set between -5 and 5.
The q-table updating process is defined by formulas 2-1 to 2-4:

t_cpu = Σ_i T_i  (i is the index of a running CPU task)    (2-1)

t_io = Σ_i T_i  (i is the index of a running IO task)    (2-2)

r_t = (t'_cpu / num'_CPU + t'_io / num'_IO) - (t_cpu / num_CPU + t_io / num_IO)    (2-3)

Q(s_t, a_t) = Q(s', a') + α [ r_t + γ max_a Q(s_t, a) - Q(s', a') ]    (2-4)

Formula 2-3 is the Reward designed in reinforcement learning, with the primed quantities taken from the previous cycle, and formula 2-4 is the state update formula of the q-table. The whole algorithm flow is as follows:
(The original publication presents the overall scheduling procedure here as a numbered pseudo-code listing; its line 4 and lines 6-10 are referenced below.)
In line 4 of the above listing, the multicore case requires obtaining the vcpu run queues of all global physical cores to determine the global running information; when no task is running globally, the real-time adjustment by Q-learning is exited. In lines 6-10, Q-learning trains on the system's running tasks in real time and updates the Q-table, so that once the training period is long enough, the action with the maximum reward for the current state can be chosen from the Q-table. The reward is the average CPU-task and IO-task running time over all tasks in the previous cycle minus the average of the current cycle, i.e. the amount by which the task running time has been shortened, meaning that tasks complete faster. The flow of the whole algorithm is shown in FIG. 1.
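Read together with FIG. 1, the cycle described above can be summarized by the following Python sketch, which chains the RNN classifier and the Q-learning helpers sketched earlier into one scheduling pass; the run-queue reading, runtime measurement and time-slice setting hooks, the summarize helper and the state encoding are hypothetical stand-ins for the Xen toolstack interface and are not taken from the patent.

```python
def summarize(labels, runtimes):
    """Aggregate per-vcpu labels (0 = CPU-intensive, 1 = IO-intensive, assumed
    encoding) and measured running times into the per-cycle sums t_cpu, t_io
    and counts n_cpu, n_io used in formulas (1)-(3)."""
    cur = {"t_cpu": 0.0, "t_io": 0.0, "n_cpu": 0, "n_io": 0}
    for label, t in zip(labels, runtimes):
        if label == 0:
            cur["t_cpu"] += t
            cur["n_cpu"] += 1
        else:
            cur["t_io"] += t
            cur["n_io"] += 1
    return cur

def scheduling_cycle(state, prev, time_slice_ms, prev_action):
    """One pass of the time-slice adjustment flow of FIG. 1, assuming
    reward/choose_action/q_update/apply_action from the Q-learning sketch
    above, a trained classifier classify_tasks, and hypothetical hooks
    read_global_runqueues / measure_task_runtimes / set_time_slice_ms."""
    credit, pri = read_global_runqueues()       # global credit[] and pri[] of all pCPU run queues
    if not credit and not pri:                   # arrays empty: nothing runs globally
        return None                              # exit the real-time Q-learning adjustment

    labels = classify_tasks(credit, pri)         # RNN: CPU-intensive vs IO-intensive per vcpu
    runtimes = measure_task_runtimes()           # per-task running times in this cycle
    cur = summarize(labels, runtimes)

    r = reward(prev, cur)                        # formula (3): last-cycle average minus current
    new_state = (cur["n_cpu"], cur["n_io"])      # assumed state encoding: task-count pair
    q_update(state, prev_action, r, new_state)   # formula (4)

    action = choose_action(new_state)                     # delta in [-5, 5]
    time_slice_ms = apply_action(time_slice_ms, action)   # clipped into [10, 50] ms
    set_time_slice_ms(time_slice_ms)             # push the new slice to the Xen scheduler

    return new_state, cur, time_slice_ms, action
```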
The technical effects of the invention are verified as follows:
The test environment of the experiment is divided into two kinds: small-scale tests and large-scale tests. A small-scale test means that the number of Domains is small and each Domain can obtain a dedicated physical CPU, so there is basically no competition for physical CPU resources among the Domains; if all Domains are bound to a single core, a competition relationship does exist, but the number of Domains in that case is generally 2, so this also belongs to the small-scale environment. A large-scale environment means that the number of vcpus contained in the Domains exceeds the number of physical CPUs, so that a physical CPU competition relationship exists among the Domains.
The experiment uses the dacapo test suite, which is mainly applied to memory and CPU management tests, contains a series of open-source real applications, and is written in Java. Some of its main CPU test sets are shown in Table 1:
TABLE 1 partial dacapo test set
(The table is reproduced as an image in the original publication and is not included here.)
Phoronix-test-suite is a test platform to which new benchmark test items can conveniently be added by means of openbenchmarking.org. It can automatically generate result reports, provide various output formats, and support automated testing.
TABLE 2 partial pts test set
(The table is reproduced as an image in the original publication and is not included here.)
The host machine uses a 64-core server; the specific server information and installed system information are shown in Table 3:
TABLE 3 host parameter information
(The table is reproduced as an image in the original publication and is not included here.)
Virtual machine configuration information is shown in Table 4.
TABLE 4 virtual machine parameter information
(The table is reproduced as an image in the original publication and is not included here.)
When verifying the influence of the time slice size on tasks, 2 client domains run in the system and each is given only one vcpu; the vcpus of the two client domains are bound to the same physical CPU for the test, the time slice value is varied from 10 to 50 with an interval of 10, and the test result is shown in FIG. 2.
As can be seen from FIG. 2, the running time of CPU-intensive, memory-intensive and disk-intensive tasks is longer when the time slice is set to 10 ms than when it is set to 50 ms, and the overall curve shows a linearly decreasing trend. For network-intensive tasks, a short time slice speeds up task switching so that every task is scheduled as soon as possible. Therefore, when network-intensive tasks make up a large share of the system, a small time slice benefits the execution efficiency of the whole system, whereas when CPU-intensive, memory-intensive and disk-intensive tasks dominate, the time slice should be set long to reduce the system's switching overhead.
In the scheduling algorithm test of CPU resources, the comparison mainly covers the system's native Credit algorithm, a Cap adjustment algorithm and the time-slice adjustment algorithm, and the test environment is divided into a small-scale environment and a large-scale environment, tested separately.
As for the CPU resource usage of IO-intensive tasks, their real-time CPU utilization was monitored. As can be seen from FIG. 3, during the whole running period of an IO-intensive task the CPU is used only when a network packet arrives locally; for the rest of the time the CPU is basically unused and essentially idle. Consequently, during CPU allocation the IO-intensive task may not be allocated CPU resources and may have to wait a period before acquiring the CPU to process the network packet and carry out the subsequent IO work, which finally causes the response latency of IO-intensive tasks to be high.
When testing the CPU resource scheduling algorithm, the system runs in a small-scale environment and a large-scale environment respectively. In the small-scale environment the number of client domains run by the system is less than 10, and there is little competition among the vcpus for physical CPU resources. In the large-scale environment the number of client domains exceeds 30, each client domain has 2-8 vcpus, the total number of vcpus is greater than the number of physical CPUs of the host, competition for resources among the client domains is fierce, the client domains run a large amount of work, and resource competition among the vcpus inside each client domain is obvious. The experimental test results are shown in FIG. 4 and FIG. 5:
From the experimental results, when tasks run in the small-scale environment the Cap-value adjustment algorithm outperforms both the time-slice adjustment algorithm and the Credit algorithm, because in a low-load environment each client domain has few vcpus and few context switches occur between the vcpus within a client domain; at that point the per-vcpu CPU resource ceiling, i.e. the Cap value, has a greater influence than adjusting the time slice size. When the system runs in the large-scale environment, the time-slice adjustment algorithm outperforms the Cap-value adjustment algorithm and the Credit algorithm. In that case the system contains many client domains and therefore many vcpus, the types of tasks running in the system are complex, and the proportion of task types matters more for the system's task execution; modifying the time slice has a large influence on the task running of the whole system, whereas the influence of the Cap value is limited to the running efficiency of tasks on a single client domain and cannot improve the task execution efficiency of the whole system as well. Therefore, from the test results of the overall algorithm it can be found that when the system runs a small number of client domains, the Cap-value adjustment algorithm can be selected to adjust the CPU resource ceiling and allocate CPU resources, and when the system runs a large number of client domains, the time-slice adjustment algorithm should be adopted to improve the execution efficiency of the system's overall tasks.
During the experiment, the CPU resource usage of Domain0 was monitored to see how much CPU the CPU resource scheduling algorithm itself uses while running; the CPU resource variation is shown in FIG. 6:
As can be seen from FIG. 6, the CPU resource usage on Domain0 averages about 18% while the CPU resource scheduling algorithm is not running and about 37.5% after the algorithm starts running, so the amount of CPU resources occupied by the CPU resource scheduling algorithm is about 19.5%, a CPU overhead within the acceptable range for a 64-core server.

Claims (3)

1. A CPU resource scheduling method based on the Xen system, characterized in that the method is implemented by the following steps:
step one, acquiring the vcpu queue in the Xen system;
step two, taking the vcpu queue as input data and training an RNN classification model;
step three, obtaining the global queue information credit[] and pri[] of the Xen system, classifying it with the trained RNN classification model, and adding a label to the classification result;
step four, judging whether the arrays credit[] and pri[] are empty; if so, ending; otherwise, going to step five;
step five, updating the Q-table from the classification result through the Q-learning algorithm, the RNN classification result being used as input data of the Q-learning algorithm;
step six, adjusting the time slice with the currently updated q-table to complete the scheduling of CPU resources, specifically:
adjusting the size of the time slice, the time slice being bounded between 10 and 50, and the floating amount of each time slice adjustment being set between -5 and 5;
the q-table update procedure is defined by equations (1) to (4):

t_cpu = Σ_i T_i    (1)

where i is the index of a running CPU task and t_cpu is the sum of the running times of the CPU-intensive tasks;

t_io = Σ_i T_i    (2)

where i is the index of a running IO task and t_io is the sum of the running times of the IO-intensive tasks;

r_t = (t'_cpu / num'_CPU + t'_io / num'_IO) - (t_cpu / num_CPU + t_io / num_IO)    (3)

where r_t is the Reward designed in reinforcement learning, t'_cpu is the sum of the CPU-intensive task running times in the last cycle, t'_io is the sum of the IO-intensive task running times in the last cycle, num'_CPU is the number of CPU-intensive tasks run in the last cycle, num'_IO is the number of IO-intensive tasks run in the last cycle, and num_CPU and num_IO are the corresponding counts in the current cycle;

Q(s_t, a_t) = Q(s', a') + α [ r_t + γ max_a Q(s_t, a) - Q(s', a') ]    (4)

where Q(s_t, a_t) is the Q value obtained by taking action a_t in state s_t, Q(s', a') is the Q value obtained by taking action a' in the last-cycle state s', α is the parameter that adjusts the update step size, and γ is the discount parameter applied to the reward of future tasks; equation (4) is the state update equation of the q-table.
2. The CPU resource scheduling method based on the Xen system according to claim 1, characterized in that, in step two, the specific process of training the RNN classification model with the vcpu queue as input data is:
for the design of the RNN classification model, basic LSTM units are used as the basic building block of the network and a static RNN is used as the neural network model; the number of neurons in the hidden layer is set to 128, the length of each input sequence is 16, and the batch size is defined as 50; the activation function selected for classification is softmax and the selected optimizer is the Adam optimizer; the loss function of the RNN classification model is defined by whether the predicted task type is consistent with the type in the label; the data input of the RNN is extracted from the pri state and the remaining credit value in each vcpu.
3. The CPU resource scheduling method based on the Xen system according to claim 2, characterized in that, in step one, the vcpu queue obtained from the Xen system consists of the remaining credit value of each vcpu after each running period and the state value of each vcpu.
CN201910680641.9A 2019-07-25 2019-07-25 CPU resource scheduling method based on Xen system Active CN110362411B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910680641.9A CN110362411B (en) 2019-07-25 2019-07-25 CPU resource scheduling method based on Xen system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910680641.9A CN110362411B (en) 2019-07-25 2019-07-25 CPU resource scheduling method based on Xen system

Publications (2)

Publication Number Publication Date
CN110362411A CN110362411A (en) 2019-10-22
CN110362411B (en) 2022-08-02

Family

ID=68221878

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910680641.9A Active CN110362411B (en) 2019-07-25 2019-07-25 CPU resource scheduling method based on Xen system

Country Status (1)

Country Link
CN (1) CN110362411B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111427758A (en) * 2020-03-17 2020-07-17 北京百度网讯科技有限公司 Task calculation amount determining method and device and electronic equipment
CN113282408B (en) * 2021-05-08 2024-04-05 杭州电子科技大学 CPU scheduling method for improving real-time performance of data-intensive application
CN113448705B (en) * 2021-06-25 2023-03-28 皖西学院 Unbalanced job scheduling algorithm

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104657221A (en) * 2015-03-12 2015-05-27 广东石油化工学院 Multi-queue peak-alternation scheduling model and multi-queue peak-alteration scheduling method based on task classification in cloud computing

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103064746B (en) * 2013-01-23 2015-08-12 上海交通大学 The accurate distribution method of processor resource of forecast dispatching is carried out based on current credit
CN103678003B (en) * 2013-12-18 2016-08-31 华中科技大学 The virtual cpu dispatching method that a kind of real-time strengthens
CN105260230B (en) * 2015-10-30 2018-06-26 广东石油化工学院 Data center's resources of virtual machine dispatching method based on segmentation service-level agreement
CN110637308A (en) * 2017-05-10 2019-12-31 瑞典爱立信有限公司 Pre-training system for self-learning agents in a virtualized environment
CN108595267A (en) * 2018-04-18 2018-09-28 中国科学院重庆绿色智能技术研究院 A kind of resource regulating method and system based on deeply study
CN109388484B (en) * 2018-08-16 2020-07-28 广东石油化工学院 Multi-resource cloud job scheduling method based on Deep Q-network algorithm
CN109947567B (en) * 2019-03-14 2021-07-20 深圳先进技术研究院 Multi-agent reinforcement learning scheduling method and system and electronic equipment


Also Published As

Publication number Publication date
CN110362411A (en) 2019-10-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant