CN108304254B - Method and device for controlling process scheduling of rapid virtual machine

Info

Publication number
CN108304254B
CN108304254B (application CN201711477075.9A)
Authority
CN
China
Prior art keywords
queue
process queue
main process
fast
resources
Prior art date
Legal status
Active
Application number
CN201711477075.9A
Other languages
Chinese (zh)
Other versions
CN108304254A (en
Inventor
杨立群
Current Assignee
Zhuhai Hotdoor Technology Co ltd
Original Assignee
Zhuhai Hotdoor Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Hotdoor Technology Co ltd
Priority to CN201711477075.9A
Publication of CN108304254A
Application granted
Publication of CN108304254B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention provides a method for controlling process scheduling of a rapid virtual machine, which comprises the following steps: creating a main process queue and a fast process queue, and emptying both queues to initialize them; receiving a process created by a virtual machine and estimating the amount of resources the process will occupy during execution; acquiring the total amount of resources occupied by the current main process queue, and if this total exceeds a preset main process queue threshold and the amount of resources the process calls during execution is less than a preset resource threshold, inserting the process at the tail of the fast process queue, otherwise inserting it at the tail of the main process queue; and acquiring processes in order from the head of the main process queue and the head of the fast process queue respectively. The invention has the beneficial effects that: by responding as soon as possible to processes that occupy fewer resources, the overall execution efficiency of each virtual machine's application programs is improved.

Description

Method and device for controlling process scheduling of rapid virtual machine
Technical Field
The invention relates to the technical field of virtual machines, in particular to a method and a device for controlling process scheduling of a rapid virtual machine.
Background
With the popularization of cloud computing technology, more and more organizations adopt cloud computing as an alternative to local computing in order to execute a variety of applications efficiently in parallel. Because cloud computing is built on a shared hardware platform consisting of multiple physically independent servers, virtual machine technology is generally adopted to virtualize the hardware platform, so that servers with different device parameters can be managed uniformly and conveniently, and so that the platform achieves isolation, scalability, security and other technical requirements. After the servers are virtualized, the different types of resources of each server jointly form various resource pools (for example, hard disk pools for storing data), and the virtual machines perform joint computation and share resources, so that various computing resources can be flexibly allocated and efficiently utilized.
However, when the application programs of the virtual machines create a large number of processes out of order, the processor generally executes the processes in the order in which their execution requests arrive. As a result, when a running application occupies an excessively large share of processor resources, other applications start slowly or cannot run at all; in severe cases the system may crash. For example, when an application occupies a large amount of resources, such as a long processor execution time, the processor must finish executing it before handling the requests of subsequent applications. Subsequent applications that occupy fewer resources are therefore not executed in time, and some applications respond slowly.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a method and a device for controlling process scheduling of a rapid virtual machine, which improve the overall process execution efficiency of a cloud computing platform by preferentially processing processes that occupy fewer resources.
In order to achieve the above object, the present invention adopts the following technical means.
Firstly, the invention provides a method for controlling process scheduling of a fast virtual machine, which comprises the following steps: creating a main process queue and a fast process queue, and emptying both queues to initialize them; receiving a process created by a virtual machine and estimating the amount of resources the process will occupy during execution; acquiring the total amount of resources occupied by the current main process queue, and if this total exceeds a preset main process queue threshold and the amount of resources the process calls during execution is less than a preset resource threshold, inserting the process at the tail of the fast process queue, otherwise inserting it at the tail of the main process queue; and acquiring processes in order from the head of the main process queue and the head of the fast process queue respectively.
In an embodiment of the method of the present invention, after a process sent by a virtual machine is received, it is determined whether the process data is normal, and an abnormal process is closed.
In an embodiment of the method of the present invention, after a process sent by a virtual machine is received, the process is split according to whether its threads can perform parallel computation.
In one embodiment of the method of the present invention, after a process is inserted into the tail of the fast process queue, the fast process queue is sorted according to the amount of resources each process calls when executed.
In one embodiment of the method of the present invention, when the total amount of resources occupied by the fast process queue is greater than a preset fast process queue threshold, the main process queue threshold is increased and/or the resource threshold is decreased.
Further, in the above method embodiment of the present invention, when the fast process queue is an empty queue within a preset time period, the main process queue threshold is decreased and/or the resource threshold is increased.
In an embodiment of the method of the present invention, when the total amount of resources occupied by the main process queue is greater than a preset main process queue threshold, a process in the main process queue whose amount of resources called during execution is less than the resource threshold is extracted and inserted into the tail of the fast process queue.
In an embodiment of the method of the present invention, when the length of the fast process queue is greater than a preset fast process queue threshold and the total amount of resources occupied by the main process queue is less than the main process queue threshold, the processes at the tail of the fast process queue that exceed the preset fast process queue threshold are extracted and inserted into the tail of the main process queue.
Further, in the above method embodiment of the present invention, the amount of resources a process calls when executing is measured by the processor usage frequency.
Alternatively, in the above method embodiments of the present invention, the amount of resources that a process invokes when executing is the length of the processor time slice taken up by the process.
Secondly, the invention also provides a device for controlling the process scheduling of the rapid virtual machine, which comprises the following modules: the initialization module is used for creating a main process queue and a quick process queue of the process and emptying the main process queue and the quick process queue to realize the initialization of the main process queue and the quick process queue; the estimation module is used for receiving a process sent by the virtual machine and estimating the number of resources occupied by the process during execution; the enqueuing module is used for acquiring the total amount of resources occupied by the current main process queue, if the total amount of resources occupied by the current main process queue is larger than a preset main process queue threshold value and the number of resources called during the process execution is smaller than the preset resource threshold value, the process is inserted into the tail of the fast process queue, otherwise, the process is inserted into the tail of the main process queue; and the dequeuing module is used for respectively acquiring the processes from the head of the main process queue and the head of the fast process queue according to the sequence.
In an embodiment of the apparatus of the present invention, after receiving a process sent by a virtual machine, the estimation module determines whether the process data is normal, and closes an abnormal process.
In an embodiment of the apparatus of the present invention, after receiving a process sent by a virtual machine, the estimation module splits the process according to whether its threads can perform parallel computation.
In an embodiment of the apparatus of the present invention, after a process is inserted into the tail of the fast process queue, the enqueuing module sorts the fast process queue according to the amount of resources each process calls when executed.
In an embodiment of the apparatus of the present invention, when the total amount of resources occupied by the fast process queue is greater than a preset fast process queue threshold, the enqueue module increases the main process queue threshold and/or decreases the resource threshold.
Further, in the above apparatus embodiment of the present invention, when the fast process queue is an empty queue within a preset time period, the enqueue module decreases the main process queue threshold and/or increases the resource threshold.
In an embodiment of the apparatus of the present invention, when the total amount of resources occupied by the main process queue is greater than a preset main process queue threshold, the enqueuing module extracts a process in the main process queue whose amount of resources called during execution is less than the resource threshold and inserts it into the tail of the fast process queue.
In an embodiment of the apparatus of the present invention, when the length of the fast process queue is greater than a preset fast process queue threshold and the total amount of occupied resources of the main process queue is less than the main process queue threshold, the enqueuing module extracts and inserts a process located at the tail of the fast process queue and exceeding the preset fast process queue threshold into the tail of the main process queue.
Further, in the above device embodiment of the present invention, the amount of resources a process calls when executing is measured by the processor usage frequency.
Alternatively, in the above apparatus embodiments of the present invention, the amount of resources that a process invokes when executing is the length of the processor time slice taken up by the process.
Finally, the invention also discloses a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, carry out the steps of the method according to any of the preceding claims.
The invention has the beneficial effects that: by constructing and maintaining a main process queue and a fast process queue for the application programs, processes that occupy fewer resources can be responded to as soon as possible, and the overall execution efficiency of each virtual machine's application programs is improved.
Drawings
FIG. 1 is a flowchart illustrating a method for fast virtual machine process scheduling control according to the present invention;
FIG. 2 is a flowchart of a method for determining whether a process is inserted into a fast process queue in FIG. 1;
FIG. 3 is a diagram illustrating insertion of a process in a main process queue into a fast process queue according to one embodiment of the invention;
FIG. 4 is a diagram illustrating a state change of a host process queue in another embodiment of the invention;
FIG. 5 is a flowchart of a method for extracting processes from the fast process queue of FIG. 4;
FIG. 6 is a diagram illustrating a state change after a process is inserted into a fast process queue according to an embodiment of the invention;
FIG. 7 is a diagram illustrating main process queue threshold adjustment in one embodiment of the invention;
fig. 8 is a block diagram of a fast virtual machine process scheduling control apparatus according to the present invention.
Detailed Description
The conception, specific structure and technical effects of the present invention will be described clearly and completely below in conjunction with the embodiments and the accompanying drawings, so that the objects, solutions and effects of the present invention can be fully understood. It should be noted that the embodiments in the present application and the features of the embodiments may be combined with each other without conflict. The same reference numbers are used throughout the drawings to refer to the same or similar parts.
Referring to the method flowchart shown in fig. 1, in an embodiment of the present invention, a method for controlling fast virtual machine process scheduling includes the following steps: creating a main process queue and a fast process queue, and emptying both queues to initialize them; receiving a process created by a virtual machine and estimating the amount of resources the process will occupy during execution; acquiring the total amount of resources occupied by the current main process queue, and if this total exceeds a preset main process queue threshold and the amount of resources the process calls during execution is less than a preset resource threshold, inserting the process at the tail of the fast process queue, otherwise inserting it at the tail of the main process queue; and acquiring processes in order from the head of the main process queue and the head of the fast process queue respectively. The main process queue threshold and the resource threshold may be set at initialization. For example, the main process queue threshold may be initialized to 500 ms, i.e., the processes in the main process queue may together occupy up to 500 ms of processor time. Referring to the sub-method flowchart shown in fig. 2, when the total amount of resources occupied by the main process queue is less than 500 ms, a process newly acquired from the process pool is inserted directly at the tail of the main process queue, and all processes in the main process queue are executed in first-in first-out order. When the total amount of resources occupied by the main process queue is greater than or equal to 500 ms, a process newly acquired from the process pool is preprocessed to predict the amount of resources it will occupy during execution; if that amount is small (for example, only a small amount of processor time is needed), the process is inserted at the tail of the fast process queue for priority processing, otherwise it is inserted at the tail of the main process queue and handled in the normal order. Processes are then taken from the head of the main process queue and the head of the fast process queue. Since the virtual machine virtualizes a plurality of processes on the hardware platform, the processes may actually be handled by different processors; for the specific way in which processes are processed, the virtual machine may adopt various existing technologies according to the actual situation, and the invention is not limited in this regard.
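The enqueue and dequeue logic described above can be sketched as follows. This is a minimal illustration in Python, assuming milliseconds of processor time as the resource measure; the dictionary-based process records, the numeric thresholds and the policy of serving the fast queue head first are assumptions made for the example, not the patent's reference implementation.

from collections import deque

MAIN_QUEUE_THRESHOLD_MS = 500   # preset main process queue threshold (assumed value)
RESOURCE_THRESHOLD_MS = 20      # preset per-process resource threshold (assumed value)

main_queue = deque()            # FIFO main process queue
fast_queue = deque()            # fast process queue for small processes

def estimated_cost_ms(process):
    # Placeholder estimate of the processor time the process will occupy.
    return process["cost_ms"]

def total_cost_ms(queue):
    # Total amount of resources currently occupied by a queue.
    return sum(estimated_cost_ms(p) for p in queue)

def enqueue(process):
    # Insert a new process at the tail of the main or fast queue.
    if (total_cost_ms(main_queue) > MAIN_QUEUE_THRESHOLD_MS
            and estimated_cost_ms(process) < RESOURCE_THRESHOLD_MS):
        fast_queue.append(process)   # main queue congested, process is small
    else:
        main_queue.append(process)   # normal first-in first-out handling

def dequeue():
    # Take the next process from the heads of the queues (fast queue first here;
    # the exact interleaving between the two heads is an assumption).
    if fast_queue:
        return fast_queue.popleft()
    return main_queue.popleft() if main_queue else None

For instance, with both queues initially empty, enqueue({"cost_ms": 7}) places the process in the main queue, since the main queue is still below its threshold.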
Since a process typically includes multiple threads, some of which can compute in parallel, in one embodiment of the invention the process is split according to whether its threads can compute in parallel. On one hand, after splitting, the process requested by the application program can be executed by as many processors as possible, which improves its execution efficiency; on the other hand, each resulting process occupies relatively fewer resources, which avoids the problem of a single process monopolizing some resource for a long time and thereby affecting the execution of other applications. In addition, after a process is obtained from the process pool, it may also be checked for abnormal data (for example, an incomplete received data packet or a process of unknown type), and such obviously erroneous processes may be filtered out of the process pool.
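A hedged sketch of this admission step, in the same illustrative style: the field names, the abnormality checks and the splitting rule below are assumptions rather than the patent's exact procedure.

def admit(process):
    # Close (drop) processes with incomplete data or of an unknown type.
    if process.get("incomplete") or process.get("type") == "unknown":
        return []
    # Threads that can compute in parallel become independent sub-processes;
    # the remaining threads stay together in a single sub-process.
    parallel = [t for t in process["threads"] if t["parallelizable"]]
    serial = [t for t in process["threads"] if not t["parallelizable"]]
    subprocesses = [{"threads": [t], "cost_ms": t["cost_ms"]} for t in parallel]
    if serial:
        subprocesses.append({"threads": serial,
                             "cost_ms": sum(t["cost_ms"] for t in serial)})
    return subprocesses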
Referring to the schematic diagram of the state changes of the main process queue and the fast process queue shown in fig. 3 (the value in the box to the right of each process is the processor time the process needs when executing, used to measure the amount of resources it occupies), in an embodiment of the present invention, when the total amount of resources occupied by the main process queue is greater than a preset main process queue threshold, processes in the main process queue whose amount of resources occupied during execution is less than the resource threshold are extracted and inserted into the tail of the fast process queue. Specifically, as shown in fig. 3, the main process queue threshold is 250 ms and the total amount of resources occupied by the main process queue is 259 ms. With a preset resource threshold of 20 ms, the processes in the main process queue whose occupied resource amount is below that threshold (i.e., "process A", "process C" and "process H" in the figure) are extracted from the main process queue and inserted into the tail of the fast process queue, so that the processes in the main process queue that occupy fewer resources during execution can be executed in time.
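One possible rendering of this migration step is sketched below; cost is an assumed callable returning a process's estimated processor time, and the queues and thresholds are passed in explicitly so the sketch stands on its own.

from collections import deque

def promote_small_processes(main_queue, fast_queue, main_threshold_ms,
                            resource_threshold_ms, cost):
    # Nothing to do while the main queue is within its threshold.
    if sum(cost(p) for p in main_queue) <= main_threshold_ms:
        return
    keep = deque()
    for p in main_queue:                       # preserve the original order
        if cost(p) < resource_threshold_ms:
            fast_queue.append(p)               # e.g. "process A", "C", "H" in fig. 3
        else:
            keep.append(p)
    main_queue.clear()
    main_queue.extend(keep)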
Conversely, referring to the schematic diagram of the state change of the main process queue in another embodiment shown in fig. 4, when the total amount of resources occupied by the fast process queue is greater than the preset fast process queue threshold and the total amount of resources occupied by the main process queue is less than the main process queue threshold, process execution in the fast process queue becomes less efficient than in the main process queue. To improve overall execution efficiency, referring to the sub-method flowchart shown in fig. 5, the processes at the tail of the fast process queue that exceed the preset fast process queue threshold (i.e., "f process" and "g process" in fig. 4) may be extracted and inserted into the tail of the main process queue.
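The reverse migration can be sketched in the same style; the way the excess at the tail is identified (a running total against the fast queue threshold) is an assumption, since the patent does not fix the exact bookkeeping.

from collections import deque

def demote_fast_queue_tail(main_queue, fast_queue, fast_threshold_ms,
                           main_threshold_ms, cost):
    # Only act when the fast queue is overloaded and the main queue has room.
    if (sum(cost(p) for p in fast_queue) <= fast_threshold_ms
            or sum(cost(p) for p in main_queue) >= main_threshold_ms):
        return
    keep, running_total = deque(), 0
    for p in fast_queue:
        running_total += cost(p)
        if running_total <= fast_threshold_ms:
            keep.append(p)
        else:
            main_queue.append(p)               # e.g. "f process", "g process" in fig. 4
    fast_queue.clear()
    fast_queue.extend(keep)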
Referring to the schematic diagram of a process being inserted into the fast process queue shown in fig. 6, in an embodiment of the present invention, after a process is inserted into the tail of the fast process queue, the fast process queue is sorted according to the amount of resources each process occupies when executing. The value in the box to the right of each process is the processor time the process needs when executing (for example, "5 ms" means that the process needs approximately 5 ms of processor time), used to measure the amount of resources it occupies. As shown in the figure, the new f process takes approximately 7 ms of processor time. Because the total amount of resources occupied by the main process queue is greater than the main process queue threshold, the f process is first inserted at the tail of the fast process queue, and the fast process queue is then sorted. Processes that occupy a smaller amount of resources during execution are placed near the front of the fast process queue and can therefore be executed as soon as possible. Furthermore, since the fast process queue is kept in sorted order, a new process can be inserted at the correct position in a short time (in fact, finding the insertion position has time complexity logarithmic in the current queue length).
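A small sketch of keeping the fast queue sorted with a binary search for the insertion position; the parallel-list representation is an assumption, and note that while the position lookup is logarithmic, shifting list elements on insert is itself linear.

import bisect

fast_processes = []   # processes ordered by ascending estimated cost
fast_costs = []       # parallel list of costs used for the binary search

def insert_sorted(process, cost_ms):
    i = bisect.bisect_right(fast_costs, cost_ms)   # O(log n) position lookup
    fast_costs.insert(i, cost_ms)                  # the list shift itself is O(n)
    fast_processes.insert(i, process)

# Example: the new "f process" (about 7 ms) lands before larger processes.
insert_sorted({"name": "f"}, 7)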
Referring to the threshold adjustment diagram shown in fig. 7, in an embodiment of the present invention, when the total amount of resources occupied by the fast process queue is greater than a preset fast process queue threshold, the execution efficiency of the processes in the fast process queue may decrease, even to a level equal to or lower than that of the main process queue. To ensure that the processes allocated to the fast process queue can still be executed preferentially, the bar for entering the fast process queue needs to be raised. In this embodiment, the main process queue threshold is raised (to or above the total amount of resources currently occupied by the main process queue) to reduce the number of processes assigned to the fast process queue. Alternatively, the resource threshold may be lowered, making it more difficult for processes to be allocated to the fast process queue.
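The threshold adjustment described here, together with the relaxation in the empty-queue case of the earlier embodiments, might look like the following; the step sizes and the choice to apply both adjustments together are assumptions.

def adjust_thresholds(main_load_ms, fast_load_ms, fast_queue_idle,
                      main_threshold_ms, resource_threshold_ms,
                      fast_threshold_ms):
    # Returns adjusted (main_threshold_ms, resource_threshold_ms).
    if fast_load_ms > fast_threshold_ms:
        # Fast queue overloaded: make entry into the fast queue harder.
        main_threshold_ms = max(main_threshold_ms, main_load_ms)
        resource_threshold_ms = max(1, resource_threshold_ms - 5)
    elif fast_queue_idle:
        # Fast queue empty throughout the preset period: relax again.
        main_threshold_ms = max(1, main_threshold_ms - 50)
        resource_threshold_ms += 5
    return main_threshold_ms, resource_threshold_ms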
In the foregoing embodiments of the present invention, the amount of resources a process calls when executing is measured by the length of the processor time slice it occupies. Because this value is easy to estimate and the estimate differs little from the actual value, it is often used to measure the cost of executing a process. Alternatively, the amount of resources a process calls when executing can be measured by its processor usage frequency: in general, the more frequently a process uses the processor, the greater the amount of resources it calls during execution. The foregoing embodiments may accordingly measure processes by processor usage frequency.
Referring to the module structure diagram shown in fig. 8, in an embodiment of the present invention, the fast virtual machine process scheduling control apparatus includes the following modules: the initialization module, used for creating a main process queue and a fast process queue and emptying both queues to initialize them; the estimation module, used for receiving a process sent by the virtual machine and estimating the amount of resources the process will occupy during execution; the enqueuing module, used for acquiring the total amount of resources occupied by the current main process queue, and, if this total exceeds a preset main process queue threshold and the amount of resources the process calls during execution is less than a preset resource threshold, inserting the process at the tail of the fast process queue, otherwise inserting it at the tail of the main process queue; and the dequeuing module, used for acquiring processes in order from the head of the main process queue and the head of the fast process queue respectively. The main process queue threshold and the resource threshold can be set by the initialization module. For example, the main process queue threshold may be initialized to 500 ms, i.e., the processes in the main process queue may together occupy up to 500 ms of processor time. Referring to the sub-method flowchart shown in fig. 2, when the total amount of resources occupied by the processes in the main process queue is less than 500 ms, the enqueuing module inserts a newly acquired process directly at the tail of the main process queue, and the dequeuing module acquires all processes in the main process queue in first-in first-out order. When the total amount of resources occupied by the processes in the main process queue is greater than or equal to 500 ms, the estimation module estimates the amount of resources a newly acquired process will call during execution; if that amount is small (for example, only a small amount of processor time is needed), the enqueuing module inserts the process at the tail of the fast process queue for priority processing, otherwise it inserts the process at the tail of the main process queue, to be handled in the normal order. The dequeuing module then extracts processes from the head of the main process queue and the head of the fast process queue. Since the virtual machine virtualizes a plurality of processes on the hardware platform, the processes may actually be handled by different processors; for the specific way in which processes are processed, the virtual machine may adopt various existing technologies according to the actual situation, and the invention is not limited in this regard.
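The same behavior, grouped into the four modules of fig. 8, might be organized as a single class; the method names, the ms-based cost measure and the wiring are illustrative assumptions.

from collections import deque

class FastVMScheduler:
    def __init__(self, main_threshold_ms=500, resource_threshold_ms=20):
        # Initialization module: create and empty both queues, set thresholds.
        self.main_queue = deque()
        self.fast_queue = deque()
        self.main_threshold_ms = main_threshold_ms
        self.resource_threshold_ms = resource_threshold_ms

    def estimate(self, process):
        # Estimation module: predict the processor time the process will occupy.
        return process["cost_ms"]

    def enqueue(self, process):
        # Enqueuing module: apply the threshold rules described above.
        main_load = sum(self.estimate(p) for p in self.main_queue)
        if (main_load > self.main_threshold_ms
                and self.estimate(process) < self.resource_threshold_ms):
            self.fast_queue.append(process)
        else:
            self.main_queue.append(process)

    def dequeue(self):
        # Dequeuing module: take processes from the heads of both queues in order.
        if self.fast_queue:
            return self.fast_queue.popleft()
        return self.main_queue.popleft() if self.main_queue else None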
Since a process typically includes multiple threads, some of which can compute in parallel, in one embodiment of the invention the process is split according to whether its threads can compute in parallel. On one hand, after splitting, the process requested by the application program can be executed by as many processors as possible, which improves its execution efficiency; on the other hand, each resulting process occupies relatively fewer resources, which avoids the problem of a single process monopolizing some resource for a long time and thereby affecting the execution of other applications. In addition, after a process is obtained from the process pool, it may also be checked for abnormal data (for example, an incomplete received data packet or a process of unknown type), and such obviously erroneous processes may be filtered out of the process pool.
Referring to the schematic diagram of the state changes of the main process queue and the fast process queue shown in fig. 3 (the value in the box to the right of each process is the processor time the process needs when executing, used to measure the amount of resources it occupies), in an embodiment of the present invention, when the total amount of resources occupied by the main process queue is greater than a preset main process queue threshold, the enqueuing module extracts the processes in the main process queue whose amount of resources occupied during execution is less than the resource threshold and inserts them into the tail of the fast process queue. Specifically, as shown in fig. 3, the main process queue threshold is 250 ms and the total amount of resources occupied by the main process queue is 259 ms. With a preset resource threshold of 20 ms, the enqueuing module extracts the processes whose occupied resource amount is below that threshold (i.e., "process A", "process C" and "process H" in the figure) from the main process queue and inserts them into the tail of the fast process queue, so that the processes in the main process queue that occupy fewer resources during execution can be executed in time.
Conversely, referring to the schematic diagram of the state change of the main process queue in another embodiment shown in fig. 4, when the total amount of resources occupied by the fast process queue is greater than the preset fast process queue threshold and the total amount of resources occupied by the main process queue is less than the main process queue threshold, process execution in the fast process queue becomes less efficient than in the main process queue. To improve overall execution efficiency, the enqueuing module may extract the processes at the tail of the fast process queue that exceed the preset fast process queue threshold (i.e., "f process" and "g process" in fig. 4) and insert them into the tail of the main process queue.
Referring to fig. 6, a schematic diagram of a process being inserted into the fast process queue, in an embodiment of the present invention, after a process is inserted into the tail of the fast process queue, the enqueuing module sorts the fast process queue according to the amount of resources each process occupies when executing. The value in the box to the right of each process is the processor time the process needs when executing (for example, "5 ms" means that the process needs approximately 5 ms of processor time), used to measure the amount of resources it occupies. As shown in the figure, the new f process takes approximately 7 ms of processor time. Because the total amount of resources occupied by the main process queue is greater than the main process queue threshold, the enqueuing module first inserts the f process at the tail of the fast process queue and then sorts the fast process queue. Processes that occupy a smaller amount of resources during execution are placed near the front of the fast process queue and can therefore be executed as soon as possible. Furthermore, since the fast process queue is kept in sorted order, a new process can be inserted at the correct position in a short time (in fact, finding the insertion position has time complexity logarithmic in the current queue length).
In an embodiment of the present invention, when the total amount of resources occupied by the fast process queue is greater than a preset fast process queue threshold, the execution efficiency of the processes in the fast process queue may decrease, even to a level equal to or lower than that of the main process queue. To ensure that the processes allocated to the fast process queue can still be executed preferentially, the bar for entering the fast process queue needs to be raised. In this embodiment, the main process queue threshold is raised (to or above the total amount of resources currently occupied by the main process queue) to reduce the number of processes assigned to the fast process queue. Alternatively, the resource threshold may be lowered, making it more difficult for processes to be allocated to the fast process queue.
In the foregoing embodiments of the present invention, the amount of resources a process calls when executing is measured by the length of the processor time slice it occupies. Because this value is easy to estimate and the estimate differs little from the actual value, it is often used to measure the cost of executing a process. Alternatively, the amount of resources a process calls when executing can be measured by its processor usage frequency: in general, the more frequently a process uses the processor, the greater the amount of resources it calls during execution. The foregoing embodiments may accordingly measure processes by processor usage frequency.
While the present invention has been described in considerable detail and with particular reference to a few illustrative embodiments, it is not intended to be limited to any such details or embodiments or to any particular embodiment; rather, it is to be construed, with reference to the appended claims, as covering the intended scope of the invention under a broad interpretation of those claims in view of the prior art. Furthermore, the foregoing describes the invention in terms of embodiments foreseen by the inventor for which an enabling description was available, notwithstanding that insubstantial modifications of the invention, not presently foreseen, may nonetheless represent equivalents thereof.

Claims (7)

1. A fast virtual machine process scheduling control method is characterized by comprising the following steps:
creating a main process queue and a rapid process queue of the process, and emptying the main process queue and the rapid process queue to realize the initialization of the main process queue and the rapid process queue;
receiving a process created by a virtual machine, and estimating the number of resources occupied by the process during execution;
acquiring the total amount of resources occupied by the current main process queue, if the total amount of resources occupied by the current main process queue is larger than a preset main process queue threshold value and the number of resources called during the process execution is smaller than the preset resource threshold value, inserting the process into the tail of the fast process queue, otherwise, inserting the process into the tail of the main process queue; and
respectively acquiring processes from the head of the main process queue and the head of the fast process queue according to the sequence;
when the total amount of resources occupied by the fast process queue is greater than a preset fast process queue threshold, increasing a main process queue threshold and/or reducing a resource threshold;
after the process is inserted into the tail of the fast process queue, the fast process queue sorts according to the number of resources called when the process is executed;
when the total amount of resources occupied by the fast process queue is greater than a preset fast process queue threshold value and the total amount of resources occupied by the main process queue is smaller than the main process queue threshold value, the processes which are located at the tail of the fast process queue and exceed the preset fast process queue threshold value are extracted and inserted into the tail of the main process queue.
2. The method of claim 1, wherein after receiving the process sent by the virtual machine, it is determined whether the process data is normal, and an abnormal process is closed.
3. The method of claim 1, wherein after a process sent by a virtual machine is received, the process is split according to whether its threads can perform parallel computation.
4. The method according to claim 1, wherein when the total amount of resources occupied by the main process queue is greater than a preset main process queue threshold, processes in the main process queue whose amount of resources called during execution is less than the resource threshold are extracted and inserted into the tail of the fast process queue.
5. A method as claimed in any one of claims 1 to 4, wherein the amount of resources that a process invokes when executing is the length of a processor time slice taken up by the process and/or the frequency of use of the processor.
6. The device for controlling the process scheduling of the rapid virtual machine is characterized by comprising the following modules:
the initialization module is used for creating a main process queue and a quick process queue of the process and emptying the main process queue and the quick process queue to realize the initialization of the main process queue and the quick process queue;
the estimation module is used for receiving a process sent by the virtual machine and estimating the quantity of resources called when the process is executed;
the enqueuing module is used for acquiring the total amount of resources occupied by the current main process queue, if the total amount of resources occupied by the current main process queue is larger than a preset main process queue threshold value and the number of resources called during the process execution is smaller than the preset resource threshold value, the process is inserted into the tail of the fast process queue, otherwise, the process is inserted into the tail of the main process queue; and
and the dequeuing module is used for respectively acquiring the processes from the head of the main process queue and the head of the fast process queue according to the sequence.
7. A computer-readable storage medium having stored thereon computer instructions, characterized in that the instructions, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 5.
CN201711477075.9A 2017-12-29 2017-12-29 Method and device for controlling process scheduling of rapid virtual machine Active CN108304254B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711477075.9A CN108304254B (en) 2017-12-29 2017-12-29 Method and device for controlling process scheduling of rapid virtual machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711477075.9A CN108304254B (en) 2017-12-29 2017-12-29 Method and device for controlling process scheduling of rapid virtual machine

Publications (2)

Publication Number Publication Date
CN108304254A CN108304254A (en) 2018-07-20
CN108304254B true CN108304254B (en) 2022-02-22

Family

ID=62868082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711477075.9A Active CN108304254B (en) 2017-12-29 2017-12-29 Method and device for controlling process scheduling of rapid virtual machine

Country Status (1)

Country Link
CN (1) CN108304254B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109213561A (en) * 2018-09-14 2019-01-15 珠海国芯云科技有限公司 The equipment scheduling method and device of virtual desktop based on container
CN112306371A (en) * 2019-07-30 2021-02-02 伊姆西Ip控股有限责任公司 Method, apparatus and computer program product for storage management
CN110659132B (en) * 2019-08-29 2022-09-06 福建天泉教育科技有限公司 Request processing optimization method and computer-readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104202261A (en) * 2014-08-27 2014-12-10 华为技术有限公司 Service request processing method and device
CN104199739A (en) * 2014-08-26 2014-12-10 浪潮(北京)电子信息产业有限公司 Speculation type Hadoop scheduling method based on load balancing
CN105448006A (en) * 2015-12-30 2016-03-30 武汉邮电科学研究院 Intelligent supermarket cashier system and method based on mobile payment and IOT (Internet of Things)
CN105991588A (en) * 2015-02-13 2016-10-05 华为技术有限公司 Method and apparatus for resisting message attack
CN106470169A (en) * 2015-08-19 2017-03-01 阿里巴巴集团控股有限公司 A kind of service request method of adjustment and equipment
CN107018175A (en) * 2017-01-11 2017-08-04 杨立群 The dispatching method and device of mobile cloud computing platform

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9355129B2 (en) * 2008-10-14 2016-05-31 Hewlett Packard Enterprise Development Lp Scheduling queries using a stretch metric

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104199739A (en) * 2014-08-26 2014-12-10 浪潮(北京)电子信息产业有限公司 Speculation type Hadoop scheduling method based on load balancing
CN104202261A (en) * 2014-08-27 2014-12-10 华为技术有限公司 Service request processing method and device
CN105991588A (en) * 2015-02-13 2016-10-05 华为技术有限公司 Method and apparatus for resisting message attack
CN106470169A (en) * 2015-08-19 2017-03-01 阿里巴巴集团控股有限公司 A kind of service request method of adjustment and equipment
CN105448006A (en) * 2015-12-30 2016-03-30 武汉邮电科学研究院 Intelligent supermarket cashier system and method based on mobile payment and IOT (Internet of Things)
CN107018175A (en) * 2017-01-11 2017-08-04 杨立群 The dispatching method and device of mobile cloud computing platform

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"LRURC: A Low Complexity and Approximate Fair Active Queue Management Algorithm for Choking Non-Adaptive Flows";Xianliang Jiang等;《IEEE Communications Letters》;20150430;第19卷(第4期);第545-548页 *
"基于HBase的交通流数据实时存储系统";陆婷等;《计算机应用》;20150110;第35卷(第01期);第103-107页 *

Also Published As

Publication number Publication date
CN108304254A (en) 2018-07-20

Similar Documents

Publication Publication Date Title
CN108304254B (en) Method and device for controlling process scheduling of rapid virtual machine
CN108196939B (en) Intelligent virtual machine management method and device for cloud computing
US9792137B2 (en) Real-time performance apparatus and method for controlling virtual machine scheduling in real-time
US8346995B2 (en) Balancing usage of hardware devices among clients
US9019826B2 (en) Hierarchical allocation of network bandwidth for quality of service
US20150113252A1 (en) Thread control and calling method of multi-thread virtual pipeline (mvp) processor, and processor thereof
CN105550040B (en) CPU resources of virtual machine preservation algorithm based on KVM platform
EP2742426A1 (en) Network-aware coordination of virtual machine migrations in enterprise data centers and clouds
CN107203428B (en) Xen-based VCPU multi-core real-time scheduling algorithm
US10089155B2 (en) Power aware work stealing
US20190286582A1 (en) Method for processing client requests in a cluster system, a method and an apparatus for processing i/o according to the client requests
CN116185554A (en) Configuration device, scheduling device, configuration method and scheduling method
CN109656714B (en) GPU resource scheduling method of virtualized graphics card
CN113127230B (en) Dynamic resource regulation and control method and system for perceiving and storing tail delay SLO
CN108287753B (en) Computer system fast scheduling method and device
CN114327894A (en) Resource allocation method, device, electronic equipment and storage medium
CN107423114B (en) Virtual machine dynamic migration method based on service classification
CN104866370A (en) Dynamic time slice dispatching method and system for parallel application under cloud computing environment
US20150286501A1 (en) Register-type-aware scheduling of virtual central processing units
CN112860401A (en) Task scheduling method and device, electronic equipment and storage medium
US11194619B2 (en) Information processing system and non-transitory computer readable medium storing program for multitenant service
CN113051059B (en) Multi-GPU task real-time scheduling method and device
US8245229B2 (en) Temporal batching of I/O jobs
CN115495249A (en) Task execution method of cloud cluster
CN104899098B (en) A kind of vCPU dispatching method based on shared I/O virtualized environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant