AU2021106510A4 - A method of cpu scheduling performance analysis using markov chain modeling. - Google Patents

A method of cpu scheduling performance analysis using markov chain modeling.

Info

Publication number
AU2021106510A4
AU2021106510A4
Authority
AU
Australia
Prior art keywords
cpu
markov chain
performance
scheduling
chain modeling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2021106510A
Inventor
Pradeep Kumar Jatav
Manish Maheshwari
Jagdish Raikwal
Rupesh Sendre
Rahul Singhai
Kamal Upreti
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Maheshwari Manish Dr
Raikwal Jagdish Dr
Singhai Rahul Dr
Original Assignee
Maheshwari Manish Dr
Raikwal Jagdish Dr
Singhai Rahul Dr
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Maheshwari Manish Dr, Raikwal Jagdish Dr, Singhai Rahul Dr filed Critical Maheshwari Manish Dr
Application granted granted Critical
Publication of AU2021106510A4 publication Critical patent/AU2021106510A4/en
Ceased legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

A METHOD OF CPU SCHEDULING PERFORMANCE ANALYSIS USING MARKOV CHAIN MODELING. The present disclosure relates to a method (100) for CPU scheduling using Markov chain modeling for CPU performance evaluation. In an aspect, the method (100) comprises receiving performance information (102) from the processing unit of the CPU; receiving running process information (104) for the processes run by the connected terminal devices; determining performance information (106) according to the running process information received (104); scheduling the CPU (108) based on Markov chain modeling; sending the performance information (110) to the connected terminal devices for performance evaluation; and evaluating the CPU performance (112) by solving the Markov chain model with the priorities as input. (FIG. 1 is the reference figure.)

Description

[Fig. 1: Flowchart of the method (100) for CPU scheduling using Markov chain modeling for CPU performance evaluation, showing the steps of receiving performance information (102); receiving running process information (104); determining performance information (106); scheduling the CPU based on Markov chain modeling (108); sending the performance information to the connected terminal devices (110); and evaluating the CPU performance (112).]
A METHOD OF CPU SCHEDULING PERFORMANCE ANALYSIS USING MARKOV CHAIN MODELING.
TECHNICAL FIELD
[0001] The present disclosure relates to a CPU scheduling method and in particular to a method for CPU scheduling using Markov chain modeling for CPU performance evaluation.
BACKGROUND
[0002] Background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[0003] In an operating system, a large number of processes arrive at the scheduler, which performs the function of managing the processing of these jobs. All the existing methods of managing the process schedule of a CPU have some advantages and disadvantages over each other. A unified study of scheduling schemes suggests designing a group of scheduling schemes so that the members possess the common properties of the class and can be mutually compared. A general class of scheduling schemes can be designed and examined through a Markov chain model to perform a comparative analysis of the performance of the member scheduling schemes.
[0004] Efforts have been made in the related prior art to provide different solutions for managing the process schedule of a CPU. For example, United States patent no. US10776151B2 discloses a system and method for selecting non-uniform memory access (NUMA) nodes for mapping virtual central processing unit (vCPU) operations to physical processors. A CPU scheduler evaluates the latency between various candidate processors and the memory associated with the vCPU, as well as the size of the working set of the associated memory, and the vCPU scheduler selects an optimal processor for execution of a vCPU based on the expected memory access latency and the characteristics of the vCPU and the processors. The systems and methods further provide for monitoring system characteristics and rescheduling the vCPUs when other placements provide improved performance and efficiency.
[0005] Further efforts have been made in the related prior art to provide different solutions for managing the process schedule of a CPU. For example, United States patent no. US9535736B2 describes a resource scheduler that allows virtual machine instances to earn resource credits during low activity levels. Virtual machine instances that spend a predominant amount of time operating at low activity levels are able to quickly gain resource credits. Once these virtual machine instances acquire enough resource credits to surpass a threshold level, the resource scheduler can assign a high priority level to the virtual machine instances that provides them with priority access to CPU resources. The next time that the virtual machine instances enter a high activity level, they have a high priority level that allows them to preempt other, lower priority virtual machine instances. Thus, these virtual machine instances are able to process operations and/or respond to user requests with low latency.
[0006] Therefore, the present disclosure overcomes the above-mentioned problems associated with the traditionally available methods or systems. Any of the above-mentioned inventions can be used with the presently disclosed technique with or without modification.
[0007] All publications herein are incorporated by reference to the same extent as if each individual publication or patent application were specifically and individually indicated to be incorporated by reference. Where a definition or use of a term in an incorporated reference is inconsistent or contrary to the definition of that term provided herein, the definition of that term provided herein applies, and the definition of that term in the reference does not apply.
OBJECTS OF THE INVENTION
[0008] It is an object of the present disclosure to provide a method for CPU scheduling using Markov chain modeling for CPU performance evaluation.
SUMMARY
[0009] The present invention is directed towards a method for CPU scheduling using Markov chain modeling for CPU performance evaluation.
[0010] In an aspect, the method for CPU scheduling using Markov chain modeling for CPU performance evaluation comprises receiving performance information from the processing unit of the CPU; receiving running process information for the processes run by the connected terminal devices; determining performance information according to the running process information received; scheduling the CPU based on Markov chain modeling; sending the performance information to the connected terminal devices for performance evaluation; and evaluating the CPU performance by solving the Markov chain model with the priorities as input.
[0011] In another aspect, the CPU scheduler model is created by representing performance processes, scheduler state, deadlock state, and the priorities of the state transition in a Markov chain model diagram.
[0012] In yet another aspect, the scheduler starts from any process and then moves to any other process in the queue based on priorities. It provides a quantum to each process, and a random trial decides the next quantum of time. A vacancy in the process queue is filled by a process from a waiting queue created outside the processing unit.
[0013] One should appreciate that although the present disclosure has been explained with respect to a defined set of functional modules, any other module or set of modules can be added/deleted/modified/combined, and any such changes in architecture/construction of the proposed system are completely within the scope of the present disclosure. Each module can also be fragmented into one or more functional sub-modules, all of which are also completely within the scope of the present disclosure.
[0014] Various objects, features, aspects, and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The accompanying drawings are included to provide a further understanding of the present disclosure and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
[0016] Fig. 1 illustrates an exemplary flow chart of the method for CPU scheduling using Markov chain modeling for CPU performance evaluation.
[0017] It should be noted that the figures are not drawn to scale, and the elements of similar structure and functions are generally represented by like reference numerals for illustrative purposes throughout the figures. It should be noted that the figures do not illustrate every aspect of the described embodiments and do not limit the scope of the present disclosure.
[0018] Other objects, advantages, and novel features of the invention will become apparent from the following detailed description of the present embodiment when taken in conjunction with the accompanying drawings.
DETAILED DESCRIPTION
[0019] In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practised without some of these specific details.
[0020] Embodiments of the present invention include various steps, which will be described below. The steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special purpose processor programmed with the instructions to perform the steps. Alternatively, steps may be performed by a combination of hardware, software, and firmware and/or by human operators.
[0021] Embodiments of the present invention may be provided as a computing device, which may include one or more storage media tangibly embodying instructions and the unique identities of the device; the instructions may be used to prevent an unauthorized user from altering/erasing the unique identities of the device. The storage media may include, but are not limited to, semiconductor memories, such as read-only memories (ROMs), random access memories (RAMs), programmable read-only memories (PROMs), erasable PROMs (EPROMs), electrically erasable PROMs (EEPROMs), flash memory, or other types of media/machine-readable media suitable for storing the unique ID(s) of the device and electronic instructions (e.g., computer programming code, such as software or firmware).
[0022] Various methods described herein may be practiced by combining one or more machine-readable storage media containing the code/instructions according to the present invention with appropriate standard device hardware to execute the instructions contained therein. An apparatus for practicing various embodiments of the present invention may involve one or more computers (say, servers) (or one or more processors within a single computer) and storage systems containing or having network access to computer program(s) coded in accordance with the various methods described herein, and the method steps of the invention could be accomplished by modules, devices, routines, subroutines, or subparts of a computer program product.
[0023] If the specification states a component or feature "may", "can", "could", or "might" be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.
[0024] Exemplary embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. These embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those of ordinary skill in the art. Moreover, all statements herein reciting embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future (i.e., any elements developed that perform the same function, regardless of structure).
[0025] Thus, for example, it will be appreciated by those of ordinary skill in the art that the diagrams, schematics, illustrations, and the like represent conceptual views or processes illustrating systems and methods embodying this invention. The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing associated software. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the entity implementing this invention. Those of ordinary skill in the art further understand that the exemplary hardware, software, processes, methods, and/or operating systems described herein are for illustrative purposes and, thus, are not intended to be limited to any particular named manufacturer.
[0026] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims.
[0027] Each of the appended claims defines a separate invention, which for infringement purposes is recognized as including equivalents to the various elements or limitations specified in the claims. Depending on the context, all references below to the "invention" may in some cases refer to certain specific embodiments only. In other cases, it will be recognized that references to the "invention" will refer to subject matter recited in one or more, but not necessarily all, of the claims.
[0028] Various terms as used herein are shown below. To the extent a term used in a claim is not defined below, it should be given the broadest definition persons in the pertinent art have given that term as reflected in printed publications and issued patents at the time of filing.
[0029] In an embodiment of the present disclosure, Fig. 1 illustrates an exemplary flow chart of the method for CPU scheduling using Markov chain modeling for CPU performance evaluation.
[0030] In an aspect, the method (100) for CPU scheduling using Markov chain modeling for CPU performance evaluation comprises receiving performance information (102) from the processing unit of the CPU; receiving running process information (104) for the processes run by the connected terminal devices; determining performance information (106) according to the running process information received (104); scheduling the CPU (108) based on Markov chain modeling; sending the performance information (110) to the connected terminal devices for performance evaluation; and evaluating the CPU performance (112) by solving the Markov chain model with the priorities as input.
[0031] In another aspect, a general class of round-robin queue scheduling schemes involves considering a round-robin scheduling scheme. There is one scheduler and m processes, P1, P2, P3, ..., Pm, in a queue. The scheduler provides one quantum of time to each process, and the next quantum is decided by a random trial. The scheduler starts from any process Pi in the queue and then moves to Pj (j ≠ i; j = 1, 2, 3, ..., m). A new process enters from the end, i.e., in the queue, Pm+1 is placed after Pm, and so on. Suppose the scheduler at the end of a quantum is at any process Pi (i = 1, 2, 3, ..., m); then in the next quantum, the scheduler will either be on Pi+1 with priority p, remain on Pi with priority s, or be on Pi-1 with priority q. The scheduler becomes idle when there is no process in the queue. However, it is assumed that the scheduler may be in deadlock in any quantum. From this deadlock level, the scheduler could also return to the queue in a later quantum for processing purposes. Outside the processing unit, there is a long waiting queue of processes P1', P2', ..., and if one process finishes inside, then a new process waiting outside enters so as to maintain m processes there.
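The following sketch (not part of the original disclosure) shows one way to encode this scheme as a one-step transition matrix. The priorities p, s, and q come from the description above; the deadlock probability d, the return probability r from the deadlock state, and the wrap-around at the ends of the queue are assumptions made here only to obtain a complete stochastic matrix.

```python
# Illustrative sketch of the round-robin scheduler chain described above.
# States 0..m-1 represent processes P1..Pm; state m is the deadlock state.
import numpy as np

def build_transition_matrix(m, p, s, q, d, r):
    assert abs(p + s + q + d - 1.0) < 1e-12, "each process row must sum to 1"
    n = m + 1
    T = np.zeros((n, n))
    for i in range(m):
        T[i, (i + 1) % m] += p   # move to the next process with priority p
        T[i, i] += s             # stay on the same process with priority s
        T[i, (i - 1) % m] += q   # move to the previous process with priority q
        T[i, m] += d             # assumed transition into the deadlock state
    T[m, 0] = r                  # assumed return from deadlock to the queue
    T[m, m] = 1.0 - r            # otherwise remain in deadlock
    return T

T = build_transition_matrix(m=4, p=0.5, s=0.2, q=0.2, d=0.1, r=0.6)
```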
[0032] In another aspect, a Markov chain is a stochastic model that describes a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. It is a process through which predictions can be made regarding future outcomes based solely on its present state, and most importantly, such predictions are just as good as the ones that could be made knowing the process's full history. Conditional on the present state of the system, its past and future states are independent. It is a type of Markov process that has either a discrete state space or a discrete index set (which often represents time), but the precise definition of a Markov chain varies. For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of the nature of time), but a Markov chain in discrete time with either a countable or a continuous state space can also be defined (thus regardless of the state space).
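As a small worked illustration of the Markov property (this example and its numbers are not taken from the disclosure), the distribution over states after n quanta is obtained from the current distribution and the one-step transition matrix alone, with no reference to the earlier history of the chain:

```python
# Two-state example chain: the n-step distribution follows from the present
# distribution and repeated application of the one-step matrix T.
import numpy as np

T = np.array([[0.7, 0.3],
              [0.4, 0.6]])
start = np.array([1.0, 0.0])                       # chain begins in state 0
after_10 = start @ np.linalg.matrix_power(T, 10)   # distribution after 10 steps
print(after_10)                                    # approaches the stationary distribution
```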
[0033] In yet another aspect, in an operating system, a large number of processes arrive at the scheduler, whose role is to manage the processing of these jobs. All the existing methods of managing the process schedule of a CPU have some advantages and disadvantages over each other. A unified study of scheduling schemes motivates the design of a general class of scheduling schemes so that the members possess the common properties of the class and can be mutually compared. A general class of scheduling schemes can be designed and examined through a Markov chain model in order to perform a comparative analysis of the performance of the member scheduling schemes.
[0034] In another aspect, the CPU scheduler model (108) is created by representing the performance processes, the scheduler state, the deadlock state, and the priorities of the state transitions in a Markov chain model diagram. The scheduler (108) starts from any process and then moves to any other process in the queue based on priorities. The scheduler (108) provides a quantum to each process, and a random trial decides the next quantum of time. To find a good scheduling strategy for maximizing the overall performance of a system with multiple tasks with different Markov chain constraints, the overall system performance needs to be quantified. Since each task is associated with a Markov chain constraint, it is natural for each task to express its performance as a function of the respective dropout process as well as the average dropout rate.
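One common way to quantify such long-run behaviour, sketched below under the same assumptions as the earlier transition-matrix example, is to compute the stationary distribution pi of the scheduler chain (pi = pi T with the entries of pi summing to 1) and read off figures such as the long-run share of quanta spent in deadlock versus serving each process; the metric names used here are illustrative rather than terminology from the disclosure.

```python
# Solving the scheduler chain for its stationary distribution.
import numpy as np

def stationary_distribution(T):
    n = T.shape[0]
    # Solve pi (T - I) = 0 together with the normalisation constraint sum(pi) = 1.
    A = np.vstack([T.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

pi = stationary_distribution(T)      # T from the earlier round-robin sketch
deadlock_fraction = pi[-1]           # long-run fraction of quanta in deadlock
per_process_share = pi[:-1]          # long-run CPU share seen by each process
```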
[0035] In yet another aspect, CPUs may be divided according to different allocation mechanisms so as to obtain different CPU scheduling domains. A terminal device allocates the application program to a proper CPU scheduling (110) domain according to the stored performance type information of the application program, so that a CPU in the CPU scheduling domain to which the application program is allocated runs the application program. Optionally, the CPUs may be divided into a processor cluster with high power consumption and a processor cluster with low power consumption according to the power consumption performance of the CPUs; the CPU scheduling (110) domains are then the processor cluster with high power consumption and the processor cluster with low power consumption. In the Linux operating system, a CPU scheduling (110) domain is referred to as a CPU set, where the CPU set is in one-to-one correspondence with a system resource control group (cgroup). Therefore, a cgroup may be allocated to the application program according to the performance type information of the application program. There is a mechanism in Linux that controls, by using a management mode of a tree structure, the use of system physical resources by one or more processes, and the cgroup mainly includes a method for allocating resources such as a CPU, a memory, a disk input/output (disk I/O), and a network class to processes in the system. The CPU set is a processor cluster that includes at least one CPU.
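A minimal illustration of this placement idea on Linux is sketched below. It uses os.sched_setaffinity, which restricts a process to a chosen set of CPUs, as a simple stand-in for assigning the process to a high-power or low-power cluster; the CPU numbers of the two clusters are assumptions for the example, and a full cpuset/cgroup implementation would instead operate on the cgroup filesystem.

```python
# Pin a process to an assumed low-power or high-power CPU cluster (Linux only).
import os

LOW_POWER_CPUS = {0, 1, 2, 3}    # assumed efficiency cores
HIGH_POWER_CPUS = {4, 5, 6, 7}   # assumed performance cores

def place_in_domain(pid, performance_type):
    cpus = HIGH_POWER_CPUS if performance_type == "high" else LOW_POWER_CPUS
    os.sched_setaffinity(pid, cpus)   # restrict the process to that cluster

place_in_domain(os.getpid(), "low")   # place the current process in the low-power cluster
```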
[0036] While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.
[0037] Thus, the scope of the present disclosure is defined by the appended claims and includes both combinations and sub-combinations of the various features described hereinabove as well as variations and modifications thereof, which would occur to persons skilled in the art upon reading the foregoing description.

Claims (5)

We Claim:
1. A method (100) for CPU scheduling using Markov chain modeling for CPU performance evaluation, wherein said method (100) comprises:
receiving performance information (102) from the processing unit of the CPU;
receiving running process information (104) for the processes run by the connected terminal devices;
determining performance information (106) according to the running process information received (104);
scheduling the CPU (108) based on Markov chain modeling;
sending the performance information (110) to the connected terminal devices for performance evaluation; and
evaluating the CPU performance (112) by solving the Markov chain model with the priorities as input.
2. The method (100) for CPU scheduling using Markov chain modeling for CPU performance evaluation as claimed in claim 1, wherein the CPU scheduler model (108) is created by representing performance processes, scheduler state, deadlock state, and the priorities of the state transition in a Markov chain model diagram.
3. The method (100) for CPU scheduling using Markov chain modeling for CPU performance evaluation as claimed in claim 1, wherein the scheduler (108) starts from any process and then moves to any other process in the queue based on priorities.
4. The method (100) for CPU scheduling using Markov chain modeling for CPU performance evaluation as claimed in claim 1, wherein the scheduler (108) provides a quantum to each process, and a random trial decides the next quantum of time.
5. The method (100) for CPU scheduling using Markov chain modeling for CPU performance evaluation, as claimed in claim 1, wherein the vacancy in the process queue is filled by a process in a waiting queue created outside the processing unit.
Fig. 1 Flowchart of a method for CPU scheduling using Markov chain modeling for CPU performance evaluation.
AU2021106510A 2021-08-13 2021-08-23 A method of cpu scheduling performance analysis using markov chain modeling. Ceased AU2021106510A4 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202121036646 2021-08-13
IN202121036646 2021-08-13

Publications (1)

Publication Number Publication Date
AU2021106510A4 true AU2021106510A4 (en) 2021-11-25

Family

ID=78610585

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2021106510A Ceased AU2021106510A4 (en) 2021-08-13 2021-08-23 A method of cpu scheduling performance analysis using markov chain modeling.

Country Status (1)

Country Link
AU (1) AU2021106510A4 (en)

Similar Documents

Publication Publication Date Title
Witt et al. Predictive performance modeling for distributed batch processing using black box monitoring and machine learning
US8959515B2 (en) Task scheduling policy for limited memory systems
US20190303200A1 (en) Dynamic Storage-Aware Job Scheduling
US9367357B2 (en) Simultaneous scheduling of processes and offloading computation on many-core coprocessors
CN107038069B (en) Dynamic label matching DLMS scheduling method under Hadoop platform
WO2020219114A1 (en) Commitment-aware scheduler
WO2016054162A1 (en) Job scheduling using expected server performance information
US20170083367A1 (en) System and method for resource management
CN113454614A (en) System and method for resource partitioning in distributed computing
CN111767134A (en) Multitask dynamic resource scheduling method
Murthy et al. Resource management in real-time systems and networks
Alaei et al. RePro-Active: a reactive–proactive scheduling method based on simulation in cloud computing
CN109992418B (en) SLA-aware resource priority scheduling method and system for multi-tenant big data platform
Wadhwa et al. Optimized task scheduling and preemption for distributed resource management in fog-assisted IoT environment
CN111190712A (en) Task scheduling method, device, equipment and medium
CA3141319C (en) Reducing cache interference based on forecasted processor use
CN116010064A (en) DAG job scheduling and cluster management method, system and device
CN113391911B (en) Dynamic scheduling method, device and equipment for big data resources
CN109634714B (en) Intelligent scheduling method and device
Ghazali et al. A classification of Hadoop job schedulers based on performance optimization approaches
AU2021106510A4 (en) A method of cpu scheduling performance analysis using markov chain modeling.
CN116244073A (en) Resource-aware task allocation method for hybrid key partition real-time operating system
Zouaoui et al. CPU scheduling algorithms: Case & comparative study
Ghazy et al. A New Round Robin Algorithm for Task Scheduling in Real-time System.
Ghanavatinasab et al. SAF: simulated annealing fair scheduling for Hadoop Yarn clusters

Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry