EP1141827A1 - Method and apparatus for providing operating system scheduling operations - Google Patents

Method and apparatus for providing operating system scheduling operations

Info

Publication number
EP1141827A1
Authority
EP
European Patent Office
Prior art keywords
thread
threads
list
traversal
computer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP99966433A
Other languages
German (de)
French (fr)
Inventor
James A. Houha
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PowerTV Inc
Original Assignee
PowerTV Inc
Application filed by PowerTV Inc

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues


Abstract

A computer-implemented method and apparatus for scheduling threads contained in a thread list. At least two of the threads have a priority indicative of scheduled executions for the two threads. The present invention performs the following steps during a traversal through the thread list: modifying the scheduled execution of at least two threads which have equivalent priorities; performing deadline processing for at least one of the threads; and checking for a predetermined error condition of at least one of the threads.

Description

METHOD AND APPARATUS FOR PROVIDING OPERATING SYSTEM SCHEDULING OPERATIONS
BACKGROUND AND SUMMARY OF THE INVENTION
The present invention relates generally to television set-top boxes and more particularly to computer-implemented kernel operations in a set-top box television environment.
The set-top box television environment is a real-time environment which requires operations to be performed quickly. In the set-top box environment, computer-implemented threads execute upon a central processing unit (CPU) in order to perform many functions for the set-top box, such as, but not limited to, providing video and audio to a user. In such an environment, threads should be scheduled so as to optimize the time required for their functions to be achieved.
The present invention is directed to this need and other needs of a set-top box environment. In accordance with the teachings of the present invention, a computer-implemented method and apparatus are provided for scheduling threads contained in a thread list. At least two of the threads have a priority indicative of scheduled executions for the two threads. The present invention performs the following steps during a traversal through the thread list: modifying the scheduled execution of at least two threads which have equivalent priorities; performing deadline processing for at least one of the threads; and checking for a predetermined error condition of at least one of the threads.
BRIEF DESCRIPTION OF THE DRAWINGS
Additional advantages and features of the present invention will become apparent from the subsequent description and the appended claims taken in conjunction with the accompanying drawings, wherein the same reference numerals indicate the same components:
Figure 1 is a perspective view of a set-top box system;
Figure 2 is a block diagram depicting the various exemplary programs operating within the set-top box system of Figure 1;
Figure 3 is a block diagram depicting a novel three tiered scheduling system of the present invention;
Figure 4 is a block diagram depicting processing of a thread list in a single pass; and
Figure 5 is a flow chart depicting the operational steps for a thread in making a single pass through a thread list.
DESCRIPTION OF THE PREFERRED EMBODIMENT
Figure 1 shows an exemplary set-top box 20 connected to television 24 via cable 28. Cable 32 provides set-top box 20 with broadcast analog, broadcast digital, and interactive digital transmissions. Set-top box 20 contains application and operating system software in order to provide an advanced interactive television environment. The operating system is utilized by set-top box 20 to, for example, provide interfaces between the application software and the various devices used by the set-top box 20.
Figure 2 depicts the software components of a set-top box. The multitasking operating system of the set-top box addresses the high-performance demands of media- centric, real-time applications being delivered through a set-top box. The operating system provides an open, scalable platform for developing and delivering multimedia content to consumers across broadcast and client/server networks. The software architecture for the operating system includes layers of interconnected modules designed to minimize redundancy and optimize multimedia processing in an interactive, network setting. The layers of the architecture include a hardware abstraction layer 40, a core layer 42, an application support layer 44 and an application layer 46.
Each hardware device associated with the multimedia client is abstracted into a software device module that resides in the hardware abstraction layer 40. Each of these device modules is responsible for media delivery and manipulation between a hardware device and the remainder of the operating system. Application program interfaces (APIs) supported by each device module separate the hardware dependencies of each device from the portable operating system facilities, and thereby mask the idiosyncrasies of different hardware implementations.
A kernel 48 and memory manager 50 residing in the core layer 42 provide the base functionality needed to support an application. A fully preemptive, multithreaded, multitasking kernel is designed to optimize both set-top memory footprint and processing speed. Since the operating system resides on consumer units, it has been designed to exist in a ROM-based system with a very small footprint (e.g., 1 megabyte). In addition, the kernel has also been created to take advantage of 32-bit Reduced Instruction Set Computer (RISC) processors which enable high-speed transmission, manipulation and display of complex media types.
Within the field of the present invention, a process is an entity which owns system resources assigned to the process. A thread is a unit of work or string of instructions to be executed. A process can have more than one thread and in such a case is considered "multithreaded."
Memory manager 50 provides an efficient memory allocation scheme, enabling optimal performance from limited set-top memory resources. The core layer 42 provides an integrated event system 52 and a standard set of ANSI C utility functions. Built on top of the core layer 42 is an application support layer 44. This set of support modules provides higher-level processing functions and application services.
Application management, session management, and tuner management are a few examples of these services. At the highest application level 46, at least one application, referred to as a resident application, is typically executing at all times on a set-top box. The application level 46 also provides the necessary capabilities for authoring applications and for managing resources (e.g., the tuner) between the resident application and other background applications residing on the set-top box.
Figure 3 depicts the three tiered scheduling operations system which the operating system scheduler 100 utilizes. At bottom tier 102, a round robin scheduler 104 is utilized for threads that share the same priority level. If, for example, there are three threads at priority five and all threads are ready to run, then each thread in turn is given a short time slice. This "soft real time" method has the advantage that it allows separately authored multi-threaded applications to execute on the same system with no knowledge of each other.
Middle tier 106 is a preemptive scheduler 108. In a preemptive preference scheduler 108, a priority level is not allowed to be violated. For example, if a thread of priority three and a thread of priority four are both ready to execute, the thread of priority three will execute. This scheduling approach has the benefit that it favors the deadline of the most critical ready thread. Preemptive preference scheduling is commonly used in "hard" real time systems where there may be very serious costs involved with missing critical deadlines.
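The preemptive preference rule can be sketched as follows. This is an illustrative sketch only; the thread names and dictionary fields (`priority`, `ready`) are assumptions chosen for the example, not structures from the patent.

```python
# Sketch of preemptive preference scheduling: among all ready threads, the
# one with the most critical (numerically lowest) priority always runs.
# Thread names and fields are hypothetical, not taken from the patent.

def pick_ready_thread(threads):
    """Return the ready thread with the most critical (lowest) priority."""
    ready = [t for t in threads if t["ready"]]
    return min(ready, key=lambda t: t["priority"]) if ready else None

threads = [
    {"name": "video", "priority": 4, "ready": True},
    {"name": "audio", "priority": 3, "ready": True},
    {"name": "logger", "priority": 9, "ready": False},
]
# As in the example above, the priority-three thread beats priority four.
```

The priority-three thread is selected even though the priority-four thread is also ready, matching the rule that a priority level is never violated.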
Top tier 110 permits applications and drivers to implement custom scheduling algorithms 112 by using calls that allow a thread to change its own or another's priority level. The ability to change priorities includes, but is not limited to, the following approaches: priority inheritance and priority inversion. In priority inheritance, a thread inherits the priority of a more critical thread that may be waiting for a resource held by a less critical thread. Another type of priority inheritance allows a monitor thread to provide services to other threads while inheriting the client threads' priority on a per request basis.
In priority inversion, the priority levels are briefly inverted (that is, high becomes low priority) to allow a less critical thread to run long enough to release a resource requested by a more critical thread. This is typically triggered when a high priority thread detects a resource it needs that is being held by a low priority thread that is not being provided any run time because of a third thread with a medium priority.
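A minimal sketch of the priority-inheritance idea described above, assuming the patent's convention that numerically lower means more critical; the function name and thread representation are hypothetical:

```python
# Sketch of priority inheritance: the holder of a contested resource
# temporarily takes on the priority of the most critical waiter, so a
# medium-priority thread cannot starve it of run time. Lower numbers are
# more critical, as in the patent; field names are illustrative assumptions.

def inherit_priority(holder, waiters):
    """Boost the resource holder to the most critical waiting priority."""
    most_critical = min(w["priority"] for w in waiters)
    holder["priority"] = min(holder["priority"], most_critical)
    return holder

low = {"name": "low", "priority": 20}    # holds the resource
high = {"name": "high", "priority": 3}   # blocked waiting on the resource
inherit_priority(low, [high])
# The holder now runs at priority 3, long enough to release the resource.
```

Once the resource is released, the holder's original priority would be restored; that bookkeeping is omitted from the sketch.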
The present invention includes other custom scheduling methods that may be used, for example, rate-monotonic scheduling, least slack-time scheduling and earliest deadline scheduling.
Figure 4 depicts the components and the operations associated with traversing a thread list 130 within one pass 134. The present invention, preferably via a thread list traverser 138, traverses thread list 130 within one pass in order to perform multiple scheduling operations within that one pass 134: round robin scheduling 104, deadline processing 138, and error checking 142.
Within preemptive kernel 146, threads of different priority as well as threads of equal priority each attempt to obtain a time slice of processing. Preferably, the present invention utilizes a fully preemptive kernel, which means, for example, that a user can press a key at any time and, no matter what is being executed on the set-top box, a thread servicing the key-press event will interrupt whatever thread is currently executing so that it may run.
Round robin scheduling 104 is efficiently performed within one pass 134 by rotating the priorities of threads which have equal priorities. Priority rotating 150 preferably utilizes bubble sorting 154 in order to rotate threads of equal priority within one pass 134. Within the present invention, the term "one pass" preferably signifies that the top of the list is not revisited and that traversals back to prior threads within the list are minimized.
Priority rotating 150 internally reestablishes the threads of equal priority so that they are rotated. For example, four threads may all operate at priority level nine. Priority rotating approach 150 internally establishes the threads' priority levels at 9.1, 9.2, 9.3, and 9.4. However, for each pass, the priority levels are rotated such that priority levels 9.1, 9.2, 9.3, and 9.4 are switched around via bubble sorting 154. The rotation is effected such that the thread which is on the bottom (that is, priority level 9.4) becomes priority level 9.1. The thread which was at priority level 9.1 becomes priority level 9.2, and so on for the two remaining threads in this example.
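The 9.1-to-9.4 rotation just described can be modeled as a simple rotation of the run of equal-priority threads, with the bottom thread moving to the top. This is an illustrative model of the effect, not the patent's in-kernel bubble-sort mechanism:

```python
# Sketch of priority rotating 150: within a run of equal-priority threads,
# the bottom thread (sub-priority 9.4) moves to the top (9.1) each pass,
# and the remaining threads each shift down one sub-priority slot.

def rotate_equal_priority(run):
    """Rotate a run of equal-priority threads: the last becomes the first."""
    return run[-1:] + run[:-1]

run = ["T1", "T2", "T3", "T4"]    # sub-priorities 9.1, 9.2, 9.3, 9.4
run = rotate_equal_priority(run)  # T4 now holds sub-priority 9.1
```

Repeating the rotation on each pass gives every thread in the run a turn at the head position, which is the round-robin effect.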
If the top thread has nothing to be processed and the thread in second position does, the thread in second position will execute until the thread in the top position has something to do within its own time slice.
Within one pass 134, deadline processing 138 performs time-out checking 158 to check for threads 162 whose time-out 166 has expired. Threads 162 include in their data structure a time-out value 166 which indicates how long a thread should wait in thread list 130 for an event. Thread list traverser 138 checks within one pass 134 whether a thread's time-out value has been reached. If time-out value 166 has been reached, then the thread list traverser "wakes up" the thread so that the thread can do the proper processing for its time-out.
The present invention allows events to be scheduled to occur at a specified future time. This future time may be specified down to a very fine unit of time, such as, but not limited to, on the order of a few hundred billionths of a second. This typically is used for time critical applications.
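The time-out check amounts to a single comparison per thread during the pass. A hedged sketch follows; the field names (`state`, `timeout`) and the thread representation are assumptions for illustration:

```python
# Sketch of time-out checking 158: during the single pass, each waiting
# thread's stored time-out value is compared against the current time, and
# any expired thread is "woken up" (made ready). One compare per thread.

def check_timeouts(thread_list, now):
    """Wake every waiting thread whose time-out has expired."""
    woken = []
    for t in thread_list:
        if t["state"] == "waiting" and now >= t["timeout"]:
            t["state"] = "ready"          # wake the thread for its time-out work
            woken.append(t["name"])
    return woken

threads = [
    {"name": "net", "state": "waiting", "timeout": 50},
    {"name": "ui", "state": "waiting", "timeout": 150},
    {"name": "dec", "state": "ready", "timeout": 0},
]
woken = check_timeouts(threads, now=100)  # only "net" has expired
```

Only the thread whose time-out value has been reached is woken; threads that are already ready are never examined for expiry.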
However, in other situations, such resolution is finer than what is actually needed. Often, a resolution measured in hundredths of a second is sufficient. Accordingly, the present invention allows each thread to have either a coarse time-out 180 or a fine time-out 184. One non-limiting advantage of a coarse time-out is that it is implemented very efficiently and typically uses fewer CPU (central processing unit) cycles to support. This efficient implementation takes advantage of the scheduler being committed to making a single pass 134 through thread list 130 at preferably every scheduler interrupt, which preferably occurs every 10 milliseconds. While thread list 130 is being traversed, time-out checking 158 can be performed in order to check for an expired time-out 166 with a single compare operation.
Moreover, within an exemplary situation, there are typically around twenty-five operating system threads running on a set-top box, in addition to application threads. At any given time, most of these threads are in an event-wait mode. The use of coarse time-outs in an event-wait situation has the advantage of keeping the timer queue 129 (which handles events) from becoming too "crowded."
Scheduler 100 preferably makes a single pass 134 through thread list 130 once every 10 milliseconds. While this pass through the thread list is being performed, scheduler 100 also performs error checking 142 for such error conditions as, but not limited to, stack overflow and stack underflow.
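One way the per-thread stack check might look is sketched below, assuming a downward-growing stack whose saved stack pointer must remain between recorded base and limit addresses. All names and the bounds convention are assumptions for illustration; the patent does not specify the mechanism:

```python
# Sketch of error checking 142: a thread's saved stack pointer is compared
# against its stack bounds during the pass. Assumes a downward-growing
# stack; field names are illustrative.

def check_stack(thread):
    """Return "overflow", "underflow", or "ok" for a thread's stack state."""
    if thread["sp"] < thread["stack_base"]:
        return "overflow"     # grew past the low end of its region
    if thread["sp"] > thread["stack_limit"]:
        return "underflow"    # popped past the high end of its region
    return "ok"

t = {"name": "demo", "sp": 0x0FF0, "stack_base": 0x1000, "stack_limit": 0x2000}
status = check_stack(t)       # sp below base: stack overflow
```

Because the check is two comparisons per thread, folding it into the existing pass adds essentially no cost.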
Threads 162 each have a priority level 190 and may preferably assume a priority level in the range of 1-31. There may be an arbitrary number of threads, and multiple threads can share the same priority level. In the preferred embodiment, the most critical priority level is 1, while the least critical priority level is 31.
The present invention preferably includes a certain number of predetermined threads 194 where one or more threads always remain at the top of thread list 130 while one or more threads always remain at the bottom of thread list 130. In this preferred embodiment, the present invention utilizes a system thread 198 which is at the highest priority and an idle thread 202 which is at the lowest priority. System thread 198 uses priority level 0 while idle thread 202 uses priority level 32; these priority levels are outside of the range allowed for drivers and applications, which are preferably confined to the inclusive range from priority level 1 to priority level 31. It is to be understood that the present invention is not limited to only a priority scheme of these disclosed levels but includes any priority scheme which is suitable for the situation at hand.
Since thread list 130 is kept sorted by thread priority, the system thread 198 is always the first entry and idle thread 202 is always the last entry. This means that scheduler 100, which manages thread list 130, is never concerned with the special case of adding a new thread to the front or end of the list. Moreover, scheduler 100 preferably never has to check for an empty list.
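The sentinel arrangement described above can be sketched as follows. This is a minimal illustration under the assumption of a simple sorted list; the names `new_thread_list` and `insert_thread` are hypothetical, not from the patent.

```python
# Illustrative sketch: a system thread (priority 0) and an idle thread
# (priority 32) bracket the list, so an insert of a driver/application thread
# (priority 1-31) never touches the front or the end, and the list is never empty.
SYSTEM_PRIORITY, IDLE_PRIORITY = 0, 32

def new_thread_list():
    # The two predetermined threads are always present.
    return [("system", SYSTEM_PRIORITY), ("idle", IDLE_PRIORITY)]

def insert_thread(thread_list, name, priority):
    assert SYSTEM_PRIORITY < priority < IDLE_PRIORITY, "drivers/apps use 1-31"
    # Walk until a numerically greater (less critical) priority is found; the
    # idle sentinel guarantees the loop stops before running off the end.
    i = 1  # never index 0: the system sentinel always stays first
    while thread_list[i][1] <= priority:
        i += 1
    thread_list.insert(i, (name, priority))

tl = new_thread_list()
insert_thread(tl, "video", 5)
insert_thread(tl, "audio", 3)
# tl is now [("system", 0), ("audio", 3), ("video", 5), ("idle", 32)]
```

The insert loop has no empty-list test and no end-of-list test, mirroring the simplification the sentinels buy the scheduler.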
The present invention also includes a scheduler thread 206 having the second highest priority -- that is, second only to the priority level of system thread 198. Due to the use of predetermined threads, the present invention does not have to check for a null pointer at the end of the list, since encountering the idle thread is the sole indication of when to stop checking the list.
The present invention also schedules an event to happen at time "forever" which is the most distant time that can be expressed in the preferred embodiment. This event can never occur and will always be present at the end of the time queue (which is kept sorted by event time). This means the present invention is never concerned about dealing with an empty timer queue or adding an event to the end of the list.
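The "forever" sentinel event can be sketched as follows. The `TimerQueue` class and `FOREVER` value are illustrative assumptions standing in for the most distant expressible time; they are not the patented implementation.

```python
# Illustrative sketch: a sentinel event at the most distant representable time
# keeps the timer queue (sorted by event time) non-empty, so neither an
# empty-queue check nor an append-at-end case ever arises.
import bisect

FOREVER = float("inf")  # stand-in for the largest expressible time

class TimerQueue:
    def __init__(self):
        # The sentinel event is always present and can never fire.
        self.events = [(FOREVER, "forever-sentinel")]

    def schedule(self, when, name):
        # Insertion always lands before the sentinel: no end-of-list case.
        bisect.insort(self.events, (when, name))

    def pop_due(self, now):
        due = []
        # The sentinel's infinite time stops this loop; no emptiness check needed.
        while self.events[0][0] <= now:
            due.append(self.events.pop(0))
        return due

q = TimerQueue()
q.schedule(5, "a")
q.schedule(2, "b")
```

Popping due events needs only a time compare against the head of the queue, never a length check.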
Figure 5 is a flow chart depicting exemplary operational steps performed by the scheduler in making a pass through the thread list. Start indication block 250 indicates that iteration block 254 is executed first. At iteration block 254, a single pass is made through the thread list. At iteration block 258, certain steps for each thread are performed.
Process block 262 compares the priority level of the currently selected thread with a preceding thread's priority. If the threads do not have the same priority as determined by decision block 266, then process block 274 is executed. However, if the threads do have the same priority, then the round robin scheduling approach of the present invention is utilized at process block 270 wherein the threads are preferably bubble sorted so that the execution order is inverted for threads with the same priority. Processing continues at process block 274.
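The equal-priority round robin step described above can be sketched as a single bubble-sort-style pass. This is an illustrative reading, with hypothetical names; swapping each adjacent equal-priority pair during the pass rotates the front member of an equal-priority group toward the back, so peers take turns.

```python
# Illustrative sketch (hypothetical names): one pass over a priority-sorted
# list of (name, priority) tuples; adjacent threads with equal priority are
# swapped, inverting their execution order for the next pass.
def round_robin_pass(thread_list):
    for i in range(1, len(thread_list)):
        if thread_list[i - 1][1] == thread_list[i][1]:   # same priority?
            # Bubble swap: the thread that just ran first moves behind its peer.
            thread_list[i - 1], thread_list[i] = thread_list[i], thread_list[i - 1]

tl = [("system", 0), ("a", 5), ("b", 5), ("idle", 32)]
round_robin_pass(tl)
# "a" and "b" have traded places: [("system", 0), ("b", 5), ("a", 5), ("idle", 32)]
```

With three or more peers, each pass rotates the group by one position, again yielding round robin behavior over successive passes.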
At process block 274, a check is performed to determine if the thread which is currently being analyzed by the scheduler has timed-out. If it has, then the thread is woken up so that the thread can perform its time-out operation. Process block 278 is executed wherein error checking is performed.
Decision block 282 inquires whether the idle thread, which has the lowest priority, has been encountered. If it has, then process block 264 is executed, wherein a predetermined time is allowed to expire before iteration block 254 is executed again. If decision block 282 does not find that the idle thread has been encountered, then iteration block 258 is executed.
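The Figure 5 steps above can be gathered into one illustrative pass. The data structures and names here are assumptions for the sketch, not the patented implementation; the block numbers in the comments refer to the Figure 5 description.

```python
# Illustrative single scheduler pass (hypothetical structures): compare each
# thread's priority with its predecessor, swap equals for round robin, wake
# timed-out threads, run error checking, and stop at the idle thread.
from dataclasses import dataclass
from typing import Optional

IDLE_PRIORITY = 32

@dataclass
class Thread:
    name: str
    priority: int
    deadline: Optional[int] = None  # coarse deadline in 10 ms ticks
    awake: bool = True
    stack_ok: bool = True

def scheduler_pass(threads, now_ticks, report_error):
    i = 1  # the system sentinel at index 0 has no predecessor to compare against
    while True:
        t = threads[i]
        if t.priority == threads[i - 1].priority:            # blocks 262/266/270
            threads[i - 1], threads[i] = t, threads[i - 1]   # round robin swap
        if t.deadline is not None and now_ticks >= t.deadline:  # block 274
            t.awake = True           # wake so the thread can handle its time-out
        if not t.stack_ok:                                   # block 278
            report_error(t.name)
        if t.priority == IDLE_PRIORITY:                      # block 282
            break   # wait ~10 ms, then the next pass begins
        i += 1

errors = []
ts = [Thread("system", 0),
      Thread("a", 5, deadline=3, awake=False),
      Thread("b", 5),
      Thread("idle", IDLE_PRIORITY, stack_ok=False)]
scheduler_pass(ts, now_ticks=5, report_error=errors.append)
```

The idle sentinel both terminates the loop and removes any need for a null or end-of-list test, matching the description above.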
The embodiments which have been set forth above were for the purpose of illustration and were not intended to limit the invention. It will be appreciated by those skilled in the art that various changes and modifications may be made to the embodiments discussed in the specification without departing from the spirit and scope of the invention as defined by the appended claims. For example, the present invention includes that the processing depicted in Figure 5 also includes the use of the other tiered scheduling approaches of Figure 3, and not only the use of a Round Robin scheduling approach.

Claims

IT IS CLAIMED:
1. A computer-implemented method for scheduling threads contained in a thread list, at least two of the threads having a priority indicative of scheduled executions for the two threads, comprising the steps of: performing the following steps during a traversal through said thread list:
(a) Modifying the scheduled execution of at least two threads which have equivalent priorities;
(b) performing deadline processing for at least one of said threads; and
(c) checking for a predetermined error condition of at least one of said threads.
2. The computer-implemented method of Claim 1 further comprising the step of: performing during said traversal a bubble sort in order to modify the scheduled execution of at least two threads which have equivalent priorities.
3. The computer-implemented method of Claim 1 wherein at least one of said threads includes a time-out value, said method further comprising the step of: checking during said traversal said time-out value in order to determine if a deadline associated with said thread has occurred.
4. The computer-implemented method of Claim 1 wherein at least one of said threads includes a coarse time-out value, said method further comprising the step of: checking during said traversal said coarse time-out value in order to determine if a deadline associated with said thread has occurred.
5. The computer-implemented method of Claim 1 wherein a first thread includes a coarse time-out value and wherein a second thread includes a fine time-out value, said method further comprising the steps of: checking during said traversal said coarse time-out value of said first thread in order to determine if a deadline associated with said first thread has occurred; and checking during said traversal said fine time-out value of said second thread in order to determine if a deadline associated with said second thread has occurred.
6. The computer-implemented method of Claim 1 further comprising the step of: maintaining a thread within said thread list which is scheduled to execute first within said thread list during each traversal of said thread list.
7. The computer-implemented method of Claim 1 further comprising the step of: maintaining a thread within said thread list which is scheduled to execute last within said thread list during each traversal of said thread list.
8. The computer-implemented method of Claim 7 further comprising the steps of: performing another traversal of said thread list after encountering said thread which is to execute last within said thread list.
9. The computer-implemented method of Claim 7 further comprising the step of:
maintaining a thread within said thread list which is scheduled to execute substantially first within said thread list during each traversal of said thread list.
10. The computer-implemented method of Claim 1 further comprising the step of: setting the priority of a first thread to be the highest priority within said thread list so that said first thread is scheduled to execute substantially first within said thread list during each traversal of said thread list.
11. The computer-implemented method of Claim 10 further comprising the step of: setting the priority of a second thread to be the lowest priority within said thread list so that said second thread is scheduled to execute substantially last within said thread list during each traversal of said thread list.
12. The computer-implemented method of Claim 11 further comprising the step of: setting the priority of a third thread to be the second highest priority within said thread list so that said third thread is scheduled to execute substantially second within said thread list during each traversal of said thread list.
13. The computer-implemented method of Claim 12 further comprising the steps of: using said first thread to perform system operations; using said second thread as an idling thread; and using said third thread to perform scheduling operations.
14. The computer-implemented method of Claim 1 further comprising the step of: checking for a stack error condition associated with one of said threads during said traversal.
15. A computer-implemented apparatus for scheduling threads, at least two of the threads having a priority indicative of scheduled executions for the two threads, comprising: a thread list for containing the threads; a thread list traverser for performing the following operations during a single traversal of said thread list:
(a) modifying the scheduled execution of at least two threads which have equivalent priorities; (b) performing deadline processing for at least one of said threads; and
(c) checking for a predetermined error condition of at least one of said threads.
16. The computer-implemented apparatus of Claim 15 wherein said thread list traverser performs during said traversal a bubble sort in order to modify the scheduled execution of at least two threads which have equivalent priorities.
17. The computer-implemented apparatus of Claim 15 wherein at least one of said threads includes a time-out value, said thread list traverser checking during said traversal said time-out value in order to determine if a deadline associated with said thread has occurred.
18. The computer-implemented apparatus of Claim 15 wherein at least one of said threads includes a coarse time-out value, said thread list traverser checking during said traversal said coarse time-out value in order to determine if a deadline associated with said thread has occurred.
19. The computer-implemented apparatus of Claim 15 wherein a first thread includes a coarse time-out value and wherein a second thread includes a fine timeout value, said thread list traverser checking during said traversal said coarse time-out value of said first thread in order to determine if a deadline associated with said first thread has occurred, said thread list traverser checking during said traversal said fine time-out value of said second thread in order to determine if a deadline associated with said second thread has occurred.
20. The computer-implemented apparatus of Claim 15 further comprising: a first thread for substantially permanently residing in said thread list and for executing first within said thread list; and a second thread for substantially permanently residing in said thread list and for executing last within said thread list, said thread list traverser performing another traversal of said thread list after encountering said second thread during said traversal.
EP99966433A 1998-12-23 1999-12-17 Method and apparatus for providing operating system scheduling operations Withdrawn EP1141827A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US21982598A 1998-12-23 1998-12-23
US219825 1998-12-23
PCT/US1999/030247 WO2000039677A1 (en) 1998-12-23 1999-12-17 Method and apparatus for providing operating system scheduling operations

Publications (1)

Publication Number Publication Date
EP1141827A1 true EP1141827A1 (en) 2001-10-10

Family

ID=22820936

Family Applications (1)

Application Number Title Priority Date Filing Date
EP99966433A Withdrawn EP1141827A1 (en) 1998-12-23 1999-12-17 Method and apparatus for providing operating system scheduling operations

Country Status (3)

Country Link
EP (1) EP1141827A1 (en)
KR (1) KR20010103719A (en)
WO (1) WO2000039677A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101686082B1 (en) 2010-07-22 2016-12-28 삼성전자주식회사 Apparatus and method for thread scheduling and lock acquisition order control based on deterministic progress index
US9354926B2 (en) 2011-03-22 2016-05-31 International Business Machines Corporation Processor management via thread status
US9652027B2 (en) * 2015-04-01 2017-05-16 Microsoft Technology Licensing, Llc Thread scheduling based on performance state and idle state of processing units

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA1322422C (en) * 1988-07-18 1993-09-21 James P. Emmond Single-keyed indexed file for tp queue repository
JPH0954699A (en) * 1995-08-11 1997-02-25 Fujitsu Ltd Process scheduler of computer
US5812844A (en) * 1995-12-07 1998-09-22 Microsoft Corporation Method and system for scheduling the execution of threads using optional time-specific scheduling constraints

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO0039677A1 *

Also Published As

Publication number Publication date
KR20010103719A (en) 2001-11-23
WO2000039677A1 (en) 2000-07-06


Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20010618

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Withdrawal date: 20020816

RBV Designated contracting states (corrected)

Designated state(s): DE FR GB IT