WO1997024671A1 - Event filter for a computer operating system in a home communications terminal - Google Patents

Event filter for a computer operating system in a home communications terminal

Info

Publication number
WO1997024671A1
WO1997024671A1 PCT/US1996/020126 US9620126W WO9724671A1 WO 1997024671 A1 WO1997024671 A1 WO 1997024671A1 US 9620126 W US9620126 W US 9620126W WO 9724671 A1 WO9724671 A1 WO 9724671A1
Authority
WO
WIPO (PCT)
Prior art keywords
event
thread
kernel
queue
descriptor
Prior art date
Application number
PCT/US1996/020126
Other languages
English (en)
Inventor
James A. Houha
Original Assignee
Powertv, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Powertv, Inc. filed Critical Powertv, Inc.
Priority to KR1019980704949A priority Critical patent/KR19990076823A/ko
Priority to AU14246/97A priority patent/AU1424697A/en
Priority to EP96944439A priority patent/EP0880745A4/fr
Publication of WO1997024671A1 publication Critical patent/WO1997024671A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/542Event management; Broadcasting; Multicasting; Notifications
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/52Program synchronisation; Mutual exclusion, e.g. by means of semaphores

Definitions

  • This invention relates generally to real-time operating systems adapted for high performance applications, such as those executing in a home communications terminal (HCT) to provide cable television or other audiovisual capabilities. More particularly, the invention provides a feature which improves the performance of operating systems installed in devices having limited computing resources.
  • HCT home communications terminal
  • HCTs need to evolve to transform today's limited capability television sets into interactive multimedia entertainment and communication systems.
  • One possible approach for increasing the capabilities of HCTs is to port existing operating systems, such as UNIX or the like, to PC-compatible microprocessors in the HCT.
  • the huge memory requirements needed to support such existing operating systems render such an approach prohibitively expensive.
  • memory is a primary cost component of HCTs
  • competitive price pressures mean that the added functions must be provided in a manner which minimizes memory use and maximizes processor performance. Consequently, it has been determined that new operating system features must be developed which provide media-centric high performance features while minimizing memory requirements.
  • One conventional operating system design paradigm which has been determined to generally consume a large amount of memory is the partitioning of thread coordination mechanisms —such as semaphores, timers, exceptions, messages, and so forth — into separate subsystems in the operating system. Each such subsystem conventionally includes different application programming interface (API) conventions, different data structures, and different memory areas used by the kernel to keep track of and to check on them.
  • API application programming interface
  • Another conventional operating system design paradigm which has been determined to cause operating system inefficiency in a real-time operating system is the manner in which events are transferred to threads executing in the system.
  • a conventional kernel would transfer that event to a thread which handles the event, even though the event may be of no interest unless a cursor on the television screen is within a certain window.
  • This results in inefficiency since executing the thread involves a context switch, followed by the thread quickly determining that the event is of no interest because of the cursor location on the screen.
  • Many other examples of such inefficiency stem from the fact that threads in a system cannot "prequalify" events which they are to receive from the kernel.
  • the present invention solves the aforementioned problems by providing an efficient real-time kernel having features tailored to the needs of HCT
  • kernels typically provide separate event subsystems, semaphore subsystems and queue subsystems
  • one aspect of the present invention contemplates replacing such subsystems with a single, integrated event subsystem which provides the functionality of semaphores and other synchronization mechanisms through events on event queues.
  • a single event data structure can be used. This single data structure can be optimized to speed up the kernel. Because the kernel needs to be aware of only two states for each thread (executing the thread, or waiting for an event to deliver to the thread), kernel efficiency can be increased.
  • conventional kernels typically require that the kernel distinguish between various other states, such as waiting for a semaphore, an event, message stream, or an I/O operation, thus resulting in increased complexity and increasing memory requirements in the kernel.
  • Another aspect of the present invention contemplates providing means for each thread to "register" with the kernel to indicate what kind of events (or classes of events) the thread would like to receive.
  • Each thread can also specify a "filter” procedure which, when an event is posted to the system (but before it is delivered to the thread), decides whether the posted event is appropriate for that context or not.
  • This filter may be an interrupt service routine which runs at interrupt time instead of invoking the destination thread, which would require a context switch.
  • HCT home communication terminal
  • FIG. 1 shows one possible configuration for a home communication terminal (HCT) on which an operating system employing the principles of the present invention can be installed.
  • FIG. 2 shows schematically how a kernel event handler 204 constructed in accordance with the present invention can efficiently handle and filter incoming events.
  • FIG. 3 shows steps which may be executed by a kernel event handler to efficiently handle events in an HCT.
  • FIG. 4 shows an example of different threads having registered interest in different types of events.
  • FIG. 5 shows one possible format for an event object in a system employing the principles of the present invention.
  • FIG. 6 shows how two threads can implement a semaphore by using a queue wherein a kernel implements a NextEvent function.
  • FIG. 1 shows a block diagram of a home communication terminal (HCT) in which various principles of the present invention may be practiced.
  • the HCT may include a CPU card 100, graphics card 101, decoder card 102, display panel and key pad 103, main processing board 104, front end 105, tuning section 106, and audio section 107.
  • CPU 100a such as a PowerPC or Motorola 68000 series, with suitable EPROM 100c and RAM 100b.
  • application programs executing on CPU 100a can interact with various peripherals such as a mouse, game controllers, keypads, network interfaces, and the like, as is well known in the art.
  • FIG. 2 shows schematically how a kernel event handler employing various inventive principles can efficiently handle incoming events and prequalify the events to certain threads executing in the HCT.
  • different threads in the system may be interested only in certain types of events, and may wish to ignore other types of events.
  • Examples of events include: a keypress on a keypad attached to the HCT; a mouse movement indication received from a mouse attached to the HCT; a button press on a game controller connected to the HCT; a message received from a headend coupled to the HCT; or a signal indicating that a movie has started.
  • Examples of different threads include a copy of a video game operated by a first player; a copy of a video game operated by a second player; an on-screen programming guide; a movie player; a channel tuning indicator; or a user interface for a customer billing application.
  • one application may correspond to a single thread, or a single application may be partitioned into multiple threads which may be concurrently executed to optimize performance.
  • One of ordinary skill in the art will recognize how various applications may be constructed out of a single or multiple threads in the system.
  • a customer billing application may only be interested in events occurring from the HCT keypad, and only if the customer has first entered a special code.
  • Two video game threads may be interested in all key press events occurring from either of two game controllers coupled to the HCT (thus requiring copies of the same event to be posted to both threads).
  • An on-screen programming guide thread may only be interested in keypress events which occur when the mouse cursor is positioned within a certain predetermined area on the screen, and may wish to ignore all other events (including keypress events when the mouse cursor is not positioned within the area).
  • Each thread may thus wish to register interest in a plurality of different types of events, and may wish to change the registered interests at a later time.
  • FIG. 2 shows an exemplary configuration including a kernel event handler 204 which can accomplish the above objectives.
  • As shown in FIG. 2, two threads A and B can each register interest with the kernel in two different events using the function pk_RegisterInterest (see Appendix 1), a kernel-provided programming interface.
  • a corresponding function pk_RemoveInterest allows a thread to remove a previously registered event interest (see Appendix 1).
  • the kernel constructs an event interest list 200 which may comprise a linked list of "event interest" objects 200a, 200b, and 200c.
  • thread A may register interest in an event corresponding to event interest 200b
  • thread B may register interest in an event corresponding to event interest 200c, each of which are specified by parameters in corresponding function calls to pk Registerlnterest.
  • thread A comprises a video game application which is only interested in key press events from game controllers
  • thread B comprises a billing application which is only interested in button presses from a mouse.
  • the kernel manipulates event interest list 200 in response to calls to pk_RegisterInterest and pk_RemoveInterest.
  • When kernel event handler 204 receives or generates an event, it traverses event interest list 200 to determine the conditions under which various threads in the system should be invoked. In general, by comparing an event descriptor (which describes the event) with parameters included in each event interest object, kernel event handler 204 can efficiently determine whether and how to invoke any thread which has expressed interest in an event.
  • Each event interest object 200a, 200b, and 200c may comprise a code field, a mask field, a filter procedure field, and a queue field in accordance with the parameters set forth for pk_RegisterInterest. For example, when thread A registers interest in certain game controller events, it specifies code2, mask2, filter2, and queue2 as parameters in function pk_RegisterInterest.
  • Appendix 1 includes descriptions for a plurality of functions which may be used to carry out various principles of the invention.
  • a syntax including a list of parameter types and examples of use is provided. Referring to function pk_RegisterInterest, for example, thread A would invoke this function and supply parameter values for code, mask, filter, and queue (the filter and queue parameters are optional) in order to direct the kernel to prequalify and direct events to thread A.
  • the pk_RegisterInterest function creates and registers an event interest with the kernel. Its code parameter specifies a description of the desired event, its mask parameter specifies an event mask which further clarifies events of interest, its filter parameter specifies an interrupt service routine (ISR) for the kernel to call when the event occurs, and its queue parameter specifies a pointer to the queue to which to route events.
  • ISR interrupt service routine
  • Each event interest object, such as element 200b in FIG. 2, holds a mask and a code specifying a "desirable" event descriptor for an incoming event.
  • the mask specifies which fields of the event descriptor are germane to the specified interest, and the code specifies the values that those fields must have in order for the posted event to trigger the event interest.
  • Bits in the mask field can be used to indicate which fields of the event descriptor are germane to a particular interest.
  • One possible example is to allocate 32 bits to the mask field. Of the 32 bits, 8 bits can be allocated to indicate the device type (i.e., the device type which generated the event), another 8 bits for the device instance (i.e., the instance of that device type which generated the event), another 8 bits for the event type (i.e., what type of event the device generated, such as a key press), and another 8 bits for event data (i.e., a small amount of data which can contain the event information, such as the specific key which was pressed).
  • Such an allocation is by way of example only, and is not intended to be limiting.
  • each event descriptor can include an indication of the device type, device instance, event type, and event data corresponding to the event. To register interest in events from a particular device type, a thread would set the device type bits of the mask to all ones.
  • the mask FF000000 would cause all the device type bits to be set, forcing a comparison of device type with that specified in the code
  • event descriptor could be created with subsets of the above fields, or with entirely different fields which qualify a particular type of event.
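
As a concrete illustration of the 8/8/8/8 descriptor layout and the FF000000 device-type mask described above, the following C sketch packs a descriptor from its fields. The macro and constant names are illustrative only and do not appear in the patent or in Appendix 1.

```c
#include <stdint.h>

/* Illustrative 8/8/8/8 packing of an event descriptor:
 * device type | device instance | event type | event data.            */
#define EVT_DEVICE_TYPE_SHIFT  24
#define EVT_DEVICE_INST_SHIFT  16
#define EVT_EVENT_TYPE_SHIFT    8
#define EVT_EVENT_DATA_SHIFT    0

#define EVT_MAKE(type, inst, evt, data)                \
    (((uint32_t)(type) << EVT_DEVICE_TYPE_SHIFT) |     \
     ((uint32_t)(inst) << EVT_DEVICE_INST_SHIFT) |     \
     ((uint32_t)(evt)  << EVT_EVENT_TYPE_SHIFT)  |     \
     ((uint32_t)(data) << EVT_EVENT_DATA_SHIFT))

/* Mask selecting only the device-type field: the descriptor's device
 * type is compared, and instance, event type, and data are ignored.   */
#define EVT_MASK_DEVICE_TYPE   0xFF000000u

/* Hypothetical numeric codes for the examples in the text. */
enum { DEV_GAME_CONTROLLER = 1, DEV_MOUSE = 2, EVT_KEY_PRESS = 3 };

/* "Key 5 pressed on game controller instance 1". */
static const uint32_t example_descriptor =
    EVT_MAKE(DEV_GAME_CONTROLLER, 1, EVT_KEY_PRESS, 5);
```
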
  • Thread A wishes to register interest in all game controller key press events which occur when the cursor is within a predetermined screen area, and to ignore all other types of events.
  • Thread A creates an event interest 200b by specifying mask2 which specifies "device type” as a qualifier, leaves the device instance open, specifies "event type” as a qualifier, and leaves the “event data” open.
  • thread A specifies code2 which identifies "game controller” as the device type and "key press” as the desired event type.
  • thread A specifies a filter procedure 203 which is to be executed to further qualify the event, and a queue onto which the event will be placed (queue2). For the example shown in FIG. 2, filter procedure 203 checks the cursor location to ensure that it is within a qualified area before passing on the event. It is assumed that filter procedure 203 executes in kernel mode, thus avoiding a context switch. In other words, thread A will not be scheduled by the kernel unless the event meets all of thread A's qualifications.
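
Thread A's registration might look roughly like the following C sketch, assuming the code/mask/filter/queue parameter order described for pk_RegisterInterest in Appendix 1; the types, numeric field values, and the cursor filter name are assumptions made for illustration.

```c
#include <stdbool.h>

typedef unsigned long u_32;
typedef struct event  event;
typedef struct queue  queue;
typedef bool (*event_filter)(event *ev);          /* true = event accepted        */

/* Assumed prototype matching the code/mask/filter/queue parameter order
 * described for pk_RegisterInterest in Appendix 1.                      */
extern void  pk_RegisterInterest(u_32 code, u_32 mask,
                                 event_filter filter, queue *q);
extern bool  cursor_in_window_filter(event *ev);  /* hypothetical filter2         */
extern queue *queue2;                             /* thread A's event queue       */

/* Hypothetical field values using the 8/8/8/8 descriptor layout. */
#define DEV_GAME_CONTROLLER  0x01u
#define EVT_KEY_PRESS        0x03u

void thread_a_register(void)
{
    u_32 mask2 = 0xFF00FF00u;   /* qualify device type and event type only        */
    u_32 code2 = (DEV_GAME_CONTROLLER << 24) | (EVT_KEY_PRESS << 8);

    /* Game-controller key presses, any instance, any data, further
     * qualified by the filter and delivered to queue2.                 */
    pk_RegisterInterest(code2, mask2, cursor_in_window_filter, queue2);
}
```
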
  • an event descriptor 201 is created which corresponds to the particulars of the event.
  • a device driver which detects a hardware change can post such an event using pk_PostEvent (see Appendix 1).
  • Kernel event handler 204 receives the incoming event (from pk_PostEvent), and traverses event interest list 200 to match the incoming event to events of interest.
  • the event matching step may be performed very efficiently by multiplying the mask of each event interest object with the incoming event descriptor, then comparing the result with the code of the event interest object. If there is a match, then the event has been pre-qualified.
  • kernel event handler 204 first examines event interest 200a by multiplying event descriptor 201 with mask1 and comparing the result to code1, and quickly determines that there is no match. This "AND" then "COMPARE" operation can be done very efficiently on a CPU (typically, only two assembly language instructions are needed), thus adding to the performance increase which results from using the principles of the invention.
  • After determining that event descriptor 201 does not match event interest object 200a, kernel event handler 204 next examines event interest object 200b and multiplies mask2 by event descriptor 201, then compares the result with code2. For the example in FIG. 2, assume that the result is a successful match. However, because thread A specified a filter procedure 203, kernel event handler 204 does not pass the event to thread A but instead executes filter procedure 203, which checks the cursor location to see if it is in a valid area. If it is, kernel event handler 204 delivers the event to queue2 (also specified in event interest object 200b), and thread A can extract the event from queue2 using pk_NextEvent (see Appendix 1).
  • filter routine 203 is executed while the kernel is executing, and need not schedule thread A. If only 20% of events are actually of interest to a particular thread, then it is much faster to make the "interest" determination in kernel code than in thread code, which requires context switches.
  • thread B creates event interest object 200c by way of pk_RegisterInterest, specifying device type as a qualifier, but leaving the other mask fields open.
  • When a user presses a mouse button, event descriptor 202 would be generated, indicating device type as mouse ("2"), device instance "1", event type as "key", and event data as "mouse press".
  • Kernel event handler 204 would traverse event interest list 200, preferably multiplying each mask with the event descriptor and comparing the result with the code.
  • FIG. 3 shows steps which may be executed by kernel event handler 204 to handle incoming events in the system. It is assumed that an event interest list has already been created through the use of pk_RegisterInterest calls made by threads in the system. Beginning in step 301, the next event interest object from the event interest list is retrieved. In step 302, the mask from the event interest object is multiplied with the event descriptor corresponding to the event. In step 303, the result is compared with the code from the event interest object.
  • If there is no match, processing resumes at step 301 with the next event interest object.
  • If, in step 304, there is a match, then in step 305 a check is made to determine whether a filter was specified for the event interest object. If so, then in step 306 the specified filter is called, preferably directly by the kernel and without a context switch. If the event passes the filter in step 307, processing advances to step 308; otherwise, processing resumes at step 301 with the next event interest object. In step 308, if a queue was specified with the event interest, the event is placed on the specified queue in step 309; otherwise, processing resumes at step 301 until the end of the event interest list is reached. Note that if no queue was specified, the event can be discarded. Such a situation may be desirable, for example, where all of the processing is done in the filter itself.
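
A minimal C sketch of the loop of FIG. 3 follows. The structure layout, accessor, and queue helper names are hypothetical, but the AND-then-compare match, the in-kernel filter call, and the optional queue delivery follow steps 301 through 309 as described above.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct event        event_t;        /* posted event carrying a descriptor */
typedef struct event_queue  event_queue_t;  /* per-thread delivery queue          */

/* One node of the event interest list built by pk_RegisterInterest. */
typedef struct event_interest {
    uint32_t                code;                  /* required field values        */
    uint32_t                mask;                  /* descriptor bits to compare   */
    bool                  (*filter)(event_t *ev);  /* optional in-kernel qualifier */
    event_queue_t          *queue;                 /* optional delivery queue      */
    struct event_interest  *next;
} event_interest_t;

extern uint32_t event_descriptor(const event_t *ev);        /* hypothetical helpers */
extern void     queue_append(event_queue_t *q, event_t *ev);

/* Steps 301-309 of FIG. 3: walk the interest list, AND-and-compare the
 * descriptor, run any filter in kernel context, then deliver a copy.   */
void handle_event(event_interest_t *interest_list, event_t *ev)
{
    uint32_t desc = event_descriptor(ev);

    for (event_interest_t *ei = interest_list; ei != NULL; ei = ei->next) {
        if ((desc & ei->mask) != ei->code)      /* steps 302-304: no match      */
            continue;
        if (ei->filter && !ei->filter(ev))      /* steps 305-307: filtered out  */
            continue;
        if (ei->queue)                          /* steps 308-309: deliver       */
            queue_append(ei->queue, ev);
        /* With no queue, the event is discarded here; all of the work
         * for this interest happened inside the filter itself.         */
    }
}
```
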
  • filters can be reusable. For example, one filter might have a function of determining whether a cursor is in a particular location on the screen; different threads could specify and use this same filter. If, for example, three different applications are executing in the HCT, each having a separate window on a television screen, and the user moves a mouse over the screen, a single filter can be devised which determines whether the cursor is over a window boundary for the specified thread.
  • the filter can modify the event itself. For example, a filter could change the time of the event before putting it on a queue. This could be used, for example, by a "replay" filter which changes the time stamp on an event to the current time (e.g., translate to current time). Another example involves checking a signature on an event before waking up a thread.
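
A reusable cursor filter of the kind described above might look like the following sketch. The cursor, rectangle, and platform-call names are assumptions; the filter follows the Appendix 1 convention of taking a pointer to the event and returning a boolean value.

```c
#include <stdbool.h>

typedef struct event event_t;                    /* posted event (opaque here)     */
typedef struct { int x, y; } point_t;
typedef struct { int left, top, right, bottom; } rect_t;

extern point_t current_cursor_position(void);    /* assumed platform call          */

/* Window registered for the interested thread; a real kernel would let
 * each event interest carry its own rectangle.                          */
static rect_t g_guide_window = { 0, 0, 320, 240 };

/* Reusable filter: accepts the event only while the cursor lies inside
 * the registered window.  It runs in kernel context, so a rejected
 * event never wakes the interested thread.                              */
bool cursor_in_window_filter(event_t *ev)
{
    (void)ev;                                    /* descriptor already matched     */
    point_t p = current_cursor_position();
    return p.x >= g_guide_window.left && p.x <= g_guide_window.right &&
           p.y >= g_guide_window.top  && p.y <= g_guide_window.bottom;
}
```
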
  • FIG. 4 shows another example which applies the principles of the present invention.
  • a video game to be executed on an HCT provides two characters who fight one another, with two users each interacting with a separate game controller 401 and 402.
  • the video game may comprise a main video game thread 403 which controls overall scoring and game operation, and two player threads 404 and 405.
  • player 1 thread 404 controls the actions of a first character on the video screen
  • player 2 thread 405 controls the actions of a second character on the video screen.
  • thread 404 must manipulate the first video character based on actions taken by player 1
  • thread 405 must manipulate the second video character based on actions taken by player 2.
  • player 1 thread 404 registers interest in all events from game controller #1 (by specifying the appropriate code, mask, filter and queue parameters), and player 2 thread 405 registers interest in all events from game controller #2.
  • player 1 thread 404 wants to know when player 2 has pressed a "pause” key on game controller #2
  • player 2 thread 405 wants to know when player 1 has pressed a "pause” key on game controller #1 (in other words, either player, including the one operating the opposite controller, can pause the game).
  • player 1 thread 404 can selectively register interest only in "pause" events from game controller #2, and player 2 thread 405 can selectively register interest only in "pause" events from game controller #1.
  • If the threads of FIG. 4 were provided with a copy of every event generated by each game controller, much time would be wasted processing undesirable events, consuming CPU time and memory.
  • By allowing each thread to separately register interest only in certain types of events and causing the kernel to send only those types of events to each thread, significant performance increases can be achieved.
  • main video game thread 403 is interested in receiving "game events" from each player thread, and registers accordingly.
  • Each player thread receives events from main video game thread 403, such as a command to cause the character to die because of too many blows by the other player.
  • the aforementioned registered interests are indicated by arrows in FIG. 4.
  • instant replay thread 406 provides a capability to review all previous actions over a time window. Therefore, it registers interest in all events generated by either game controller, and thus gets its own copy of each event generated by the game controllers. This spares individual threads from having to send "copies" of events between themselves.
  • an on-line TV guide may cause tuning tables to be downloaded at unknown times. When a new tuning table is downloaded, this could constitute an event.
  • more than one thread in the HCT may need a copy of this tuning table.
  • This can be accomplished by creating a filter which creates a private copy of the tuning table for each thread that registers an interest in the tuning table event.
  • a single event can trigger more than one filter, and can also trigger more than one thread.
  • one thread can be an event recorder (debugger); it would want to obtain a copy of every event in the system. To accomplish this, the thread would create a mask which clears all the bits, indicating interest in any event.
  • thread synchronization functions such as semaphores, timers, media events, exceptions, messages, and so forth, can be replaced with event objects and integrated onto an event queue to minimize memory requirements and application development effort, in accordance with another aspect of the invention.
  • every event in the system can include a time stamp (comprising, e.g., 64 bits), comprising a "snapshot" of the system clock.
  • a future time can be inserted into the event object (instead of the current time).
  • the kernel can hold onto the event and not post it until the designated time arrives. Therefore, every event has the capability of being an alarm. For example, every keypress generates an event at a particular time.
  • FIG. 5 shows one possible configuration for an event object 501 in a kernel, including a pointer to the next event object, a code (corresponding to an event descriptor), time, X, Y, Z fields, and a "where" pointer.
  • event object 501 may comprise 28 bytes including both "public” portions accessible by threads and “private” portions hidden from threads. Events may be strung together into a list in an "intrusive” form (i.e. , the objects in the list have in their data structure the fields strung together), as compared to an "extrusive” form in which the list component is built separately.
  • the "where" field may comprise a queue pointer which is used for scheduled delivery: deliver event to queue at a future time.
  • a thread may set the code field (device type, instance, etc.) using pk_DeliverEvent where an event is created by a thread; alternatively, a device driver may set the code field using pk_PostEvent as described previously.
  • the X, Y, Z fields comprise "payload"; any data (including pointers to data structures) can be included therein.
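
Read together, the FIG. 5 description suggests an event object along the following lines; the field types and exact packing here are assumptions and will not match the described 28-byte layout byte for byte.

```c
typedef unsigned long u_32;
typedef struct queue  queue;

typedef struct event {
    struct event *next;     /* intrusive link to the next event in a list          */
    u_32          code;     /* event descriptor: device type, instance, type, data */
    u_32          time[2];  /* 64-bit time stamp, or a future delivery time        */
    u_32          x, y, z;  /* payload: arbitrary data or pointers to data         */
    queue        *where;    /* queue pointer used for scheduled delivery           */
} event;                    /* the text describes a 28-byte object; this sketch
                               is not packed to match that size exactly            */
```
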
  • an event structure can be used to set up an alarm.
  • a game might have a 3-minute round. At the end of the round, a bell will ring.
  • a thread can set an alarm by calling pk_ScheduledDelivery (see Appendix 1), setting the time to 3 minutes from now and specifying the event code, X, Y, Z, and the thread's own queue as the "where" pointer. The thread can then use pk_NextEvent to get the next event off its queue.
  • a thread can "pre-timestamp” the object, then the kernel can reuse that same space when the actual time is stamped. Consolidating alarm functions into an event delivery function thus saves memory, processing time, and programmer effort.
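
The three-minute round alarm described above could be expressed roughly as follows. The pk_ScheduledDelivery and pk_NextEvent names come from Appendix 1, but their exact signatures, the tick units, and the pk_Now helper are assumptions made for this sketch.

```c
#include <stddef.h>

typedef unsigned long  u_32;
typedef struct event   event;
typedef struct queue   queue;
typedef u_32           TimeValue;               /* assumed tick type              */

extern TimeValue pk_Now(void);                  /* assumed clock-read helper      */
extern void      pk_ScheduledDelivery(u_32 code, u_32 x, u_32 y, u_32 z,
                                       queue *where, TimeValue when);
extern event    *pk_NextEvent(queue *q, TimeValue timeout);
extern void      pk_FreeEvent(event *ev);

#define ROUND_OVER_CODE   0x01010100u           /* hypothetical "round over" code */
#define THREE_MINUTES     (3u * 60u * 1000u)    /* assuming millisecond ticks     */
#define FOREVER           ((TimeValue)~0u)

void wait_for_end_of_round(queue *my_queue)
{
    /* Ask the kernel to deliver an alarm event to this thread's own
     * queue three minutes from now, then block until it arrives.       */
    pk_ScheduledDelivery(ROUND_OVER_CODE, 0, 0, 0,
                         my_queue, pk_Now() + THREE_MINUTES);

    event *ev = pk_NextEvent(my_queue, FOREVER);
    if (ev != NULL) {
        /* ring the bell: the round is over */
        pk_FreeEvent(ev);
    }
}
```
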
  • the kernel need not determine whether the thread is waiting for a semaphore, or for an alarm, or any other type of synchronization event. This results in faster thread switches. Whenever a thread is sleeping, the operating system is always waiting for an event on its queue. This also makes the kernel smaller. In contrast, conventional kernels need to execute instructions to determine whether a thread is waiting for an alarm, semaphore, queue wait, message wait, etc.
  • a second example of consolidation involves semaphores (FIG. 6). Assume that two threads 602 and 603 need to use a single printer. A conventional approach is to provide a semaphore to which both threads are responsive (i.e., they are in a "semaphore wait” state, and the kernel puts a thread on a "semaphore wait” queue with a semaphore wait data structure).
  • the present invention contemplates using an event object.
  • To implement a semaphore using event queues, a queue 604 is created (typically by printer driver 601) and a single event 604a is placed on the queue.
  • the first thread 602 which needs the resource makes a call to pk_NextEvent (see Appendix 1), specifying the printer semaphore queue 604.
  • This function causes a wait in the thread if there is no event on the queue, but otherwise extracts the event if there is one on the queue.
  • If event 604a is on the queue, kernel 605 removes it from queue 604 and delivers it to thread 602. If another thread 603 attempts to use the resource (by calling pk_NextEvent), it will wait until the event 604a is returned to queue 604 by first thread 602.
  • When first thread 602 is finished with the resource, it calls pk_ReturnEvent (see Appendix 1), which returns the event to queue 604, allowing second thread 603 to finally complete its pk_NextEvent call.
  • kernel 605 does not place second thread 603 on a special "semaphore wait" queue; instead, the common event paradigm is used. This increases the efficiency of the kernel because the kernel need not maintain separate queues for all types of different activities, and when examining a thread's status, the kernel already knows that the thread can be in only one state. Furthermore, the same common event object can be used to implement semaphores in the system; no special semaphore data object needs to be defined.
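
The printer semaphore of FIG. 6 might be used from a thread as in the sketch below; pk_NextEvent and pk_ReturnEvent are named in Appendix 1, but their signatures and the surrounding types are assumed here for illustration.

```c
typedef unsigned long  u_32;
typedef struct event   event;
typedef struct queue   queue;
typedef u_32           TimeValue;

#define FOREVER  ((TimeValue)~0u)

extern event *pk_NextEvent(queue *q, TimeValue timeout);    /* blocks while empty  */
extern void   pk_ReturnEvent(queue *q, event *ev);          /* puts token back     */

extern queue *g_printer_semaphore;   /* queue 604: created by the printer driver
                                        with a single token event 604a on it       */
extern void   print_document(const char *name);

void print_with_exclusive_access(const char *name)
{
    /* Take the single token event off the semaphore queue; if another
     * thread already holds it, this call simply waits for the event.   */
    event *token = pk_NextEvent(g_printer_semaphore, FOREVER);

    print_document(name);            /* exclusive use of the printer */

    /* Return the same event (not a copy) so the next waiter can run.   */
    pk_ReturnEvent(g_printer_semaphore, token);
}
```
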
  • thread synchronization including semaphores, messages, and queue messages, all of which are passed between threads as synchronization objects.
  • demand scheduling which signals that a demand was made for the upgrade of a thread's priority.
  • timer events and delayed actions an event can be posted immediately or at a specified future time
  • inter-thread exception handling an event occurs whenever a thread raises an exception
  • user interface actions including pressing a button on a remote or game controller, changing the volume, moving a pointer device, etc.
  • media events including starting the playback of audio, reaching the end of a movie, inserting media into or ejecting media from a device, etc.
  • This function not only deletes the specified queue, but frees all the events in the queue as well.
  • This is a macro.
  • the outer try block can belong to an application, a device, or the operating system itself.
  • This is a very simple example of an exception handler. The try block surrounds the PlayHighChimeSound routine, and a single catch block catches all exceptions. void PlayHighChimeSound (void)
  • This example waits for an event to be received, and then frees the event when the application is done with it:
    const TimeValue kForAllEternity = { 0xFFFFFFFF, 0xFFFFFFFF };
    event *anEvent;       // Where game pad and system events will be posted
    u_32   eventDevice;   // The device that posted the event
    u_32   eventData;     // Data posted in the event
    queue *gAppQueue;     // Input event queue

    anEvent     = pk_NextEvent (gAppQueue, kForAllEternity);
    eventData   = anEvent->code & kData_Mask;
    eventDevice = anEvent->code & (u_32) kDevice_Mask;
    pk_FreeEvent (anEvent);
  • A pointer to a private data area. The thread priority: the lower 5 bits represent the priority, which can range from 0 to 31.
  • A pointer to the code to be executed by the new thread.
  • The thread priority. The lower 5 bits represent the priority, which can range from 0 to 31.
  • A pointer to a parameter for code.
  • The size, in bytes, of the stack to allocate for the new thread.
  • The application execution priority. The lower 3 bits represent the priority, which can range from 1 (highest) to 7 (lowest).
  • The thread identifier, a number from 1 to 31.
  • This function disables interrupts, thereby preventing context switches.
  • The queue from which to retrieve the next event. timeout: The time at which to return if an event has still not been delivered to the specified queue.
  • An application can also specify kPtv_Forever to wait an indefinite period of time and return only when an event is delivered to the queue.
  • a pointer to the next event or NULL if a timeout occurred (signalling that the queue is still empty).
  • the event type (in conjunction with the mask) is checked against each event interest in the order in which the event interests were registered. When a match is found, the event interest is triggered.
  • the filter procedure is called and the return code determines whether a copy of the event is sent to the queue. If no filter was specified, the event is delivered directly to the queue.
  • Posting an event may trigger multiple event interests and result in multiple copies of the event being delivered to multiple queues.
  • This procedure takes a pointer to the event as an argument and returns a boolean value.
  • An event interest specifies a type and mask which determine the class of events to watch for.
  • An event interest may have only a filter or only a queue, or it may have both.
  • This is a macro.
  • pk_Rethrow is equivalent to calling pk_Throw (pk_CurrentException).
  • This macro can be thought of as an alternative return mechanism. It passes control to the outer try block so that the exception can be processed. Control does not return to the location where the exception occurred.
  • pk_ReturnEvent is primarily for implementing semaphores. Certain events, such as semaphore events, have limited interest and can be routed to at most one queue. When retrieving such events for processing, we recommend using pk_ReturnEvent, which routes the actual event rather than a copy of the event.
  • This is a macro.
  • This function implements a timeout with an accuracy of +/- 5 milliseconds.
  • This macro can be thought of as an alternative return mechanism. It passes control to the outer try block so that the exception can be processed. Control does not return to the location where the exception occurred.
  • This function delimits a block of code known as a try block.
  • One or more catch blocks must follow the try block and define the actual processing functionality for the exception.
  • the scope of the try block determines its priority.
  • inner try blocks take precedence over outer ones.
  • This function re-enables interrupts, thereby enabling context switches.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Debugging And Monitoring (AREA)
  • Input From Keyboards Or The Like (AREA)

Abstract

An improved operating system kernel for a home communications terminal (HCT) includes an event filter (203) that allows threads (A, B) in the HCT to register interest in events of a particular type, from a particular source, or meeting other desired criteria. Events occurring in the system (205, 206) are prequalified by the kernel before being delivered to the various threads that have registered interest in only certain types of events. Filtering in the kernel context avoids a context switch into the thread. Events occurring in the system can be matched against the event interests (200a, 200b, 200c) registered by different threads through an efficient comparison involving a mask field and a code field. In addition, various thread synchronization mechanisms, such as alarms and semaphores, can be implemented using a common event object integrated onto event queues.
PCT/US1996/020126 1995-12-29 1996-12-23 Filtre d'evenements pour systeme d'exploitation d'ordinateur dans un terminal de telecommunication domestique WO1997024671A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
KR1019980704949A KR19990076823A (ko) 1995-12-29 1996-12-23 운영 시스템 커널 및 이것에서의 이벤트 필터링 방법과세마포 기능 구현 방법
AU14246/97A AU1424697A (en) 1995-12-29 1996-12-23 Event filtering feature for a computer operating system in home communications terminal
EP96944439A EP0880745A4 (fr) 1995-12-29 1996-12-23 Filtre d'evenements pour systeme d'exploitation d'ordinateur dans un terminal de telecommunication domestique

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US57820395A 1995-12-29 1995-12-29
US08/578,203 1995-12-29

Publications (1)

Publication Number Publication Date
WO1997024671A1 true WO1997024671A1 (fr) 1997-07-10

Family

ID=24311856

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1996/020126 WO1997024671A1 (fr) 1995-12-29 1996-12-23 Filtre d'evenements pour systeme d'exploitation d'ordinateur dans un terminal de telecommunication domestique

Country Status (4)

Country Link
EP (1) EP0880745A4 (fr)
KR (1) KR19990076823A (fr)
AU (1) AU1424697A (fr)
WO (1) WO1997024671A1 (fr)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001050241A2 (fr) * 1999-12-30 2001-07-12 Koninklijke Philips Electronics N.V. Architecture logicielle a taches multiples
WO2005109185A1 (fr) * 2004-05-09 2005-11-17 St Microelectronics Nv Procede permettant d'ameliorer l'efficacite de la transmission et du traitement d'evenements dans un dispositif de reception de television numerique
EP2053507A3 (fr) * 1999-04-14 2009-05-27 Panasonic Corporation Dispositif de contrôle d'événements et système de diffusion numérique
US9524163B2 (en) 2013-10-15 2016-12-20 Mill Computing, Inc. Computer processor employing hardware-based pointer processing
WO2019217295A1 (fr) * 2018-05-07 2019-11-14 Micron Technology, Inc. Messagerie d'événement dans un système ayant un processeur d'auto-programmation et un tissu d'enfilage hybride
US11068305B2 (en) 2018-05-07 2021-07-20 Micron Technology, Inc. System call management in a user-mode, multi-threaded, self-scheduling processor
US11074078B2 (en) 2018-05-07 2021-07-27 Micron Technology, Inc. Adjustment of load access size by a multi-threaded, self-scheduling processor to manage network congestion
US11093251B2 (en) 2017-10-31 2021-08-17 Micron Technology, Inc. System having a hybrid threading processor, a hybrid threading fabric having configurable computing elements, and a hybrid interconnection network
US11119972B2 (en) 2018-05-07 2021-09-14 Micron Technology, Inc. Multi-threaded, self-scheduling processor
US11119782B2 (en) 2018-05-07 2021-09-14 Micron Technology, Inc. Thread commencement using a work descriptor packet in a self-scheduling processor
US11132233B2 (en) 2018-05-07 2021-09-28 Micron Technology, Inc. Thread priority management in a multi-threaded, self-scheduling processor
US11157286B2 (en) 2018-05-07 2021-10-26 Micron Technology, Inc. Non-cached loads and stores in a system having a multi-threaded, self-scheduling processor
US11513839B2 (en) 2018-05-07 2022-11-29 Micron Technology, Inc. Memory request size management in a multi-threaded, self-scheduling processor
US11513838B2 (en) 2018-05-07 2022-11-29 Micron Technology, Inc. Thread state monitoring in a system having a multi-threaded, self-scheduling processor
US11513840B2 (en) 2018-05-07 2022-11-29 Micron Technology, Inc. Thread creation on local or remote compute elements by a multi-threaded, self-scheduling processor
US11513837B2 (en) 2018-05-07 2022-11-29 Micron Technology, Inc. Thread commencement and completion using work descriptor packets in a system having a self-scheduling processor and a hybrid threading fabric

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5301270A (en) * 1989-12-18 1994-04-05 Anderson Consulting Computer-assisted software engineering system for cooperative processing environments
US5321837A (en) * 1991-10-11 1994-06-14 International Business Machines Corporation Event handling mechanism having a process and an action association process
US5325536A (en) * 1989-12-07 1994-06-28 Motorola, Inc. Linking microprocessor interrupts arranged by processing requirements into separate queues into one interrupt processing routine for execution as one routine
US5428781A (en) * 1989-10-10 1995-06-27 International Business Machines Corp. Distributed mechanism for the fast scheduling of shared objects and apparatus
US5430875A (en) * 1993-03-31 1995-07-04 Kaleida Labs, Inc. Program notification after event qualification via logical operators
US5465335A (en) * 1991-10-15 1995-11-07 Hewlett-Packard Company Hardware-configured operating system kernel having a parallel-searchable event queue for a multitasking processor
US5566337A (en) * 1994-05-13 1996-10-15 Apple Computer, Inc. Method and apparatus for distributing events in an operating system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5339418A (en) * 1989-06-29 1994-08-16 Digital Equipment Corporation Message passing method
US5625821A (en) * 1991-08-12 1997-04-29 International Business Machines Corporation Asynchronous or synchronous operation of event signaller by event management services in a computer system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5428781A (en) * 1989-10-10 1995-06-27 International Business Machines Corp. Distributed mechanism for the fast scheduling of shared objects and apparatus
US5325536A (en) * 1989-12-07 1994-06-28 Motorola, Inc. Linking microprocessor interrupts arranged by processing requirements into separate queues into one interrupt processing routine for execution as one routine
US5301270A (en) * 1989-12-18 1994-04-05 Anderson Consulting Computer-assisted software engineering system for cooperative processing environments
US5321837A (en) * 1991-10-11 1994-06-14 International Business Machines Corporation Event handling mechanism having a process and an action association process
US5465335A (en) * 1991-10-15 1995-11-07 Hewlett-Packard Company Hardware-configured operating system kernel having a parallel-searchable event queue for a multitasking processor
US5430875A (en) * 1993-03-31 1995-07-04 Kaleida Labs, Inc. Program notification after event qualification via logical operators
US5566337A (en) * 1994-05-13 1996-10-15 Apple Computer, Inc. Method and apparatus for distributing events in an operating system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP0880745A4 *

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2053507A3 (fr) * 1999-04-14 2009-05-27 Panasonic Corporation Dispositif de contrôle d'événements et système de diffusion numérique
US7962568B2 (en) 1999-04-14 2011-06-14 Panasonic Corporation Event control device and digital broadcasting system
WO2001050241A3 (fr) * 1999-12-30 2002-02-07 Koninkl Philips Electronics Nv Architecture logicielle a taches multiples
US6877157B2 (en) 1999-12-30 2005-04-05 Koninklijke Philips Electronics N.V. Multi-tasking software architecture
WO2001050241A2 (fr) * 1999-12-30 2001-07-12 Koninklijke Philips Electronics N.V. Architecture logicielle a taches multiples
WO2005109185A1 (fr) * 2004-05-09 2005-11-17 St Microelectronics Nv Procede permettant d'ameliorer l'efficacite de la transmission et du traitement d'evenements dans un dispositif de reception de television numerique
US9524163B2 (en) 2013-10-15 2016-12-20 Mill Computing, Inc. Computer processor employing hardware-based pointer processing
US11093251B2 (en) 2017-10-31 2021-08-17 Micron Technology, Inc. System having a hybrid threading processor, a hybrid threading fabric having configurable computing elements, and a hybrid interconnection network
US11880687B2 (en) 2017-10-31 2024-01-23 Micron Technology, Inc. System having a hybrid threading processor, a hybrid threading fabric having configurable computing elements, and a hybrid interconnection network
US11579887B2 (en) 2017-10-31 2023-02-14 Micron Technology, Inc. System having a hybrid threading processor, a hybrid threading fabric having configurable computing elements, and a hybrid interconnection network
WO2019217295A1 (fr) * 2018-05-07 2019-11-14 Micron Technology, Inc. Messagerie d'événement dans un système ayant un processeur d'auto-programmation et un tissu d'enfilage hybride
US11513837B2 (en) 2018-05-07 2022-11-29 Micron Technology, Inc. Thread commencement and completion using work descriptor packets in a system having a self-scheduling processor and a hybrid threading fabric
US11119782B2 (en) 2018-05-07 2021-09-14 Micron Technology, Inc. Thread commencement using a work descriptor packet in a self-scheduling processor
US11126587B2 (en) 2018-05-07 2021-09-21 Micron Technology, Inc. Event messaging in a system having a self-scheduling processor and a hybrid threading fabric
US11132233B2 (en) 2018-05-07 2021-09-28 Micron Technology, Inc. Thread priority management in a multi-threaded, self-scheduling processor
US11157286B2 (en) 2018-05-07 2021-10-26 Micron Technology, Inc. Non-cached loads and stores in a system having a multi-threaded, self-scheduling processor
US11513839B2 (en) 2018-05-07 2022-11-29 Micron Technology, Inc. Memory request size management in a multi-threaded, self-scheduling processor
US11513838B2 (en) 2018-05-07 2022-11-29 Micron Technology, Inc. Thread state monitoring in a system having a multi-threaded, self-scheduling processor
US11513840B2 (en) 2018-05-07 2022-11-29 Micron Technology, Inc. Thread creation on local or remote compute elements by a multi-threaded, self-scheduling processor
US11119972B2 (en) 2018-05-07 2021-09-14 Micron Technology, Inc. Multi-threaded, self-scheduling processor
US11579888B2 (en) 2018-05-07 2023-02-14 Micron Technology, Inc. Non-cached loads and stores in a system having a multi-threaded, self-scheduling processor
US11074078B2 (en) 2018-05-07 2021-07-27 Micron Technology, Inc. Adjustment of load access size by a multi-threaded, self-scheduling processor to manage network congestion
US11809369B2 (en) 2018-05-07 2023-11-07 Micron Technology, Inc. Event messaging in a system having a self-scheduling processor and a hybrid threading fabric
US11809368B2 (en) 2018-05-07 2023-11-07 Micron Technology, Inc. Multi-threaded, self-scheduling processor
US11809872B2 (en) 2018-05-07 2023-11-07 Micron Technology, Inc. Thread commencement using a work descriptor packet in a self-scheduling processor
US11068305B2 (en) 2018-05-07 2021-07-20 Micron Technology, Inc. System call management in a user-mode, multi-threaded, self-scheduling processor
US11966741B2 (en) 2018-05-07 2024-04-23 Micron Technology, Inc. Adjustment of load access size by a multi-threaded, self-scheduling processor to manage network congestion
US12067418B2 (en) 2018-05-07 2024-08-20 Micron Technology, Inc. Thread creation on local or remote compute elements by a multi-threaded, self-scheduling processor
US12106142B2 (en) 2018-05-07 2024-10-01 Micron Technology, Inc. System call management in a user-mode, multi-threaded, self-scheduling processor

Also Published As

Publication number Publication date
AU1424697A (en) 1997-07-28
EP0880745A4 (fr) 1999-04-21
KR19990076823A (ko) 1999-10-25
EP0880745A1 (fr) 1998-12-02

Similar Documents

Publication Publication Date Title
WO1997024671A1 (fr) Filtre d'evenements pour systeme d'exploitation d'ordinateur dans un terminal de telecommunication domestique
US7107497B2 (en) Method and system for event publication and subscription with an event channel from user level and kernel level
US5563648A (en) Method for controlling execution of an audio video interactive program
US5539920A (en) Method and apparatus for processing an audio video interactive signal
CN105740326B (zh) 浏览器的线程状态监测方法及装置
US6823518B1 (en) Threading and communication architecture for a graphical user interface
EP0760131A1 (fr) Procede et appareil de distribution de messages d'evenements dans un systeme d'exploitation
CN108595282A (zh) 一种高并发消息队列的实现方法
CN103984598A (zh) 用于线程调度的方法以及系统
US20120090012A1 (en) Event booking mechanism
US7032211B1 (en) Method for managing user scripts utilizing a component object model object (COM)
CN112691365B (zh) 云游戏加载方法、系统、装置、存储介质和云游戏系统
EP1657640A1 (fr) Méthode et système informatique pour traitement de queue
US7386861B2 (en) System and method for efficiently blocking event signals associated with an operating system
US6877157B2 (en) Multi-tasking software architecture
US20050177797A1 (en) Key event controlling apparatus
WO2001075599A2 (fr) Interface d'abstraction de systeme d'exploitation pour logiciel microprogramme de plate-forme de terminal a bande large
CN106371905A (zh) 应用程序操作方法、装置和服务器
Gajewska et al. Why X is not our ideal window system
US7010781B1 (en) Methods and apparatus for managing debugging I/O
EP1141827A1 (fr) Procede et appareil pour assurer des operations de programmation de systeme d'exploitation
CN116860306A (zh) 业务系统的数据升级方法、装置、设备及存储介质
Bonifazi et al. The Monitoring and Control System of the LHCb Event Filter Farm
JP2024078783A (ja) 情報処理装置及びアプリケーション起動制御方法
CN112578984A (zh) 一种合成视景系统人机交互事件的处理方法和系统

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE HU IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG UZ VN AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 1019980704949

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 1996944439

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 97524380

Format of ref document f/p: F

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWP Wipo information: published in national office

Ref document number: 1996944439

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1019980704949

Country of ref document: KR

WWW Wipo information: withdrawn in national office

Ref document number: 1996944439

Country of ref document: EP

WWR Wipo information: refused in national office

Ref document number: 1019980704949

Country of ref document: KR