Multi-tasking data processing system
The invention relates to a method for reducing multi-tasking overhead in a data processing system, the data processing system comprising at least one processing unit and a scheduling unit, the processing unit executing at least one task under control of the scheduling unit, wherein a task interruption occurs if a current task interrupts its execution. The invention also relates to a data processing system comprising at least one processing unit and a scheduling unit, the processing unit being arranged to execute at least one task under control of the scheduling unit, wherein a task interruption occurs if a current task interrupts its execution.
There is a wide range of data processing systems comprising processors which are capable of executing multiple tasks in a pseudo-parallel, pseudo-simultaneous fashion. Such a processor, also referred to as a processing unit, is typically equipped with a scheduling unit. The scheduling unit may be a hardware component or a software component performing scheduling functions. Furthermore, the scheduling unit may form part of an operating system controlling the use of the processor in the data processing system, or the scheduling unit may be a separate piece of software. The scheduling unit is arranged to schedule the execution of tasks by the processor. Typically, there are two forms of multitasking: preemptive multitasking and cooperative multitasking. There are also hybrid forms of multitasking, such as a combination of preemptive and cooperative multitasking which is disclosed in US 6,256,659. In preemptive multitasking systems, tasks are typically processed based on allotted time slices. Each task is allotted a certain amount of processing time, which is referred to as a time slice. When a time slice expires, the execution of a first task is suspended so that a second task can be started or resumed. If a task is suspended, then usually a pointer is provided that indicates where, in a stream of instructions, the execution was suspended. When the execution of the first task resumes at some later point in time, the pointer indicates the next instruction to be executed. Also, a variety of temporary values must be stored when the first task suspends its execution. These temporary values which are kept
in a plurality of registers of the processing unit, represent the state of the execution of the first task; this state must be 'remembered' (i.e. the temporary values must be stored in, for example, the main memory) so that it can be restored when the first task resumes its execution. This process is commonly referred to as state saving and state restoring. In cooperative multitasking systems, application programs are designed such that they contain specific interruption points. A task executing the instructions from the program can only be interrupted if an interruption point is reached. An interruption of a task can be achieved by the task suspending itself (i.e. the task transfers control to the scheduling unit), in which case the interruption point is referred to as a suspension point. Alternatively, an interruption of a task can be achieved by explicitly terminating the task, in which case the interruption point is referred to as a termination point. There are different strategies for state saving and restoring in case of either a suspension or a termination of a task. If a task is suspended, state saving operations and state restoring operations are usually required. State saving is then typically performed by storing the temporary values which constitute the state of execution of the task. In that case, state restoring is performed by loading these temporary values after the execution of the task has been resumed. Another approach for state restoring is to recalculate values after the execution of the task has been resumed, in which case no temporary values need to be stored. If a task is terminated, then its state of execution is typically not stored and intermediate values which have been calculated should be recalculated after the execution of the task has been restarted. However, even in the case of termination, a task could save its state by storing temporary values and loading these temporary values after a restart.
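The two state-restoring strategies described above can be sketched as follows. This is a purely illustrative sketch: the running sum, the save area and the function names are hypothetical and are not prescribed by the text.

```c
/* Illustrative sketch: a hypothetical task whose only temporary
 * value is a running sum of the integers 1..n. */

static int saved_sum;   /* models the save area in main memory */

/* Strategy 1 (typical after a suspension): the temporary value was
 * stored at the interruption point and is simply loaded back. */
int restore_by_loading(void)
{
    return saved_sum;
}

/* Strategy 2 (typical after a termination): nothing was stored, so
 * the intermediate value is recalculated after the restart. */
int restore_by_recalculating(int n)
{
    int sum = 0;
    for (int i = 1; i <= n; i++)   /* recompute the running sum 1..n */
        sum += i;
    return sum;
}
```

The first strategy costs memory traffic at the interruption point; the second costs processing time at the restart. Which one is cheaper depends on the task and on where the interruption point is placed.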
Using cooperative multitasking, a programmer can for example insert suspension points at moments in the execution of a task where relatively few temporary values must be stored. Furthermore, a programmer can insert termination points at moments in the execution of a task where relatively few computations must be performed to recalculate intermediate values which are lost at termination of the task. Consequently, storage requirements are less stringent and such a system can be faster compared to a preemptive multitasking system. In a cooperative multitasking system each task must be designed as a cooperative task. Control cannot be 'taken' from a cooperative task, i.e. the operating system or scheduling unit cannot force the task to interrupt its execution. The concept that a task controls the interruption of its execution (instead of the scheduling unit) is referred to as explicit task switching. A cooperative task may explicitly interrupt itself, e.g. by invoking a
special primitive called 'suspend' (which transfers control to the scheduling unit; the scheduling unit usually performs the suspend operation) or by returning from the task's main function (which terminates the execution of the task). Using the concept of explicit task switching, it is also possible to specify at which suspension points state saving is necessary. As the case may be, there are no temporary values which need to be stored at a certain moment in the execution of a task. The task can then explicitly suspend itself without saving temporary values representing its state. If the task is suspended, then only a pointer to the next instruction is stored to resume the task afterwards at the correct point of execution of the task. Reducing the number of state variables which need to be stored lowers the storage requirements and improves the performance of the system.
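A minimal sketch of explicit task switching under these conventions is given below. The stubs suspend(), save_state() and restore_state() are hypothetical stand-ins for services of the scheduling unit; the counters exist only to make the control flow observable.

```c
/* Illustrative sketch of a cooperative task with explicit task
 * switching; the primitive names are assumptions, not prescribed. */

static int suspend_calls;
static int save_calls;

static void suspend(void)       { suspend_calls++; /* control passes to the scheduling unit */ }
static void save_state(void)    { save_calls++;    /* store the live temporary values */ }
static void restore_state(void) { /* reload the stored temporary values */ }

void cooperative_task(void)
{
    /* ... computation during which temporary values are live ... */
    save_state();      /* suspension point where state saving is needed */
    suspend();
    restore_state();

    /* ... computation after which no temporary values are live ... */
    suspend();         /* suspension point without state saving: only a
                          pointer to the next instruction is stored */

    return;            /* returning from the main function terminates the task */
}
```

Note that only one of the two suspension points performs state saving; the programmer chose the second point at a moment where no temporary values need to be stored.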
It is an object of the invention to provide a data processing system of the kind set forth which has a reduced multi-tasking overhead. This is achieved by providing a method, characterized by the characterizing portion of claim 1. It is also achieved by providing a data processing system characterized by the characterizing portion of claim 10. It is possible that a task gives up control unnecessarily, e.g. when there is no other task to run. In such a case the scheduling unit, which is responsible for the management of the tasks and the order of their execution, decides that the same task should be rescheduled for execution immediately. The task should then simply proceed with its execution instead of being interrupted, because interruption leads to a waste of memory space and processing time. If the task is unnecessarily suspended, then in many cases unnecessary state saving operations and state restoring operations are performed as well. If the task is unnecessarily terminated, then in many cases unnecessary computations are performed to recalculate intermediate values. This problem can have serious consequences. For example, if the processor is a hardware accelerator then normally only one or two tasks run simultaneously on the processor, and the probability of unnecessary task interruptions is very high. Considering that for hardware accelerators performance may be very critical, being able to avoid unnecessary operations is of crucial importance. If the processor is a central processing unit (CPU), then it will typically execute 10 or more tasks simultaneously. In that case, suspension is usually performed via an operating system and it involves operations which span many clock cycles, e.g. 100-1000 cycles. Although the frequency of unnecessary task interruptions is lower than
on hardware accelerators, the performance cost of such a task interruption is significantly higher. The invention relies on the perception that unnecessary task interruptions can be avoided if a current task can determine whether another task will be executed after the current task's interruption. If the current task can retrieve information about the next task to be executed, it can determine whether its interruption is justified or not. In particular, if the scheduling unit reschedules the current task for immediate execution (i.e. without scheduling another task to execute first), then the interruption is unnecessary. In that case, the current task should simply proceed with its execution. If another task has indeed been scheduled, then the current task should indeed interrupt its execution and return control to the scheduling unit. In the case that the interruption is achieved by means of suspension, it may be necessary to save temporary values and the current task will perform state saving. At some point later in time, after one or more other tasks have executed, the scheduling unit will reschedule the task. The task will then resume its execution and perform state restoring. In the case that the interruption is achieved by means of termination, it is typical that intermediate values must be recalculated. When the scheduling unit restarts the task, the intermediate values will be recalculated by the task. It is noted that the document 'Design of Multi-Tasking Coprocessor Control for Eclipse', by Martijn J. Rutten, Jos T.J. van Eijndhoven, Evert-Jan D. Pol, presented at the 10th International Symposium On Hardware/Software Codesign (CODES'02), May 6-8, 2002, Estes Park, CO, USA, and the document 'A Heterogeneous Multiprocessor Architecture for Flexible Media Processing', by Martijn J. Rutten et al., IEEE Design and Test Magazine, July-August 2002, pages 39-50, disclose a technique to provide support for multi-tasking coprocessors.
A hardware shell connected to a coprocessor provides a scheduling service to the coprocessor. A coprocessor can access the scheduling service using a 'GetTask' primitive in order to perform task selection. The 'GetTask' primitive returns an identifier of a next task to be executed. The 'GetTask' primitive is used by the coprocessor to select the next task to be executed after the current task has met a blocking condition that cannot be satisfied, e.g. when no input data is available. A task switch is performed unconditionally when a blocking condition is not satisfied. Hence, the state must be saved unconditionally before the 'GetTask' primitive is performed, even if the same task will be executed again. In the solution according to the invention, the task switch and the associated
state saving are performed conditionally (i.e. dependent on the information returned by a yield primitive) and therefore unnecessary state saving operations can be avoided. It is noted that the prior art discloses some techniques directed to an improvement of the efficiency of task switching, in particular of the efficiency of state saving and restoring. US 2002/0010733 discloses a device which is provided with a control device that activates a stack machine and that also controls thread switching. The control device first discriminates a next thread to be switched to. It then sidetracks register data indicating the current state of execution of the program, stored in a control register group in the stack machine, in response to a request to switch threads and stores the register data in a sidetracking area set up for the current thread. Finally, it reads the register data out of the sidetracking area set up for the thread that is switched to. US 2002/0099933 discloses a data processing apparatus and a method for saving return state data. The data processing apparatus comprises a processing unit for executing data processing instructions, the processing unit having a plurality of modes of operation, with each mode of operation having a corresponding stack for storing data associated with that mode. The processing unit is responsive to a return state data processing instruction to write return state data of the processing unit from its current mode of operation to a stack corresponding to a different mode of operation. US 6,243,735 discloses a microcontroller comprising a processor, a task management table and a scheduler. The processor sequentially runs a plurality of tasks for controlling hardware engines (cores) allocated to perform the tasks. 
The task management table stores task management information which includes state information representative of the execution state of each task, priority information representative of the execution priority of each task, and core identification information representative of the allocation of tasks to the cores. The scheduler allows the processor to switch between tasks on the basis of the task management information when a given instruction is decoded or when the execution of any one of the cores is terminated. The techniques described in these documents cannot avoid unnecessary task interruptions and the associated state saving/restoring or recomputation in cooperative multi-tasking. Unnecessary task interruptions have a clear negative effect on the resource utilization of multi-tasking systems and can have a large influence on the overall performance of these systems. In the embodiment defined in claim 2, a task interrupts itself by suspending its execution. In that case, the task usually transfers control to the scheduling unit, and
subsequently the scheduling unit performs an operation to suspend the task. Typically a program counter value is stored, so that the execution of the task can be resumed at the interruption point. Alternatively, a task may interrupt its execution by terminating its execution. In that case, the task usually returns control to the scheduling unit and after a certain amount of time the scheduling unit restarts the execution of the task, and then the task typically restarts at the first instruction of the task. Alternatively, a task may restart at the interruption point if the task itself has stored a program counter value. In both cases state saving and state restoring may be required, as defined in claim 4 and claim 5, respectively. A task saves its state of execution before it interrupts its execution. In the case that a task suspends its execution, the scheduling unit will resume the execution of the task after a while, and then the task will restore its state of execution before it proceeds with its execution. In the case that a task terminates its execution, the scheduling unit will restart the execution of the task after a while, and then optionally the task will restore its state of execution before it proceeds with its execution. According to the embodiment defined in claim 6, the information about the next task, which is provided by the scheduling unit to the current task, indicates whether the current task is equal to the next task. If the current task is equal to the next task, then the current task is allowed to proceed with its execution. If the current task is not equal to the next task, then the current task must interrupt its execution. According to the embodiment defined in claim 7, the current task calls a yield primitive before interrupting its execution. The yield primitive requests the scheduling unit to provide the said information. The embodiment defined in claim 8 comprises an example of an implementation of the yield primitive.
In that case, the yield primitive returns value 'true' if the current task is not equal to the next task, and the yield primitive returns value 'false' if the current task is equal to the next task. The current task is allowed to proceed with its execution if the yield primitive returns the value 'false', and the current task must interrupt its execution if the yield primitive returns the value 'true'. The embodiment defined in claim 9 comprises another example of an implementation of the yield primitive. In that case, the yield primitive returns information about the priority of the next task. Subsequently, the current task compares the priority of the next task with its own priority. The current task is allowed to proceed with its execution if the priority of the next task is lower than or equal to its own priority. The current task must interrupt its execution if the priority of the next task is higher than its own priority.
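The two implementations of the yield primitive described above can be sketched as follows. The function names and the integer encodings of task identifiers and priorities are illustrative assumptions; in a real system the values would come from the scheduling unit.

```c
/* Illustrative sketches of two yield-primitive implementations. */

/* First variant: returns nonzero ('true') if the next task scheduled
 * by the scheduling unit differs from the current task, in which
 * case the current task must interrupt its execution. */
int yield_by_identity(int current_task, int next_task)
{
    return current_task != next_task;
}

/* Second variant: the scheduling unit reports the priority of the
 * next task, and the current task interrupts itself only if that
 * priority is strictly higher than its own. */
int yield_by_priority(int current_priority, int next_priority)
{
    return next_priority > current_priority;
}
```

In both sketches a nonzero return value means 'interrupt'; a zero return value means the current task may simply proceed with its execution.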
As will be recognized by persons skilled in the art, there are more alternative implementations of the yield primitive. The choice for a particular implementation depends among others on the prioritization scheme and the design of the scheduling unit applied in the data processing system.
The present invention is described in more detail with reference to the drawings, in which: Fig. 1 illustrates a known data processing system comprising a processor which is capable of executing multiple tasks in a pseudo-parallel way; Fig. 2 illustrates an example of a known digital signal processor which is arranged to execute signal processing tasks; Fig. 3 illustrates a known algorithm carrying out a picture-processing task; Fig. 4 illustrates the pseudo-code corresponding to the algorithm illustrated in Fig. 3; Fig. 5 illustrates another known algorithm carrying out a picture-processing task; Fig. 6A illustrates the pseudo-code corresponding to the algorithm illustrated in Fig. 5 and Fig. 6B contains the continuation of this pseudo-code; Fig. 7 illustrates an algorithm according to the invention which carries out a picture-processing task; Fig. 8A illustrates the pseudo-code corresponding to the algorithm illustrated in Fig. 7 and Fig. 8B contains the continuation of this pseudo-code; Fig. 9 illustrates another algorithm according to the invention which carries out a picture-processing task; Fig. 10A illustrates the pseudo-code corresponding to the algorithm illustrated in Fig. 9 and Fig. 10B contains the continuation of this pseudo-code.
Fig. 1 illustrates a known data processing system dps comprising a processor p which is capable of executing multiple tasks in a pseudo-parallel way. In this case, the processor p is capable of executing two tasks t1 and t2 in a pseudo-parallel way; the number of tasks which can be executed in such a way is variable and generally depends on the type of
processor and its purpose. For example, a hardware accelerator typically executes a small number of tasks and a CPU may perform more than 10 tasks. In the example illustrated in Fig. 1, the data processing system dps comprises a processor p which is executing two tasks t1 and t2 simultaneously. Each task t1, t2 has two private variables: v1 and v2 in t1, and v3 and v4 in t2. Both tasks t1, t2 are executed by the processor p. Processor p has a limited amount of local storage in the form of local registers r1, r2 and r3. Register r1 holds variable v1 corresponding to task t1, and register r3 holds variable v4 corresponding to task t2. Register r2 is shared between task t1 and task t2: it holds v2 if t1 is executing and it holds v3 if t2 is executing. Holding values of variables locally within the processor p is important, because the values are accessed frequently. Since the registers r1, r2 and r3 are located inside the processor p, the access time is short and therefore computations are fast. However, if a task is suspended, the value which resides in a shared register such as r2 must be saved at a location which is not shared, in such a manner that the value can be restored at a later moment in time. In this case, locations l1 and l2 in a memory m are used to save variables v2 and v3, respectively. Memory m can be a processor's main memory (internal RAM) or an external memory, which depends on the memory organization of the data processing system. Memory m can be accessed by the processor p via bus b. Restoring the value, i.e. transferring the value back to the shared register r2 from the memory m, is typically performed when the suspended task is resumed. Saving and restoring variables which are in shared registers requires bus transfers, which are costly due to relatively long latencies. Hence, it is desirable to reduce the cost of state saving and state restoring.
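Saving and restoring the shared register of Fig. 1 can be sketched as follows, assuming the layout described above; the variables modelling the register r2 and the memory location l1 are purely illustrative.

```c
/* Illustrative sketch of state saving for the shared register r2 of
 * Fig. 1; in hardware the two transfers would go over the bus b. */

static int r2;       /* models the shared register inside the processor p */
static int mem_l1;   /* models location l1 in the memory m (save area for v2) */

/* Performed when task t1 is suspended: v2 is moved out of the
 * shared register into the non-shared memory location. */
void save_v2(void)    { mem_l1 = r2; }

/* Performed when task t1 is resumed: v2 is moved back into the
 * shared register, overwriting whatever t2 left there. */
void restore_v2(void) { r2 = mem_l1; }
```

The two assignments stand for the costly bus transfers mentioned above; avoiding them when they are unnecessary is precisely the aim of the invention.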
Cooperative multitasking facilitates this reduction by allowing a task to control its own suspension, instead of allocating this control to the scheduling unit. This concept is referred to as explicit task switching. The points at which a task switch may occur are made explicit in the task, for example by calling a suspend primitive which suspends the execution of the task and transfers control to the scheduling unit, or by returning from the main function of the task using a return statement which terminates the execution of the task and returns control to the scheduling unit. Additionally, the state saving operations and state restoring operations can be made explicit in the task. The state saving operations are then performed before suspending the task, and the state restoring operations are performed when the task is resumed. Explicit state saving has the advantage that it may reduce the amount of data which needs to be saved and restored, because the programmer of the task chooses appropriate suspension points. Explicit state saving allows that the state saving operations
involve a variable amount of temporary values to be stored, so that the programmer can insert suspension points in the execution of a task where relatively few temporary values need to be stored. However, these techniques do not prevent unnecessary state saving operations and state restoring operations from being performed. An appropriate way to prevent these operations is to avoid unnecessary task suspensions. In the case of termination points, these techniques do not prevent unnecessary computations to recalculate intermediate values; such computations can be avoided by avoiding unnecessary task terminations. The concepts of avoiding unnecessary suspension and unnecessary termination form the basis of the methods according to the invention. Fig. 2 illustrates an example of a digital signal processing task dspt. The digital signal processing task dspt has a first input port i1, a second input port i2 and an output port o. The techniques from the prior art and the advantages of the invention will be explained using examples of algorithms and pseudo-code, carrying out a picture-processing task dspt in a data processing system dps. Fig. 3 illustrates a known algorithm carrying out a picture-processing task and Fig. 4 illustrates the corresponding pseudo-code.
The algorithm comprises the following steps:
- it is checked ACQUIRE_DATA whether the input buffer of the first input port i1 contains any data;
- a header is read LOAD_RELEASE from the first input port i1;
- if the header indicates a new picture NEW_PIC?, then:
  - it is checked ACQUIRE_DATA whether the input buffer of the second input port i2 contains any data;
  - a new table is read LOAD_RELEASE from the second input port i2;
  - a counter which keeps track of the number of processed pictures is incremented COUNT_INCR;
- if the header does not indicate a new picture NEW_PIC?, then no new table is read, but:
  - it is checked again ACQUIRE_DATA whether the input buffer of the first input port i1 contains any data;
  - it is checked ACQUIRE_ROOM whether the output buffer of the output port o has sufficient free space;
  - more input data is read from the first input port i1, processing is performed using the old table and the result is output through the output port o LOAD_PROCESS_STORE_RELEASE.
It is assumed that the incoming data, which enter through the input ports i1, i2, and the outgoing data, which are output through the output port o, are stored in memory. Therefore, instructions like 'acquire' are used to claim a certain memory block, instructions like 'load' and 'store' are used to access the memory, and 'release' is used to make a memory block available for other tasks. Instructions of this kind are also referred to as communication primitives. In the example given in Fig. 3 and Fig. 4, a preemptive system is assumed.
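The steps above can be sketched in C as follows. This is a hedged reconstruction, not the actual pseudo-code of Fig. 4: the stub implementations of the communication primitives, the header encoding and the bounded loop are assumptions made so that the sketch is self-contained.

```c
#include <stddef.h>

/* Illustrative reconstruction of the picture-processing loop of
 * Fig. 3; only the step names follow the text. */

enum { NEW_PIC = 1, OLD_PIC = 0 };

static const int headers[] = { NEW_PIC, OLD_PIC, OLD_PIC, NEW_PIC };
static size_t pos;

static void acquireData(void) { /* blocking: wait until input data are available */ }
static void acquireRoom(void) { /* blocking: wait until output space is free */ }
static int  loadRelease(void) { return headers[pos++]; /* read and release a header */ }
static void loadProcessStoreRelease(void) { /* process input with the old table */ }

/* Processes n_headers headers and returns the picture count (COUNT_INCR). */
int picture_task(size_t n_headers)
{
    int count = 0;
    pos = 0;
    while (pos < n_headers) {
        acquireData();                 /* ACQUIRE_DATA on port i1 */
        int header = loadRelease();    /* LOAD_RELEASE: read the header */
        if (header == NEW_PIC) {
            acquireData();             /* ACQUIRE_DATA on port i2 */
            /* LOAD_RELEASE: read the new table (elided) */
            count++;                   /* COUNT_INCR */
        } else {
            acquireData();             /* ACQUIRE_DATA on port i1 */
            acquireRoom();             /* ACQUIRE_ROOM on port o */
            loadProcessStoreRelease(); /* process using the old table */
        }
    }
    return count;
}
```

In the preemptive setting assumed here the 'acquire' primitives block, so the scheduling unit may take over control inside them.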
The preemptive system comprises a scheduling unit that may initiate task switches. Since the scheduling unit is aware of all tasks which can be executed, it will never preempt the running task if there is no other task to run. Hence, in such a preemptive system there are no unnecessary task interruptions. The scheduling unit may take the decision to perform a task switch if, for example, there are no input data or there is no output space to write data in. In this case, the primitives 'acquireData' and 'acquireRoom' are blocking primitives, which means that the scheduling unit can take over control if the primitives block, e.g. because there is no data or room available. The scheduling unit will usually suspend a task and save all variables which represent the state of execution of the current task. These variables are then transferred from shared registers to a block of memory. When the scheduling unit decides to resume the execution of the task, the state variables are transferred back from memory to the shared registers. As will be recognized by a person skilled in the art, the scheduling unit must make worst-case assumptions with regard to the state variables which correspond to a certain task. Typically, the scheduling unit has no knowledge of the variables of the task, and it can only use a worst-case approach to perform state saving, which means that it saves all the registers of the processor, even if some of the registers are not used by the variables of the task. Even if the scheduling unit has knowledge of the variables of the task, it typically has no knowledge of the lifetime of the state variables of the task, i.e. it cannot know whether some state variables do not need to be saved because they are no longer relevant for the further execution of the task. Hence, all state variables corresponding to a task must be saved in memory when a task switch occurs. Both approaches have a clear negative effect on the performance of the system.
Fig. 5 illustrates another known algorithm carrying out a picture-processing task. Fig. 6A illustrates the corresponding pseudo-code; Fig. 6B contains the continuation of the pseudo-code. The algorithm illustrated in Fig. 5 comprises the steps of the algorithm illustrated in Fig. 3. In addition, the algorithm comprises:
- steps of verifying whether the steps ACQUIRE_DATA and ACQUIRE_ROOM have been successful SUCCESS?;
- if these steps have been successful SUCCESS?, then the execution of the current task proceeds;
- if these steps have not been successful SUCCESS?, then the execution of the current task is suspended SUSPEND, SAVE_STATE_SUSPEND and resumed RESUME, RESTORE_STATE_RESUME at a later moment in time;
whether the state of execution of the current task is saved and restored SAVE_STATE_SUSPEND, RESTORE_STATE_RESUME depends on whether temporary values need to be stored at the respective suspension points.
Here a cooperative task is assumed, which means that the task, instead of the scheduling unit, controls the points at which it is suspended. The suspension of the execution of the task is done explicitly by the task itself, and may involve explicit state saving operations and state restoring operations, depending on the state of the execution of the task at specific suspension points. In the example this is achieved by using so-called non-blocking versions of the 'acquire' primitives. These primitives merely test whether data or room is available and they return a Boolean value to indicate this. The non-blocking primitives have the prefix 'try' in the example. They never interrupt (suspend or terminate) a task. The task interrupts itself only at specific points of its execution, e.g. by calling a suspend function which suspends the execution of the task, or by executing a return statement which terminates the execution of the task. In the example a suspend function suspend() is called. Fig. 6A and Fig.
6B show the corresponding pseudo-code, comprising calls to a function suspend() which transfers control to the scheduling unit. State saving operations and state restoring operations are performed by the functions save(TaskState* state) and restore(TaskState* state), respectively. It is noted that saving and restoring the state of execution of the task is not necessary for each suspension, because the programmer of the task can determine whether these operations are necessary at a certain point of execution. This has the advantage that a reduction of state saving and state restoring operations can be
achieved. As can be seen from Fig. 6A and Fig. 6B, the number of places where state saving is actually performed is limited to only two out of four. However, it is possible that an unnecessary task interruption occurs, in particular if the scheduling unit reschedules the same task for execution immediately after suspension. Fig. 7 illustrates an algorithm according to the invention which carries out a picture-processing task. Fig. 8A illustrates the corresponding pseudo-code; Fig. 8B contains the continuation of the pseudo-code. The algorithm illustrated in Fig. 7 comprises the steps of the algorithm illustrated in Fig. 5. In addition, the algorithm comprises:
- steps of requesting the scheduling unit to provide information about the next task to be executed SAME_TASK?;
- if the next task is equal to the current task SAME_TASK?, then the execution of the current task proceeds;
- if the next task is not equal to the current task SAME_TASK?, then the execution of the current task is suspended SUSPEND, SAVE_STATE_SUSPEND and resumed RESUME, RESTORE_STATE_RESUME at a later moment in time.
In the example shown in Fig. 8A and Fig. 8B, the suspension of a task depends on the result of a function or primitive yield() which is called before explicitly suspending the task. The yield primitive yield() requests the scheduling unit to provide information whether the same or another task will be scheduled next. Note that the current task is defined as the task from which the yield primitive is called. The yield primitive returns the value 'true' if a task other than the current task will be executed next. It returns the value 'false' if the current task is rescheduled for immediate execution, meaning that no other task will be executed next. If the yield primitive returns the value 'false', then the current task can continue without being suspended. As a result, the number of task interruptions can be reduced significantly.
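The conditional suspension described above can be sketched as follows; yield() and the other stubs are hypothetical stand-ins for the scheduling unit's services, and the counter only makes the behaviour observable.

```c
/* Illustrative sketch of a yield-guarded suspension point. */

static int another_task_next;  /* models the scheduling unit's answer */
static int suspend_count;

static int  yield(void)      { return another_task_next; /* 'true': another task is next */ }
static void do_save(void)    { /* store the live temporary values */ }
static void do_suspend(void) { suspend_count++; /* control passes to the scheduling unit */ }
static void do_restore(void) { /* reload the stored temporary values */ }

/* Called at a suspension point where temporary values are live. */
void conditional_suspension_point(void)
{
    if (yield()) {     /* another task will run next: interruption is justified */
        do_save();
        do_suspend();
        do_restore();
    }                  /* otherwise: proceed, skipping save, suspend and restore */
}
```

When yield() reports that the same task would be rescheduled immediately, the whole save-suspend-restore sequence is skipped, which is exactly the saving the invention aims at.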
Furthermore, the number of unnecessary state saving operations and state restoring operations can be further reduced. Note that the yield primitive can also be implemented differently, for example as a function which returns information about the priority of the next task. The current task can then compare the priority of the next task with its own priority, and decide whether it suspends itself or not. For example, the current task should proceed with its execution if the priority of the next task is lower than or equal to its own priority, and the current task should interrupt its execution if the priority of the next task is higher than its own priority. A variety
of other implementations of the yield primitive is possible, which have in common that the yield primitive requests information about the next task to be executed. It is noted that even if state saving operations are performed, state restoring operations may not always be necessary. As the case may be, no other task is executed between the moment of suspension of the current task and the moment of resuming the current task. In that case, the values in the shared registers have not been changed and the state restoring operations are redundant. The skilled person will recognize that it is possible to provide information to the task and to make the state restoring operations part of a conditional instruction based on this information. For example, the suspend function can be adapted to return such information to the task. The information reveals whether other tasks have been executed between the moment of suspension of the current task and the moment of resuming the current task. If no other task has been executed, then the state restoring operations are not needed and performance can be further improved by skipping the state restoring operations. Fig. 9 illustrates another algorithm according to the invention which carries out a picture-processing task. Fig. 10A illustrates the corresponding pseudo-code; Fig. 10B contains the continuation of the pseudo-code. In this case, the current task is interrupted by returning from the task's main function. The point of execution and the state of execution of the task are lost upon interruption, and the execution starts from the start of the task's main function when the scheduling unit reschedules the task for execution. This means that the lost data must be reloaded or recalculated when the task is restarted. This is a clear waste of processing time, so in this case it is also desirable to avoid unnecessary task interruptions. The algorithm illustrated in Fig. 9 comprises the steps of the algorithm illustrated in Fig. 3.
In addition, the algorithm comprises:
- steps of verifying whether the steps ACQUIRE_DATA and ACQUIRE_ROOM have been successful SUCCESS?;
- if these steps have been successful SUCCESS?, then the execution of the current task proceeds;
- if these steps have not been successful SUCCESS?, then the execution of the current task is either terminated TERMINATE or a step is performed which requests the scheduling unit to provide information about the next task to be executed SAME_TASK?; in the latter case the following steps are performed:
- if the next task is equal to the current task SAME_TASK?, then the execution of the current task proceeds;
- if the next task is not equal to the current task SAME_TASK?, then the execution of the current task is terminated TERMINATE.
The pseudo-code illustrated in Fig. 10A and Fig. 10B demonstrates the use of the return statement return to terminate the execution of the current task and the use of the yield function yield() to request the scheduling unit to provide information about the next task to be executed. It is remarked that the scope of protection of the invention is not restricted to the embodiments described herein. Neither is the scope of protection of the invention restricted by the reference signs in the claims. The word 'comprising' does not exclude other parts than those mentioned in a claim. The word 'a(n)' preceding an element does not exclude a plurality of those elements. Means forming part of the invention may be implemented either in the form of dedicated hardware or in the form of a programmed general-purpose processor. The invention resides in each new feature or combination of features.