WO1985004499A1 - Computer interprocess signal communication system - Google Patents
- Publication number
- WO1985004499A1 (PCT/GB1985/000138)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- signal
- program
- buffer
- master control
- control unit
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/546—Message passing systems or structures, e.g. queues
Definitions
- This invention relates to a signalling system for the exchange of messages between independent software signalling units within real-time computer systems, said signalling system including means to allow the signalling units to send and receive asynchronous messages and also including means for routing and transfer of these messages between their origin and destination signalling units.
- the invention relates to a computer having one or more central processing units for execution of programs, a main memory for storage of information in binary form and a master control unit for performing routing, transfer and scheduling of asynchronous messages between a signalling unit represented by a program being executed by one central processing unit and another signalling unit represented by another program in the same or in another central processing unit.
- the invention provides a computer system having at least one central processor, main memory means for storage of information and a master control unit connected between the central processor(s) and the memory means, said master control unit being able both to intercept and interpret virtual addresses and commands issued by a central processor in such a way that one class of virtual addresses and commands are interpreted to be associated with reading of information from and writing of information into the main memory means and are translated into real addresses issued by the master control unit to the main memory means, and a second class of virtual addresses (either independent of or combined with specific command codes issued by a central processor) which cause, for each virtual address code, autonomous signal routing, transfer and reception activity to be performed by the master control unit, and which master control unit has means to perform signal reception independently of the central processor and means to initiate operation of the central processor when a received signal has been accepted by the master control unit, the master control unit being able to access the main memory means independently in performing such signalling activities.
- Fig. 1 shows a simplified block diagram of a computing system. This system is not the subject of the invention, but is introduced in order to be able to explain the background to, and the salient points of, the invention.
- the main parts of the computing system in Fig 1 are the Central Processing Unit (CPU), the Memory (M), and the Input/Output Interface (IO) to the external world.
- the CPU contains a Control Logic function (CL), which is not described in detail, and four registers: a Program Counter register (PC), an Address Register (AR), a Data Register (DR) and an Instruction Register (IR).
- the information stored in these registers can be read and written by CL.
- the memory (M) contains N memory words (MW 0, MW 1, ... MW N-1).
- Each memory word contains a specific number of binary memory elements, each one of which may store either the information 0 or 1. All the memory words are connected to the Data Register (DR) of the CPU, i.e. the contents of any memory word may be transferred to DR and vice versa.
- the Control Logic (CL) of the CPU has two control outlets, a Read Outlet (R) and a Write Outlet (W), which are connected to all the memory words in parallel.
- the Address Decoder selects the Memory Word, which corresponds to the contents of the Address Register (AR) or the Program Counter (PC), and the R Control Outlet from CL enables the transfer of information from the selected memory word into the Data Register (DR) if the address is obtained from the Address Register (AR), and into the Instruction Register (IR) if the address is obtained from the Program Counter (PC).
- the W Control Outlet from CL enables the transfer of information from the Data Register (DR) into the memory word addressed by the Address Register (AR).
- the information stored in the memory words can be used in two different ways, either as data or as control instructions.
- When used as data, the separate binary memory elements are combined together to form a single value (V m) according to the principle shown in Fig. 2a.
- This value can then be manipulated by the CL in the desired manner, e.g. arithmetic operations, logic operations, etc.
- the total number of different data values represented by different combinations of the m bits which can be stored in the memory word is 2^m.
- These bit combinations may be used to represent values ranging from 0 to 2^m - 1, as illustrated by the example of the possible values for a 4-bit memory word in Fig. 2b.
- Memory words used for data storage purposes are usually randomly accessed, i.e.
- the information from the first word is first read into the Data Register (DR), and then via CL transferred to the Address Register (AR).
- a program consists of a number of sequentially executed Machine Instructions. It is therefore natural, that the memory words used to store the Machine Instructions of a program also follow one another sequentially. The sequential execution of Machine Instructions stored in consecutive memory words is the normal mode of operation and therefore built into the Control Logic of the CPU.
- Control instructions are addressed by means of the Program Counter (PC) and read into the Instruction Register (IR). The Program Counter is automatically incremented so that control instructions are read in consecutive order.
- Fig. 3 illustrates a simplified example of instruction decoding in a memory word MW P , where it has been assumed, that instructions contain three bit groups, a Command Code group (CC) and two Operand bit groups (OP1) and (OP2).
- the Control Logic (CL) will then perform the following activities:
- Transfer OP2 into the DR register of Figure 1. Assert the W control signal to the memory M, whereby the information in DR will be written into the memory word addressed by AR. Increment the PC register to prepare execution of the next instruction.
- the number of operands used in any particular instruction may vary with the instruction as already illustrated by the two examples above. Some instructions use no operands at all. Some instructions may even require three or four operands.
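- To make the decoding example above concrete, the following C sketch mimics the execution of the store-type instruction just described. The word width, the bit positions of the CC, OP1 and OP2 groups and the command code value are all assumptions for illustration, not taken from the patent.

```c
#include <stdint.h>

#define MEM_WORDS 256
static uint16_t M[MEM_WORDS];          /* the memory (M)              */
static uint16_t PC, AR, DR, IR;        /* CPU registers of Figure 1   */

/* Fetch and execute one instruction of the assumed store type. */
void step(void)
{
    IR = M[PC];                        /* fetch via the Program Counter             */
    uint16_t cc  = (IR >> 12) & 0xF;   /* Command Code group (CC)                   */
    uint16_t op1 = (IR >> 6)  & 0x3F;  /* Operand bit group OP1                     */
    uint16_t op2 =  IR        & 0x3F;  /* Operand bit group OP2                     */

    if (cc == 0x1) {                   /* assumed code for the store example        */
        AR = op1;                      /* OP1 selects the destination word          */
        DR = op2;                      /* transfer OP2 into the DR register         */
        M[AR] = DR;                    /* W control: write DR into the word at AR   */
    }
    PC++;                              /* prepare execution of the next instruction */
}
```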
- the hardware consists of the Central Processing Unit (CPU) with its set of Machine Instructions (MI), the Memory (M) and the interface to the external world (IO).
- In Fig. 4 the Data Flow and Control Flow relationships are shown, where each such relationship assumes the existence of a hardware connection.
- the Memory (M) is normally accessed by the CPU, but can also, if necessary be accessed directly from the IO hardware without the CPU being involved.
- the software which resides in the memory M, consists of the Application Programs (AP), the Input/Output Interface Programs (IOP), the Operating System (OS) and the Data Base (DB).
- the Machine Instructions consist of the set of instructions which are executable within the CPU and generally available for the design of any program (OS, AP, IOP).
- the Data Base (DB) contains all of the data for the system, which data can be read and written by the CPU under the control of the various programs.
- the Interface Programs are a set of programs specifically designed to control the actual types of peripheral devices (IO) used in the interworking with the external world.
- the Interface Programs are called either from the Master Scheduler (MS) or from the Application Programs (AP), and can themselves call Utility Subprograms.
- the Application Programs (AP) are a set of user specific programs, which have been designed to solve specific application problems (e.g. different types of computation). Application Programs are called from the Master Scheduler and may call Utility Subprograms and Interface Programs.
- the Operating System is the application independent System Control Program, which contains two parts, the Master Scheduler (MS) and the Utility Subprograms (US).
- the Master Scheduler (MS) is usually driven by means of Interrupt Signals (IS), and controls the execution of all other programs in the entire system.
- the Utility Subprograms (US) are a set of generally available subprograms, which have been developed to solve problems of a general nature, but which are too complex to be performed by means of single Machine Instructions.
- the Utility Subprograms can be called from any type of programs (OS, AP, or IOP).
- Fig.5 shows a block diagram of the principal software structure, which is based on the hardware/software system structure in Fig. 4, and which is currently considered to be the basic software structure in conventional software technology.
- the software is split into two types of units, data and programs.
- the data of a system consist of individual data elements and structures (DE) located in the Data Base (DB) of the system.
- the programs consist of Operating System programs (OS), Application Programs (AP) and Input/Output Programs (IOP).
- the software structure in Figure 5 allows the three principal methods of communication illustrated in Figure 6 to be used, i.e. sequential communication, hierarchical communication and data communication.
- Sequential communication is the only type of communication used on the machine instruction level whereby program 1 and program 2 may be represented by two machine instructions stored in consecutive memory words and the common data element represented by the Data Register (DR) in Figure 1. Sequential communication may, nevertheless, be used on any program level up to, and including, the application programs (AP) in Figure 4. Sequential communication always implies a tight coupling and automatic synchronization between the two communicating programs through their control flow relationship, however.
- The principle of hierarchical communication between two programs is illustrated by Figure 6b.
- the first program also transfers control to the second program.
- the transfer of control is only provisional, subject to the termination of the second program, whereupon control reverts back to the first program to the point immediately following the provisional exit point.
- Information may be passed from the first program to the second program through a commonly accessible data element in connection with the transfer of control from the first to the second program and similarly from the second program back to the first program through the same or another commonly accessible data element in connection with the return of control to the first program.
- This requires 'mutual exclusion', i.e. two or more programs (or parts of the programs) writing into the same data element are prevented from being executed concurrently. If both programs are executed by the same processor, then the simplest way to achieve such 'mutual exclusion' is by ensuring that each program is executed to its termination before the next program is allowed to execute. However, in a time-sharing environment or in a multiprocessor system this cannot be simply guaranteed. In these cases explicit exclusion is obtained by 'locking' the data element, for instance by means of a so called 'semaphore'.
- the common data element consists of two parts, a data carrying part (D) and a semaphore element (S).
- the semaphore element in its simplest form consists of a binary element with one of its binary values used to indicate that the data carrying element (D) is 'unlocked' and the second value indicating that the data carrying element (D) is 'locked'.
- If the semaphore element indicates that the data element is 'unlocked', the Test-and-Set instruction will now 'lock' it, i.e. the actual program has uncontested access to the data element (D). After all accessing of this data element has been performed the semaphore element is reset, thereby 'unlocking' the data element (D) for other programs.
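- A minimal C sketch of the locking discipline just described, using the C11 atomic test-and-set primitive as a stand-in for the indivisible Test-and-Set instruction; the type and function names are illustrative assumptions.

```c
#include <stdatomic.h>

/* Common data element described above: S is the semaphore part, D the data part. */
typedef struct {
    atomic_flag S;   /* clear = 'unlocked', set = 'locked' */
    int         D;   /* data-carrying part                 */
} guarded_element;

/* Spin until the indivisible Test-and-Set finds the element unlocked. */
static void lock(guarded_element *e)
{
    while (atomic_flag_test_and_set(&e->S)) {
        /* semaphore was already set: another program owns D, so retry */
    }
}

/* Reset the semaphore, thereby 'unlocking' D for other programs. */
static void unlock(guarded_element *e)
{
    atomic_flag_clear(&e->S);
}
```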
- the sender of the message writes information into the message buffer from which the information subsequently may be read by the receiver.
- no direct passing of control nor any other control flow relationship between the two programs is implied, i.e. the two communicating programs may operate concurrently.
- FIG. 8 illustrates the basic principle commonly used to achieve the required synchronization.
- the data element carrying messages is defined as a Message Buffer and consists internally of two elements, a Semaphore Element (S) and a Message Element (M).
- the Semaphore Element in its simplest form is a binary element with one of its binary values used to indicate that the message buffer is free (i.e. no message in the buffer) and the other one that there is a message in the buffer. For the time being it is assumed that the value 0 indicates no message and the value 1 the existence of a message in the buffer.
- Both program 1 and program 2 have a normal control flow entry point, where execution of each program is to start when they have been scheduled by the Master Scheduler (MS). Assume that program 1 has been scheduled and has started execution from its normal entry point. At a certain point during this execution program 1 now has to send a message to program 2 via the message buffer. In order to find out whether the message buffer is available (i.e. whether the previous message has been fetched from the buffer) the Semaphore element is tested by means of an indivisible Test-and-Set instruction, which sets the Semaphore Element and remembers its previous value for subsequent testing.
- If the Semaphore Element was previously set, then a message already exists in the buffer, which message must not be overwritten. In this case the execution of program 1 must be suspended and resumed at a later point in time when the message buffer is empty. Program 1 is therefore rescheduled, i.e. the Master Scheduler is informed that program 1 is to be re-executed. However, the part of program 1 which already has been executed does not have to be re-executed. It is therefore necessary to inform the Master Scheduler not only of the fact of the re-execution, but also of the actual rescheduling entry point from which execution is to be resumed. This rescheduling entry point is, in fact, immediately prior to the previously mentioned Test-and-Set instruction.
- If the Semaphore Element was previously reset, then a message may be deposited in the message buffer and program 1 executed to its normal termination point (provided of course that no subsequent reschedulings are warranted or required).
- Program 2 is scheduled to be executed by the Master Scheduler like program 1 and will thereby execute until the point in the program is reached, where a message has to be fetched.
- the Semaphore Element is tested by means of an indivisible Test-and-Reset instruction, again remembering the previous value of the Semaphore Element.
- If the Semaphore Element was previously reset, then no message exists in the buffer. In this case the execution of program 2 must be suspended and resumed at a later point in time when a message is available. Program 2 is therefore rescheduled, i.e. the Master Scheduler is informed that program 2 is to be re-executed. Again, the part of program 2 which already has been executed does not have to be re-executed, i.e. it is therefore necessary to inform the Master Scheduler not only of the fact of the re-execution, but also of the actual rescheduling entry point from which execution is to be resumed. This rescheduling entry point is, in this case, immediately prior to the previously mentioned Test-and-Reset instruction.
- If the Semaphore Element was previously set, then a message is available in the message buffer and program 2 may now fetch the message and continue to execute to its normal termination point (again provided that no subsequent reschedulings are warranted or required).
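- The following C sketch mirrors the Test-and-Set / Test-and-Reset protocol of Figure 8. The indivisible instructions are modelled as ordinary functions for readability, and the return value stands in for informing the Master Scheduler that the program must be rescheduled at this entry point; all names are assumptions.

```c
typedef struct {
    int full;       /* Semaphore Element: 0 = no message, 1 = message present */
    int message;    /* Message Element                                        */
} message_buffer;

/* Indivisible in the real machine; shown as plain functions here. */
static int test_and_set(int *s)   { int old = *s; *s = 1; return old; }
static int test_and_reset(int *s) { int old = *s; *s = 0; return old; }

/* Program 1: returns 0 on success, 1 if program 1 must be rescheduled
 * immediately before this point (previous message not yet fetched).   */
int send(message_buffer *b, int msg)
{
    if (test_and_set(&b->full))
        return 1;               /* semaphore was set: reschedule program 1 */
    b->message = msg;           /* deposit the message                     */
    return 0;
}

/* Program 2: returns 0 on success, 1 if program 2 must be rescheduled
 * (no message available yet).                                          */
int receive(message_buffer *b, int *msg)
{
    if (!test_and_reset(&b->full))
        return 1;               /* semaphore was reset: reschedule program 2 */
    *msg = b->message;          /* fetch the message                         */
    return 0;
}
```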
- the Semaphore Element (S) and the Message Element (M) are able to accommodate a single message only. It is, however, quite easy to extend both elements to cover any number of messages.
- the Test-and-Set operation on the Semaphore Element would now indicate that the semaphore was previously set only in the case that the message buffer already holds N messages.
- the Test-and-Reset operation on the semaphore element would now indicate that the semaphore was previously reset only in the case where no message was held by the buffer. The operation is otherwise similar to Figure 8.
- Periodic scheduling means that a program is scheduled to be executed regardless of whether there is anything for the program to do or not.
- Periodic scheduling has two elements of time involved, i.e. the scheduling rate and the scheduling delay.
- the scheduling rate determines how often a program is to be executed, whereas the scheduling delay is defined as the time difference between the point in time when a program is scheduled to be executed and the point in time when the execution actually starts. It can be seen that the scheduling rate of any periodically executed program must always be higher than or equal to the required rate of execution of that program.
- the scheduling delay may be anything from 0 up to the scheduling interval (i.e. the inverse of the scheduling rate).
- Periodic scheduling always causes an overhead load on the system. This overhead load is directly proportional to both the number of scheduled programs and the scheduling rate of each program. It is therefore desirable to keep both the number and scheduling rates of periodically scheduled programs as low as possible, which means that on-demand scheduling should be used as much as possible.
- On-demand scheduling means that a program is scheduled only when there is something for the program to do. Furthermore, scheduling should not be attempted unless the program can complete its execution without unnecessary rescheduling.
- In the example of Figure 8, program 2 should only be scheduled when a message is available and waiting for it in the message buffer, and program 1 when the message buffer is empty.
- Figure 8 is a special case of the more general situation illustrated by Figure 9, where each program may interwork with other programs by means of both a number of messages and general common data elements . This implies the existence of an arbitrary number of locations within each program, where the program may be rescheduled.
- FIG. 10 shows a typical structure of a conventional Process Control Block containing the following elements: (a) The Process State
- the process state is used to indicate the current status of the process as seen from the Operating System.
- the following states are typical:
- the scheduling of a process is usually performed on a priority queue basis, where each process is assigned a fixed priority and where higher priority processes may interrupt lower priority processes.
- a lower priority process which is RUNNING at the time a higher priority process is scheduled (i.e. READY) will be interrupted, whereupon the Process State of the higher priority process will be set to RUNNING and execution of the higher priority process will begin from the location held in the Program Location described below.
- the Scheduling Control therefore comprises both a priority indication and the necessary bidirectional link elements to accommodate the scheduling queues as well as the need to be able to reschedule or change the scheduling at will.
- the Message Queue Control consists of the necessary data elements indicating the head and tail of the queue of messages sent to the process but not yet received, assuming that the message queue is of the FIFO (First In, First Out) type.
- (d) Program Location: The Program Location is used to hold the location in the program where execution is to start the next time the program is executed.
- (e) Global Data Access Control
- the global data access control comprises a set of pointers (i.e. absolute addresses or similar), which define all global data elements accessible by the process.
- a global or common data element is any data element accessible from more than one process.
- (f) Local Data Space: The local data space is a memory area dedicated to data elements used to hold specific information local to a process, i.e. not accessible to any other process.
- (g) Private Stack Space: The private stack space is used for the storage of temporary values during the execution of a program.
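- As a rough C rendering of the Process Control Block of Figure 10, the structure below lists the seven elements (a) to (g); the field types, array sizes and state names are assumptions for illustration.

```c
struct message;                           /* queued message, details omitted here */

enum process_state { READY, RUNNING, WAITING };   /* typical states, names assumed */

struct pcb {
    enum process_state state;            /* (a) Process State                      */
    struct {                             /* (b) Scheduling Control                 */
        int         priority;            /*     fixed process priority             */
        struct pcb *next, *prev;         /*     bidirectional queue links          */
    } sched;
    struct {                             /* (c) Message Queue Control              */
        struct message *head, *tail;     /*     FIFO of messages not yet received  */
    } msgq;
    void           *program_location;    /* (d) Program Location (resume point)    */
    void          **global_data;         /* (e) Global Data Access Control         */
    unsigned char   local_data[256];     /* (f) Local Data Space (size assumed)    */
    unsigned char   stack[1024];         /* (g) Private Stack Space (size assumed) */
};
```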
- a message based signalling system using on-demand process scheduling is for example described in "An Operating System for Reliable Real-Time
- a process may be scheduled to be executed because of reception of a message, because it cannot send a message due to message buffer congestion, because a mutual exclusion condition is encountered, because the program has been interrupted by a higher priority program or for any other reason.
- process scheduling is, in fact, the only feasible scheduling alternative. This, in turn, requires a Process Control Block of the type outlined in Figure 10.
- Process Control Blocks of the type outlined in Figure 10 are quite large, ranging from a minimum of several hundred bytes of memory to several thousand bytes, all of which have to be held in the main memory (M) of a computer according to Figure 4.
- Typical values for the maximum number of PCBs range from 100 up to 255.
- the required number of simultaneously active processes in a modern, complex Real-Time system is of the order of 10000 or more. Using at most a few hundred PCBs to accommodate a several orders of magnitude larger number of processes forces the systems to rely heavily on dynamic allocation of resources.
- each message buffer may be considered to represent an intercommunicating link or port between two processes. It should be evident that the number of message buffers will grow more or less exponentially with the number of processes in the system, thereby reinforcing the requirement to keep the number of simultaneous processes fairly low for reasons of manageability. Because of the limited resources available, the number of messages that a single message buffer is able to hold simultaneously must be reduced in comparison with the case where each process had a single message buffer for all messages to that process, thus increasing the probability of buffer congestion.
- Because message communication in the form described above tightly couples two processes to each other, it is generally accepted to use messages as a means of synchronizing two processes with each other, i.e. the message communication aspect and the synchronizing aspect are inextricably tied together.
- the sending process must know that a receiving process exists before sending a message, with the case where no receiving process exists being treated as a direct error.
- the receiving process must know that a sending process exists and is about to send a message before the message is actually sent. This directly precludes any kind of off-line or on-line pluggability in a system.
- the first of the differences between a system based on asynchronous signal communication instead of synchronous message communication is that scheduling of processes is based exclusively on the arrival of signals. This is always possible if each signal contains information enabling the receiving process to be unambiguously defined. There is consequently no need to tie signal buffers to either the receiving or the sending process, i.e. a common set of signal buffers may now be utilized for all processes. The immediate consequence of this is that this common set of signal buffers may be so dimensioned that buffer congestion never arises during any normal operating condition. Hence it is possible to assume that a signal can always be sent, i.e. rescheduling due to signal buffer congestion is never required.
- signal scheduling is self-reinforcing, i.e. because signal reception is the only reason for scheduling, a common set of signal buffers may be employed; because a common set of signal buffers may be employed, the number of individual buffers may be dimensioned so that buffer congestion never occurs; and because buffer congestion never occurs there is no need for scheduling for any other reason than signal reception.
- the previously described process scheduling with synchronous messages is equally self-reinforcing, i.e.
- the second difference is that the signals are assigned priorities depending on their importance, whereas processes have no inherent priority of their own. Any process is now executed at the priority of the signal whose arrival triggered the execution of the process. This means that the same process may be executed with completely different priorities at different points in time. Because the priorities of the signals are assigned completely independently of the processes sending and receiving the signals, it is possible to vary the priorities of the signals without modifying the design of the processes themselves. A system based on signal scheduling may therefore be easily 'tuned' by varying the signal priorities. Such tuning is impossible in a process scheduling system.
- the third difference is that a signal scheduling system has no need for the type of Process Control Block shown in Figure 10. Because processes are never scheduled directly there is no need for any Scheduling Control. Secondly, because the signal buffers are not associated with any particular process there is no need for any Message Queue Control. A direct consequence of signal scheduling as opposed to process scheduling is also that it is now possible to design the system so that a common stack area is utilized instead of a Private Stack Space. Hence, of the original Process Control Block in Figure 10, only the Program state, the Program Location, the Global Data Access Control and the Local Data Space remain.
- WAIT Execution is currently suspended until the next signal is received.
- RUNNING The process is currently being executed and blocked from signal reception. There is no need for any 'READY' state because the process is never itself scheduled.
- signal transfer may be made at least one order of magnitude more efficient than the message transfer in a process scheduling system because signal routing and scheduling may now be performed in a general way instead of individually per process or process pair.
- Typical signal transfer times are of the order of 20 microseconds or less. This is to be contrasted with the message transfer times of 1000 microseconds or more, which are typical in process scheduling systems.
- the fast signal transfer makes it possible to eliminate any other type of communication between software units without loss of efficiency, thereby again reinforcing the single reason for scheduling.
- the signals may be assigned N different priority levels. Each priority level is assigned a predefined number of Signal Buffers (SB), exemplified by the m signal buffers (SB 11 to SB 1m) on priority level 1 and the k signal buffers (SB N1 to SB Nk) on priority level N.
- the number of signal buffers assigned to the different priority levels (m, ... k) is dimensioned so that buffer congestion never will occur during any normal operating condition.
- the number of high priority signals will, in general, be considerably smaller than the number of low priority signals, and because the high priority signals are always treated before low priority signals, the number of signal buffers required to accommodate high priority signals will always be smaller than the number of signal buffers required to accommodate low priority signals in the same system.
- the signal scheduling on each priority level (i) is controlled by a set of two registers, a Job-In register (JI i) and a Job-Out register (JO i), and a Comparator circuit (C i) interacting with the interrupt control system of the processor.
- the JI i register always points to the first free signal buffer out of the signal buffers assigned to the associated priority level.
- the Comparator circuit (C i ) compares the contents of the JI i and the JO i registers and generates an interrupt signal (INT i ) as soon as the two registers are unequal.
- the signal buffer identified by the JI i register is seized for this particular signal.
- the signal buffer space is divided into two parts, a signal header part containing the Signal Identity (SID) and the Signal Destination (SDEST) as its most important components, and a signal information part (SINFO) carrying the information to be transported by the signal.
- On a signal interrupt to priority level (i), the processor starts executing a sequence of instructions or microinstructions, which identify the signal buffer indicated by the JO i register associated with priority level (i).
- This signal buffer contains all information associated with the signal next to be received.
- the Signal Destination part (SDEST) of the signal header identifies the actual receiving process and is now utilized to identify the data and program areas associated with that process.
- the Signal Identity (SID) is utilized to identify the task to be performed as will be described below, whereafter the information carried by the signal (SINFO) can be transferred to, or utilized by, the receiving process. After all of the information in the signal buffer has been utilized the signal buffer is discarded by means of incrementing the JO i register.
- if no unreceived signal remains, the interrupt signal (INT i) will disappear; otherwise an interrupt signal (INT i) will continue to be generated as long as any unreceived signal remains in the signal buffers for the associated priority level i.
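- The per-priority-level signal scheduling described above can be sketched in C as follows; the number of levels, the number of buffers per level and the field sizes are assumptions, and the Comparator circuit is modelled as a simple JI/JO comparison.

```c
#define LEVELS        4
#define BUFS_PER_LVL 32

typedef struct {
    unsigned      sid;         /* Signal Identity (SID)           */
    unsigned      sdest;       /* Signal Destination (SDEST)      */
    unsigned char sinfo[24];   /* signal information part (SINFO) */
} signal_buffer;

typedef struct {
    signal_buffer sb[BUFS_PER_LVL];
    unsigned      ji;          /* Job-In register:  first free buffer      */
    unsigned      jo;          /* Job-Out register: next signal to receive */
} priority_level;

static priority_level lvl[LEVELS];

/* Comparator C(i): an interrupt INT(i) is pending while JI != JO. */
static int interrupt_pending(int i) { return lvl[i].ji != lvl[i].jo; }

/* Sending seizes the buffer identified by JI and then increments JI.
 * Buffer congestion is assumed never to occur, as in the text above. */
void send_signal(int i, unsigned sid, unsigned sdest)
{
    signal_buffer *sb = &lvl[i].sb[lvl[i].ji % BUFS_PER_LVL];
    sb->sid   = sid;
    sb->sdest = sdest;
    lvl[i].ji++;               /* the comparator now raises INT(i) */
}

/* Reception uses the buffer identified by JO; incrementing JO discards
 * the buffer and clears INT(i) once JO has caught up with JI.          */
const signal_buffer *receive_signal(int i)
{
    if (!interrupt_pending(i))
        return 0;              /* no unreceived signal on this level */
    return &lvl[i].sb[lvl[i].jo++ % BUFS_PER_LVL];
}
```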
- the Signal Destination part (SDEST) is used to identify the Program Area (PA) and the Data Area (DA) of the destination process by means of their starting addresses (PSA and DSA).
- the Data Area (DA) contains all of the data for the process and also contains the so called STATE of the process.
- the STATE is a data element of fixed length, e.g. a byte element.
- One bit of the STATE element is used to represent the previously mentioned Program state with the two values WAIT and RUNNING, and the remaining 7 bits are used to indicate one of a number (Q) of functional conditions of the process.
- the Program Area (PA) contains two parts.
- the first part of the program area consists of a so called Signal Distribution Table with M entries (SEA 1 to SEA M).
- Each signal entry (SEA j ) contains the internal address to the program code related to reception of this signal.
- the actual tasks to be performed normally depend on the functional condition of the process, which functional condition is retained in the STATE element in the data area. For this reason a signal entry (SEA j ) normally points to an associated Signal Jump Table (SJT j ).
- the signal jump table principally contains one entry for each of the Q functional conditions of the process, each entry (PE jk ) identifying the actual set of instructions (PC jk ) to be executed on reception of signal (j) for functional condition (k).
- the start addresses associated with the destination process are determined from the SDEST element of the associated signal buffer.
- the SID element is thereafter used to indicate the actual signal entry (SEA SID ), from which the location of the first instruction to be executed is obtained.
- This instruction usually is a table jump instruction or set of instructions (JTAB), which uses the functional condition stored in the STATE element to determine further activities.
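- The two-level lookup just described (a Signal Distribution Table entry selected by SID, then a Signal Jump Table entry selected by the functional condition held in STATE) can be sketched in C as a pair of function-pointer tables; all names and table sizes are assumptions.

```c
#define MAX_SIGNALS     8
#define MAX_CONDITIONS  8

typedef struct data_area data_area;
typedef void (*task_fn)(data_area *da, const void *sinfo);

typedef struct {                                /* Signal Jump Table SJT(j)        */
    task_fn by_condition[MAX_CONDITIONS];       /* one entry PE(j,k) per condition */
} signal_jump_table;

typedef struct {                                /* start of the Program Area       */
    const signal_jump_table *sea[MAX_SIGNALS];  /* Signal Distribution Table       */
} program_area;

struct data_area {                              /* start of the Data Area          */
    unsigned char state;      /* bit 7: WAIT/RUNNING, bits 0-6: functional condition */
    unsigned char data[32];
};

/* Reception of signal 'sid': SDEST has already selected pa and da. */
void dispatch(const program_area *pa, data_area *da,
              unsigned sid, const void *sinfo)
{
    unsigned condition = da->state & 0x7F;      /* functional condition        */
    if (sid >= MAX_SIGNALS || condition >= MAX_CONDITIONS)
        return;                                 /* outside the tables: ignore  */
    pa->sea[sid]->by_condition[condition](da, sinfo);
}
```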
- a process may be either in the WAIT or in the RUNNING state.
- In the WAIT state the process is waiting for the next signal to be received, whereby the functional condition may be used to determine the activities to be performed on reception of the signal.
- If a process is RUNNING, then a signal has been received, at the reception of which the previous functional condition was used to determine the activities currently being performed. This previous functional condition is therefore no longer valid.
- the next valid functional condition is only determined at the termination of the processing, whereby the process reaches its next WAIT state.
- On reception of a signal the bit indicating the Process State is automatically set. If this bit was previously reset, then the process was in a WAIT state, i.e. the functional condition is valid and processing can therefore continue. At the termination of this processing the new functional condition is inserted into the STATE element, thereby also automatically resetting the process to WAIT.
- If the bit was previously set, then the process is currently RUNNING (either on a lower priority level interrupted by a higher priority signal or asynchronously being executed by another CPU). In this case reception of the signal is not possible, i.e. the signal must now be deferred for reception at the termination of the currently interrupted processing in order to prevent faults from occurring.
- Such deferment may for instance be achieved by seizing a signal buffer on a lower priority level in the standard manner, copying all information from the actual signal buffer into the newly seized signal buffer and thereafter releasing the actual signal buffer by incrementing the JO i register for the priority level (i) of the actual signal, thereby either inhibiting the associated interrupt signal (INT i ) and thus allowing the interrupted program to be resumed or allowing the next signal to be analysed for reception.
- When processing terminates by reaching the next WAIT state, the new functional condition is inserted in the STATE element of the process, thereby also resetting the Process state to WAIT.
- the signal buffer of the old received signal must be released by incrementing the JO i register associated with the priority level (i) of this signal.
- any interrupt will be completely transparent, i.e. a function never knows that, or if, it has been interrupted.
- the arrangement according to Figures 11 and 12 has a number of advantages.
- the interface between the different software units is defined by means of signals, whereby each software unit knows which signals it is able to receive and can completely disregard all other signals. This makes the system insensitive to faults and errors.
- a third advantage is that the signal communication is fast, with typical average signal transfer times of the order of less than 30 microseconds when this technique was first implemented.
- the signal transfer time is furthermore independent of the load on the system, unlike a process scheduling system, where the actual message transfer time is dependent on the amount of rescheduling that has to be done and therefore varies with the load on the system. Systems based on signal scheduling are therefore easily dimensionable.
- the signal distribution table at the beginning of the program area in Figure 12 must, of necessity, be limited in size.
- the maximum number of signals which any function block (process) is able to receive is 255, whereas the total number of signals in a system is of the order of several thousand.
- a number of the function blocks within the system interwork via standard interfaces. What this means is that the signal identity numbers within any standard interface have to be fixed.
- the MODE_THREE process contains a single data element (DATA) and performs a set of interconnected activities as follows. When the process is initially started the data element is cleared, after which the process reaches its first so called 'waiting node', i.e. a point where processing cannot proceed until a signal has been received. In this first waiting node (N0) the process waits for either an ADVANCE or a RETREAT signal to arrive before processing can continue.
- When an ADVANCE signal arrives in waiting node N0, the process will read information from its data element (DATA) and send this information with a ONE signal, whereafter the process will wait for the next signal to arrive in the second waiting node (N1). If instead a RETREAT signal arrives, the process will store the data carried by the signal into the data element
- When an ADVANCE signal arrives in the third waiting node (N2), the process will read information from its data element (DATA) and send this information with a THREE signal, whereafter the process will wait for the next signal to arrive in the first waiting node (N0). If instead a RETREAT signal arrives, the process will store the data carried by the signal into the data element (DATA) and return to the second waiting node (N1) to wait for the next signal.
- Figure 14 illustrates the principle of the implementation of the function of Figure 13 in a system according to Figures 11 and 12 and assuming a processor according to Figure 1.
- the Data Area (DA) contains two elements, the DATA element identified in Figure 13 and a STATE element required by the actual implementation.
- the STATE element may have three values (N0, N1, N2) which in this implementation correspond with the numeric values 0, 1 and 2.
- the Program Area contains 10 identifiable sections.
- the first section (located at offsets +0, +1 and +2 relative to PSA) forms the Signal Distribution Table. It is assumed that entry 0 in this table (I) is always reserved for possible initialisation activities, i.e. the first 'real' signal identity is '1', the second '2' etc.
- the ADVANCE signal has been assigned the absolute identity '1' and the RETREAT signal the absolute identity '2'.
- the contents of the signal distribution table locate the initialisation sequence at offset +3, the ADVANCE reception sequence at offset 45 and the RETREAT reception sequence at offset +22.
- the second section of the program area (at offsets +4 and +5) forms the initialisation sequence.
- the third section of the program area forms the jump table at reception of the ADVANCE signal and includes a previously mentioned Test-and-Set instruction (TAS) for interference protection.
- the fourth, fifth and sixth sections of the program area form the actual sequences initiated by the reception of ADVANCE signals.
- the seventh section forms a corresponding jump table section for the RETREAT signal.
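- Read as a finite state machine, the MODE_THREE behaviour that Figure 14 implements can be sketched as below. The actions shown for waiting node N1, and the node reached by a RETREAT signal in N0, are not spelled out in the excerpt and are assumptions here; the outgoing signal identities are likewise assumed.

```c
enum node { N0, N1, N2 };                 /* values of the STATE element      */
enum sig  { ADVANCE = 1, RETREAT = 2 };   /* received signal identities       */
enum out  { ONE, TWO, THREE };            /* sent signals, identities assumed */

struct mode_three {
    enum node state;
    int       data;                       /* the DATA element */
};

extern void send(enum out s, int payload);   /* stands in for signal sending */

void mode_three_receive(struct mode_three *p, enum sig s, int payload)
{
    switch (p->state) {
    case N0:
        if (s == ADVANCE)      { send(ONE, p->data);   p->state = N1; }
        else if (s == RETREAT) { p->data = payload;    p->state = N2; }  /* node assumed */
        break;
    case N1:                               /* behaviour assumed symmetric to N0 and N2 */
        if (s == ADVANCE)      { send(TWO, p->data);   p->state = N2; }
        else if (s == RETREAT) { p->data = payload;    p->state = N0; }
        break;
    case N2:
        if (s == ADVANCE)      { send(THREE, p->data); p->state = N0; }
        else if (s == RETREAT) { p->data = payload;    p->state = N1; }
        break;
    }
}
```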
- Figure 14 is an example of one valid implementation of Figure 13. Nevertheless, Figure 14 only represents the function of Figure 13 within the framework of, and the restrictions imposed by the means of this implementation, i.e. the particular Finite State Machine Oriented approach with a STATE element coded in a particular way and with signals sent and received in a particular way. Furthermore, it makes it the responsibility of the programmer to take all these implementation restrictions into account, i.e. he has to design his functions with a particular implementation in mind and, as a result, introduce nonfunctional elements and sequences and thereby distort the original function.
- the function illustrated by Figure 13 shows, for example, a continuously looping process with three waiting nodes, where signals may be received and containing a single data element.
- Figure 15 which should show the same function, actually gives the impression that the function contains two data elements and a set of disjoint program sequences. Neither the existence of waiting nodes nor the looping nature of the interrelationship between the program sequences is apparent anymore.
- a third disadvantage with the signal communication as described above is the means available for deferring of signals. Because the JI/JO registers control the interrupt system as defined in Figure 11, a signal which has to be deferred for whatever reason has to be removed from its original signal buffer and transferred to a lower priority buffer in order not to block the resumption of execution of the lower level interrupted program, which temporarily blocks reception of the signal. However, this means that the priority of the signal will be artificially lowered, with the result that the order between different signals may be lost. Hence the possibility for malfunctions exists in the case where signals are deferred.
- a signalling system aimed at improving the advantages of asynchronous signal communication, while at the same time removing some of the disadvantages, is disclosed in Belgian patent 876025. Like the previously described system, this system also works on the principle of a set of signal buffers, with the number of the signal buffers dimensioned so that buffer congestion never occurs. Unlike the previous system, where the signal buffers and the signal reception were controlled by means of fixed registers (JI/JO) with a fixed set of buffers for each priority level, the new signalling system controls the signal buffers by means of linked lists of the First-In-First-Out (FIFO) type with one list per priority level. This means that the signal buffers are, as such, common for all priority levels.
- a signal buffer consists of a signal header part and a signal data part, where the signal data part directly corresponds with the previous system.
- the signal header part still contains information corresponding to SID and SDEST of Figure 11, although one of the objects of this signalling system is to allow undefined signal destinations in connection with so called 'system basic messages' and thereby allow a greater flexibility and on-line pluggability than the previous system.
- the signal header contains a list link element, necessary to allow the system to be controlled by means of linked lists.
- the new signalling system does not assume a Finite State Machine approach as such, i.e. it does not assume the existence of a STATE data element.
- the new signalling system basically retains the Program State and Program Location of Figure 10 in the form of an 'Action Indicator' to indicate whether a process is in its WAIT or in its RUNNING state and a 'Program Pointer' to indicate the program location of a waiting node. This arrangement frees the system from maintaining individual signal distribution tables of fixed length, i.e. the signals may be coded with completely arbitrary signal identity codes.
- Figure 16 illustrates an implementation of the MODE_THREE process with the new signalling system, but still assuming a processor according to Figure 1.
- the Data Area (DA) contains three elements in this case, the DATA element explicitly identified in Figure 13, and two implicit data elements, the Action Indicator element (AI) and the Program Pointer element (PP).
- the Action Indicator is a binary element with two values corresponding to the process states WAIT and RUNNING. When the Action Indicator indicates that the Process is in a WAIT state, then the Program Pointer identifies the offset to an instruction following a WAIT instruction. It is to be noted that the implicit data elements are not explicitly accessible by the programmer at any time.
- the Program Area principally contains a single sequence of instructions, which directly corresponds with the network in Figure 13.
- the program starts at the Program Start Address (PSA), where the initialisation sequence is executed.
- This initialisation sequence consists of the two instructions at offsets +0 and +1, the second of which is a WAIT instruction.
- This WAIT instruction principally deposits the offset of the instruction following the WAIT instruction (in this case +2) into the Program Pointer, resets the action indicator to WAIT and initiates scheduling of the next signal.
- the WAIT instruction may also return the last signal buffer to the pool of idle signal buffers.
- the actual signal buffer is identified as the first signal buffer in the appropriate list.
- the SDEST information in the signal buffer is as before used to identify the start of the Data Area (DSA) and the start of the Program Area (PSA).
- the Action Indicator is Test-and-Set to RUNNING and, provided that it was previously reset to WAIT, signal reception is allowed to continue. If the Action Indicator was already set to RUNNING then the actual signal is deferred by inserting it in the deferred list.
- the Program Pointer value is added to the PSA address, giving the resumption point for execution. Immediately after initialisation this would indicate the instruction at offset +2, which is an ACCEPT instruction.
- This instruction compares the actual identity of the signal as given by the SID element of the signal buffer with the signal identity number given by the instruction. If no match is found, i.e. the two signal identity numbers are unequal, then the next instruction is executed, whereby either a new signal identity number is compared in the same manner or a DISCARD instruction is encountered. When a DISCARD instruction is encountered the Action Indicator is reset to WAIT and the actual signal buffer returned to the common pool of idle signal buffers, whereafter the next signal is scheduled for reception.
- When a match is found between the signal identity number from the signal buffer and the signal identity number in an ACCEPT instruction, the ACCEPT instruction also contains the location from where further execution is to continue. Hence, if an ADVANCE signal is received at offset +2, then processing will continue from the instruction at offset +5, i.e. the instructions at offsets +5, +6 and +7 will be executed.
- the instruction at offset +7 is another WAIT instruction.
- the Program Pointer is set to +8 in this case and the Action Indicator reset to WAIT.
- If the previous signal buffer has not already been released, it will be released at this time and the next signal to be received is scheduled.
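- The WAIT, ACCEPT and DISCARD semantics of the Figure 16 system can be sketched in C as follows; the helper routines and the data layout are assumptions, and the instruction stream itself is left out of the sketch.

```c
struct sigbuf  { unsigned sid; unsigned sdest; struct sigbuf *link; };

struct process {
    int      running;   /* Action Indicator: 0 = WAIT, 1 = RUNNING          */
    unsigned pp;        /* Program Pointer: offset of the resumption point  */
};

extern void release_buffer(struct sigbuf *sb);   /* return buffer to idle pool  */
extern void schedule_next_signal(void);          /* schedule next queued signal */

/* WAIT: record the resumption point, reset the Action Indicator,
 * release the current signal buffer and schedule the next signal. */
void do_wait(struct process *p, unsigned next_offset, struct sigbuf *cur)
{
    p->pp      = next_offset;
    p->running = 0;
    release_buffer(cur);
    schedule_next_signal();
}

/* ACCEPT: compare the received signal identity with the one named by the
 * instruction; on a match, execution continues from 'target'.            */
int do_accept(const struct sigbuf *cur, unsigned wanted, unsigned target,
              unsigned *continue_at)
{
    if (cur->sid == wanted) { *continue_at = target; return 1; }
    return 0;    /* no match: fall through to the next ACCEPT or a DISCARD */
}

/* DISCARD: drop an unexpected signal, return to WAIT at the same point
 * and schedule the next signal for reception.                          */
void do_discard(struct process *p, struct sigbuf *cur)
{
    p->running = 0;
    release_buffer(cur);
    schedule_next_signal();
}
```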
- the existence of a separate DEFER list for deferred signals necessitates all signals in the deferred list to be scheduled for reception before all other signals, i.e. any deferred signal automatically has a higher priority than all other signals.
- When going through the list of deferred signals, each signal must be analysed only once without losing the order of arrival of the deferred signals. Furthermore, there is a distinct possibility of the number of deferred signals growing, with the deferred signal analysis and scheduling therefore causing a significant overhead in the system.
- FIG 17 illustrates a process P, which is able to receive two types of signal (S1 and S2).
- the process P contains a network of tasks, of which the relevant part for the purpose of illustration consists of two waiting nodes (WX and WY) and four tasks (T1, T2, T3 and T4).
- When the process P is in waiting node WY it is waiting exclusively for signal S2, i.e. all other signals will be discarded.
- After signal S1 has been received, i.e. while the process is executing the tasks T1, T2 and T3, no further signal reception is possible.
- FIG. 18 shows the process P within its context as part of a larger structural entity (GP) which, in addition to the process P, also contains a so called 'template definition' of a task template (TP).
- This task template is used for both task T2 and task T3 of the P process, i.e. the internal structure of both of these tasks is as defined by the TP template.
- the TP template is assumed to contain an internal waiting node (WP), where either an S3 or an S4 signal may be received.
- process P according to Figure 19 is not equivalent with the function of process P according to Figures 17 and 18.
- the process P according to Figure 19 is explicitly aware of the existence of the signals S3 and S4 in addition to S1 and S2, whereas the process P in Figures 17 and 18 only knows about S1 and S2.
- If a signal S2 actually occurs while the process is executing any of the tasks T1, T2 or T3, then the reception of this signal is, by default, deferred until the process has reached its next waiting node, i.e. WY.
- a signal buffer consists of two parts as shown by Figure 11, i.e. a signal header part with a number of standard elements (SID, SDEST, etc.) and a signal information carrying part (SINFO). Normally the signal information part is never utilized to its full extent.
- the signal buffer used for the signal S1 should be retained until entering the waiting node WY, i.e. any local data temporarily stored in this signal buffer must be available for use in all of the tasks T1, T2 and T3.
- the signal buffer used for S1 will be released on entering the waiting node T2-WP, which directly precludes use of the signal buffer for storage of local information on a regular basis.
- FIG. 21 shows a general overview of a Processing System according to the invention.
- the difference between this processing system and a conventional processing system of the type illustrated by Figure 1 is that a Master Control Unit (MCU) according to the invention has been interposed between the Memory (M) and the CPU, whereby the CPU asserts addresses to the MCU on its Address Bus (ABUS) and exchanges data with the MCU via the Data Bus (DBUS).
- the MCU is in its turn connected to the memory (M) via secondary Address and Data Buses (ABUS2 and DBUS2) with the necessary control signals exchanged both between the CPU and the MCU and between the MCU and the memory (M).
- FIG 22 shows an overview of the main partitionings of the memory (M) required to support independent software signalling units according to the invention.
- independent software units are hereinafter referred to as 'Sofchips'.
- the memory is first partitioned into two major areas, which are referred to as System Memory (SM) and User Memory (UM).
- the User Memory is partitioned into one logical area for each Sofchip (SCHP 1 , SCHP 2 , ... SCHP N ). It is to be noted that, although the Sofchip areas in Figure 22 and in subsequent Figures are shown as contiguous areas for ease of understanding of the invention, this is not, as such, a prerequisite for the invention.
- the System Memory is partitioned into a Common Subroutine Area (CSA), a Stack Area (SA) and a Signal Buffer Area (SBA).
- the Common Subroutine Area contains a number of Common Subroutines (CS 1 , CS 2 , ... CS a ). These subroutines may be used by any sofchip in a way equivalent with any machine instruction and may therefore be considered as extensions to the machine instruction list of the CPU.
- the Stack Area contains a single System Stack (SS) and a User Stack (US).
- the User Stack is utilized in connection with user programmed subroutine calls and temporary storage of information.
- the System Stack is used in connection with system interrupts and fault traps .
- the Signal Buffer Area contains a number of Signal Buffers (SB 1 , SB 2 , ... SB b ), which are utilized for all communication between the sofchips.
- the number of signal buffers (b) is assumed to be dimensioned so that during all normal operating conditions there always is a free signal buffer available when required.
- Figure 23 illustrates the principal internal structure of an operational sofchip.
- a sofchip may contain a number of processes (P1 to P P ), where each process forms a concurrently operating software unit. Each process may be instantiated a number of times individual for each process (N P1 to N PP ), i.e. process P1 may be considered to form an array of N P1 instances, etc.
- each process contains its program (PR p) and may, in addition, contain a data structure.
- The sofchip itself may also contain a common data structure (DS 1), which is accessible from any of the processes belonging to the sofchip.
- Each data structure may contain an arbitrary mixture of subsidiary data structures (DS i1 to DS ij) and single data elements (DE k1 to DE kl), both of which may form arrays with an arbitrary number of elements (N i1 ... N kl).
- the sofchip communicates with its environment by means of a set of signals (S1 ... Sn). These signals are in reality sent and received by the program parts of the processes belonging to the sofchips with the information carried by the signals stored into and retrieved from the data elements belonging to the sofchips. It is to be noted that the processes of a sofchip can only access the data elements belonging either to the process itself or, if a common data structure (DS 1 ) exists, to that data structure. Direct data access between sofchips is not possible.
- FIG 24 illustrates an example of an actual sofchip (SCHP).
- This sofchip contains a single common data element (DE1) and three processes (P1, P2, P3).
- Process P1 occurs only once (i.e. a process with a single instance) whereas processes P2 and P3 occur N times (processes with N instances).
- the signals S 1 to S n sent and received by the sofchip are distributed between the processes so that P1 sends and receives signals S 1 to S i, process P2 sends and receives signals S i+1 to S j and process P3 sends and receives signals S j+1 to S n.
- Process P1 contains its program (PR1) and a data element (DE2), which forms an array with N individual elements.
- Program PR1 may access both the common data element DE1 and the DE2 array.
- Process P2 contains its program (PR2 ) and a single data element (DE3). PR2 only accesses DE3, it is not required to access the common data element DE1.
- Process P3 finally contains its program (PR3) and a data element (DE4), which forms an array with M elements.
- Program PR3 may access both DE4 and the common data element DE1.
- FIG 25 illustrates a possible implementation of the sofchip in Figure 24 within a processing system according to the invention.
- Memory allocated to the sofchip is divided into two memory areas, a Program Area (PA) and a Data Area (DA).
- the Program Area contains the programs (PR1, PR2, PR3) of the three processes (P1, P2, P3) belonging to the sofchip. It is thereby to be noticed that, although the processes themselves are shown to be instantiated in Figure 24, the corresponding implemented programs need not be instantiated, because such instantiation would only mean an unnecessary replication of identical programs.
- the Data Area contains the four data elements DE1, DE2, DE3 and DE4, which have been arranged contiguously for convenience. In this case all instantiations are carried over into the implementation, i.e. space is allocated for N instances of DE2 (P1.DE2(1) to P1.DE2(N)), for N instances of DE3 (P2(1).DE3 to P2(N).DE3) and for M*N instances of DE4 (P3(1).DE4(1) to P3(N).DE4(M)).
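- A possible C layout of this data area, with hypothetical instance counts and element types, is sketched below.

```c
#define N 4    /* number of instances of P2, P3 and of the DE2 array (assumed) */
#define M 3    /* number of elements in each DE4 array (assumed)               */

struct schp_data_area {
    int de1;            /* common data element DE1      */
    int p1_de2[N];      /* P1.DE2(1)    .. P1.DE2(N)    */
    int p2_de3[N];      /* P2(1).DE3    .. P2(N).DE3    */
    int p3_de4[N][M];   /* P3(1).DE4(1) .. P3(N).DE4(M) */
};
```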
- the CPU controls this execution through its normal register set, exemplified by the registers PC, AR, IR and DR in Figure 25.
- Figure 26 illustrates the internal arrangement of the MCU when addressing the Common Subroutine Area (CSA) and the Stack Area (SA) in the memory (M).
- the Common Subroutine Area and the Stack Area form a single contiguous area in the memory, and this area can be addressed by means of M address bits, where M is less than the total number of address bits (N) on the primary address bus (ABUS).
- One of the four bits not directly used for addressing purposes (bit N-4) is used to discriminate between access of a common subroutine or stack area, where no address or data translation is required, and access of any other area.
- the value '0' has been arbitrarily selected to indicate direct memory access (common subroutine or stack access). In this case the three remaining bits are not used (NU) at all.
- Figure 27 illustrates the principle of accessing memory areas belonging to the sofchips and to the signal buffers.
- the discriminator bit is set to 1, thereby causing the MCU to perform an address and data translation by means of a Descriptor Table (DT), a decoder (XDEC), a set of Index Registers (XR0, XR1, ... XRQ), a Mask and Shift Unit (MSU) and a Range Conversion Unit (RCU).
- the arrangement in Figure 27 differs from the general arrangement in the U.K. Patent Application 8405491 in two details. Firstly, the descriptor table entries have been grouped together so that all descriptors belonging to the same sofchip form single contiguous areas in the descriptor table (SCHPD 1 , SCHPD 2 , ... SCHPD N ). Secondly, one of the index registers (exemplified by XRO in Figure 27) has been given the special purpose of identifying the actual sofchip under execution by means of the starting address to the sofchip descriptor area within the descriptor table (DT). In order to speed up address translation this index register (XRO ) is directly added to the Virtual Address obtained on the address bus from the CPU by means of an extra adder circuit.
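- The translation path of Figure 27 can be sketched in C as below. The bit layout of the virtual address, the table size and all identifiers are hypothetical; the point is only to show how the contents of XR0 act as an extra adder in front of the descriptor table look-up.

```c
#include <stdint.h>

/* Hypothetical model of the translation path in Figure 27.                   */
typedef struct {
    uint32_t base;     /* real base address of the described memory area      */
    uint32_t limit;    /* size of the area, checked by the RCU                */
} descriptor_t;

#define DT_SIZE 4096u
static descriptor_t dt[DT_SIZE];              /* the Descriptor Table (DT)    */

/* xr0 holds the start of the executing sofchip's descriptor group, so the
 * descriptor selection reduces to one addition (the 'extra adder circuit').  */
static uint32_t translate(uint32_t vaddr, uint32_t xr0)
{
    uint32_t desc_sel = (vaddr >> 11) & 0x3Fu;   /* assumed field positions   */
    uint32_t offset   = vaddr & 0x7FFu;          /* extracted by the MSU      */

    descriptor_t d = dt[(xr0 + desc_sel) % DT_SIZE];

    /* range check corresponding to the Range Conversion Unit (RCU)           */
    return (offset < d.limit) ? d.base + offset : UINT32_MAX;  /* error mark  */
}
```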
- the standard technique used to avoid interference between the interrupting and the interrupted program is to save all process registers on the system stack in connection with an interrupt. Thus, when the interrupting program terminates, then the registers may be restored to their values at the time of the interrupt, thereby allowing the interrupted program to resume execution as if the interrupt had not occurred at all.
- the above arrangement not only allows a single processor to execute programs on different priority levels without interfering with each other, it also allows several CPUs to be connected to a single MCU and execute programs independently of each other, provided that each processor executes on a unique priority level.
- index registers within each index register set XRS(i) of the MCU are given further dedicated use as exemplified in Figure 29.
- index register XR0 has been selected as the sofchip register (SCHPR), whereby the sofchip identification is performed by means of the starting address to the sofchip descriptor area within the descriptor table (DT).
- a second index register is dedicated to the identification of the actual received signal.
- This register is called the Received Signal Register (RSR) and corresponds with point (4) in Figure 25.
- index register XR1 has been selected as the received signal register.
- Two further index registers are dedicated to the control of signal queues as will be described later.
- Signal queues are of the FIFO type (First-In-First-Out) and are controlled by a Signal Queue Head Register (SQHR) and a Signal Queue Tail Register (SQTR).
- Index registers XR2 and XR3 are dedicated for this purpose in Figure 29. It is to be noted that neither of these two registers corresponds with any of the points in Figure 25.
- a fifth index register is dedicated to the identification of the signal next to be sent. This register is called the Send Signal Register (SSR) and corresponds with point (5) in Figure 25.
- Index register XR4 is dedicated to this purpose in Figure 29.
- a sixth index register is dedicated to the actual process instance index.
- This index register is called the Instance Index Register (IXR) and is used in conjunction with the SCHPR register when accessing any data elements belonging to a process instance.
- Index register XR5 is selected for this purpose in Figure 29.
- the remaining index registers (XR6 to XRQ) may be used without any dedicated purpose.
- Figure 30 illustrates the essential additional circuitry required for a signalling system according to the invention and the principle for interfacing the new circuitry to existing MCU circuitry.
- This additional circuitry consists of a Microprogram Unit (MPU), an Interrupt Controller (ICNT) and a set of gates (GI 0 to GI 7 ) and comparator circuitry (CA 0 , CB 0 , ... CA 7 , CB 7 , STCD).
- the MPU interfaces to the main memory (M) via the secondary address bus (ABUS2), i.e. the MPU may independently assert addresses to the main memory.
- the MPU also controls the readout of information from the memory via MSU and is able to interface directly to the RCU.
- the MPU is also able to independently control the task level via the Level Decoder (LDEC) as will be later explained.
- the MPU is itself controlled from the Interrupt Controller (ICNT) and may also generate interrupt signals (INT) to the CPU.
- the relevant parts of the MPU for the purpose of the invention are the Micro Program Memory (MPM), the Micro Program Descriptor Table (MPDT) and two internal registers, an Idle Queue Header register (IQH) and an Idle Queue Tail register (IQT).
- the Micro Program Memory contains a set of microprograms (MP 1 , MP 2 , ... MP P ), which control the use of the MCU according to the invention. These microprograms are described in detail in Figures 36, 37, 38, 39, 40, 41, 44 and 45.
- the Interrupt Controller is a standard interrupt controller, such as exemplified by the INTEL 8259A Programmable Interrupt Controller, and is as such only a subject of the invention as far as its interconnections to the other relevant parts of the MCU are concerned.
- the Interrupt Controller reacts to a number of interrupt signals. These interrupt signals are either generated by the above mentioned gate/comparator circuitry, as will be explained below, or generated for particular combinations of bits on the primary address bus from the CPU when the previously mentioned discriminator bit is set to '1', by means of the Mode Decoder (MD) as shown in Figure 30. It is to be noted that neither the number nor the particular bits chosen to indicate the corresponding mode is critical for the invention; bits 12-15 on the address bus in Figure 30 have been chosen only as an example.
- Each set of index registers contains a Received Signal Register (RSR) and a Signal Queue Header Register (SQHR), which in the example shown in Figure 29 have been assigned as index registers 1 and 2 (XR1 and XR2).
- the contents of the RSR register belonging to each group of index registers (XRS(i)) is compared on a bit-by-bit basis, by means of an associated comparator circuit (CA i ), with a predefined Stop Code (STCD), whereby the comparator circuit generates an output signal for any difference between the contents of the RSR register and the stop code value.
- the contents of the SQHR register is compared with the stop code value by means of a second associated comparator circuit (CB i ), whereby the comparator generates an output signal for any inequality.
- each pair of comparator circuits (CA i and CB i ) is combined by means of a gating arrangement (GI i ) into a single interrupt signal (IR i ), whereby an interrupt signal will be generated when the SQHR register is different from the stop code, provided that the RSR register is simultaneously equal to the stop code.
- Each set of index registers (XRS(0) to XRS(7)) is thus able to generate one interrupt signal, whereby the Interrupt Controller (ICNT) resolves the internal priority between these internally generated interrupt signals and the interrupt signals generated from the CPU by means of the Mode Decoder (MD). It is again to be noted that the number of index register sets is not critical for the invention.
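- The interrupt condition produced by each gate GI i can be summarized by a small C sketch; register widths, the stop code value and all names are assumptions of the sketch.

```c
#include <stdbool.h>
#include <stdint.h>

#define STCD  0xFFFFu        /* predefined stop code; the value is assumed     */

typedef struct { uint16_t xr[8]; } xrs_t;   /* one index register set XRS(i)   */
#define RSR(s)  ((s)->xr[1])                /* XR1: Received Signal Register   */
#define SQHR(s) ((s)->xr[2])                /* XR2: Signal Queue Head Register */

/* Models gate GI i: interrupt IR i is raised when a signal is queued
 * (SQHR differs from the stop code) while no received signal is being
 * handled on that level (RSR equals the stop code).                           */
static bool signal_interrupt_pending(const xrs_t *set)
{
    return SQHR(set) != STCD && RSR(set) == STCD;
}
```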
- Figures 28 and 30 show a total of 8 index register sets, but any other number is possible.
- during execution of a process program, the MCU has one of its sets of index registers allocated to the execution of that process program.
- the only reason for scheduling and execution of a program is the reception of a signal.
- the signalling system according to the invention is similar in kind to the previously described signalling systems, i.e. the signals are assigned different priorities and signal transfer takes place by means of a set of signal buffers.
- each process is controlled by means of a Program Pointer (PP), which is not directly accessible to the programmer, and which contains two components, an Action Indicator (AI) and a Pointer (PTR) as illustrated by Figure 31.
- the Pointer is not restricted to instruction addresses, and the Action Indicator has a total of four values instead of only two.
- the four values of an Action Indicator according to the invention are as follows:
- WAIT ('0'): Execution is currently suspended until the next signal is received. In this case the Pointer indicates the instruction address of the next waiting node.
- TRANSIT ('1'): The process is in transit on the indicated level of execution and blocked from signal reception on this level. In this case the Pointer identifies the signal buffer containing the signal which caused the transition.
- NORMAL ('2'): The associated signal is currently undergoing analysis or task execution. In this case the Pointer is not significant.
- DEFERRED ('3'): Reception of the associated signal has been deferred until the next waiting node.
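- A minimal C sketch of such a program pointer, assuming the numeric codes '0' to '3' quoted later in the description, could look as follows (all type names are hypothetical):

```c
#include <stdint.h>

/* Action Indicator values; the codes correspond to the compound values
 * ('0'..'3') quoted later in the description.                                 */
enum action_indicator {
    AI_WAIT     = 0,   /* suspended until the next signal is received          */
    AI_TRANSIT  = 1,   /* in transit, blocked from reception on this level     */
    AI_NORMAL   = 2,   /* associated signal undergoing analysis/execution      */
    AI_DEFERRED = 3    /* reception deferred until the next waiting node       */
};

/* A Program Pointer (PP); not visible to the programmer.                      */
typedef struct {
    uint8_t  ai;       /* Action Indicator (AI)                                */
    uint32_t ptr;      /* Pointer (PTR): waiting-node address or buffer id     */
} program_pointer_t;
```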
- the signal buffers according to the invention consist of a Signal Header part (SH) and a Signal Information Part (SINFO) as illustrated by Figure 32.
- the significant difference between a signal buffer according to the invention and previously known arrangements is that the Signal Header part, in addition to the standard Signal Identity element (SID) and possible standard Link and Priority Indicator elements (LE, PL), contains a Program Pointer Element (PP) with the same internal structure as the process program pointer shown in Figure 31 and used in a similar way, and that the Signal Destination element (SDEST) is structured in a particular way, which directly reflects the sofchip structure.
- the invention assumes that a standard sofchip structure has been defined.
- An example of such a standard sofchip structure is illustrated by Figure 24.
- This standard sofchip structure assumes that each sofchip may contain two types of processes, unreplicated processes and replicated processes, whereby all replicated processes have exactly the same number of instances (N). If it is assumed that all sofchips within a system are assigned a unique numeral identity code starting from one, then each sofchip may be identified in toto by means of its identity number or Sofchip Index (SCHPX).
- the actual instance may be identified by a number in the value range 1...N.
- This index is defined as the Instance Index (IX).
- the value 0 may be used as an Instance Index value to indicate nonreplicated processes.
- each process within a sofchip may be given a number, which is unique within the sofchip or, alternatively, within the group of either nonreplicated or replicated processes belonging to the sofchip. This number may now be used as a Process Index (PX) to identify the actual process.
- each process may uniquely be identified by means of three components, i.e. the Sofchip Index (SCHPX), the Instance Index (IX) and the Process Index (PX) shown as the three components of the SDEST element of the Signal Header part of a signal buffer in Figure 32.
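- A possible C rendering of a signal buffer with this header structure is sketched below; field widths, the size of the information part and all identifiers are assumptions made for illustration.

```c
#include <stdint.h>

/* Signal Destination (SDEST) as structured in Figure 32.                      */
typedef struct {
    uint16_t schpx;    /* Sofchip Index (SCHPX), unique per sofchip, from 1    */
    uint16_t ix;       /* Instance Index (IX): 1..N, or 0 for nonreplicated    */
    uint16_t px;       /* Process Index (PX), unique within the sofchip        */
} sdest_t;

#define SINFO_WORDS 8  /* size of the information part: an assumed value       */

/* One signal buffer: Signal Header part (SH) plus information part (SINFO).   */
typedef struct {
    uint16_t sid;                      /* Signal Identity element (SID)        */
    uint16_t le;                       /* Link Element (LE) for queueing       */
    uint16_t pl;                       /* Priority Level indicator (PL)        */
    struct { uint8_t ai; uint32_t ptr; } pp;   /* Program Pointer element      */
    sdest_t  sdest;                    /* Signal Destination element           */
    uint16_t sinfo[SINFO_WORDS];       /* Signal Information Part              */
} signal_buffer_t;
```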
- Figure 33 illustrates the total implementation structure of a sofchip according to Figure 24.
- Figure 33 also shows the Program Pointers required for the different processes as well as an assumed Common Process Routine Area (CPR) within the sofchip Program Area (PA).
- the nonreplicated process P1 has a single program pointer (P1.PP), whereas each instance (i) of the processes P2 and P3 has its own program pointer (P2(i).PP, P3(i).PP).
- the Common Process Routine Area contains subroutines, which belong to the sofchip, i.e. they can only be called by programs belonging to the sofchip itself.
- Figure 33 also shows a possible descriptor table structure associated with the sofchip, which encompasses both a general MCU descriptor table (DT, see Figure 27) and a special MPU descriptor table (MPDT, see Figure 30).
- the MPU Descriptor Table contains the Signal Buffer Base Address (SBAB), i.e. the address in the main memory (M) where the first signal buffer is located. SBAB is used internally within the MPU to calculate individual signal buffer addresses as will be described in connection with the individual microprograms below.
- the MPU Descriptor Table (MPDT) further contains a Sofchip Descriptor Table area (SCHPDT) where, for each sofchip, two pointers are held.
- the first pointer identifies a secondary descriptor area within the MPU Descriptor Table itself (SCHPPD k ).
- This secondary descriptor area holds the starting addresses within the Program Area of each process within the sofchip (P1PA, P2PA, P3PA) as well as the base addresses to the program pointers for the processes (P1PPA, P2PPA, P3PPA).
- the second pointer identifies the starting address within the general Descriptor Table (DT) of the sofchip descriptor (SCHPD k ).
- the sofchip descriptor table (SCHPD k ) is shown to contain a descriptor (PD) to the Program Area (PA), a descriptor for each accessed data element (DE1D, DE2D, DE3D, DE4D) belonging to the process as well as a descriptor for accessed information in sent or received signals (RSD-, SSD-).
- the CPU executes the program of each process instruction by instruction as in any normal processing system. As long as the instructions issued are ordinary ones, for instance Store (SD) and Load (LD) instructions with normal address modifier codes, the processing proceeds normally.
- When particular address modifier codes are detected by the MCU, one of the special microprograms MP 1 , MP 2 , ... MP P in Figure 30 may be evoked and independently executed by the MPU.
- These microprograms include, but are not restricted to, the microprograms described below.
- Figure 34 shows a typical program arrangement of the program in the main memory (M) according to the invention.
- This program consists of two kinds of instruction sequences.
- the first kind is a normal instruction sequence representing the execution of tasks to take a process from one waiting node to another.
- This type of program is therefore defined as a Task Program.
- Task Programs are typically terminated by an instruction or an instruction sequence causing the process to enter the next waiting node. According to the arrangement in Figure 34 this is performed by means of a standard Common Subroutine (CS WAIT ).
- On calling this subroutine (as on calling any subroutine) the return address from the subroutine (address to the Call instruction + 1) is deposited topmost on the currently used stack.
- the CS WAIT subroutine simply pops the return address from the top of the stack into the DR register and then issues a Store Instruction with the value to be stored in the DR register and with the address modifier indicating 'WAITMODE'.
- MD Mode Decoder
- ITT Interrupt Return instruction
- the second kind of instruction sequence in the program of a process is associated with signal reception in a waiting node and is not a proper program (i.e. an executable sequence of instructions) at all.
- the first memory word of such a sequence, which follows a Call CS WAIT instruction according to the invention, contains two items of information, i.e. the Task Level (TL) of the waiting node and the Number of Signals (SIGNUM) expected in that waiting node.
- the Task Level is used to indicate whether a waiting node occurs directly within a process (subsequently coded as task level '0') or within a procedure called from a process (subsequently coded as task level '1') or from another procedure (subsequently coded as task level '2', '3', etc.).
- the significance of the Task Level is that it indicates whether an unexpected signal is to be discarded (task level '0') or deferred (all other task levels).
- the waiting nodes WX and WY would have task level '0' and the waiting node WP task level '1'.
- the memory word containing the Task Level, Signal Number information is followed by a signal reception table with the number of entries given by the Signal Number information (SIGNUM).
- Each such entry consists of two memory words, where the first word of any entry (i) contains the actual Signal Identity (SID i ) in binary code and the second word contains the address (TA i ) within the Program Area of a sofchip to the first instruction of the corresponding task program.
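- The scan of such a signal reception table can be sketched in C as follows, assuming one 16-bit memory word per element and a hypothetical packing of TL and SIGNUM into the first word:

```c
#include <stdbool.h>
#include <stdint.h>

/* A waiting node as laid out in program memory (Figure 34): one word holding
 * the Task Level (TL) and the Number of Signals (SIGNUM), followed by SIGNUM
 * entries of two words each: Signal Identity (SID) and Task Address (TA).     */
static bool lookup_task(const uint16_t *node, uint16_t sid, uint16_t *task_addr)
{
    uint16_t signum = node[0] & 0xFFu;    /* packing of TL/SIGNUM is assumed   */

    for (uint16_t i = 0; i < signum; i++) {
        if (node[1 + 2 * i] == sid) {     /* first word of entry i: SID        */
            *task_addr = node[2 + 2 * i]; /* second word of entry i: TA        */
            return true;
        }
    }
    return false;                         /* signal not expected in this node  */
}
```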
- Figure 35 illustrates a complete implemented sofchip, which corresponds with the logic function of Figure 13.
- This program contains three waiting node sequences as marked (instructions 2-6, 12-16 and 22-26) and a total of seven task sequences (instructions 0-1, 7-11, 17-21, 27-31, 32-33, 34-35 and 36-37).
- the task sequences are executed by the CPU, whereas the waiting node sequences are handled completely autonomously by the MPU within the MCU.
- certain instructions may by themselves cause the MPU to perform extra actions.
- Signals can only be sent by a process executing a particular task sequence. It is assumed that this execution is performed on a particular priority level and that a set of index registers as exemplified in Figure 29 has been assigned to the process in connection with the reception of the signal initiating the task execution. Signals are always assumed to be transferred from the sending process to the receiving process by means of a Signal Buffer, which is assumed to be a contiguous, identifiable memory area within the main memory (M). In order to be able to transfer information from the sending process into a signal buffer, the corresponding memory area must be accessible from the program. This postulates some kind of access mechanism as indicated by (5) in Figure 25.
- this access mechanism is provided by the use of one of the index registers of the MCU as a dedicated Signal Send Register (SSR) as indicated by Figure 29.
- the Signal Buffer Area (SBA) in the main memory (see Figure 22) is postulated to contain a sufficient number of individual signal buffers so that a free buffer is always obtainable when required under all normal operating conditions. This means that the number of signal buffers must be dimensioned so that the probability of not finding a free signal buffer is less than 0.000001.
- All free signal buffers are assumed to be organized as a conventional FIFO (First-In-First-Out) queue. This requires the Head and Tail ends of the queue to be held at all times. According to the invention all seizing and releasing of signal buffers is performed under control of the MPU. For this reason the control registers holding the head and tail ends of the idle queue are indicated as internal registers (IQH and IQT) of the MPU in Figure 30. It is to be noted that organizing idle signal buffers in a FIFO queue is only one possible way of organizing these buffers and is only used as an example to illustrate the use of the invention. The invention could equally well be used with any other efficient method to seize and release idle signal buffers.
- a 'SEIZE' instruction consists of an instruction which asserts a 'virtual address' on the primary address bus (ABUS) in Figure 30, such that the discriminator bit (N-4) is set to '1' and the address mode (bits 12-15) causes the Mode Decoder (MD) to output an interrupt signal to the MPU. It is also assumed that the actual priority level on which the CPU is executing is indicated (bits N-1, N-2, N-3). On receiving and acknowledging this interrupt signal the MPU will perform a microprogram, a particular arrangement of which is shown in Figure 36.
- the first microinstruction of this microprogram tests whether a free signal buffer is available by comparing the contents of the Idle Queue Head register (IQH) with the unique code value used as stop code (STCD) and jumps to microinstruction 7 if no free signal buffer is available.
- the X register group (XRS(L)) associated with the actual priority level (L) on which the CPU is executing is identified by multiplying (microinstruction 2) the priority level value (L) by the number of index registers (Q+1 in Figure 29).
- the resulting value identifies index register XRO within the actual index register group XRS(L).
- the value contained in the IQH register, which identifies the first idle signal buffer, is now transferred to index register XR4 (microinstruction 3) which, according to Figure 29, is used as the Signal Send Register (SSR).
- the memory address to the signal buffer in the main memory is calculated by means of the Signal Buffer Area Base address (SBAB in Figure 33), to which the index to the actual signal buffer (IQH) multiplied by a constant (C SB ) giving the number of words for a single signal buffer is added (microinstruction 4).
- the contents of the Link Element (LE) of that signal buffer addressed by a constant and known offset (C LE ) from the beginning of the signal buffer is transferred (microinstruction 5) to the Idle Queue Head register (IQH), whereby the seized signal buffer is unlinked from the idle queue. Thereafter the microprogram execution is terminated (microinstruction 6).
- If no free signal buffer is available, the microprogram instead continues from microinstruction 7, whereafter the microprogram execution is terminated (microinstruction 8).
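- The effect of the 'SEIZE' microprogram can be summarized by the following C sketch; the word layout of a signal buffer, the stop code value and the behaviour when no buffer is free are assumptions of the sketch, not details given by the description.

```c
#include <stdint.h>

#define STCD  0xFFFFu    /* stop code (assumed value)                          */
#define C_SB  16u        /* words per signal buffer (assumed)                  */
#define C_LE  1u         /* offset of the Link Element within a buffer         */
#define NREGS 8u         /* index registers per set (Q + 1)                    */

extern uint16_t xr[];    /* all index register sets: xr[set * NREGS + n]       */
extern uint16_t mem[];   /* main memory (M), word addressed                    */
extern uint32_t sbab;    /* Signal Buffer Area Base address (SBAB)             */
static uint16_t iqh = STCD;   /* Idle Queue Head register of the MPU           */

/* 'SEIZE': allocate the first idle signal buffer to the program executing on
 * priority level L and leave its identity in XR4, the Signal Send Register.   */
static void mp_seize(unsigned level)
{
    unsigned base = level * NREGS;            /* locate XRS(L)                 */
    if (iqh == STCD) {                        /* no free signal buffer         */
        xr[base + 4] = STCD;                  /* error handling is assumed     */
        return;
    }
    xr[base + 4] = iqh;                       /* SSR := first idle buffer      */
    uint32_t addr = sbab + (uint32_t)iqh * C_SB;
    iqh = mem[addr + C_LE];                   /* unlink it from the idle queue */
}
```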
- index register XR4 of the MCU index register group associated with the current level (L) on which the CPU is executing now contains the identity of the seized signal buffer.
- This index register may now be used in the normal fashion according to U.K. Patent Application 8405491 to transfer information from the CPU into the signal buffer through the MCU.
- This is exemplified by instruction 8 of Figure 35, which causes the constant value corresponding to 'ONE' to be transferred into the signal identity element (SID), and by instruction 9, which causes the contents of the data element 'DATA' to be copied into the first information element (SDATA1) of the signal information part (SINFO) of the signal buffer.
- Other elements of the signal buffer for instance the SDEST and PL elements in Figure 32, may be set in a similar manner.
- the signal buffer is dispatched by means of a 'DISPATCH' instruction.
- the 'DISPATCH' instruction is, like the 'SEIZE' instruction performed by a microprogram in the MPU invoked by an interrupt via the virtual address asserted on the primary address bus by the CPU.
- a particular arrangement of such a microprogram is illustrated by Figure 37.
- the microprogram assumes that sent signals are queued in priority order, whereby each signal priority is assigned its own queue. This queue must be of the FIFO type in order to guarantee the order between different signals. It is therefore assumed that each group of index registers contains two dedicated registers, SQHR and SQTR as illustrated by Figure 29, to manage these signal priority queues.
- the principal function of the dispatch signal microprogram is therefore to insert the signal identified by the SSR register (See Figure 29) of the index register group associated with the currently executing program in the CPU into the signal priority queue indicated by the priority of the signal itself.
- the signal dispatch microprogram starts by identifying (instruction 1) the index register group XRS(L) associated with the actual priority level (L) on which the CPU is executing in the same way as in the 'SEIZE' instruction. Thereafter (instruction 2) the memory address of the actual signal buffer is calculated by adding the signal buffer identity in XR4 of the actual index register group multiplied by the constant (C SB ) to the Signal Buffer Area Base address (SBAB). The priority of the actual signal (PL) may now be read (instruction 3) from the signal buffer (See Figure 32) by reading the contents of the memory word addressed by the constant offset value (C PL ) from the start of the actual signal buffer.
- the signal identity could alternatively have been provided as part of the virtual address on the primary address bus, in which case the microprogram would have to write the actual priority value into the signal buffer instead. Regardless of how the signal priority is actually obtained it is now used to identify (instruction 4) a second index register group XRS(PL) associated with the actual signal priority in the same way as the original index register group.
- index register XR2 of the index register group XRS(PL) identified by the signal priority (PL) is now tested for equality with the stop code (STCD), i.e. for an empty signal priority queue (instruction 7), in which case the microprogram execution continues from instruction 14.
- the last signal buffer in the queue is identified by the SQTR register (See Figure 29), which is represented by the XR3 register of the register group associated with the signal priority (PL).
- the address to this last signal buffer in the queue is calculated in the same way as before (instruction 8) whereafter this signal buffer is linked to the signal buffer to be dispatched by writing the contents of the index register XR4 of the index register group associated with the priority level (L) of the currently executing program in the CPU into the memory word addressed by the fixed offset value (C LE ) from the starting point of the last signal buffer (instruction 9).
- index register XR4 is thereafter also transferred into index register XR3 of the index register group associated with the signal priority (PL) of the dispatched signal (instruction 10), thereby indicating that the dispatched signal buffer is now the last signal in the signal priority queue.
- index register XR4 is now reset to the stop code (STCD), thereby preventing the sending process from further accesses of the signal buffer of a dispatched signal (instruction 11).
- the actual signal buffer to be dispatched is inserted both as the last signal in the signal priority queue (instruction 14), by transferring the contents of the index register XR4 of the index register group associated with the actual execution level (L) within the CPU into the XR3 register associated with the actual signal priority (PL), and as the first signal in the signal priority queue (instruction 15), by transferring the same XR4 contents into the XR2 register associated with the actual signal priority (PL).
- the microprogram is thereafter (instructions 16, 17, 18) terminated in the same way as in the first case.
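- The overall effect of the 'DISPATCH' microprogram is that of a tail insertion into the FIFO queue of the signal's own priority, as in the following C sketch (constants, register layout and the explicit termination of the new tail's link element are assumptions of the sketch):

```c
#include <stdint.h>

#define STCD  0xFFFFu    /* stop code (assumed value)                          */
#define C_SB  16u        /* words per signal buffer (assumed)                  */
#define C_LE  1u         /* Link Element offset (assumed)                      */
#define C_PL  2u         /* Priority Level offset (assumed)                    */
#define NREGS 8u         /* index registers per set                            */

extern uint16_t xr[];    /* index register sets: xr[set * NREGS + n]           */
extern uint16_t mem[];   /* main memory (M), word addressed                    */
extern uint32_t sbab;    /* Signal Buffer Area Base address (SBAB)             */

/* 'DISPATCH': move the buffer named by XR4 (SSR) of the sending level L to
 * the tail of the FIFO queue belonging to the signal's own priority PL.       */
static void mp_dispatch(unsigned level)
{
    unsigned src   = level * NREGS;
    uint16_t buf   = xr[src + 4];                    /* SSR of the sender      */
    uint32_t baddr = sbab + (uint32_t)buf * C_SB;
    unsigned pl    = mem[baddr + C_PL];              /* priority of the signal */
    unsigned dst   = pl * NREGS;                     /* XRS(PL)                */

    mem[baddr + C_LE] = STCD;                        /* buf terminates queue   */
    if (xr[dst + 2] == STCD) {                       /* empty priority queue   */
        xr[dst + 2] = buf;                           /* SQHR := buf            */
        xr[dst + 3] = buf;                           /* SQTR := buf            */
    } else {
        uint16_t tail  = xr[dst + 3];                /* current last buffer    */
        uint32_t taddr = sbab + (uint32_t)tail * C_SB;
        mem[taddr + C_LE] = buf;                     /* link old tail to buf   */
        xr[dst + 3] = buf;                           /* buf is the new tail    */
    }
    xr[src + 4] = STCD;                              /* sender loses access    */
}
```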
- the signal buffer containing information about the received signal is held only by the RSR register (XR1). It is also to be noted that if the signal was the only signal in the signal priority queue, then the link element in the associated signal buffer contains the standard stop code. Resetting the SQHR register as described above will therefore cause the interrupt signal (IR L ) to disappear. In the case that the signal priority queue contained more than one signal, the SQHR register will differ from the standard stop code, i.e. an interrupt signal will be generated as soon as the RSR register is reset to the standard stop code. At this point no identification has yet been made of the process to which the signal contained in the identified signal buffer is directed. The identity of this process is held by the Signal Destination element within the signal buffer, which is assumed to be structured as indicated by Figure 32.
- The MPU next (instruction 6) reads all three components of the signal destination from the signal buffer by reading the required number of memory words from the main memory, addressed by a constant offset (C DEST ) from the start of the actual signal buffer.
- the Sofchip Index part (SCHPX) is now used to identify the entry belonging to the sofchip in the Sofchip Descriptor Table within the MPU Descriptor Table (MPDT) shown in Figure 33, from which the starting point to the actual Sofchip Descriptor area in the main MCU Descriptor Table (DT) is copied into the Sofchip Register (SCHPR), which is represented by index register XR0 within the index register group XRS(L) associated with the priority of the present signal (instruction 7) as shown by Figure 29.
- the Instance Index part of the signal destination (IX) is copied into the Instance Index Register (IXR), which is represented by the XR5 register within the same index register group.
- the address to the program pointer (PP) can now be calculated by means of the SCHPDT and SCHPPD k tables within the MPDT of the MPU as shown in Figure 33 and utilizing the Process Index part (PX) of the signal destination (instruction 9).
- the information contained in the SCHPPD k table for sofchip (k) only gives the starting address to the area in the main memory, where the program pointers of all instances of the actual process are located (P1PPA, P2PPA, P3PPA). Therefore, in order to obtain the address to the program pointer (Px(IX).PP) belonging to the actual instance of the process, the value of the Instance Index must be added to this starting address (instruction 10).
- the case where the process is neither in a waiting node nor in transit between waiting nodes is an error case and causes an automatic restart of the associated process.
- Each process has a restart position, which is identified by the appropriate entry (P1PA, P2PA or P3PA) within the SCHPPD k table for sofchip (k) as shown by Figure 33.
- the restart address is transferred to index register XR6 (instruction 16).
- the program pointer (PP) belonging to the actual signal buffer as shown by Figure 32 is thereafter (instruction 17) set to the compound value ('2,0'), which means that the Action Indicator component (AI) of the program pointer is set to the value '2' indicating a 'NORMAL' signal undergoing analysis, and the pointer (PTR) part of the program pointer reset to zero (don't care value).
- the program pointer of the actual process is now set to a compound value
- an internal signal address register (SAR) of the MPU is set to the address of the first memory word associated with the actual waiting node by adding the content of the pointer component (PTR) of the program pointer, which gives the relative offset of the waiting node within the process program, to the start address (PD) of the process program, which start address is assumed to be given by the first entry of the sofchip descriptor table area (SCHPD k ) as shown by Figure 33 (instruction 23).
- the memory word addressed by the SAR register contains a compound value with a task level (TL) and a signal number (SIGNUM) component as shown by Figure 34.
- the signal number component is now read into an internal signal number register (SNR) of the MPU (instruction 24), whereafter the SAR register is incremented (instruction 25).
- the microprogram now performs a loop, whereby the contents of the signal number register is tested (instruction 26). If no more signals remain to be tested (SNR = 0), then the microprogram continues from instruction 40. Otherwise the memory word addressed by the SAR register is read and compared with the previously read signal identity (instruction 27). If a match is found the microprogram continues from instruction 31; otherwise the SAR register is incremented to point to the next signal (instruction 28), the SNR register is decremented to reduce the number of signals left to analyze (instruction 29) and the loop is repeated.
- the interrupt signal (INT) to the CPU eventually causes the CPU to interrupt its currently executed program.
- all the relevant index registers belonging to the corresponding priority level in the MCU are set up, i.e. XR0 identifies the start address of the actual sofchip descriptor table area (SCHPD k for sofchip (k)), XR1 identifies the signal buffer containing information about the received signal, XR5 contains the actual instance index and XR6 the address to the actual task program.
- the signal interrupt routine in the CPU therefore only has to read the contents of the XR6 register into its Program Counter (PC) register, whereby execution of the corresponding task program can commence and proceed until the next waiting node is reached.
- Thereafter the XR6 register may be freely used for any required purpose, because it only indicates the starting address of the task program, which is relevant only in order to be able to start the execution.
- the MCU may handle additional signal interrupts independently of the program execution in the CPU. If these signal interrupts are directed to another process than the one currently under execution by the CPU the signal interrupt microprogram proceeds in the way already described. However, if the signal destination identifies exactly the same process already undergoing execution in the CPU, then the test of the action indicator (instruction 15) will cause an automatic signal defer routine to be activated.
- This signal defer routine first copies (instruction 53) the identity of the actual signal from the RSR register (XR1) into the next available general purpose index register (exemplified by XR7), whereafter the RSR register is reset (instruction 54) to the identity of the signal buffer containing the signal which caused the currently ongoing transition, which is obtained from the pointer component (PTR) of the process program pointer.
- NORMAL i.e. the actual signal is undergoing analysis.
- a signal is deferred by setting the program pointer of the associated signal buffer to the compound value ('3,0'), i.e. the action indicator component (AI) is set to 'DEFERRED'
- the action indicator (AI) of the program pointer for the signal presently undergoing execution in the CPU may also indicate that the signal buffer is in a WAIT condition (instruction 58) although the process is undergoing a transition. This condition is caused by encountering a waiting node within a procedure (compare Figure 18) in a manner which will be described later in connection with the waiting node entry microprogram. In this case the signal identity is checked in the same manner as described previously (instructions 22-30). When a signal identity match is found (instruction 27), then the actual task address is read and set into the XR6 register (instruction 31) and the actual signal program pointer (action indicator) set to 'NORMAL' (instruction 32).
- the internal task level counter of the MPU is now greater than 0 (instruction 33), which causes the program pointer of the signal buffer for the signal which caused the ongoing process level transition to be reset to a compound value indicating that the signal buffer is in 'TRANSIT' and identifying the newly received signal as the signal undergoing transition (instruction 34).
- the link element of the newly received signal is copied from the link element of the previous signal (instruction 35).
- the action indicator of the program pointer for a signal buffer may now also take the value 'TRANSIT' (instruction 59).
- the actual signal identity (which is already in a general purpose index register, for instance XR7), is now copied
- the entry parameter of this microprogram identifies the signal buffer which is to be released. According to the already described functions, all free signal buffers are managed by means of an idle signal buffer queue, the head and tail ends of which are held by the Idle Queue Head (IQH) and Idle Queue Tail (IQT) registers of the MPU.
- the Signal Buffer Release microprogram is a straightforward FIFO queue insert program, which calculates the address of the actual signal buffer (instruction 1), sets the link element of the signal buffer to the standard stop code (instruction 2) and either inserts the signal buffer as the last signal buffer in an existing idle queue (instructions 4-7) or as the only signal in the idle queue (instructions 8-10).
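- Functionally the Signal Buffer Release microprogram is the FIFO insertion sketched below in C; buffer layout constants and the initial register values are assumptions of the sketch.

```c
#include <stdint.h>

#define STCD 0xFFFFu     /* stop code (assumed value)                          */
#define C_SB 16u         /* words per signal buffer (assumed)                  */
#define C_LE 1u          /* Link Element offset (assumed)                      */

extern uint16_t mem[];   /* main memory (M), word addressed                    */
extern uint32_t sbab;    /* Signal Buffer Area Base address (SBAB)             */
static uint16_t iqh = STCD, iqt = STCD;   /* Idle Queue Head/Tail registers    */

/* Signal Buffer Release: append buffer 'buf' to the tail of the idle queue.   */
static void mp_release(uint16_t buf)
{
    uint32_t baddr = sbab + (uint32_t)buf * C_SB;
    mem[baddr + C_LE] = STCD;             /* released buffer ends the queue    */
    if (iqh == STCD) {                    /* idle queue was empty              */
        iqh = buf;
        iqt = buf;
    } else {
        uint32_t taddr = sbab + (uint32_t)iqt * C_SB;
        mem[taddr + C_LE] = buf;          /* link previous tail to buf         */
        iqt = buf;
    }
}
```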
- the RSR register (XR1) still identifies the primary signal buffer and the IXR (XR5) register contains the actual normalized instance index value.
- the XR6 register now, due to the above described resetting, also contains the actual address to the new waiting node. Further general purpose registers may be utilized to contain the identities of signal buffers associated with deferred signals.
- the waiting node entry microprogram is started (instruction 1) by identifying the actual index register set (XRS(L)) associated with the priority (L) as provided by the corresponding bits of the virtual address asserted on the primary address bus. Thereafter the address to the primary signal buffer (i.e. the signal buffer identified by the RSR register) is calculated (instruction 2) and the signal destination obtained from this signal buffer (instruction 3). It is to be noted that one feature of the invention is that the primary signal buffer is retained during the entire transition caused by the associated signal, thereby enabling the system internal identity of the 'own' process to be automatically established at any time. The process identity given from the old signal destination is now used to calculate the address to the process program in the same manner as in the signal interrupt microprogram
- instruction 6 indicating a process level waiting node. Thereafter all interrupts are disabled to prevent other microprograms from interfering with the waiting node entry microprogram (instruction 7) and the contents of the program pointer belonging to the actual process is read (instruction 8) and tested (instruction 9).
- the only legal value of the action indicator (AI) component of the program pointer is the 'TRANSIT' value. If the action indicator does not indicate 'TRANSIT', then the process is restarted with release of all eventually deferred signals in the same way as for signal interrupts (instructions 13-26).
- In the normal case the action indicator (AI) for the process does indicate 'TRANSIT'.
- the program pointer of the received signal is read (instruction 10) and tested (instructions 11 and 12).
- the action indicator (AI) for this signal program pointer may now have the legal values 'TRANSIT' (represented by '1') and 'NORMAL' (represented by '2'). Again, if the action indicator does not have a legal value, then the process is restarted in the same way as described above.
- the link element of the actual signal is read and retained (instruction 44) to cater for eventual deferred signals, whereafter the actual signal buffer is released (instruction 45) by calling the Signal Buffer Release microprogram illustrated in Figure 39.
- the program pointer of the process is reset (instruction 46) to a compound value, where the action indicator component (AI) of that value indicates 'WAIT' (represented by '0') and where the pointer (PTR) component of the value indicates the relative address of the waiting node as given by the XR6 register.
- the RSR register is now reset to the value given by the retained link element (instruction 47).
- the RSR register will now either contain a stop code value (STCD) in the case that no deferred signals existed, or the signal buffer identity to the first deferred signal.
- the resetting of the RSR register in the first case will thereby cause the masking of eventual signal interrupts to be removed as already explained in connection with the signal interrupt microprogram. If no deferred signals exist (instruction 48), then the microprogram execution is terminated (instructions 25, 26).
- the task program address is read and copied into the XR6 register (instruction 70), whereafter either the process program pointer (on task level 0) or the previous signal buffer program pointer (all other task levels) is set to a compound value, where the action indicator (AI) component indicates 'TRANSIT' (represented by '1') and the pointer component (PTR) identifies the actual signal (instructions 71-74).
- the priority level is the priority level
- microprogram is again terminated in the normal way (instructions 67, 25, 26), otherwise the address to the next signal is calculated (instruction 68) and the identity of this signal analyzed (instructions 69, 54, 55, etc.).
- the signal program pointer is first reset to the compound value ('3,0'), thereby setting the action indicator component to 'DEFERRED' (instruction 90).
- the link element of the actual signal is now read (instruction 91) and tested (instruction 92). If no further deferred signals exist, then the actual RSR register is reset (instruction 116), thereby unmasking new signal interrupts of the same priority and the microprogram is terminated (instructions 117 and 118). If further waiting signals exist, then the identity of the newly deferred signal is retained in a temporary register (TSR) within the MPU (instruction 93) and the identity of the actual signal to analyze is reset (instruction 94). The address to the previous signal is also temporarily retained (instruction 95) and the address to the new signal calculated (instruction 96).
- the new and the previous signals are relinked in such a way that the primary deferred signal is linked to the new signal, which is linked to the just deferred signal, which is linked to the eventual signal previously linked to the new signal (instructions 98-100), whereafter the new signal identity is analyzed in the normal manner (instructions 101, 54, 55, etc.).
- the primary deferred signal is linked to the new signal (instruction 102), whereafter the link element of the new signal is read (instruction 103) and tested (instruction 104).
- each waiting signal is examined as to whether it is deferred or not (instructions 105, 106, 107 and 108), whereby this process is repeated until either a nondeferred waiting signal is encountered or no further waiting signals are found.
- the actual signals are relinked so that the primary deferred signal is linked to the newly found undeferred signal, which is linked to the first analyzed deferred signal and the last analyzed deferred signal is linked to the newly deferred signal, which in turn is linked to the eventual signal previously linked to by the newly found undeferred signal (instructions 109-112).
- the newly found undeferred signal is again analyzed in the normal manner (instructions 113, 54, 55, etc.).
- the action indicator of this signal indicates 'TRANSIT'
- the internal task level indicator (TL) of the MPU is incremented (instruction 27), and the address to the primary signal retained (instruction 28).
- one of the general purpose index registers e.g. XR7 contains the identity of a secondary signal buffer, which identity is now used to calculate the address to the secondary signal (instruction 29), whereafter the program pointer of this signal buffer may be read (instruction 30) and tested (instructions 31 and 32).
- if the action indicator of the actual signal element indicates 'TRANSIT' (instruction 31), then the actual process is repeated one further task level down.
- if the action indicator indicates 'NORMAL' (instruction 32), then the address to the first memory word associated with the waiting node is calculated and the task level of the waiting node read out and compared with the internal task level in the MPU (instruction 120).
- the same activities are performed as for the previously described primary signal analysis (instructions 37, 38, etc.).
- the link element of the actual signal is again read and retained (instruction 121), whereafter the actual signal buffer is released by means of a call of the Signal Buffer Release microprogram (instruction 122). Thereafter the primary signal program pointer is reset to a compound value, where the action indicator component (AI) indicates 'WAIT' (represented by '0') and the pointer component indicates the location of the next waiting node as obtained from the XR6 register (instruction 123).
- the link element of the primary signal is now reset to the retained link element (instruction 124) and the actual general purpose index register is reset similarly (instruction 125), whereafter the next eventual signal is analysed as already described (instructions 126, 48, etc.).
- the above described arrangements of the signal interrupt and waiting node entry microprograms according to the present invention ensure that when a task program is executed, the signal buffer for the signal which triggered the task program is held for the entire task program, i.e. until the next waiting node is entered.
- the signal buffer for this signal is held during all three tasks T1, T2 and T3 and only released when the process enters the waiting node WY.
- the tasks T2 and T3 consist internally of a task sequence containing an inner waiting node (WP).
- this execution consists of the execution of the subtask TP1, after which the waiting node WP is entered.
- Entering a waiting node on an inner task level does not cause the signal buffer on the primary level to be released but instead causes the program pointer of this signal buffer to be reset to the inner waiting node as described above (instruction 37 of the waiting node entry microprogram).
- the signal buffer for this signal is retained in addition to, and independently of the primary signal.
- the signal buffer for the received inner signal is retained until either the next inner waiting node is entered or until the inner task program terminates by returning to the next outer task level.
- the signal discard microprogram starts by identifying the actual index register group XRS(L) for the priority level (L) on which the CPU currently executes (instruction 1), whereafter the address to the primary signal buffer is calculated (instruction 2).
- This microprogram may only be executed on inner task levels, for which reason the task level counter (TL) of the MPU is set to 1 immediately (instruction 3).
- the MPU interrupts are now disabled (instruction 4) in order to prevent interference from other microprograms and the program pointer of the primary signal is read (instruction 5) and tested (instruction 6).
- the only legal value of the action indicator (AI) component is in this case 'TRANSIT' (represented by '1').
- If the action indicator does not indicate 'TRANSIT', the microprogram is simply terminated (instructions 11 and 11a).
- the address to the second signal is calculated (instruction 7) and the program pointer of this signal is read (instruction 8) and tested (instructions 9 and 10).
- the legal action indicator values are 'TRANSIT' (represented by '1')and 'NORMAL' (represented by '2').
- If the action indicator of the secondary signal indicates 'TRANSIT' (instruction 9), then the actual signal is reset as the primary signal (instruction 12) and the task level indicator (TL) incremented (instruction 13), whereafter the next signal is analyzed in the same way (instructions 14, 7, 8, etc.).
- a system built around a Master Control Unit according to the invention forms a signal scheduling system, which has the following advantages compared with previously known similar signalling systems.
- the invention adequately solves the signal defer problem.
- the Master Control Unit may be used in combination with any modern microprocessor, utilizing the adequate task program execution capabilities of these processors and nevertheless at the same time transforming the processors into efficient support machines for real time systems.
- the same Master Control Unit may serve several CPUs, allowing these CPUs to concurrently execute different process instances.
- The timing facility consists of one or more periodically scanned time queues in the MPU, an arrangement of which will be described below.
- the signal header part (SH) of the signal buffer will in this case have to contain at least one time counter element (TC) as illustrated by Figure 42, and the MPU at least one Time Queue Head register (TQH), at least one Time Queue Tail register (TQT) and at least one Clock Register (CLR) as illustrated by Figure 43.
- the time queues are managed by two additional microprograms, a Timed Signal Dispatch microprogram and a Time Queue Check microprogram.
- Timed signal dispatch microprogram. A possible arrangement of a timed signal dispatch microprogram is illustrated by Figure 44. Like the normal signal dispatch microprogram, the timed signal dispatch microprogram starts by identifying the index register group XRS(L) associated with the priority level (L) on which the CPU is currently executing (instruction 1) and by calculating the address to the actual signal buffer (instruction 2). It is furthermore assumed that the time counter element (TC) of the signal buffer is set to a value corresponding with the number of time queue check program intervals to elapse before the signal may be routed to its destination.
- the value of the time counter element in the signal buffer is now read and added to the actual value of the Clock Register of the MPU and retained in an internal time counter (T) of the MPU (instruction 3), which value is then copied back into the time counter element of the signal buffer (instruction 4).
- Interrupts are now disabled (instruction 5) in order to prevent interference from other microprograms, whereafter the Time Queue Head register is tested (instruction 6). If the time queue was currently empty, then the actual signal buffer identified by the actual SSR register (XR4) is inserted as the only signal buffer in the time queue (instructions 7, 8, 9), whereafter the SSR register is reset (instruction 10) and the microprogram execution terminated (instructions 11 and 12).
- If the time queue was not empty, then the time counter value of the currently first signal in the time queue is read and the time difference to the retained time value (T) calculated and compared with a comparison value (C D ). If the time difference is greater than the comparison value, then the actual signal is inserted as the first signal in the time queue and linked to the previously first signal in the queue (instructions 17 and 18), whereafter the SSR register is reset (instruction 19) and the microprogram execution is terminated (instructions 20 and 21). If the time difference is less than the comparison value (instruction 16), then the link element of the current first signal in the time queue is read (instruction 22) and tested (instruction 23). If the link element contains the standard stop code, then the current signal buffer was the previously last element in the time queue. In this case the actual signal is inserted as the last signal in the time queue (instructions 24, 25 and 26), whereafter the SSR register is reset (instruction 27) and the microprogram execution terminated (instructions 28 and 29).
- If the link element instead identifies a subsequent signal buffer, then the address to this signal buffer is calculated (instruction 30), the time counter value read (instruction 31) and the time difference calculated (instruction 32) in the same way as before, whereafter the time difference is tested against the same comparison value (C D ) as before (instruction 33). If the time difference is now greater than the comparison value, then the expiration time of the actual signal is earlier than that of the currently tested signal in the time queue but later than that of the previous signal in the time queue. In this case the actual signal is therefore inserted between the two mentioned signals (instructions 34 and 35), whereafter the SSR register is reset (instruction 36) and the microprogram execution terminated (instructions 37 and 38). In the remaining case the next signal in the time queue is tested in exactly the same manner (instructions 39, 40, 22, 23, etc.).
- the Timed Signal Dispatch microprogram in Figure 44 thus ensures that the signal buffers are inserted into the time queue in an order determined by their respective expiration times, so that the signal with the shortest remaining expiration time will always be at the head of the queue.
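- The ordered insertion performed by the timed signal dispatch microprogram can be modelled in C as follows; the sketch uses ordinary pointers instead of word-addressed buffers and an assumed comparison value of half the clock range, so it only mirrors the ordering logic, not the microprogram itself.

```c
#include <stddef.h>
#include <stdint.h>

#define C_D 0x8000u              /* comparison value: half the clock range, assumed */

/* Simplified pointer model of a time-queued signal buffer.                    */
typedef struct tq_buf {
    uint16_t       tc;           /* Time Counter element: absolute expiry time  */
    struct tq_buf *le;           /* Link Element; NULL plays the role of STCD   */
} tq_buf_t;

static tq_buf_t *tqh;            /* Time Queue Head                             */

/* True when expiry 'a' falls before expiry 'b', modulo the clock range.        */
static int expires_before(uint16_t a, uint16_t b)
{
    return (uint16_t)(b - a) < C_D;
}

/* Insert 'buf' so that the signal with the shortest remaining time stays at
 * the head of the queue; 'clr' is the current Clock Register value.            */
static void mp_timed_dispatch(tq_buf_t *buf, uint16_t clr, uint16_t delay)
{
    buf->tc = (uint16_t)(clr + delay);           /* absolute expiration time    */
    tq_buf_t **pp = &tqh;
    while (*pp != NULL && expires_before((*pp)->tc, buf->tc))
        pp = &(*pp)->le;                         /* walk past earlier expiries  */
    buf->le = *pp;                               /* link to the later signals   */
    *pp = buf;
}
```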
- a Time Queue is scanned by a periodically executed microprogram in the MPU.
- a possible arrangement of such a microprogram according to the invention is shown in Figure 45.
- the microprogram first reads the value of the Clock Register (CLR), increments it modulo the range of the Clock Register and retains the resulting value (T) for subsequent references (instruction 1), whereafter the result is transferred back into the Clock Register (instruction 2).
- the Time Queue Head register (TQH) is now tested (instruction 3) whereby the microprogram terminates if the time queue was empty (instruction 4).
- the address to the first signal buffer is calculated (instruction 5) and the time counter value read from the signal buffer (instruction 6).
- the difference between this time counter value and the retained clock register value (T) is now calculated modulo the range of the Clock Register (instruction 7) and the resulting difference compared with a comparison value (C D ). If the actual value is less than the comparison value, then the time for the first signal in the queue has not yet expired. Because the signals are ordered in the time queue so that the signal with the shortest remaining expiration time is always the first signal in the queue none of the other signal times can have expired, i.e. the microprogram execution is terminated (instruction 4) in this case.
- If the time for the first signal in the queue has expired, then interrupts are disabled (instruction 9) and the first signal extracted from the time queue (instructions 10 and 11), whereafter the link element of the actual signal is reset (instruction 12), the signal priority of the actual signal (PL) read from the signal buffer (instruction 13) and used to identify the corresponding index register set XRS(PL), whereafter the signal is either inserted as the last signal in an existing signal priority queue (instructions 16, 17, 18) or as the only signal in the signal priority queue (instructions 21 and 22).
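- The periodic scan can be modelled by the following C sketch, which mirrors the comparison rule described above; the clock width, the comparison value and the routing helper are assumptions introduced for illustration.

```c
#include <stddef.h>
#include <stdint.h>

#define C_D 0x8000u              /* comparison value: half the clock range, assumed */

typedef struct tq_buf {          /* same simplified model as in the previous sketch */
    uint16_t       tc;           /* absolute expiration time                    */
    struct tq_buf *le;           /* link to the next queued signal              */
} tq_buf_t;

static tq_buf_t *tqh;            /* Time Queue Head                             */
static uint16_t  clr;            /* Clock Register, stepped once per scan       */

extern void route_to_priority_queue(tq_buf_t *buf);   /* normal dispatch path   */

/* Periodic time-queue check: advance the clock and move every expired signal
 * from the time queue to its ordinary signal priority queue.                   */
static void mp_time_queue_check(void)
{
    clr = (uint16_t)(clr + 1u);                  /* modulo the clock range      */
    while (tqh != NULL) {
        if ((uint16_t)(tqh->tc - clr) < C_D)     /* head has not yet expired,   */
            break;                               /* so nothing behind it has    */
        tq_buf_t *expired = tqh;                 /* unlink the head             */
        tqh = expired->le;
        expired->le = NULL;
        route_to_priority_queue(expired);        /* insert by priority (PL)     */
    }
}
```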
Abstract
A computer system having a central processor unit, a main memory for storage of information and a master control unit connected between the processor and memory. The master control unit intercepts and interprets virtual addresses and commands issued by the central processor in such a way that one class of virtual addresses and commands are interpreted to be associated with reading of information and writing of information from the main memory and are translated into real addresses issued by the master control unit to the main memory and a second class of virtual addresses cause autonomous signal routing, transfer and reception activity to be performed by the master control unit. The master control unit performs signal reception independently of the central processor and initiates operation of the central processor when the received signal has been accepted by the master control unit, the master control unit being able to access the main memory independently in performing such signalling activities.
Description
COMPUTER INTERPROCESS SIGNAL COMMUNICATION SYSTEM
This invention relates to a signalling system for exchanging of messages between independent software signalling units within real-time computer systems, said signalling system including means to allow the signalling units to send and receive asynchronous messages and also including means for routing and transfer of these messages between their origin and destination signalling units.
More specifically the invention relates to a computer having one or more central processing units for execution of programs, a main memory for storage of information in binary form and a master control unit for performing routing, transfer and scheduling of asynchronous messages between a signalling unit represented by a program being executed by one central processing unit and another signalling unit represented by another program in the same or in another central processing unit.
The invention provides a computer system having at least one central processor, main memory means for storage of information and a master control unit connected between the central processor (s) and a memory means, said master control unit being able both to intercept and interpret virtual addresses and commands issued by a central processor in such a way that one class of virtual addresses and commands are interpreted to be associated with reading of information from and writing of information into the main memory means and are translated into real addresses issued by the master control unit to the main
memory means, and a second class of virtual addresses (either independent of or combined with specific command codes issued by a central processor) which cause, for each virtual address code, autonomous signal routing transfer and reception activity to be performed by the master control unit, and which master control unit has means to perform signal reception independently of the central processor and means to initiate operation of the central processor when a received signal has been accepted by the master control unit, the master control unit being able to access the main memory means indepdently in performing such signalling activities.
There now follows a general description of the technical background to the invention followed by a detailed description of some specific embodiments of the invention with references to the accompanying drawings.
Fig.1 shows a simplified block diagram of a computing system. This system is not the subject of the invention, but introduced in order to be able to explain the background to, and the salient points of, the invention. The main parts of the computing system in Fig 1 are the Central Processing Unit (CPU), the Memory (M), and the Input/Output Interface (IO) to the external world. The CPU contains a Control Logic function (CL), which is not described in detail, and four registers; a Program Counter register (PC,) an Address Register (AR), a Data Register (DR) and an Instruction Register (IR). The information stored in these registers can be read and written by CL. The memory (M) contains N memory words (MW0, MW1, .. MWN-1), and an Address Decoder (AD), the input of which is connected to the Program Counter (PC) and the Address Register (AR) of the CPU. Each memory word contains a specific number of binary memory elements, each one of which may store either the information 0 or 1. All the memory words are connected to the Data Register (DR) of the CPU, i.e. the contents of any memory word may be transferred to DR and vice versa. The Control Logic (CL) of the CPU has two control outlets, a Read Outlet (R) and a Write Outlet (W), which are connected to all the memory words in parallel. When CL issues a Read command, the Address Decoder (AD) selects the Memory Word, which corresponds to the contents of the Address Register (AR) or the Program
Counter (PC), and the R Control Outlet from CL enables the transfer of information from the selected memory word into the Data Register (DR) if the address is obtained from the Address Register (AR), and into the Instruction Register (IR) if the address is obtained from the Program Counter (PC). If a Write Command is issued, the W Control Outlet from CL enables the transfer of information from the Data Register (DR) into the memory word addressed by the Address Register (AR). The information stored in the memory words can be used in two different ways, either as data or as control instructions. When the information stored in a memory word is used as data, the separate binary memory elements are combined together to form a single value (Vm) according to the principle shown in Fig. 2a. This value can then be manipulated by the CL in the desired manner, e.g. arithmetic operations, logic operations, etc. For a memory word MWx containing m binary memory elements or bits, the total number of different data values represented by different combinations of the m bits, which can be stored in the memory word, is 2^m. These bit combinations may be used to represent values ranging from 0 to 2^m-1 as illustrated by the example of the possible values for a 4-bit memory word in Fig. 2b. Memory words used for data storage purposes are usually randomly accessed, i.e. no implicit relationship exists between the address of one data word and another
data word. In order to access a data word, the address of the word has to be transferred into the Address Register (AR). Thereafter the contents of the word can be read into the Data Register (DR), or the contents of the Data Register (DR) can be stored into the memory word. It is to be noted that it is possible to build explicit relationships between data words. One possibility is the sequential array, i.e. a number of consecutive memory words, which contains related information and which can be accessed by setting the Address Register to, for instance, the first word in the array and then incrementing the Address Register to access consecutive words in the array. Another possibility is the chaining of data elements, i.e. storing the address of one memory word, as information, in another memory word. In order to access such an indirectly addressed memory word, the information from the first word is first read into the Data Register (DR), and then via CL transferred to the Address Register (AR). A program consists of a number of sequentially executed Machine Instructions. It is therefore natural, that the memory words used to store the Machine Instructions of a program also follow one another sequentially. The sequential execution of Machine Instructions stored in consecutive memory words is the normal mode of operation and therefore built into the Control Logic of the CPU. In order to access memory words
containing Control Instructions the Program Counter (PC) is used to select the memory word to be read. The information is then transferred to the Instruction Register (IR), where CL can access and evaluate the instruction. Each time a Machine Instruction is read from the memory, the Program Counter is automatically incremented so that control instructions are read in consecutive order.
When a Machine Instruction has been read into the Instruction Register the Control Logic (CL) first decodes the instruction. Depending on how the decoding is performed the bits of an instruction form bit groups, where each bit group has a separate meaning (and may even overlap each other in some cases). Fig. 3 illustrates a simplified example of instruction decoding in a memory word MWP, where it has been assumed that instructions contain three bit groups, a Command Code group (CC) and two Operand bit groups (OP1) and (OP2). The Command Code determines the actual instruction to be executed and the operands the specific parameters of that execution. It is for instance possible to give a memory address as OP1 and a data value as OP2 with the command code of the instruction specifying writing into memory (Store Data = SD instruction). The Control Logic (CL) will then perform the following activities:
Transfer OP1 into the AR register of Figure 1.
Transfer OP2 into the DR register of Figure 1.
Assert the W control signal to the memory M, whereby the information in DR will be written into the memory word addressed by AR.
Increment the PC register to prepare execution of the next instruction.
Another example is the case where OP1 gives a memory address and the command code specifies a 'jump' (JP instruction). In this case the control logic performs the single activity:
Transfer OP1 into the PC register, thereby preparing for execution of the next instruction at the specified memory word.
The number of operands used in any particular instruction may vary with the instruction as already illustrated by the two examples above. Some instructions use no operands at all. Some instructions may even require three or four operands.
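As an informal illustration of the decode-and-execute cycle just described, the following C sketch models the CC/OP1/OP2 instruction format together with the SD and JP examples. The field widths, mnemonic values and register model are assumptions made purely for this example; they are not taken from the patent.

```c
#include <stdint.h>

#define MEM_WORDS 256

/* Assumed instruction layout: | CC (8 bits) | OP1 (12 bits) | OP2 (12 bits) | */
enum { CC_SD = 0x01, CC_JP = 0x02 };            /* illustrative command codes  */

static uint32_t M[MEM_WORDS];                   /* the memory M                */
static uint32_t PC, AR, DR, IR;                 /* the four CPU registers      */

static void execute_one(void)
{
    IR = M[PC % MEM_WORDS];                     /* fetch via the Program Counter */
    uint32_t cc  = (IR >> 24) & 0xFF;
    uint32_t op1 = (IR >> 12) & 0xFFF;
    uint32_t op2 =  IR        & 0xFFF;

    switch (cc) {
    case CC_SD:              /* Store Data: OP1 = address, OP2 = value         */
        AR = op1;            /* transfer OP1 into AR                           */
        DR = op2;            /* transfer OP2 into DR                           */
        M[AR % MEM_WORDS] = DR;  /* assert W: write DR into the addressed word */
        PC = PC + 1;         /* prepare execution of the next instruction      */
        break;
    case CC_JP:              /* Jump: OP1 = address of the next instruction    */
        PC = op1;            /* transfer OP1 into PC                           */
        break;
    default:
        PC = PC + 1;         /* other instructions omitted from the sketch     */
        break;
    }
}
```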
Gradually the general computer system structure in Fig.4 has evolved, which structure is now taken for granted in practically all types of conventional computing systems. The hardware consists of the Central Processing Unit (CPU) with its set of Machine Instructions (MI), the Memory (M) and the interface to
the external world (IO). In Fig.4 the Data Flow and Control Flow relationships are shown, where each such relationship assumes the existence of a hardware connection. The Memory (M) is normally accessed by the CPU, but can also, if necessary, be accessed directly from the IO hardware without the CPU being involved. The software, which resides in the memory M, consists of the Application Programs (AP), the Input/Output Interface Programs (IOP), the Operating System (OS) and the Data Base (DB).
The Machine Instructions (MI) consist of the set of instructions which are executable within the CPU and generally available for the design of any program (OS, AP, IOP). The Data Base (DB) contains all of the data for the system, which data can be read and written by the CPU under the control of the various programs.
The Interface Programs (IOP) are a set of programs specifically designed to control the actual types of peripheral devices (IO) used in the interworking with the external world. The Interface Programs are called either from the Master Scheduler (MS) or from the Application Programs (AP), and can themselves call Utility Subprograms. The Application Programs (AP) are a set of user specific programs, which have been designed to solve specific application problems (e.g. different types of
computation). Application Programs are called from the Master Scheduler and may call Utility Subprograms and Interface Programs.
Finally, the Operating System is the application independent System Control Program, which contains two parts, the Master Scheduler (MS) and the Utility Subprograms (US). The Master Scheduler (MS) is usually driven by means of Interrupt Signals (IS), and controls the execution of all other programs in the entire system. The Utility Subprograms (US) are a set of generally available subprograms, which have been developed to solve problems of a general nature, but which are too complex to be performed by means of single Machine Instructions. The Utility Subprograms can be called from any type of program (OS, AP, or IOP).
Fig.5 shows a block diagram of the principal software structure, which is based on the hardware/software system structure in Fig. 4, and which is currently considered to be the basic software structure in conventional software technology. The software is split into two types of units, data and programs. The data of a system consist of individual data elements and structures (DE) located in the Data Base (DB) of the system. The programs consist of Operating System programs (OS), Application Programs (AP) and
Input/Output Programs (IOP). AP, IOP and US each contain a set of programs (P), all of which are composed of
Machine Instructions of the actual computer.
The software structure in Figure 5 allows the three principal methods of communication illustrated in Figure 6 to be used, i.e. sequential communication, hierarchical communication and data communication.
The principle of sequential communication between two programs is illustrated in Figure 6a. In this case the first program passes control directly to the second program. In addition the first program may pass information to the second program by means of a data element, which is commonly accessible to both programs, whereby the first program writes information into this common data element which information is subsequently read by the second program. Sequential communication is the only type of communication used on the machine instruction level whereby program 1 and program 2 may be represented by two machine instructions stored in consecutive memory words and the common data element represented by the Data Register (DR) in Figure 1. Sequential communication may, nevertheless, be used on any program level up to, and including, the application programs (AP) in Figure 4. Sequential communication always implies a tight coupling and automatic synchronization between the two communicating programs through their control flow relationship, however. This fact makes sequential communication unsuitable for larger, independent software units, where principally no
control flow relationship at all is implied between the different programs. The principle of hierarchical communication between two programs is illustrated by Figure 6b. In this case the first program also transfers control to the second program. However, in this case the transfer of control is only provisional, subject to the termination of the second program, whereupon control reverts back to the first program to the point immediately following the provisional exit point. Information may be passed from the first program to the second program through a commonly accessible data element in connection with the transfer of control from the first to the second program and similarly from the second program back to the first program through the same or another commonly accessible data element in connection with the return of control to the first program. It is to be noted that the actual writing and reading of information to be passed between the two programs does not necessarily coincide in time with the transfer of control between the two programs. Nevertheless, the programs are still tightly coupled and automatically synchronized through their control flow relationship. This makes hierarchical communication also unsuitable for communication between independent software units.
The principle of data communication between two programs is illustrated by Figure 6c. In this case both
programs may read information from, and write information into, a commonly accessible data element. No direct passing of control nor any other type of control flow relationship between the two programs is principally implied, i.e. the two communicating programs may, in principle operate completely asynchronously. This is always the case when all programs are allowed to read information from the common data element, but only a single program is permitted to write information into that data element.
If, however, more than one program is allowed to write information into the common data element, then the programs may interfere with each other unless it can be guaranteed that no such interference can take place. The method conventionally used to guarantee non-interference is by so called 'mutual exclusion', i.e. two or more programs (or parts of the programs) writing into the same data element are prevented from being executed concurrently. If both programs are executed by the same processor, then the simplest way to achieve such 'mutual exclusion' is by ensuring that each program is executed to its termination before the next program is allowed to execute. However, in a time-sharing environment or in a multiprocessor system this cannot be simply guaranteed. In these cases explicit exclusion is obtained by 'locking' the data element, for instance by means of a so called
'semaphore'.
Figure 7 illustrates the basic principle of
'mutual exclusion' by means of a semaphore. The common data element consists of two parts, a data carrying part (D) and a semaphore element (S). The semaphore element in its simplest form consists of a binary element with one of its binary values used to indicate that the data carrying element (D) is 'unlocked' and the second value indicating that the data carrying element (D) is 'locked'.
Execution of any program starts at the normal entry point when the program is scheduled by the Master Scheduler. When, during the course of this execution, the program reaches a point where the data element (D) is to be accessed and changed, then the semaphore element is tested by means of an indivisible Test-and-Set instruction. This instruction sets the semaphore to its 'locked' condition but retains the previous condition, which subsequently can be tested. If the semaphore element was previously 'locked', then the data element is currently being accessed by another program. In this case the execution of the current program must be suspended and the program rescheduled pending the 'unlocking' of the semaphore. It is not necessary, and it may in fact be undesirable, to re-execute the part of the program that has already been executed. For this reason the Master Scheduler is informed, not only of the
rescheduling, but also of the actual rescheduling entry point, which differs from the normal entry point of the program. The rescheduling entry point is, in fact, located immediately prior to the above mentioned Test-and-Set instruction.
If the semaphore element (S) was previously 'unlocked', the Test-and-Set instruction will now 'lock' it, i.e. the actual program has uncontested access to the data element (D). After all accessing of this data element has been performed the semaphore element is reset, thereby 'unlocking' the data element (D) for other programs.
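A compact C sketch of the semaphore discipline described above is given below. It uses the C11 atomic flag as a stand-in for an indivisible Test-and-Set instruction; the data layout and the convention of returning false to mean "suspend and reschedule me just before this test" are assumptions made for the illustration only.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Common data element of Figure 7: a semaphore part (S) and a data part (D). */
struct locked_data {
    atomic_flag S;   /* clear = 'unlocked', set = 'locked' */
    int         D;
};

/* Indivisible Test-and-Set: sets S to 'locked' and reports whether it was
 * previously 'unlocked', i.e. whether the caller now owns the data element.  */
static bool test_and_set(struct locked_data *cd)
{
    return !atomic_flag_test_and_set(&cd->S);
}

static void unlock(struct locked_data *cd)
{
    atomic_flag_clear(&cd->S);   /* reset the semaphore, 'unlocking' D */
}

/* One access attempt by a program.  Returns true if the update was done;
 * false means the element was locked and the Master Scheduler would have to
 * reschedule the program at an entry point immediately before this test.     */
bool try_update(struct locked_data *cd, int new_value)
{
    if (!test_and_set(cd))
        return false;            /* previously 'locked': reschedule the caller */
    cd->D = new_value;           /* uncontested access to the data element     */
    unlock(cd);
    return true;
}
```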
Use of data communication by means of 'semaphore locked' data elements is potentially dangerous, because of the risk of so called 'deadlocks', which may occur when several programs intercommunicate by means of several data elements. If, for instance, program 1 needs to intercommunicate with program 2 by means of two data elements A and B (both of which are assumed to be semaphore locked), and program 1 has 'locked' data element A and is waiting for data element B to be 'unlocked' while at the same time program 2 has 'locked' element B and is waiting for data element A to be 'unlocked', then a deadlock has occurred. In order to avoid the risk of deadlocks a restricted form of data communication, so called message communication, is used. In this case the common data element is considered
as a Message Buffer or 'mailbox' and accessible only by two programs, the sender of the message and the receiver of the message.
The sender of the message writes information into the message buffer, from which the information may subsequently be read by the receiver. As with normal data communication, no direct passing of control nor any other control flow relationship between the two programs is implied, i.e. the two communicating programs may operate concurrently.
Similar problems as with general data communication exist with message-based communication, especially in the conventional type of software system illustrated in Figure 5, where each program (P) is scheduled to be executed by the Master Scheduler (MS) of the Operating System (OS). When program 2 fetches a message from the appropriate data element used to store the message, program 1 must have deposited the message into that data element. This implies that program 1 has to have been executed (or at least the part of program 1 which deposits the message) before program 2 is executed (at least the part of program 2 which fetches the message). Furthermore, program 1 (or at least the part of program 1 which deposits the message) cannot be allowed to be executed a second time to deposit a new message into the appropriate data element before program 2 has fetched the previous message from that data
element. Thus an element of synchronization between the execution of program 1 and program 2 will be absolutely essential in such an environment.
Figure 8 illustrates the basic principle commonly used to achieve the required synchronization. The data element carrying messages is defined as a Message Buffer and consists internally of two elements, a Semaphore Element (S) and a Message Element (M). The Semaphore Element in its simplest form is a binary element with one of its binary values used to indicate that the message buffer is free (i.e. no message in the buffer) and the other one that there is a message in the buffer. For the time being it is assumed that the value 0 indicates no message and the value 1 the existence of a message in the buffer.
Both program 1 and program 2 have a normal control flow entry point, where execution of each program is to start when they have been scheduled by the Master Scheduler (MS). Assume that program 1 has been scheduled and has started execution from its normal entry point. At a certain point during this execution program 1 now has to send a message to program 2 via the message buffer. In order to find out whether the message buffer is available (i.e. whether the previous message has been fetched from the buffer) the Semaphore element is tested by means of an indivisible Test-and-Set instruction, which sets the Semaphore Element and remembers its
previous value for subsequent testing.
If the Semaphore Element was previously set, then a message already exists in the buffer, which message must not be overwritten. In this case the execution of program 1 must be suspended and resumed at a later point in time when the message buffer is empty. Program 1 is therefore rescheduled, i.e. the Master Scheduler is informed that program 1 is to be re-executed. However, the part of program 1 which already has been executed does not have to be re-executed. It is therefore necessary to inform the Master Scheduler not only of the fact of the re-execution, but also of the actual rescheduling entry point from which execution is to be resumed. This rescheduling entry point is, in fact, immediately prior to the previously mentioned Test-and-Set instruction.
If the Semaphore Element was previously reset, then a message may be deposited in the message buffer and program 1 executed to its normal termination point (provided of course that no subsequent reschedulings are warranted or required).
Program 2 is scheduled to be executed by the Master Scheduler like program 1 and will thereby execute until the point in the program is reached, where a message has to be fetched. In order to establish whether a message is available in the message buffer the Semaphore Element is tested by means of an indivisible
Test-and-Reset instruction, again remembering the previous value of the Semaphore Element.
If the Semaphore Element was previously reset, then no message exists in the buffer. In this case the execution of program 2 must be suspended and resumed at a later point in time when a message is available. Program 2 is therefore rescheduled, i.e. the Master Scheduler is informed that program 2 is to be re-executed. Again, the part of program 2 which already has been executed does not have to be re-executed, i.e. it is therefore necessary to inform the Master Scheduler not only of the fact of the re-execution, but also of the actual rescheduling entry point from which execution is to be resumed. This rescheduling entry point is, in this case, immediately prior to the previously mentioned Test-and-Reset instruction.
If the Semaphore Element was previously set, then a message is available in the message buffer and program 2 may now fetch the message and continue to execute to its normal termination point (again provided that no subsequent reschedulings are warranted or required).
In the arrangement in Figure 8 the Semaphore Element (S) and the Message Element (M) are able to accommodate a single message only. It is, however, quite easy to extend both elements to cover any number of messages. With a message buffer able to hold N simultaneous messages, the Test-and-Set operation on the
Semaphore Element would now indicate that the semaphore was previously set only in the case that the message buffer already holds N messages. Similarly the Test-and-Reset operation on the semaphore element would now indicate that the semaphore was previously reset only in the case where no message was held by the buffer. The operation is otherwise similar to Figure 8.
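The synchronization rules of Figure 8, generalized to an N-message buffer as just described, can be sketched in C as below. The counting-semaphore representation, the fixed-size ring and the return codes meaning "suspend and reschedule me" are assumptions made for the illustration; the original relies on indivisible Test-and-Set / Test-and-Reset instructions on a semaphore element.

```c
#include <stdbool.h>
#include <stddef.h>

#define N_MSG 4                       /* capacity of the message buffer (N)    */

struct message { int payload; };

/* Message buffer: a count playing the role of the generalized semaphore
 * element plus the message elements themselves.                              */
struct message_buffer {
    int            count;             /* number of messages currently held     */
    size_t         in, out;           /* ring indices for deposit / fetch      */
    struct message msg[N_MSG];
};

/* Program 1 side.  Returns false when the buffer already holds N messages,
 * i.e. the sender must be suspended and rescheduled at a point immediately
 * before this test (the 'Test-and-Set found the semaphore set' case).        */
bool send_message(struct message_buffer *b, struct message m)
{
    if (b->count == N_MSG)
        return false;                 /* buffer full: reschedule program 1     */
    b->msg[b->in] = m;
    b->in = (b->in + 1) % N_MSG;
    b->count++;                       /* 'set' one unit of the semaphore       */
    return true;
}

/* Program 2 side.  Returns false when no message is available, i.e. the
 * receiver must be suspended and rescheduled ('Test-and-Reset found the
 * semaphore reset').                                                          */
bool fetch_message(struct message_buffer *b, struct message *out)
{
    if (b->count == 0)
        return false;                 /* buffer empty: reschedule program 2    */
    *out = b->msg[b->out];
    b->out = (b->out + 1) % N_MSG;
    b->count--;                       /* 'reset' one unit of the semaphore     */
    return true;
}
```

The indivisibility provided by the hardware Test-and-Set/Test-and-Reset instructions is not reproduced in this sketch; in a time-shared or multiprocessor setting the manipulation of the count would itself have to be protected in the manner described for Figure 7.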
There are basically two ways in which the Master Scheduler may schedule execution of programs 1 and 2, namely periodic scheduling and on-demand scheduling. Periodic scheduling means that a program is scheduled to be executed regardless of whether there is anything for the program to do or not. Periodic scheduling has two elements of time involved, i.e. the scheduling rate and the scheduling delay. The scheduling rate determines how often a program is to be executed, whereas the scheduling delay is defined as the time difference between the point in time when a program is scheduled to be executed and the point in time when the execution actually starts. It can be seen that the scheduling rate of any periodically executed program must always be higher than or equal to the required rate of execution of that program. The scheduling delay may be anything from 0 up to the scheduling interval (i.e. 1/scheduling rate) during normal operating conditions. With an arrangement according to Figure 8 it can immediately be seen that program 1 and program 2 have to
be executed alternately, i.e. both programs need to be scheduled with the same scheduling rate. Even if the two programs were to be scheduled with different scheduling rates, the Semaphore Element of the message buffer would still force them to synchronize with each other and therefore be executed with exactly the same rate as long as the message buffer is able to hold a single message only. However, even if the message buffer is able to hold an arbitrary number of messages, the programs will, on the average, execute at the same rate.
Periodic scheduling always causes an overhead load on the system. This overhead load is directly proportional to both the number of scheduled programs and the scheduling rate of each program. It is therefore desirable to keep both the number and scheduling rates of periodically scheduled programs as low as possible, which means that on-demand scheduling should be used as much as possible.
On-demand scheduling means that a program is scheduled only when there is something for the program to do. Furthermore, scheduling should not be attempted unless the program can complete its execution without unnecessary rescheduling. Hence, in Figure 8 program 2 should only be scheduled when a message is available and waiting for it in the message buffer and program 1 when the message buffer is empty. However, the situation in Figure 8 is a special case of the more general situation
illustrated by Figure 9, where each program may interwork with other programs by means of both a number of messages and general common data elements. This implies the existence of an arbitrary number of locations within each program where the program may be rescheduled.
Each concurrently executable and independently schedulable program is in conventional software terminology defined as a process. Process execution and scheduling are, in conventional Operating Systems, controlled by means of so called Process Control Blocks (PCBs).
Figure 10 shows a typical structure of a conventional Process Control Block containing the following elements: (a) The Process State
The process state is used to indicate the current status of the process as seen from the Operating System. The following states are typical:
WAIT Execution is currently suspended and no execution is currently scheduled.
READY Execution is currently suspended but the process is scheduled to be executed at the earliest possible opportunity.
RUNNING The process is currently being executed.
(b) The Scheduling Control
The scheduling of a process is usually performed on a priority queue basis, where each process is assigned a fixed priority and where higher priority processes may interrupt lower priority processes. This means that a lower priority process, which is RUNNING at the time a higher priority process is scheduled (i.e. READY), will be unconditionally interrupted, whereby the point of interruption will be saved in the Program Location below and the lower priority process itself rescheduled (i.e. inserted in the corresponding priority queue with the process state reset to READY), after which the higher priority process will be unlinked from the higher priority process queue, the Process State of the higher priority process will be set to RUNNING and execution of the higher priority process will begin from the location held in the Program Location below. The Scheduling Control therefore comprises both a priority indication and the necessary bidirectional link elements to accommodate the scheduling queues as well as the need to be able to reschedule or change the scheduling at will. (c) The Message Queue Control
The Message Queue Control consists of the necessary data elements indicating the head and tail of the
queue of messages sent to the process but not yet received, assuming that the message queue is of the
FIFO type (First In, First Out). (d) Program Location The Program Location is used to hold the location in the program at which execution is to start the next time the program is executed. (e) Global Data Access Control
The global data access control comprises a set of pointers (i.e. absolute addresses or similar), which define all global data elements accessible by the process. A global or common data element is any data element accessible from more than one process. (f) Local Data Space The local data space is a memory area dedicated to data elements used to hold specific information local to a process, i.e. not accessible to any other process. (g) Private Stack Space The private stack space is used for the storage of temporary values during the execution of a program.
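The elements (a)-(g) of such a conventional Process Control Block might be rendered as the following C structure. The field sizes and the queue representation are illustrative guesses only; as noted below, real PCBs of this kind run from several hundreds to several thousands of bytes.

```c
#include <stdint.h>

enum process_state { P_WAIT, P_READY, P_RUNNING };           /* (a)            */

struct message;                                               /* opaque here    */

struct pcb {
    enum process_state state;                       /* (a) Process State        */

    /* (b) Scheduling Control: fixed priority plus bidirectional links for
     *     the priority queue the process is currently inserted in.            */
    unsigned    priority;
    struct pcb *sched_prev, *sched_next;

    /* (c) Message Queue Control: head and tail of the FIFO queue of messages
     *     sent to, but not yet received by, the process.                      */
    struct message *msg_head, *msg_tail;

    /* (d) Program Location: where execution is to resume the next time.       */
    void (*program_location)(void);

    /* (e) Global Data Access Control: pointers to all global data elements
     *     accessible by the process.                                          */
    void      **global_data;
    unsigned    n_global;

    /* (f) Local Data Space and (g) Private Stack Space: the sizes here are
     *     purely illustrative, chosen small to keep the sketch readable.      */
    uint8_t     local_data[256];
    uint8_t     private_stack[1024];
};
```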
A message based signalling system using on-demand process scheduling is for example described in "An Operating System for Reliable Real-Time
Telecommunications Control" by N.A. Martellotto in the IEE Conference Publication No.198, pages 138-143.
Another such signalling system is described in "A
Distributed Operating System for the TCP16 System" by D.
Hammer et al. in the IEE Conference Publication No.223, pages 178-183. The type of message-oriented, process-scheduling-based signalling system illustrated by Figures 8-10 and by the referenced examples has certain typical characteristics, which will be outlined below together with their consequences for the systems as a whole.
Firstly, as already described, a process may be scheduled to be executed because of reception of a message, because it cannot send a message due to message buffer congestion, because a mutual exclusion condition is encountered, because the program has been interrupted by a higher priority program or for any other reason. In a system, where processes and programs may be scheduled for a number of different reasons, process scheduling is, in fact, the only feasible scheduling alternative. This, in turn, requires a Process Control Block of the type outlined in Figure 10.
Process Control Blocks of the type outlined in Figure 10 are quite big, ranging from a minimum of several hundreds of bytes of memory to several thousands of bytes, all of which have to be held in the main memory (M) of a computer according to Figure 4. As a consequence there is a limit to the maximum number of
available PCBs, which in turn limits the number of simultaneously active (schedulable) processes. Typical values for the maximum number of PCBs range from 100 up to 255. On the other hand, the required number of simultaneously active processes in a modern, complex Real-Time system is of the order of 10000 or more. Using at most a few hundred PCBs to accommodate a several orders of magnitude larger number of processes forces the systems to rely heavily on dynamic allocation of resources. This also introduces the notion of 'starting' a process (i.e. allocation of a PCB to it from a common pool of resources) and the 'termination' of a process (i.e. the release of its PCB back into the common pool of resources), which in turn requires a process to be a program with an entry point and an exit point like any other program.
Secondly, scheduling a process waiting for a message before a message actually has been sent to that process would only be a waste of computer time and cause an unnecessary overhead. It is therefore accepted that sending of a message to a process should cause that process to be scheduled. However, once execution of a process scheduled due to reception of a message is started, it is again important to identify the actual message as efficiently as possible. Taking into account the fact that more than one message may actually have
been sent to the same process leads to an individual message buffer for each process, in which the messages are queued on a first-in, first-out basis as the most efficient method of realisation. Each process will thus, for reasons of efficiency, be required to have its own individual message buffer. However, in a Real-Time system messages are sent to the system in a completely arbitrary fashion. There is therefore nothing that principally prevents an arbitrary number of messages from being sent to the same process simultaneously. In order to accommodate all these messages each message buffer would have to be able to store an infinite number of messages. This is clearly impossible in any practical system. Each message buffer is therefore limited to a certain number of messages. As a result message buffer congestion may be encountered when a process tries to send a message. This now leads to the requirement to be able to reschedule the process when (and not before) message buffer space for the receiving process is again available, with the direct implication that it must be possible to tie a particular message buffer, not only to the receiver of a message, but simultaneously also to the sender of the message.
The most efficient way of organising the message buffers is therefore considered to be to associate each message buffer with two processes. In such a system each message buffer may be considered to represent an
intercommunicating link or port between two processes. It should be evident that the number of message buffers then grows roughly with the square of the number of processes in the system (one buffer per communicating pair), thereby reinforcing the requirement to keep the number of simultaneous processes fairly low for reasons of manageability. Because of the limited resources available, the number of messages that a single message buffer is able to hold simultaneously must be reduced in comparison with the case where each process had a single message buffer for all messages to that process, thus increasing the probability of buffer congestion.
Because a process actually has to be able to communicate with a number of other processes, a number of message buffers will actually be associated with each process. This means that the problem of identifying each message and treating the messages in the correct order becomes intensified. The scheduling of the processes and the identifying of the messages themselves in a system of this type is therefore a fairly complex and time consuming procedure. Typical CPU times for the transfer of a single message between a sending and a receiving process are of the order of 1 millisecond or more. This time includes all scheduling and necessary rescheduling on both the sending and receiving side. When it is taken into account that the actual processing triggered by such a message is of the order of 100 μseconds or even
less CPU time, the message overhead becomes completely unacceptable.
Thirdly, because message communication in the form described above tightly couples two processes to each other it is generally accepted to use messages as a means of synchronizing two processes with each other, i.e. the message communication aspect and the synchronizing aspect are inextricably tied together. As a consequence the sending process must know that a receiving process exists before sending a message, with the case where no receiving process exists being treated as a direct error. Similarly the receiving process must know that a sending process exists and is about to send a message before the message is actually sent. This directly precludes any kind of off-line or on-line pluggability in a system.
Because message communication as described above carries a large overhead penalty and does not give significant advantages compared with other means of communication, the general trend has been to try to avoid message communication by using hierarchical communication and data communication wherever possible or to try to group messages together as much as possible where message communication is warranted in order to be able to reduce the overhead of the systems. The resulting distortion of the systems is accepted as natural for lack of any better way to design the systems.
It is also possible to use messages as a completely asynchronous means of communication between different systems. There are a number of differences between a message communication system based on asynchronous messages and a message communication system based on synchronous messages. The important differences will be described below. In order to avoid confusion messages in a system based on asynchronous messages will hereafter be referred to as 'signals' and the system itself as a 'signal based communication system'.
The first of the differences between a system based on asynchronous signal communication instead of synchronous message communication is that scheduling of processes is based exclusively on the arrival of signals. This is always possible if each signal contains information enabling the receiving process to be unambiguously defined. There is consequently no need to tie signal buffers to either the receiving or the sending process, i.e. a common set of signal buffers may now be utilized for all processes. The immediate consequence of this is that this common set of signal buffers may be so dimensioned that buffer congestion never arises during any normal operating condition. Hence it is possible to assume that a signal can always be sent, i.e. rescheduling due to signal buffer congestion is never required. (If, for any reason, signal buffer congestion nevertheless would occur, it can, and should, be treated
as a general system fault like any processor hardware fault). It can thus be seen that signal scheduling is self-reinforcing: because signal reception is the only reason for scheduling, a common set of signal buffers may be employed; because a common set of signal buffers may be employed, the number of individual buffers may be dimensioned so that buffer congestion never occurs; and because buffer congestion never occurs there is no need for scheduling for any other reason than signal reception. (It is to be noted that the previously described process scheduling with synchronous messages is equally self-reinforcing: because processes are scheduled for a multitude of reasons, of which message scheduling is only one, it is necessary to employ individual message buffers per process or process pair; because individual message buffers are used they have to be dimensioned for the possibility of congestion; because they have to be dimensioned for congestion the processes have to be scheduled when such congestion occurs in addition to being scheduled at the arrival of a message; hence the processes have to be scheduled for other reasons than simple message arrivals.)
The second difference is that the signals are assigned priorities depending on their importance, whereas processes have no inherent priority of their own. Any process is now executed at the priority of the signal, the arrival of which triggered the execution of
the process. This means that the same process may be executed with completely different priorities at different points in time. Because the priorities of the signals are assigned completely independently of the processes sending and receiving the signals, it is possible to vary the priorities of the signals without modifying the design of the processes themselves. A system based on signal scheduling may therefore be easily 'tuned' by varying the signal priorities. Such tuning is impossible in a process scheduling system.
The third difference is that a signal scheduling system has no need for the type of Process Control Block shown in Figure 10. Because processes are never scheduled directly there is no need for any Scheduling Control. Secondly, because the signal buffers are not associated with any particular process there is no need for any Message Queue Control. A direct consequence of the signal scheduling as opposed to process scheduling is also that it is now possible to design the system so that a common stack area is utilized instead of a Private Stack Space. Hence, of the original Process Control Block in Figure 10, only the Program state, the Program Location, the Global Data Access Control and the Local Data Space remains. However, because the size of a PCB has already shrunk in size from possibly several thousands of bytes to at most a few tens of bytes, the need to restricting
the number of PCBs to at most several hundred disappears. It therefore becomes possible to use static allocation of resources (including data space) instead of the previously necessary dynamic allocation. Contrary to popular belief such static allocation actually saves resources, because the allocation may now be subject to strict dimensioning according to need, with all resources for a particular function logically grouped together. The remains of the PCB may therefore be integrated into the normal data space belonging to any function (whereby the need for any separate local data space completely disappears, thereby further shrinking the remains of the PCB).
The fourth important difference is that any interrupts must now be completely transparent, i.e. they must not cause rescheduling, because arrival of a signal is the only reason to cause any process to be executed. As a consequence, once initiated by a signal, execution of a process continues until the process has to wait for the next signal to arrive before execution can continue. As far as the Process State of the original PCB is concerned only two states need to remain, i.e.
WAIT Execution is currently suspended until the next signal is received.
RUNNING The process is currently being executed and blocked from signal reception.
There is no need for any 'READY' state because the
process is never itself scheduled.
The fifth important difference is that signal transfer may be made at least one order of magnitude more efficient than the message transfer in a process scheduling system because signal routing and scheduling may now be performed in a general way instead of individually per process or process pair. Typical signal transfer times are of the order of 20μseconds or less. This is to be contrasted with the message transfer times of 1000μseconds or more, which are typical in process scheduling systems. The fast signal transfer makes it possible to eliminate any other type of communication between software units without loss of efficiency, thereby again reinforcing the single reason for scheduling.
The final important difference is that the different processes now become pluggable, because asynchronous signal communication does not assume or require the processes themselves to be synchronized with each other. Because no buffer congestion is encountered it is possible to send signals to other processes without being aware of whether the destination process exists or is able to receive the signals. It is therefore also possible at any time to replace individual processes with updated versions of the same process, or even with completely different processes, without any other process having to be aware of any change.
Systems based on asynchronous signal communication have already been implemented. One example of such a system is described in the articles "AXE-10 System Description" by M. Eklund et al, and "AXE-10 Software Structure" by G. Hemdal, both published in Ericsson Review, No. 2, 1976, pages 70-89 and 90-99 respectively, this function block oriented stored programme controlled system being also disclosed in US patent 3969701. The basic principle of this system is illustrated by Figure 11. The signals may be assigned N different priority levels. Each priority level is assigned a predefined number of Signal Buffers (SB), exemplified by the m signal buffers (SB11-SB1m) on priority level 1 and the k signal buffers (SBN1-SBNk) on priority level N. The number of signal buffers assigned to the different priority levels (m, ... k) is dimensioned so that buffer congestion never will occur during any normal operating condition. Because the number of high priority signals will, in general, be considerably smaller than the number of low priority signals, and because the high priority signals are always treated before low priority signals, the number of signal buffers required to accommodate high priority signals will always be smaller than the number of signal buffers required to accommodate low priority signals in the same system.
The signal scheduling on each priority level (i)
is controlled by a set of two registers, a Job-In register (JIi) and a Job-Out register (JOi), and a
Comparator circuit (Ci) interacting with the interrupt control system of the processor. The JIi-register always points to the first free signal buffer out of the signal buffers assigned to the associated priority level
(i), whereas the JOi-register points to the first busy signal buffer associated with the priority level (i).
The Comparator circuit (Ci) compares the contents of the JIi and the JOi registers and generates an interrupt signal (INTi) as soon as the two registers are unequal.
When a signal of priority (i) is to be sent to a process, the signal buffer identified by the JIi register is seized for this particular signal. The signal buffer space is divided into two parts, a signal header part containing the Signal Identity (SID) and the Signal Destination (SDEST) as its most important components, and a signal information part (SINFO) carrying the information to be transported by the signal. After the signal buffer has been seized all information associated with the signal is transferred to the signal buffer as an indivisible operation, whereafter the JIi register is incremented. When all signal buffers on a priority level i are empty, then both the JIi and the JOi register identify the same signal buffer. No interrupt signal is
therefore generated by the Comparator circuit Ci.
However, as soon as the JIi register is incremented by sending of a signal as described above, the two registers become unequal, thereby generating an interrupt signal (INTi) to the interrupt system of the actual processor. If the processor is actually executing on a priority level, which is higher than or equal to the priority level i indicated by the newly generated interrupt signal (INTi), then the processor continues its current processing without any interrupt. However, if the priority (i) of the newly generated signal is higher than the priority on which the processor currently executes, then this processing is interrupted and reception of the new signal is initiated. Interrupt handling as such is well known in all modern processing systems and is therefore not further described.
On a signal interrupt to priority level (i), the processor starts executing a sequence of instructions or microinstructions, which identify the signal buffer indicated by the JOi register associated with priority level (i). This signal buffer contains all information associated with the signal next to be received. The Signal Destination part (SDEST) of the signal header identifies the actual receiving process and is now utilized to identify the data and program areas associated with that process. The Signal Identity (SID) is utilized to identify the task to be performed as will
be described below, whereafter the information carried by the signal (SINFO) can be transferred to, or utilized by, the receiving process. After all of the information in the signal buffer has been utilized the signal buffer is discarded by means of incrementing the JOi register. If, on incrementation of the JOi register, the discarded signal was the only signal on the priority level i, then the interrupt signal (INTi) will disappear, otherwise an interrupt signal (INTi) will continue to be generated as long as any unreceived signal remains in the signal buffers for the associated priority level i.
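The JI/JO mechanism of Figure 11 behaves like a ring of signal buffers per priority level, with the comparator raising an interrupt whenever JI differs from JO. The sketch below models one priority level in C; the buffer count, field names and the conventional "leave one slot unused" full check are assumptions made for the illustration and replace the fault handling mentioned later in the text.

```c
#include <stdbool.h>
#include <stddef.h>

#define BUFS_PER_LEVEL 8              /* m or k in Figure 11, chosen arbitrarily */

struct signal_buffer {
    int sid;                          /* Signal Identity (SID)                  */
    int sdest;                        /* Signal Destination (SDEST)             */
    int sinfo[4];                     /* signal information part (SINFO)        */
};

struct priority_level {
    size_t JI, JO;                    /* Job-In and Job-Out registers            */
    struct signal_buffer sb[BUFS_PER_LEVEL];
};

/* The comparator Ci: an interrupt INTi is pending whenever JIi != JOi.          */
static bool comparator(const struct priority_level *p)
{
    return p->JI != p->JO;
}

/* Sending a signal: seize the buffer pointed to by JI, fill it as one
 * indivisible operation, then increment JI.                                     */
bool send_signal(struct priority_level *p, struct signal_buffer s)
{
    size_t next = (p->JI + 1) % BUFS_PER_LEVEL;
    if (next == p->JO)                /* all buffers seized: treated as a fault  */
        return false;
    p->sb[p->JI] = s;
    p->JI = next;                     /* comparator now sees JI != JO -> INTi    */
    return true;
}

/* Receiving a signal: the buffer indicated by JO holds the next signal to be
 * received; discarding it is done by incrementing JO.                           */
bool receive_signal(struct priority_level *p, struct signal_buffer *out)
{
    if (!comparator(p))               /* JI == JO: no signal pending             */
        return false;
    *out = p->sb[p->JO];
    p->JO = (p->JO + 1) % BUFS_PER_LEVEL;
    return true;
}
```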
When a signal is received the Signal Destination part (SDEST) is used to determine the identity of the receiving process. This identity allows both the program and the data areas associated with the receiving process to be uniquely determined. Figure 12 illustrates the basic principle used. The memory space associated with a particular process consists of a Program Area (PA) and a Data Area (DA), where the starting addresses (PSA) and (DSA) are assumed to be determined by a fixed algorithm from the signal destination (SDEST).
The Data Area (DA) contains all of the data for the process and also contains the so called STATE of the process. The STATE is a data element of fixed length, e.g. a byte element. One bit of the STATE element is used to represent the previously mentioned Program state
with the two values WAIT and RUNNING and the remaining 7 bits used to indicate one of a number (Q) of functional conditions of the process.
The Program Area (PA) contains two parts. The first part of the program area consists of a so called Signal Distribution Table with M entries (SEAj to SEAm). Each signal entry (SEAj) contains the internal address to the program code related to reception of this signal. The actual tasks to be performed normally depend on the functional condition of the process, which functional condition is retained in the STATE element in the data area. For this reason a signal entry (SEAj) normally points to an associated Signal Jump Table (SJTj). The signal jump table principally contains one entry for each of the Q functional conditions of the process, each entry (PEjk) identifying the actual set of instructions (PCjk) to be executed on reception of signal (j) for functional condition (k). On reception of a signal the start addresses associated with the destination process are determined from the SDEST element of the associated signal buffer. The SID element is thereafter used to indicate the actual signal entry (SEASID), from which the location of the first instruction to be executed is obtained. This instruction usually is a table jump instruction or set of instructions (JTAB), which uses the functional condition
stored in the STATE element to determine further activities.
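The two-level dispatch of Figure 12 (a Signal Distribution Table indexed by SID, then a Signal Jump Table indexed by the functional condition) can be sketched as a pair of function-pointer tables in C. The table sizes, the way PSA/DSA are derived from SDEST and the handler signature are all assumptions made for the example; the WAIT/RUNNING check on the STATE element, discussed next in the text, is omitted here and sketched separately below.

```c
#include <stdint.h>

#define MAX_SIGNALS    8              /* M: entries in the distribution table   */
#define MAX_CONDITIONS 4              /* Q: functional conditions of the process */

struct process_data {                 /* the Data Area (DA) of one process       */
    uint8_t state;                    /* bit 7: WAIT/RUNNING, bits 0-6: condition */
    /* ... other process data ... */
};

/* One handler PCjk per (signal j, functional condition k).                      */
typedef void (*handler_t)(struct process_data *da, const int *sinfo);

struct process_program {              /* the Program Area (PA) of one process     */
    /* Signal Distribution Table: entry SEAj selects the Signal Jump Table SJTj
     * for signal j; SJTj in turn selects PCjk by functional condition.          */
    handler_t sjt[MAX_SIGNALS][MAX_CONDITIONS];
};

/* Assumed fixed algorithm mapping SDEST to the process areas (PSA and DSA).     */
extern struct process_program *program_area_of(int sdest);
extern struct process_data    *data_area_of(int sdest);

void dispatch(int sid, int sdest, const int *sinfo)
{
    struct process_program *pa = program_area_of(sdest);
    struct process_data    *da = data_area_of(sdest);

    unsigned condition = da->state & 0x7F;        /* functional condition (k)    */
    handler_t pc = pa->sjt[sid][condition];       /* SEA[SID], then SJT[cond]    */
    pc(da, sinfo);                                /* execute PCjk                */
}
```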
As already described a process may be either in the WAIT or in the RUNNING state. In the WAIT state the process is waiting for the next signal to be received, whereby the functional condition may be used to determine the activities to be performed on reception of the signal. However, when a process is RUNNING, then a signal has been received, at the reception of which the previous functional condition was used to determine the activities currently being performed. This previous functional condition is therefore no longer valid. Furthermore, the next valid functional condition is only determined at the termination of the processing, whereby the process reaches its next WAIT state.
Before the functional condition is utilized for the identification of the actual instruction sequence to be performed, it must be ascertained that the process actually is in the WAIT state and, if so, this state must be changed to RUNNING. This must be done by means of a single indivisible instruction. Because the three activities of reading the process state, setting the process state and reading the functional code are always performed together, they have all been integrated into a single instruction, which is the reason for assigning one bit of the STATE element for this purpose. An example of the details of such an instruction is given by the Test and
Set an Operand (TAS) instruction of the Motorola MC 68000 microprocessor.
On reading the STATE element, the bit indicating the Process State is automatically set. If this bit was previously reset, then the process was in a WAIT state, i.e. the functional condition is valid and processing can therefore continue. At the termination of this processing the new functional condition is inserted into the STATE element, thereby also automatically resetting the process to WAIT.
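A minimal C rendering of this indivisible read-test-set on the STATE element is shown below, again using a C11 atomic as a stand-in for the MC 68000 TAS instruction. The bit layout (bit 7 = RUNNING, bits 0-6 = functional condition) follows the description above; everything else is an assumption, and the deferral taken when the process is already RUNNING is the one described in the following paragraph.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define RUNNING_BIT 0x80u             /* bit 7 of the STATE element              */

/* Atomically set the RUNNING bit and return the previous STATE value,
 * mimicking an indivisible Test-and-Set on the state byte.                      */
static uint8_t state_test_and_set(_Atomic uint8_t *state)
{
    return atomic_fetch_or(state, RUNNING_BIT);
}

/* Try to begin reception of a signal.  On success *condition receives the
 * (still valid) functional condition and the process is marked RUNNING.
 * On failure the process was already RUNNING and the signal must be deferred.   */
bool begin_reception(_Atomic uint8_t *state, unsigned *condition)
{
    uint8_t prev = state_test_and_set(state);
    if (prev & RUNNING_BIT)
        return false;                 /* already RUNNING: defer the signal       */
    *condition = prev & 0x7Fu;        /* functional condition to dispatch on     */
    return true;
}

/* Terminate processing: store the new functional condition, which by leaving
 * bit 7 reset also returns the process to WAIT.                                 */
void end_processing(_Atomic uint8_t *state, unsigned new_condition)
{
    atomic_store(state, (uint8_t)(new_condition & 0x7Fu));
}
```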
If, however, the bit was previously set when the STATE element is first read, then the process is currently RUNNING (either on a lower priority level interrupted by a higher priority signal or asynchronously being executed by another CPU). In this case reception of the signal is not possible, i.e. the signal must now be deferred for reception at the termination of the currently interrupted processing in order to prevent faults from occurring. Such deferment may for instance be achieved by seizing a signal buffer on a lower priority level in the standard manner, copying all information from the actual signal buffer into the newly seized signal buffer and thereafter releasing the actual signal buffer by incrementing the JOi register for the priority level (i) of the actual signal, thereby either inhibiting the associated interrupt signal (INTi) and thus allowing the interrupted program to be resumed or
allowing the next signal to be analysed for reception. When processing is to terminate by reaching the next WAIT state the new functional condition is inserted in the STATE element of the process, thereby also resetting the Process state to WAIT. At this time at the latest, if not already done, the signal buffer of the old received signal must be released by incrementing the JOi register associated with the priority level (i) of this signal. When the processor now executes an instruction intended to cause execution of the previously interrupted program to be resumed, then three cases are possible depending on the number and priorities of the waiting signals as follows.
If, when a signal buffer is discarded, additional signals of the same priority as the discarded signal are waiting to be received, then the incrementing of the JOi register will not make the contents of the JOi and the JIi registers equal. Hence an interrupt signal for priority level (i) will continue to be generated by the Comparator circuit (Ci) of Figure 11. In this case the execution of the interrupted program will not be resumed but reception of the next signal on priority level (i) will be initiated in the same manner as for a signal interrupt. If, when a signal buffer is discarded, the actual signal was the only one on that priority level but in the meantime signals of an intermediate priority between the
discarded signal and the interrupted program have been received, then execution of the interrupted program is again not resumed. Instead reception of the first received signal of the highest intermediate priority is initiated in the same manner as for a signal interrupt.
Only in the case that there are no further signals of the same priority as the discarded signal nor any signals of any intermediate priority will execution of the interrupted program be resumed. It is, in this case, to be noted that any interrupt will be completely transparent, i.e. a function never knows that, or if, it has been interrupted.
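Pulling the last few paragraphs together, the decision taken when a signal buffer has been discarded might look as follows in C. The hooks for testing pending signals, starting reception and resuming the interrupted program are left as externals, and the priority ordering (0 = highest) is an assumption made for the sketch.

```c
#include <stdbool.h>

#define N_LEVELS 4                    /* number of priority levels, 0 = highest  */

/* Pending-signal test for one priority level: true when JIi != JOi.             */
extern bool signals_pending(unsigned level);

/* Start reception of the first signal on the given level, as for an interrupt.  */
extern void start_reception(unsigned level);

/* Resume the program that was interrupted at priority 'interrupted_level'.      */
extern void resume_interrupted_program(unsigned interrupted_level);

/* Called after the signal buffer of a completed signal at 'finished_level' has
 * been discarded; 'interrupted_level' is the priority of the program that was
 * interrupted (N_LEVELS if no program was interrupted).                         */
void continue_after_signal(unsigned finished_level, unsigned interrupted_level)
{
    /* Case 1: more signals of the same priority -> receive the next one.        */
    if (signals_pending(finished_level)) {
        start_reception(finished_level);
        return;
    }
    /* Case 2: signals of an intermediate priority have arrived meanwhile ->
     * receive the first signal of the highest such priority.                    */
    for (unsigned lvl = finished_level + 1; lvl < interrupted_level; lvl++) {
        if (signals_pending(lvl)) {
            start_reception(lvl);
            return;
        }
    }
    /* Case 3: nothing of equal or intermediate priority -> the interrupt is
     * transparent and the interrupted program is simply resumed.                */
    if (interrupted_level < N_LEVELS)
        resume_interrupted_program(interrupted_level);
}
```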
It is finally to be noted, with reference to
Figure 11, that when a signal buffer is seized and the associated JI register is incremented there does exist the possibility of all signal buffers being seized, which makes the contents of the JI and the JO register equal and therefore seemingly indicates no signal at all. This condition is, however, accounted for and treated as a fault although the corresponding circuitry has not been shown in Figure 11.
A system such as described with the aid of
Figures 11 and 12 has a number of advantages. The interface between the different software units is defined by means of signals, whereby each software unit knows which signals it is able to receive and can completely disregard all other signals. This makes the system
insensitive to faults and errors.
Secondly the signal identities are completely independent of the processes sending and receiving the signals, because each signal is identified by means of a unique signal number. It is thus possible to define standard interfaces for intercommunication between classes of processes. This provides the basic facilities required to achieve software pluggability.
A third advantage is that the signal communication is fast, with typical average signal transfer times of the order of less than 30μseconds when this technique was first implemented. The signal transfer time is furthermore independent of the load of the system, unlike in a process scheduling system, where the actual message transfer time is dependent on the amount of rescheduling that has to be done and therefore varies with the load on the system. Systems based on signal scheduling are therefore easily dimensionable.
There are equally a number of disadvantages with the described technique. The signal distribution table at the beginning of the program area in Figure 12 must, of necessity, be limited in size. In the actual system referred to, the maximum number of signals which any function block (process) is able to receive is 255, whereas the total number of signals in a system is of the order of several thousand. This means that the same signal identity numbers (SID) have to be reused for
different signals, thereby creating several more or less related problems. Firstly, a number of the function blocks within the system interwork via standard interfaces. What this means is that the signal identity numbers within any standard interface have to be fixed.
As a consequence a fair amount of 'juggling' of the signal identity numbers may be required in order to accommodate the requirement for fixed signal identities within each individual standard interface and still be able to accommodate all signals of an operational system within a total of 255 signal identity numbers, especially as the same function block must never be able to receive two different signals with the same identity number. This may, of course, make any on-line pluggability difficult or even impossible. Secondly, having different signals with the same identity numbers may cause unwanted error situations in cases where a signal, for any reason whatsoever, is sent to the wrong function block and misinterpreted as a completely different signal because of the identical signal identity numbers. Thirdly, every system will change with time, and such changes may require addition of new signal identity numbers. Again, because of the limited number of signal identities, such changes may be extremely difficult, especially under on-line conditions.
A second disadvantage is caused by the 'Finite State Machine' orientation of the approach with a
separate, coded STATE element. This causes the functions to be programmed in an artificial way, which will be illustrated by means of Figures 13, 14 and 15. Consider Figure 13. This figure illustrates a very simple process, MODE_THREE, which may receive two signals
(ADVANCE and RETREAT) and send three signals (ONE, TWO, THREE), i.e. it may receive and send any number of signals with the signal identities corresponding to ADVANCE, RETREAT, ONE, TWO and THREE. The MODE_THREE process contains a single data element (DATA) and performs a set of interconnected activities as follows. When the process is initially started the data element is cleared, after which the process reaches its first so called 'waiting node', i.e. a point where processing cannot proceed until a signal has been received. In this first waiting node (N0) the process waits for either an ADVANCE or a RETREAT signal to arrive before processing can continue.
When an ADVANCE signal arrives in waiting node N0, then the process will read information from its data element (DATA) and send this information with a ONE signal, whereafter the process will wait for the next signal to arrive in the second waiting node (N1). If instead a RETREAT signal arrives, the process will store the data carried by the signal into the data element
(DATA) and wait for the next signal in the third waiting node (N2). When an ADVANCE signal arrives in waiting node
N1, then the process will read information from its data element (DATA) and send this information with a TWO signal, whereafter the process will wait for the next signal to arrive in the third waiting node (N2). If instead a RETREAT signal arrives, the process will store the data carried by the signal into the data element (DATA) and return to the first waiting node (N0) to wait for the next signal. When an ADVANCE signal arrives in waiting node
N2, then the process will read information from its data element (DATA) and send this information with a THREE signal, whereafter the process will wait for the next signal to arrive in the first waiting node (N0). If instead a RETREAT signal arrives, the process will store the data carried by the signal into the data element (DATA) and return to the second waiting node (N1) to wait for the next signal.
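For comparison with the implementations discussed next, the behaviour of Figure 13 can be written down directly as sequential code. The following C sketch is illustrative only; the receive_signal/send_signal helpers and the numeric signal encodings are assumptions and merely stand in for the signalling system.

    #include <stdio.h>

    enum signal_id { ADVANCE = 1, RETREAT = 2, ONE = 3, TWO = 4, THREE = 5 };

    struct sig { enum signal_id id; int data; };

    /* Illustrative helpers standing in for the signalling system: the
     * receiver blocks in a waiting node, the sender emits a signal.     */
    static struct sig receive_signal(void)
    {
        struct sig s = { (enum signal_id)0, 0 };
        int id = 0, data = 0;                  /* here: simply read from stdin */
        if (scanf("%d %d", &id, &data) == 2) { s.id = (enum signal_id)id; s.data = data; }
        return s;
    }
    static void send_signal(enum signal_id id, int data)
    {
        printf("send signal %d carrying %d\n", (int)id, data);
    }

    int main(void)                             /* the MODE_THREE process       */
    {
        int data = 0;                          /* the single DATA element      */
        int node = 0;                          /* waiting node N0, N1 or N2    */

        for (;;) {                             /* continuously looping process */
            struct sig s = receive_signal();
            if (s.id == ADVANCE) {
                send_signal((enum signal_id)(ONE + node), data);
                node = (node + 1) % 3;         /* N0 -> N1 -> N2 -> N0         */
            } else if (s.id == RETREAT) {
                data = s.data;                 /* store the carried data       */
                node = (node + 2) % 3;         /* N0 -> N2, N1 -> N0, N2 -> N1 */
            } else {
                break;                         /* end of illustrative input    */
            }
        }
        return 0;
    }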
Figure 14 illustrates the principle of the implementation of the function of Figure 13 in a system according to Figures 11 and 12 and assuming a processor according to Figure 1.
The Data Area (DA) contains two elements, the DATA element identified in Figure 13 and a STATE element required by the actual implementation. The STATE element may have three values (N0, N1, N2) which in this implementation correspond with the numeric values 0, 1
and 2.
The Program Area (PA) contains 10 identifiable sections. The first section (located at offsets +0, +1 and +2 relative to PSA) forms the Signal Distribution Table. It is assumed that entry 0 in this table (I) is always reserved for possible initialisation activities, i.e. the first 'real' signal identity is '1', the second '2' etc. In the example the ADVANCE signal has been assigned the absolute identity '1' and the RETREAT signal the absolute identity '2'. The contents of the signal distribution table locate the initialisation sequence at offset +3, the ADVANCE reception sequence at offset +6 and the RETREAT reception sequence at offset +22. The second section of the program area (at offsets +4 and +5) forms the initialisation sequence.
The third section of the program area (at offsets +6 to +9) forms the jump table at reception of the ADVANCE signal and includes a previously mentioned Test-and-Set instruction (TAS) for interference protection. This jump table locates the sequences for reception of ADVANCE in state N0 at offset +10, reception of ADVANCE in state N1 at offset +14 and reception of ADVANCE in state N2 at offset +18.
The fourth, fifth and sixth sections of the program area (at locations +10 to +13, +14 to +17 and +18 to +21) form the actual sequences initiated by the reception of ADVANCE signals.
The seventh section (at offsets +22 to +26) forms a corresponding jump table section for the RETREAT signal.
Finally the three last sections (at offsets +27 to +29, +30 to +32 and +33 to +35) form the actual sequences initiated by the RETREAT signal.
When a signal is received, the signal occupies a signal buffer as indicated in Figure 11. The SDEST information of the signal is now utilized to set the AR register of the processor in Figure 1 to point to the start of the data area (DSA). At the same time the start of the program area (PSA) is identified and the actual signal identity (SID) added. Because the MODE_THREE process may only receive ADVANCE and RETREAT signals, the only valid signal identities are '1' and
'2'. The contents of the location identified by the sum of PSA and SID are read and transferred to the PC register in Figure 1, whereafter normal program execution is started, i.e. program execution will be started at offset +6 in the program area if an ADVANCE signal is received and at offset +22 if a RETREAT signal is received. Once started, program execution continues until a RET(urn) instruction is encountered.
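In software terms the reception mechanism of Figure 14 amounts to two successive table lookups: one indexed by the signal identity (SID) and one indexed by the coded STATE element. The C approximation below is a sketch only; the handler names, table sizes and the use of function pointers in place of jump-table offsets are assumptions made for illustration.

    #include <stdio.h>

    #define NUM_SIGNALS 3                     /* entry 0 reserved for initialisation */
    #define NUM_STATES  3                     /* N0, N1, N2                          */

    struct process { int state; int data; };

    typedef void (*handler)(struct process *p, int sig_data);

    /* Hypothetical task sequences; each ends by depositing the next STATE value. */
    static void advance_n0(struct process *p, int d) { (void)d; printf("ONE %d\n", p->data);   p->state = 1; }
    static void advance_n1(struct process *p, int d) { (void)d; printf("TWO %d\n", p->data);   p->state = 2; }
    static void advance_n2(struct process *p, int d) { (void)d; printf("THREE %d\n", p->data); p->state = 0; }
    static void retreat_n0(struct process *p, int d) { p->data = d; p->state = 2; }
    static void retreat_n1(struct process *p, int d) { p->data = d; p->state = 0; }
    static void retreat_n2(struct process *p, int d) { p->data = d; p->state = 1; }

    /* Signal distribution table (indexed by SID) of state jump tables (indexed by STATE). */
    static const handler jump[NUM_SIGNALS][NUM_STATES] = {
        { 0, 0, 0 },                                /* SID 0: initialisation */
        { advance_n0, advance_n1, advance_n2 },     /* SID 1: ADVANCE        */
        { retreat_n0, retreat_n1, retreat_n2 },     /* SID 2: RETREAT        */
    };

    /* Invoked when a received signal buffer is handed to the process. */
    static void dispatch(struct process *p, int sid, int sig_data)
    {
        if (sid > 0 && sid < NUM_SIGNALS)
            jump[sid][p->state](p, sig_data);       /* PSA+SID lookup, then STATE jump */
    }

    int main(void)
    {
        struct process p = { 0, 42 };
        dispatch(&p, 1, 0);     /* ADVANCE in N0: sends ONE, moves to N1   */
        dispatch(&p, 2, 7);     /* RETREAT in N1: stores 7, returns to N0  */
        return 0;
    }

The separate STATE element and the disjoint handler sequences in this sketch are exactly the non-functional artefacts criticised in the following paragraphs.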
Figure 14 is an example of one valid implementation of Figure 13. Nevertheless, Figure 14 only represents the function of Figure 13 within the framework of, and the restrictions imposed by the means
of this implementation, i.e. the particular Finite State Machine oriented approach with a STATE element coded in a particular way and with signals sent and received in a particular way. Furthermore, it makes it the responsibility of the programmer to take all these implementation restrictions into account, i.e. he has to design his functions with a particular implementation in mind and, as a result, introduce nonfunctional elements and sequences and thereby distort the original function. The function illustrated by Figure 13 shows, for example, a continuously looping process with three waiting nodes, where signals may be received, and containing a single data element. However, Figure 15, which should show the same function, actually gives the impression that the function contains two data elements and a set of disjoint program sequences. Neither the existence of waiting nodes nor the looping nature of the interrelationship between the program sequences is apparent any more.
A third disadvantage with the signal communication as described above is the means available for deferring of signals. Because the JI/JO registers control the interrupt system as defined in Figure 11, a signal which has to be deferred for whatever reason has to be removed from its original signal buffer and transferred to a lower priority buffer in order not to block the resumption of execution of the lower level interrupted program, which temporarily blocks reception of
the signal. However, this means that the priority of the signal will be artificially lowered, with the result that the order between different signals may be lost. Hence the possibility for malfunctions exists in the case where signals are deferred.
A signalling system aimed at retaining the advantages of asynchronous signal communication, while at the same time removing some of the disadvantages, is disclosed in Belgian patent 876025. Like the previously described system, this system also works on the principle of a set of signal buffers, with the number of signal buffers dimensioned so that buffer congestion never occurs. Unlike the previous system, where the signal buffers and the signal reception were controlled by means of fixed registers (JI/JO) with a fixed set of buffers for each priority level, the new signalling system controls the signal buffers by means of linked lists of the First-In-First-Out (FIFO) type with one list per priority level. This means that the signal buffers are, as such, common for all priority levels. Like the previously described system, a signal buffer consists of a signal header part and a signal data part, where the signal data part directly corresponds with the previous system. The signal header part still contains information corresponding to SID and SDEST of Figure 11, although one of the objects of this signalling system is to allow undefined signal destinations in
connection with so called 'system basic messages' and thereby allow greater flexibility and on-line pluggability than the previous system. In addition the signal header contains a list link element, necessary to allow the system to be controlled by means of linked lists.
Because the signal scheduling is controlled by means of linked lists instead of by a plurality of rotating registers (JI/JO), deferred signals may now simply be handled by means of another linked list.
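A per-priority FIFO of linked signal buffers of this kind might be sketched in C as follows; the field names and the buffer layout are assumptions for illustration and do not reproduce the Belgian patent's actual format.

    #include <stddef.h>

    /* A signal buffer with the link element needed for list control. */
    struct sigbuf {
        struct sigbuf *link;    /* list link element in the signal header */
        int sid;                /* signal identity                        */
        int sdest;              /* signal destination                     */
        int info[8];            /* signal data part (size assumed)        */
    };

    /* One FIFO queue per priority level; deferred signals get one more.  */
    struct fifo { struct sigbuf *head, *tail; };

    static void fifo_put(struct fifo *q, struct sigbuf *b)
    {
        b->link = NULL;
        if (q->tail) q->tail->link = b;   /* append behind current tail   */
        else         q->head = b;         /* queue was empty              */
        q->tail = b;
    }

    static struct sigbuf *fifo_get(struct fifo *q)
    {
        struct sigbuf *b = q->head;
        if (b) {
            q->head = b->link;            /* unlink first buffer          */
            if (!q->head) q->tail = NULL;
        }
        return b;
    }

    int main(void)
    {
        static struct sigbuf b1, b2;
        struct fifo q = { 0, 0 };
        b1.sid = 1; b2.sid = 2;
        fifo_put(&q, &b1);
        fifo_put(&q, &b2);
        return fifo_get(&q)->sid - 1;     /* 0: FIFO order preserved      */
    }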
A second major difference between the two systems is that the new signalling system does not assume a Finite State Machine approach as such, i.e. it does not assume the existence of a STATE data element. The new signalling system basically retains the Program State and Program Location of Figure 10 in the form of an 'Action Indicator', to indicate whether a process is in its WAIT or in its RUNNING state, and a 'Program Pointer', to indicate the program location of a waiting node. This arrangement frees the system from maintaining individual signal distribution tables of fixed length, i.e. the signals may be coded with completely arbitrary signal identity codes.
Figure 16 illustrates an implementation of the MODE_THREE process with the new signalling system, but still assuming a processor according to Figure 1.
The Data Area (DA) contains three elements in this case, the DATA element explicitly identified in Figure 13, and two implicit data elements, the Action Indicator element (AI) and the Program Pointer element (PP). The Action Indicator is a binary element with two values corresponding to the process states WAIT and RUNNING. When the Action Indicator indicates that the process is in a WAIT state, then the Program Pointer identifies the offset to an instruction following a WAIT instruction. It is to be noted that the implicit data elements are not explicitly accessible by the programmer at any time.
The Program Area (PA) principally contains a single sequence of instructions, which directly corresponds with the network in Figure 13. Hence, the program starts at the Program Start Address (PSA), where the initialisation sequence is executed. This initialisation sequence consists of the two instructions at offsets +0 and +1, the second of which is a WAIT instruction. This WAIT instruction principally deposits the offset of the instruction following the WAIT instruction (in this case +2) into the Program Pointer, resets the action indicator to WAIT and initiates scheduling of the next signal. The WAIT instruction may also return the last signal buffer to the pool of idle signal buffers.
When a signal to the actual process eventually
arrives, the actual signal buffer is identified as the first signal buffer in the appropriate list. The SDEST information in the signal buffer is as before used to identify the start of the Data Area (DSA) and the start of the Program Area (PSA). The Action Indicator is Test-and-Set to RUNNING and, provided that it was previously reset to WAIT, signal reception is allowed to continue. If the Action Indicator was already set to RUNNING then the actual signal is deferred by inserting it in the deferred list.
If signal reception is allowed to continue, then the Program Pointer value is added to the PSA address, giving the resumption point for execution. Immediately after initialisation this would indicate the instruction at offset +2, which is an ACCEPT instruction. This instruction compares the actual identity of the signal as given by the SID element of the signal buffer with the signal identity number given by the instruction. If no match is found, i.e. the two signal identity numbers are unequal, then the next instruction is executed, whereby either a new signal identity number is compared in the same manner or a DISCARD instruction is encountered. When a DISCARD instruction is encountered the Action Indicator is reset to WAIT and the actual signal buffer returned to the common pool of idle signal buffers, whereafter the next signal is scheduled for reception.
When a match is found between the signal identity number from the signal buffer and the signal identity number in an ACCEPT instruction, the ACCEPT instruction also contains the location from where further execution is to continue. Hence, if an ADVANCE signal is received at offset +2, then processing will continue from the instruction at offset +5, i.e. the instructions at offsets +5, +6 and +7 will be executed.
The instruction at offset +7 is another WAIT instruction. Hence, the Program Pointer is set to +8 in this case and the Action Indicator reset to WAIT. Again, if the previous signal buffer has not been previously released, it will be released at this time and the next signal to be received is scheduled. It is to be noted, that the existence of a separate DEFER list for deferred signals necessitates all signals in the deferred list to be scheduled for reception before all other signals, i.e. any deferred signal automatically has a higher priority than all other signals. However, due to the multipriority arrangement of the signals, there is no guarantee that a signal, which has been deferred once, will not be deferred again. Hence, when going through the list of deferred signals, each signal must only be analysed once without losing the order of arrival of the deferred signals. Furthermore, there is a distinct possibility of the number of deferred signals growing and the deferred signal analysis and
scheduling therefore causing a significant overhead in the system.
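The reception step at a waiting node, with its Test-and-Set of the Action Indicator and its ACCEPT/DISCARD scan, might be sketched in C as follows. The data structures, the return convention and the deferral flag are assumptions made for illustration; they are not the instruction formats of the Belgian patent.

    #include <stdbool.h>
    #include <stdio.h>

    enum action { AI_WAIT, AI_RUNNING };

    struct procctl {
        enum action ai;          /* Action Indicator                     */
        int pp;                  /* Program Pointer: offset of the node  */
    };

    struct waitnode {
        int nsig;                /* number of ACCEPT entries             */
        const int *sid;          /* accepted signal identities           */
        const int *target;       /* continuation offset per identity     */
    };

    /* Returns the continuation offset, or -1 when the signal is discarded
     * or deferred (a hypothetical convention used only in this sketch).  */
    static int receive(struct procctl *pc, const struct waitnode *node,
                       int sid, bool *deferred)
    {
        *deferred = false;
        if (pc->ai == AI_RUNNING) {      /* process busy: defer the signal */
            *deferred = true;
            return -1;
        }
        pc->ai = AI_RUNNING;             /* Test-and-Set to RUNNING        */
        for (int i = 0; i < node->nsig; i++)
            if (node->sid[i] == sid)     /* ACCEPT: identities match       */
                return node->target[i];
        pc->ai = AI_WAIT;                /* DISCARD: back to WAIT          */
        return -1;
    }

    int main(void)
    {
        const int sids[]    = { 1, 2 };          /* ADVANCE, RETREAT (assumed) */
        const int targets[] = { 10, 22 };        /* continuation offsets       */
        struct waitnode n  = { 2, sids, targets };
        struct procctl  pc = { AI_WAIT, 0 };
        bool deferred;
        printf("%d\n", receive(&pc, &n, 2, &deferred));   /* prints 22         */
        return 0;
    }

A deferred signal would then be appended to the separate DEFER list discussed above.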
This possibility would be insignificant if signals only had to be deferred in order to avoid interference at signal reception. This is unfortunately not the case. Cases exist where signals have to be explicitly deferred. This will be illustrated by Figures 17, 18 and 19.
Figure 17 illustrates a process P, which is able to receive two types of signal (S1 and S2). The process P contains a network of tasks, of which the relevant part for the purpose of illustration consists of two waiting nodes (WX and WY) and four tasks (T1, T2, T3 and T4). When the process P is in waiting node WX it is waiting exclusively for signal S1, i.e. any other received signal will be discarded. When the process P is in waiting node WY it is waiting exclusively for signal S2, i.e. all other signals will be discarded. However, as soon as signal S1 has been received, i.e. while executing the tasks T1, T2 and T3, no further signal reception is possible. It is, nevertheless, possible, due to the multipriority level environment, that the execution of any of the tasks T1, T2 or T3 gets interrupted, i.e. further signals of whatever type may arrive and be duly deferred because the process P is in the RUNNING state, i.e. these signals will be scheduled after the process P has reached the WY waiting node. In this case only an S2
signal is of interest, because this is the only type of signal which may be received in waiting node WY.
Now consider Figure 18. This Figure shows the process P within its context as part of a larger structural entity (GP) which, in addition to the process
P, also contains a so called 'template definition' of a task template (TP). This task template is used for both task T2 and task T3 of the P process, i.e. the internal structure of both of these tasks is as defined by the TP template. To complicate matters further the TP template is assumed to contain an internal waiting node (WP), where either an S3 or an S4 signal may be received.
Due to the fact that the process P uses the template TP twice, the process P has two hidden waiting nodes (T2-WP and T3-WP) in addition to the two visible waiting nodes WX and WY. The full control flow representation of the relevant parts of the process P without use of a template would require all waiting nodes to be visible, seemingly in the way illustrated by Figure 19.
However, the function of process P according to Figure 19 is not equivalent with the function of process P according to Figures 17 and 18. One indication of this is that the process P according to Figure 19 is explicitly aware of the existence of the signals S3 and S4 in addition to S1 and S2, whereas the process P in Figures 17 and 18 only knows about S1 and S2.
Furthermore, if, in Figure 17 or Figure 18, a signal S2 actually occurs while the process is executing any of the tasks T1, T2 or T3, then the reception of this signal is, by default, deferred until the process has reached its next waiting node, i.e. WY. If, in Figure 19, a signal S2 occurs while the process is still executing task T1 (which is entirely possible due to the nondeterminate nature of asynchronous signals), then the signal is, by default deferred only until the process has reached its next waiting node, which in this case is T2-WP. However, in this waiting node the process is expecting only an S3 or an S4 signal but not an S2 signal. Hence, an early S2 signal would be discarded and consequently a fault would be generated. Such faults may be avoided by use of explicit DEFERS for all 'hidden' waiting nodes as illustrated by Figure 20, which shows an implementation of the relevant parts of process P according to Figure 18.
The use of explicit deferring of signals is necessary to ensure correct function, but it also significantly increases the probability of deferred signals existing within a system at any point in time, and particularly so at peak load, when processing time is at a premium. The use of explicit deferring of signals also has another adverse effect on the system. A signal buffer consists of two parts as shown by Figure 11, i.e. a signal header part with a number of standard elements (SID, SDEST, etc.) and a signal information carrying part (SINFO). Normally the signal information part is never utilized to its full extent. This means that, provided that a signal buffer is retained from the time a signal is received and the execution of the corresponding activities is initiated until the process enters the next waiting node, the unused area in a signal buffer is available as a temporary local data space. This would significantly simplify the associated functions, because temporary data space would normally be directly available, but it requires that the signal buffers are released as part of the WAIT instruction.
However, referring to Figure 18, the signal buffer used for the signal S1 should be retained until entering the waiting node WY, i.e. any local data temporarily stored in this signal buffer must be available for use in all of the tasks T1, T2 and T3. Nevertheless, when considering Figure 20 it should be obvious that the signal buffer used for S1 will be released on entering the waiting node T2-WP, which directly precludes use of the signal buffer for storage of local information on a regular basis.
The actual invention allows the advantages of asynchronous signalling between software units to be fully utilized while, at the same time, removing the types of disadvantages mentioned above.
DESCRIPTION
The invention will be described with the aid of Figures 21-45. Figure 21 shows a general overview of a Processing System according to the invention. The difference between this processing system and a conventional processing system of the type illustrated by Figure 1 is that a Master Control Unit (MCU) according to the invention has been interposed between the Memory (M) and the CPU, whereby the CPU asserts addresses to the MCU on its Address Bus (ABUS) and exchanges data with the MCU via the Data Bus (DBUS). The MCU is in its turn connected to the memory (M) via secondary Address and Data Buses (ABUS2 and DBUS2) with the necessary control signals exchanged both between the CPU and the MCU and between the MCU and the memory (M).
Figure 22 shows an overview of the main partitionings of the memory (M) required to support independent software signalling units according to the invention. Such independent software units are hereinafter referred to as 'Sofchips'. The memory is first partitioned into two major areas, which are referred to as System Memory (SM) and User Memory (UM).
The User Memory is partitioned into one logical area for each Sofchip (SCHP1, SCHP2, ... SCHPN). It is to be noted that, although the Sofchip areas in Figure 22 and in subsequent Figures are shown as contiguous areas for ease of understanding of the
invention, this is not, as such, a prerequisite for the invention.
The System Memory is partitioned into a Common Subroutine Area (CSA), a Stack Area (SA) and a Signal Buffer Area (SBA). The Common Subroutine Area contains a number of Common Subroutines (CS1, CS2, ... CSa). These subroutines may be used by any sofchip in a way equivalent with any machine instruction and may therefore be considered as extensions to the machine instruction list of the CPU. The Stack Area contains a single System Stack (SS) and a User Stack (US). The User Stack is utilized in connection with user programmed subroutine calls and temporary storage of information. The System Stack is used in connection with system interrupts and fault traps. The Signal Buffer Area contains a number of Signal Buffers (SB1, SB2, ... SBb), which are utilized for all communication between the sofchips. The number of signal buffers (b) is assumed to be dimensioned so that during all normal operating conditions there is always a free signal buffer available when required.
Figure 23 illustrates the principal internal structure of an operational sofchip. A sofchip may contain a number of processes (P1 to PP), where each process forms a concurrently operating software unit. Each process may be instantiated a number of times, individually for each process (NP1 to NPP), i.e. process P1 may be considered to form an array of NP1
instances, etc.
Each process always contains a program (PR1 to
PRp) and may, in addition, contain a data structure
(DSP1 to DSPP). The sofchip itself may also contain a common data structure (DS1), which is accessible from any of the processes belonging to the sofchip. Each data structure may contain an arbitrary mixture of subsidiary data structures (DSi1 to DSij) and single data elements (DEk1 to DEkl), both of which may form arrays with an arbitrary number of elements (Ni1 ... Nkl).
The sofchip communicates with its environment by means of a set of signals (S1 ... Sn). These signals are in reality sent and received by the program parts of the processes belonging to the sofchips with the information carried by the signals stored into and retrieved from the data elements belonging to the sofchips. It is to be noted that the processes of a sofchip can only access the data elements belonging either to the process itself or, if a common data structure (DS1) exists, to that data structure. Direct data access between sofchips is not possible.
Figure 24 illustrates an example of an actual sofchip (SCHP). This sofchip contains a single common data element (DE1) and three processes (P1, P2, P3).
Process P1 occurs only once (i.e. it is a process with a single instance) whereas processes P2 and P3 occur N times
(processes with N instances). The signals S1 to Sn sent and received by the sofchip are distributed between the processes so that process P1 sends and receives signals S1 to Si, process P2 sends and receives signals Si+1 to Sj and process P3 sends and receives signals Sj+1 to Sn.
Process P1 contains its program (PR1) and a data element (DE2), which forms an array with N individual elements. Program PR1 may access both the common data element DE1 and the DE2 array.
Process P2 contains its program (PR2) and a single data element (DE3). PR2 only accesses DE3; it is not required to access the common data element DE1.
Process P3 finally contains its program (PR3) and a data element (DE4), which forms an array with M elements. Program PR3 may access both DE4 and the common data element DE1.
Figure 25 illustrates a possible implementation of the sofchip in Figure 24 within a processing system according to the invention. Memory allocated to the sofchip is divided into two memory areas, a Program Area (PA) and a Data Area (DA).
The Program Area contains the programs (PR1, PR2, PR3) of the three processes (P1, P2, P3) belonging to the sofchip. It is thereby to be noticed that, although the processes themselves are shown to be instantiated in Figure 24, the corresponding implemented programs need
not be instantiated, because such instantiation would only mean an unnecessary replication of identical programs.
The Data Area contains the four data elements DE1, DE2, DE3 and DE4, which have been arranged contiguously for convenience. In this case all instantiations are carried over into the implementation, i.e. space is allocated for N instances of DE2 (P1.DE2(1) to P1.DE2(N)), for N instances of DE3 (P2(1).DE3 to P2(N).DE3) and for M*N instances of DE4 (P3(1).DE4(1) to P3(N).DE4(M)). When any process of the sofchip is actually executing instructions, then the CPU controls this execution through its normal register set, exemplified by the registers PC, AR, IR and DR in Figure 25. However, when the number of memory areas to be accessed during the execution of any process is considered, then it can be seen that normally not less than 8 different memory areas need to be accessed. These are:
(1) The Common Subroutine Area (CSA) for execution of any such subroutine (CSi).
(2) The System Stack (SS).
(3) The User Stack (US).
(4) The Signal Buffer (SBR) containing information regarding the last received signal.
(5) The Signal Buffer (SBS) containing information regarding the next signal to be sent.
(6) The program of the process being executed.
(7) Commonly accessible data elements within a sofchip.
(8) Individual data belonging to the actual process within a sofchip.
It is, of course, conceivable to manage all of these areas directly from the CPU. However, unless the CPU has the capability to simultaneously address at least 8 memory areas, memory addresses must continuously be recalculated or swapped, which significantly reduces not only the efficiency of the actual processor but also the reliability, because all such address calculations and swaps have to be explicitly programmed.
According to the invention all such addresses are held by the MCU in the manner which will be subsequently described, thereby freeing the CPU and the programmer from all corresponding knowledge.
Figure 26 illustrates the internal arrangement of the MCU when addressing the Common Subroutine Area (CSA) and the Stack Area (SA) in the memory (M).
Both the Common Subroutine Area and the Stack Area are accessed on a word-by-word basis. There is no need, therefore, for any sophisticated range check and/or conversion of information read from or written into these areas. This means that as far as transfer of data is concerned, data can be gated from the data bus between CPU and MCU directly to the secondary data bus between
MCU and the memory (M) and vice versa. Such gating is standard technique and therefore only indicated but not explicitly shown in Figure 26.
It is further assumed that the Common Subroutine Area and the Stack Area form a single contiguous area in the memory and that this area can be addressed by means of M address bits, where M is less than the total number of address bits (N) on the primary address bus (ABUS). Figure 26 shows an arrangement where M = N-4. One of the four bits not directly used for addressing purposes (bit N-4) is used to discriminate between access of the common subroutine or stack area, where no address or data translation is required, and access of any other area. In Figure 26 the value '0' has been arbitrarily selected to indicate direct memory access (common subroutine or stack access). In this case the three remaining bits are not used (NU) at all.
When an address is asserted on the primary address bus (ABUS) by the CPU and the discriminator bit is 0, then the N-4 least significant bits are, via the address gate (GA1) controlled by the discriminator bit, asserted on the secondary address bus (ABUS2) to the memory. Similarly, Read (R) and Write (W) signals asserted by the CPU are via the gates (GR1 or GR2) reasserted as secondary read and write signals (R2 and W2). Direct access thus bypasses the normal functions of the MCU, but is only possible for access of a limited
memory area.
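As a minimal sketch (assuming a 32-bit virtual address, which is not stated in the specification), the decode performed by the discriminator bit might look like this in C:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define ADDR_BITS 32                          /* address width N, assumed  */
    #define DISC_BIT  (ADDR_BITS - 4)             /* discriminator bit N-4     */

    /* Returns true when the virtual address selects direct (untranslated)
     * access to the common subroutine / stack area, and extracts the N-4
     * low-order bits that are passed straight through to ABUS2.           */
    static bool direct_access(uint32_t vaddr, uint32_t *maddr)
    {
        if (((vaddr >> DISC_BIT) & 1u) == 0u) {   /* discriminator = 0         */
            *maddr = vaddr & ((1u << DISC_BIT) - 1u);
            return true;                          /* bypass MCU translation    */
        }
        return false;                             /* translated / signalling   */
    }

    int main(void)
    {
        uint32_t m;
        printf("%d\n", direct_access(0x00001234u, &m));   /* 1: direct access  */
        printf("%d\n", direct_access(0x10001234u, &m));   /* 0: via the MCU    */
        return 0;
    }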
Figure 27 illustrates the principle of accessing memory areas belonging to the sofchips and to the signal buffers. In this case the discriminator bit is set to 1, thereby causing the MCU to perform an address and data translation by means of a Descriptor Table (DT), three
Arithmetic Circuits (AC1, AC2, AC3), two Decoders (MD,
XDEC), a set of Index Registers (XR0, XR1, ... XRQ), a
Mask and Shift Unit (MSU) and a Range Conversion Unit (RCU). The general principles of this address and data translation are described in detail in U.K. Patent Application 8405491 and are, as such, not part of the invention, although they form a possible base for a signalling system according to the present invention as will be described in the following.
The arrangement in Figure 27 differs from the general arrangement in the U.K. Patent Application 8405491 in two details. Firstly, the descriptor table entries have been grouped together so that all descriptors belonging to the same sofchip form single contiguous areas in the descriptor table (SCHPD1, SCHPD2, ... SCHPDN). Secondly, one of the index registers (exemplified by XR0 in Figure 27) has been given the special purpose of identifying the actual sofchip under execution by means of the starting address to the sofchip descriptor area within the descriptor table (DT). In order to speed up address translation
this index register (XR0) is directly added to the Virtual Address obtained on the address bus from the CPU by means of an extra adder circuit.
An unmodified arrangement according to Figure 27 and otherwise as described in U.K. Patent Application
8405491 may cause an efficiency problem when the programs in the CPU are able to execute on different priority levels. In this case a higher priority program will interrupt the execution of a lower priority program. The standard technique used to avoid interference between the interrupting and the interrupted program is to save all process registers on the system stack in connection with an interrupt. Thus, when the interrupting program terminates, then the registers may be restored to their values at the time of the interrupt, thereby allowing the interrupted program to resume execution as if the interrupt had not occurred at all.
The existence of an MCU with its own set of index registers (XR0 to XRQ) complicates this problem, because not only must the CPU save all its own registers, it must also in some way save the values of the index registers in the MCU. In order to avoid the need to save the MCU index registers, the set of index registers (XR0 to XRQ) may be replicated with one set of index registers for each priority level (XRS(0) to XRS(7)) as indicated in Figure 28. The actual priority level may now be provided as part of the Virtual Address asserted on the primary
address bus (ABUS) from the CPU. Figure 28 shows an arrangement whereby the three most significant bits of the Virtual Address indicate the current CPU priority and are used to select one of the index register sets when the discriminator bit in the virtual address is '1'.
It is to be noted that the above arrangement not only allows a single processor to execute programs on different priority levels without interfering with each other, it also allows several CPUs to be connected to a single MCU and execute programs independently of each other, provided that each processor executes on a unique priority level.
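A sketch of this selection, under the same assumed 32-bit address width and an assumed register-set size, might read:

    #include <stdint.h>

    #define ADDR_BITS    32                /* assumed address width N          */
    #define NUM_SETS     8                 /* XRS(0) ... XRS(7)                */
    #define REGS_PER_SET 16                /* Q+1 index registers, assumed     */

    static uint32_t xrs[NUM_SETS][REGS_PER_SET];  /* replicated index registers */

    /* Selects the index register set from the three most significant bits
     * of the virtual address (the current CPU priority level).             */
    static uint32_t *select_set(uint32_t vaddr)
    {
        unsigned level = (vaddr >> (ADDR_BITS - 3)) & 0x7u;
        return xrs[level];
    }

    int main(void)
    {
        uint32_t *set = select_set(0xB0001000u);  /* priority bits 101 = 5     */
        set[0] = 42;                              /* XR0 of XRS(5)             */
        return (int)xrs[5][0] - 42;               /* 0 when selection works    */
    }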
In order to cater for the need of signalling between the processes of the sofchips and the need to permit all relevant program and data areas to be accessed the index registers within each index register set XRS(i) of the MCU are given further dedicated use as exemplified in Figure 29.
One of the index registers is dedicated to the identification of the actual sofchip. In order to indicate that any of the index registers could have been selected for this purpose, this index register is called the SofCHiP Register (SCHPR). The SCHPR register forms the base for points (6), (7) and (8) in Figure 25. Figure 27 and Figure 29 show an arrangement where index register XR0 has been selected as the sofchip register and whereby the sofchip identification is performed by
means of the starting address to the sofchip descriptor area within the descriptor table (DT).
A second index register is dedicated to the identification of the actual received signal. This register is called the Received Signal Register (RSR) and corresponds with point (4) in Figure 25. In Figure 29 index register XR1 has been selected as the received signal register.
Two further index registers are dedicated to the control of signal queues as will be described later.
These signal queues are of the FIFO type (First-In-First-Out) and are controlled by a Signal Queue Head Register (SQHR) and a Signal Queue Tail Register (SQTR). Index registers XR2 and XR3 are dedicated for this purpose in Figure 29. It is to be noted that neither of these two registers corresponds with any of the points in Figure 25. A fifth index register is dedicated to the identification of the signal next to be sent. This register is called the Send Signal Register (SSR) and corresponds with point (5) in Figure 25. Index register XR4 is dedicated to this purpose in Figure 29.
A sixth index register is dedicated to the actual process instance index. This index register is called the Instance Index Register (IXR) and is used in conjunction with the SCHPR register when accessing any data elements belonging to a process instance. Index register XR5 is selected for this purpose in Figure 29.
The remaining index registers (XR6 to XRQ) may be used without any dedicated purpose.
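For reference, the dedicated assignments of Figure 29 can be summarised as a C enumeration; the concrete numbering follows that example only and is not mandatory for the invention:

    /* Dedicated index registers within each set XRS(i), following the
     * example assignment of Figure 29 (the numbering is an assumption). */
    enum mcu_index_register {
        SCHPR = 0,   /* SofCHiP Register: descriptor base of the sofchip   */
        RSR   = 1,   /* Received Signal Register                           */
        SQHR  = 2,   /* Signal Queue Head Register                         */
        SQTR  = 3,   /* Signal Queue Tail Register                         */
        SSR   = 4,   /* Send Signal Register                               */
        IXR   = 5,   /* Instance Index Register                            */
        GIR1  = 6    /* first general-purpose index register (XR6 ... XRQ) */
    };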
Figure 30 illustrates the essential additional circuitry required for a signalling system according to the invention and the principle for interfacing the new circuitry to existing MCU circuitry. This additional circuitry consists of a Microprogram Unit (MPU), an Interrupt Controller (ICNT) and a set of gates (GI0 to GI7) and comparator circuitry (CA0, CB0, ... CA7, CB7, STCD).
The MPU interfaces to the main memory (M) via the secondary address bus (ABUS2), i.e. the MPU may independently assert addresses to the main memory. The MPU also controls the readout of information from the memory via MSU and is able to interface directly to the RCU. The MPU is also able to independently control the task level via the Level Decoder (LDEC) as will be later explained. The MPU is itself controlled from the Interrupt Controller (ICNT) and may also generate interrupt signals (INT) to the CPU.
The relevant parts of the MPU for the purpose of the invention are the Micro Program Memory (MPM), the Micro Program Descriptor Table (MPDT) and two internal registers, an Idle Queue Header register (IQH) and an Idle Queue Tail register (IQT). The use of these will be explained in detail in the following. The Micro Program Memory contains a set of microprograms (MP1, MP2, ...
MPP), which control the use of the MCU according to the invention. These microprograms are described in detail in Figures 36, 37, 38, 39, 40, 41, 44 and 45.
The Interrupt Controller is a standard interrupt controller, such as exemplified by the INTEL 8259A Programmable Interrupt Controller, and is as such a subject of the invention only as far as its interconnections to the other relevant parts of the MCU are concerned.
The Interrupt Controller reacts on a number of interrupt signals. These interrupt signals are either generated by the above mentioned gate/comparator circuitry, as will be explained below, or generated by means of the Mode Decoder (MD) for particular combinations of bits on the primary address bus from the CPU when the previously mentioned discriminator bit is set to '1', as shown in Figure 30. It is to be noted that neither the number nor the particular bits chosen to indicate the corresponding mode is critical for the invention; bits 12-15 on the address bus in Figure 30 have been chosen only as an example. Each set of index registers (XRS(0) to XRS(7) in Figure 30) contains a Received Signal Register (RSR) and a Signal Queue Header Register (SQHR), which in the example shown in Figure 29 have been assigned as index registers 1 and 2 (XR1 and XR2). The contents of the RSR register belonging to each group of index registers (XRS(i)) are compared bit by bit, by means of an associated comparator circuit (CAi), with a predefined Stop Code (STCD), whereby the comparator circuit generates an output signal for any difference between the contents of the RSR register and the stop code value. Similarly, the contents of the SQHR register are compared with the stop code value by means of a second associated comparator circuit (CBi), whereby the comparator generates an output signal for any inequality.
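Combined with the gating arrangement described in the following paragraph, the interrupt condition for a register set reduces to a simple predicate. The C sketch below is illustrative only; the stop-code value is an assumption:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define STCD 0xFFFFFFFFu        /* predefined stop code value, assumed */

    /* Interrupt request for register set i: a signal is queued (SQHR
     * differs from the stop code) while no signal is currently received
     * on that level (RSR equals the stop code).                           */
    static bool interrupt_request(uint32_t rsr, uint32_t sqhr)
    {
        return (rsr == STCD) && (sqhr != STCD);
    }

    int main(void)
    {
        printf("%d\n", interrupt_request(STCD, 7u));   /* 1: signal queued   */
        printf("%d\n", interrupt_request(3u, 7u));     /* 0: already running */
        return 0;
    }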
The outputs of each set of comparator circuits (CAi and CBi) are by means of a gating arrangement (GIi) combined into a single interrupt signal (IRi), whereby an interrupt signal will be generated when the SQHR register differs from the stop code, provided that the RSR register is simultaneously equal to the stop code. Each set of index registers (XRS(0) to XRS(7)) is thus able to generate one interrupt signal, whereby the Interrupt Controller (ICNT) resolves the internal priority between these internally generated interrupt signals and the interrupt signals generated from the CPU by means of the Mode Decoder (MD). It is again to be noted that the number of index register sets is not critical for the invention. Thus, although Figures 28 and 30 show a total of 8 index register sets, any other number is possible. For any process, for which the program belonging to that process is currently being executed by a CPU, the MCU has one of its sets of index registers allocated to
the execution of that process program. According to the invention, the only reason for scheduling and execution of a program is the reception of a signal. In this respect the signalling system according to the invention is similar in kind to the previously described signalling systems, i.e. the signals are assigned different priorities and signal transfer takes place by means of a set of signal buffers.
Similar to the signalling system disclosed in the Belgian patent 876025 each process is controlled by means of a Program Pointer (PP), which is not directly accessible to the programmer, and which contains two components, an Action Indicator (AI) and a Pointer (PTR) as illustrated by Figure 31. However, unlike this known signalling system, the Pointer is not restricted to instruction addresses, and the Action Indicator has a total of four values instead of only two. The four values of an Action Indicator according to the invention are as follows: WAIT Execution is currently suspended until the next signal is received. In this case the Pointer indicates the instruction address to the next waiting node. TRANSIT The process is in transit on the indicated level of execution and blocked from signal reception on this level. In
this case the Pointer identifies the signal buffer containing the signal which caused the transition.
NORMAL The associated signal is currently under analysis. The Pointer value is irrelevant.
DEFERRED The associated signal is currently deferred. The Pointer value is irrelevant.
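Expressed as data declarations, the Program Pointer of Figure 31 might be sketched in C as follows; the field widths are assumptions only:

    #include <stdint.h>

    /* The four Action Indicator values used by the invention. */
    enum action_indicator {
        AI_WAIT,       /* PTR holds the address of the next waiting node   */
        AI_TRANSIT,    /* PTR holds the buffer of the signal in transit    */
        AI_NORMAL,     /* signal under analysis; PTR irrelevant            */
        AI_DEFERRED    /* signal deferred; PTR irrelevant                  */
    };

    /* Program Pointer element, not directly accessible to the programmer. */
    struct program_pointer {
        enum action_indicator ai;   /* Action Indicator (AI)               */
        uint32_t ptr;               /* instruction address or buffer identity */
    };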
Similar to the previously described signalling systems, the signal buffers according to the invention consist of a Signal Header part (SH) and a Signal Information part (SINFO) as illustrated by Figure 32. The significant difference between a signal buffer according to the invention and previously known arrangements is that the Signal Header part, in addition to the standard Signal Identity element (SID) and possible standard Link and Priority Indicator elements (LE, PL), contains a Program Pointer element (PP) with the same internal structure as the process program pointer shown in Figure 31 and used in a similar way, and that the Signal Destination element (SDEST) is structured in a particular way, which directly reflects the sofchip structure.
The invention assumes that a standard sofchip structure has been defined. An example of such a standard
sofchip structure is illustrated by Figure 24. This standard sofchip structure assumes that each sofchip may contain two types of processes, unreplicated processes and replicated processes, whereby all replicated processes have exactly the same number of instances (N). If it is assumed that all sofchips within a system are assigned a unique numerical identity code starting from one, then each sofchip may be identified in toto by means of its identity number or Sofchip Index (SCHPX).
Secondly, because all replicated processes have exactly the same number of instances (N), the actual instance may be identified by a number in the value range 1...N. This index is defined as the Instance Index (IX). Furthermore, the value 0 may be used as an Instance Index value to indicate nonreplicated processes.
Finally, each process within a sofchip may be given a number, which is unique within the sofchip or, alternatively, within the group of either nonreplicated or replicated processes belonging to the sofchip. This number may now be used as a Process Index (PX) to identify the actual process.
With such a standard sofchip structure as described above each process may uniquely be identified by means of three components, i.e. the Sofchip Index (SCHPX), the Instance Index (IX) and the Process Index (PX) shown as the three components of the SDEST element
of the Signal Header part of a signal buffer in Figure 32.
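In C terms, a signal buffer with the structured SDEST element of Figure 32 might be declared as follows; the field widths, the SINFO size and the flat representation of the Program Pointer are assumptions for illustration only:

    #include <stdint.h>

    /* Structured Signal Destination (SDEST) reflecting the standard
     * sofchip structure.                                                */
    struct sdest {
        uint16_t schpx;     /* Sofchip Index                             */
        uint16_t ix;        /* Instance Index, 0 for unreplicated        */
        uint16_t px;        /* Process Index within the sofchip          */
    };

    /* Signal buffer: Signal Header part (SH) followed by the Signal
     * Information part (SINFO).                                          */
    struct signal_buffer {
        uint32_t     le;        /* link element (LE)                      */
        uint16_t     pl;        /* priority indicator (PL)                */
        uint16_t     sid;       /* signal identity (SID)                  */
        uint32_t     pp_ai;     /* Program Pointer: Action Indicator      */
        uint32_t     pp_ptr;    /* Program Pointer: Pointer, cf. Figure 31 */
        struct sdest sdest;     /* structured destination                 */
        uint32_t     sinfo[16]; /* signal information part                */
    };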
The standard sofchip structure in Figure 24 and the associated structuring of the process identity are only one example of possible standard sofchip structures, i.e. the invention is not restricted to this particular standard sofchip structure, although this structure will be assumed for the rest of this description.
Figure 33 illustrates the total implementation structure of a sofchip according to Figure 24. In addition to the program and data areas already shown in Figure 25, Figure 33 also shows the Program Pointers required for the different processes as well as an assumed Common Process Routine Area (CPR) within the sofchip Program Area (PA). The nonreplicated process P1 has a single program pointer (P1.PP), whereas each instance (i) of the processes P2 and P3 has its own program pointer (P2(i).PP, P3(i).PP). The Common Process Routine Area contains subroutines, which belong to the sofchip, i.e. they can only be called by programs belonging to the sofchip itself. Figure 33 also shows a possible descriptor table structure associated with the sofchip, which encompasses both a general MCU descriptor table (DT, see Figure 27) and a special MPU descriptor table (MPDT, see Figure 30).
The MPU Descriptor Table (MPDT) contains the Signal Buffer Base Address (SBAB), i.e. the address in
the main memory (M) where the first signal buffer is located. SBAB is used internally within the MPU to calculate individual signal buffer addresses as will be described in connection with the individual microprograms below.
The MPU Descriptor Table (MPDT) further contains a Sofchip Descriptor Table area (SCHPDT) where, for each sofchip, two pointers are held. The first pointer identifies a secondary descriptor area within the MPU Descriptor Table itself (SCHPPDk). This secondary descriptor area holds the starting addresses within the Program Area of each process within the sofchip (P1PA, P2PA, P3PA) as well as the base addresses to the program pointers for the processes (P1PPA, P2PPA, P3PPA). The second pointer identifies the starting address within the general Descriptor Table (DT) of the sofchip descriptor (SCHPDk).
The sofchip descriptor table (SCHPDk) is shown to contain a descriptor (PD) to the Program Area (PA), a descriptor for each accessed data element (DE1D, DE2D, DE3D, DE4D) belonging to the process as well as a descriptor for accessed information in sent or received signals (RSD-, SSD-). The internal structure and use of these descriptors are as described in U.K. Patent Application 8405491.
The CPU executes the program of each process instruction by instruction as in any normal processing
system. As long as no Store (SD) or Load (LD) instructions requiring MCU address translation (discriminator bit N-4 in Figure 30 set to 1) and with an address modifier code (bits 12-15 in Figure 30) generating an interrupt to the MCU Interrupt Controller (ICNT) are executed, the processing proceeds normally. However, by performing a Load or Store instruction with the discriminator bit set and with the address modifier generating an MPU interrupt, one of the special microprograms (MP1, MP2, ... MPP in Figure 30) may be invoked and independently executed by the MPU. These microprograms include, but are not restricted to, the following microprograms:
(1) Signal Discard
(2) Waiting Node Entry
(3) Signal Buffer Release
(4) Signal Buffer Seize
(5) Signal Dispatch
Figure 34 shows a typical arrangement of the program of a process in the main memory (M) according to the invention. This program consists of two kinds of instruction sequences. The first kind is a normal instruction sequence representing the execution of tasks which take a process from one waiting node to another. This type of program is therefore defined as a Task Program. Task Programs are typically terminated by an instruction
or an instruction sequence causing the process to enter the next waiting node. According to the arrangement in Figure 34 this is performed by means of a standard Common Subroutine (CSWAIT). On calling this subroutine (as on calling any subroutine) the return address from the subroutine (address of the Call instruction + 1) is deposited topmost on the currently used stack. The CSWAIT subroutine simply pops the return address from the top of the stack into the DR register and then issues a Store instruction with the value to be stored in the DR register and with the address modifier indicating 'WAITMODE'. When the 'WAITMODE' code is decoded by the Mode Decoder (MD), this is assumed to cause the actual value (i.e. the return address) to be transferred to index register XR6 (=GIR1 in Figure 29) and independently to generate an interrupt to the MPU, causing the Waiting Node Entry microprogram to be executed. This effectively dissociates the CPU from the process, as will be described in connection with this microprogram. Because execution of a task program in the CPU is initiated by means of an interrupt, the only remaining instruction in the CPU is a conventional Interrupt Return instruction (IRET).
The second kind of instruction sequence in the program of a process is associated with signal reception in a waiting node and is not a proper program (i.e. an executable sequence of instructions) at all. The first
memory word of such a sequence, which follows a Call CSWAIT instruction according to the invention, contains two items of information, i.e. the Task Level (TL) of the waiting node and the Number of Signals (SIGNUM) expected in that waiting node.
The Task Level is used to indicate whether a waiting node occurs directly within a process (subsequently coded as task level '0') or within a procedure called from a process (subsequently coded as task level '1') or from another procedure (subsequently coded as task level '2', '3', etc.). The significance of the Task Level is that it indicates whether an unexpected signal is to be discarded (task level '0') or deferred (all other task levels). Thus, in an implementation of the program of Figure 18, the waiting nodes WX and WY would have task level '0' and the waiting node WP task level '1'. It is to be noted that explicit 'DISCARD' and 'DEFER' codes as illustrated by Figure 20 could have been used to discriminate between discarding and deferring of signals. Use of Task Levels would still be necessary due to the nature of the invention, i.e. use of explicit 'DISCARD' and 'DEFER' codes would be an additional unnecessary overhead.
The memory word containing the Task Level, Signal Number information is followed by a signal reception table with the number of entries given by the Signal Number information (SIGNUM). Each such entry consists of
two memory words, where the first word of any entry (i) contains the actual Signal Identity (SIDi) in binary code and the second word contains the address (TAi) within the Program Area of a sofchip to the first instruction of the corresponding task program.
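The memory layout following a Call CSWAIT instruction might thus be described by the following C declarations; the field widths are assumptions, since the specification only fixes the word-level layout:

    #include <stdint.h>

    /* One entry of the signal reception table that follows a waiting node. */
    struct reception_entry {
        uint16_t sid;      /* expected Signal Identity (SIDi)                */
        uint16_t ta;       /* Program Area address of the task program (TAi) */
    };

    /* Layout after a Call CSWAIT instruction: one word holding the Task
     * Level and the number of expected signals, then SIGNUM two-word
     * entries.                                                              */
    struct waiting_node {
        uint8_t  tl;                      /* Task Level: 0 = discard,         */
                                          /* >0 = defer unexpected signals    */
        uint8_t  signum;                  /* number of expected signals       */
        struct reception_entry sig[];     /* SIGNUM entries follow            */
    };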
Figure 35 illustrates a complete implemented sofchip, which corresponds with the logic function of Figure 13. This program contains three waiting node sequences as marked (instructions 2-6, 12-16 and 22-26) and a total of seven task sequences (instructions 0-1, 7-11, 17-21, 27-31, 32-33, 34-35 and 36-37). According to the invention the task sequences are executed by the CPU, whereas the waiting node sequences are handled completely autonomously by the MPU within the MCU. As shown in Figure 35, certain instructions may by themselves cause the MPU to perform extra actions.
This is, for instance, the case when a signal is to be sent. Signals can only be sent by a process executing a particular task sequence. It is assumed that this execution is performed on a particular priority level and that a set of index registers as exemplified in Figure 29 has been assigned to the process in connection with the reception of the signal initiating the task execution. Signals are always assumed to be transferred from the sending process to the receiving process by means of a Signal Buffer, which is assumed to be a contiguous,
identifiable memory area within the main memory (M). In order to be able to transfer information from the sending process into a signal buffer, the corresponding memory area must be accessible from the program. This postulates some kind of access mechanism as indicated by (5) in Figure 25. According to the invention this access mechanism is provided by the use of one of the index registers of the MCU as a dedicated Send Signal Register (SSR) as indicated by Figure 29. The Signal Buffer Area (SBA) in the main memory (see Figure 22) is postulated to contain a sufficient number of individual signal buffers so that a free buffer is always obtainable when required under all normal operating conditions. What this means is that the number of signal buffers must be so dimensioned that the probability of not finding a free signal buffer is less than 0.000001.
All free signal buffers are assumed to be organized as a conventional FIFO (First-In-First-Out) queue. This requires the Head and Tail ends of the queue to be held at all times. According to the invention all seizing and releasing of signal buffers is performed under control of the MPU. For this reason the control registers holding the head and tail ends of the idle queue are indicated as internal registers (IQH and IQT) of the MPU in Figure 30. It is to be noted that organizing idle signal buffers in a FIFO queue is only
one possible way of organizing these buffers and is only used as an example to illustrate the use of the invention. The invention could equally well be used with any other efficient method to seize and release idle signal buffers.
Signal sending is actually performed in three phases. In the first phase a signal buffer is seized from the set of idle signal buffers. In the second phase the sending process transfers information to the seized signal buffer. Finally, in the third phase the signal is dispatched to the receiver. These three phases are exemplified by instructions 7-10 in Figure 35, where instruction 7 contains a 'SEIZE' instruction, instructions 8 and 9 illustrate the loading of information into the seized signal buffer and instruction 10 executes a 'DISPATCH' instruction.
A 'SEIZE' instruction according to the invention consists of an instruction, which asserts a 'virtual address' on the primary address bus (ABUS) in Figure 30, such that the discriminator bit (N-4) is set to '1' and the address mode (bits 12-15) causes the Mode Decoder (MD) to output an interrupt signal to the MPU. It is also assumed that the actual priority level on which the CPU is executing is indicated (bits N-1, N-2, N-3). On reception and acknowledging this interrupt signal the MPU will perform a microprogram, a particular arrangement of which is shown in Figure 36.
The first microinstruction of this microprogram tests whether a free signal buffer is available by comparing the contents of the Idle Queue Head register (IQH) with the unique code value used as stop code (STCD) and jumps to microinstruction 7 if no free signal buffer is available.
When a free signal buffer is available, the index register group (XRS(L)) associated with the actual priority level (L) on which the CPU is executing is identified by multiplying (microinstruction 2) the priority level value (L) by the number of index registers (Q+1 in Figure 29). The resulting value identifies index register XR0 within the actual index register group XRS(L). The value contained in the IQH register, which identifies the first idle signal buffer, is now transferred to index register XR4 (microinstruction 3) which, according to Figure 29, is used as the Send Signal Register (SSR). The memory address to the signal buffer in the main memory is calculated by means of the Signal Buffer Area Base address (SBAB in Figure 33), to which the index to the actual signal buffer (IQH) multiplied by a constant (CSB) giving the number of words for a single signal buffer is added (microinstruction 4). The contents of the Link Element (LE) of that signal buffer, addressed by a constant and known offset (CLE) from the beginning of the signal buffer, are transferred (microinstruction 5) to the Idle Queue Head register
(IQH), whereby the seized signal buffer is unlinked from the idle queue. Thereafter the microprogram execution is terminated (microinstruction 6).
If no free signal buffer was available the MPU generates a fault interrupt signal to the CPU
(microinstruction 7) whereafter the microprogram execution is terminated (microinstruction 8).
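The SEIZE microprogram of Figure 36 might be paraphrased in C as follows. The memory model, the register layout, the constants (CSB, CLE) and the stop-code value are assumptions for illustration; only the sequence of numbered steps follows the description above.

    #include <stdint.h>
    #include <stdio.h>

    #define STCD   0xFFFFFFFFu    /* stop code value, assumed               */
    #define CSB    32u            /* words per signal buffer, assumed       */
    #define CLE    0u             /* offset of the link element, assumed    */
    #define NREGS  16u            /* index registers per set (Q+1), assumed */

    static uint32_t mem[65536];   /* main memory, word addressed            */
    static uint32_t xr[8 * NREGS];/* index register sets, back to back      */
    static uint32_t iqh = STCD;   /* Idle Queue Head register               */
    static uint32_t sbab = 0x100; /* Signal Buffer Area Base address        */

    /* SEIZE: unlink the first idle signal buffer and place its identity in
     * the Send Signal Register (XR4) of the set belonging to the CPU
     * priority level L; returns 0 on success, -1 on the buffer-exhausted
     * fault (microinstructions 7 and 8).                                   */
    static int mp_seize(unsigned level)
    {
        if (iqh == STCD)                       /* 1: no free signal buffer  */
            return -1;                         /* 7: fault interrupt to CPU */

        uint32_t base = level * NREGS;         /* 2: locate set XRS(L)      */
        xr[base + 4] = iqh;                    /* 3: SSR := first idle buf  */
        uint32_t addr = sbab + iqh * CSB;      /* 4: buffer memory address  */
        iqh = mem[addr + CLE];                 /* 5: unlink from idle queue */
        return 0;                              /* 6: done                   */
    }

    int main(void)
    {
        iqh = 3;                               /* buffer 3 heads the idle queue */
        mem[sbab + 3 * CSB + CLE] = STCD;      /* and is the only idle buffer   */
        printf("seize -> %d, SSR = %u\n", mp_seize(2), (unsigned)xr[2 * NREGS + 4]);
        return 0;
    }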
After termination of the 'SEIZE' microprogram the
XR4 index register of the MCU associated with the current level (L) on which the CPU is executing contains the identity of the seized signal buffer. This index register may now be used in the normal fashion according to U.K. Patent Application 8405491 to transfer information from the CPU into the signal buffer through the MCU. This is exemplified by instruction 8 of Figure 35, which causes the constant value corresponding to 'ONE' to be transferred into the signal identity element (SID), and by instruction 9, which causes the contents of the data element 'DATA' to be copied into the first information element (SDATA1) of the signal information part (SINFO) of the signal buffer. Other elements of the signal buffer, for instance the SDEST and PL elements in Figure 32, may be set in a similar manner.
When all information has been transferred to the actual signal buffer, then the signal buffer is dispatched by means of a 'DISPATCH' instruction. The 'DISPATCH' instruction is, like the 'SEIZE' instruction
performed by a microprogram in the MPU invoked by an interrupt via the virtual address asserted on the primary address bus by the CPU. A particular arrangement of such a microprogram is illustrated by Figure 37. The microprogram assumes that sent signals are queued in priority order, whereby each signal priority is assigned its own queue. This queue must be of the FIFO type in order to guarantee the order between different signals. It is therefore assumed that each group of index registers contains two dedicated registers, SQHR and SQTR as illustrated by Figure 29 to manage these signal priority queues. The principal function of the dispatch signal microprogram is therefore to insert the signal identified by the SSR register (See Figure 29) of the index register group associated with the currently executing program in the CPU into the signal priority queue indicated by the priority of the signal itself.
The signal dispatch microprogram starts by identifying (instruction 1) the index register group XRS(L) associated with the actual priority level (L) on which the CPU is executing in the same way as in the 'SEIZE' instruction. Thereafter (instruction 2) the memory address of the actual signal buffer is calculated by adding the signal buffer identity in XR4 of the actual index register group multiplied by the constant (CSB) to the Signal Buffer Area Base address (SBAB). The priority of the actual signal (PL) may now be read
(instruction 3) from the signal buffer (See Figure 32) by reading the contents of the memory word addressed by the constant offset value (CPL) from the start of the actual signal buffer. It is to be noted that the signal priority could alternatively have been provided as part of the virtual address on the primary address bus, in which case the microprogram would have to write the actual priority value into the signal buffer instead. Regardless of how the signal priority is actually obtained it is now used to identify (instruction 4) a second index register group XRS(PL) associated with the actual signal priority in the same way as the original index register group.
At this stage (instruction 5) all interrupts to the MPU within the MCU are disabled in order to avoid interference between different microprograms, whereafter the link element within the signal buffer according to Figure 32 is set to the stop code value (STCD) by writing this value into the memory word addressed by the constant offset (CLE) from the start of the signal buffer
(instruction 6), thereby indicating that the actual signal buffer will be the last in any queue it is subsequently to be inserted in.
The contents of index register XR2 of the index register group XRS(PL) identified by the signal priority (PL) is now tested for equality with the stop code (STCD), i.e. for an empty signal priority queue
(instruction 7), in which case the microprogram execution continues from instruction 14.
Otherwise (i.e. the signal priority queue already contains at least one signal) the last signal buffer in the queue is identified by the SQTR register (See Figure 29), which is represented by the XR3 register of the register group associated with the signal priority (PL). The address to this last signal buffer in the queue is calculated in the same way as before (instruction 8) whereafter this signal buffer is linked to the signal buffer to be dispatched by writing the contents of the index register XR4 of the index register group associated with the priority level (L) of the currently executing program in the CPU into the memory word addressed by the fixed offset value (CLE) from the starting point of the last signal buffer (instruction 9). The contents of said index register XR4 is thereafter also transferred into index register XR3 of the index register group associated with the signal priority (PL) of the dispatched signal (instruction 10), thereby indicating that the dispatched signal buffer is now the last signal in the signal priority queue. The above mentioned index register XR4 is now reset to the stop code (STCD), thereby preventing the sending process from further accesses of the signal buffer of a dispatched signal (instruction 11). Finally the MPU interrupts are again enabled (instruction 12) and the microprogram execution terminated (instruction 13).
If the signal priority queue was previously empty, then the actual signal buffer to be dispatched is inserted both as the last signal in the signal priority queue (instruction 14) by transferring the contents of the index register XR4 of the index register group associated with the actual execution level (L) within the CPU into the XR3 register associated with the actual signal priority (PL), and as the first signal in the signal priority queue (instruction 15) by transferring the same XR4 contents into the XR2 register associated with the actual signal priority (PL). The microprogram is thereafter (instructions 16, 17, 18) terminated in the same way as in the first case.
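The complete 'DISPATCH' insertion, covering both the empty and the nonempty queue case, may be sketched in C as follows; SQHR and SQTR are modelled as plain per-priority variables and the interrupt disabling/enabling steps are omitted for brevity. The constants are the same assumptions as in the previous sketch.

```c
#include <stdint.h>

#define STCD   0xFFFFu
#define CSB    16u
#define CLE    0u
#define MAX_PL 8u      /* number of signal priority levels (assumed) */

extern uint32_t main_memory[];
extern uint32_t SBAB;
extern uint32_t SQHR[MAX_PL];    /* per-priority queue head (XR2 of XRS(PL)) */
extern uint32_t SQTR[MAX_PL];    /* per-priority queue tail (XR3 of XRS(PL)) */

/* Insert the signal buffer 'ssr' (the seized buffer held in XR4) into the
   FIFO queue of its own priority 'pl'. */
void dispatch_signal(uint32_t ssr, uint32_t pl)
{
    uint32_t addr = SBAB + ssr * CSB;
    main_memory[addr + CLE] = STCD;          /* the new buffer ends its queue      */

    if (SQHR[pl] == STCD) {                  /* empty queue: buffer is head + tail */
        SQHR[pl] = ssr;
        SQTR[pl] = ssr;
    } else {                                 /* nonempty: link behind current tail */
        main_memory[SBAB + SQTR[pl] * CSB + CLE] = ssr;
        SQTR[pl] = ssr;
    }
}
```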
When a signal is inserted into a previously empty signal queue, then the associated gate/comparator arrangement (GIi, CAi, CBi) causes an internal interrupt signal (IRA) to be issued to the Interrupt Controller (ICNT) as illustrated by Figure 30. This interrupt signal eventually causes the Interrupt Controller to generate a signal interrupt to the MPU, whereby a Signal Interrupt Microprogram according to the invention is invoked. A possible arrangement of such a signal interrupt microprogram is shown in Figure 38a-g. This signal interrupt microprogram is executed completely independently of any program that is simultaneously being executed in any connected CPU. The signal interrupt microprogram starts
(instruction 1) by identifying the actual index register set XRS(L) associated with the priority (L) of the signal which caused the signal interrupt. Thereafter the contents of the SQHR register (See Figure 29) are copied into the RSR register of this index register set
(instruction 2), thereby causing the actual interrupt signal generated by the gate/comparator arrangement according to Figure 30 to be masked. The address to the signal buffer is now calculated in the already described way (instruction 3), whereafter the contents of the link element of the actual signal buffer (See Figure 32) are transferred into the SQHR register (instruction 4), thereby unlinking the signal buffer from the signal priority queue. The link element is now reset to the standard stop code (instruction 5).
At this point the signal buffer containing information about the received signal is held only by the RSR register (XR1). It is also to be noted that if the signal was the only signal in the signal priority queue, then the link element in the associated signal buffer contains the standard stop code. Resetting the SQHR register as described above will therefore cause the interrupt signal (IRL) to disappear. In the case that the signal priority queue contained more than one signal the SQHR register will differ from the standard stop code, i.e. an interrupt signal will be generated as soon as the RSR register is reset to the standard stop code.
At this point no identification has yet been made of the process to which the signal contained in the identified signal buffer is directed. The identity of this process is held by the Signal Destination element within the signal buffer, which is assumed to be structured as indicated by Figure 32. The MPU next
(instruction 6) reads all three components from the signal buffer by reading the required number of memory words from the main memory addressed by a constant offset (CDEST) from the start of the actual signal buffer.
The Sofchip Index part (SCHPX) is now used to identify the entry belonging to the sofchip in the Sofchip Descriptor Table within the MPU Descriptor Table (MPDT) shown in Figure 33, from which the starting point to the actual Sofchip Descriptor area in the main MCU Descriptor Table (DT) is copied into the Sofchip Register (SCHPR), which is represented by index register XR0 within the index register group XRS(L) associated with the priority of the present signal (instruction 7) as shown by Figure 29. Secondly (instruction 8), the Instance Index part of the signal destination (IX) is copied into the Instance Index Register (IXR), which is represented by the XR5 register within the same index register group. It is to be noted that all transfers of information to index registers are performed via the Range Check and Conversion Unit of the MCU, whereby it can be guaranteed that all values retained by any index register fall
within their legal value ranges in the manner described in U.K. Patent Application 8405491.
The address to the program pointer (PP) can now be calculated by means of the SCHPDT and SCHPPDk tables within the MPDT of the MPU as shown in Figure 33 and utilizing the Process Index part (PX) of the signal destination (instruction 9). It is to be noted that the information contained in the SCHPPDk table for sofchip (k) only gives the starting address to the area in the main memory, where the program pointers of all instances of the actual process are located (P1PPA, P2PPA, P3PPA). Therefore, in order to obtain the address to the program pointer (Px(IX).PP) belonging to the actual instance of the process, the value of the Instance Index must be added to this starting address (instruction 10). It is thereby to be observed with reference to Figure 24, that for nonreplicated processes the original Instance Index value is 0 and for replicated processes a value in the range 1 to N according to the previously described convention. However, when the Instance Index is stored into the IXR register (XR5), then a range conversion will automatically take place in the way described by U.K. Patent Application 8405491, whereby the instance index value for nonreplicated processes will remain 0 but for replicated processes be 'normalized' within the range 0 to (N-1). The allocation of the program pointers in Figure 33 conforms with this principle.
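The destination decoding of instructions 6-10 may be illustrated by the following C sketch; it flattens the two-level table lookup of Figure 33 into a single assumed array and keeps only the instance index normalization exactly as described.

```c
#include <stdint.h>

typedef struct {
    uint32_t schpx;   /* sofchip index                                    */
    uint32_t px;      /* process index within the sofchip                 */
    uint32_t ix;      /* instance index: 0 nonreplicated, 1..N replicated */
} sig_dest;

/* Assumed flat table: for each sofchip and process, the start address
   (PxPPA) of the program pointer area of that process's instances. */
extern uint32_t sofchip_pp_area[][8];

/* Returns the main memory address of Px(IX).PP for the destination. */
uint32_t program_pointer_address(sig_dest d)
{
    uint32_t pp_base = sofchip_pp_area[d.schpx][d.px];
    uint32_t ixr = (d.ix == 0u) ? 0u : d.ix - 1u;   /* normalize 1..N to 0..(N-1) */
    return pp_base + ixr;                           /* instructions 9-10          */
}
```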
For any signal interrupt it is always assumed that the actual task level (process level, procedure level 1, procedure level 2, etc) is the process level.
In order to indicate this an internal MPU counter (TL) relevant only within this microprogram is set to 0
(instruction 11). Thereafter all MPU interrupts are disabled (instruction 12) to prevent interference from other microprograms within the MPU and the actual value of the process program pointer is read (instruction 13). The Action Indicator (AI) component of the program pointer as shown by Figure 31 is now tested (instructions 14 and 15). For processes only the two values WAIT (exemplified by the absolute value '0') and TRANSIT (exemplified by the absolute value '1') are legal. If the process was in a waiting node (WAIT) when the signal interrupt occurred, then execution of the microprogram is continued from instruction 22 and if the process was in transit between waiting nodes (TRANSIT), then execution of the microprogram is continued from instruction 53.
The case where the process is neither in a waiting node nor in transit between waiting nodes is an error case and causes an automatic restart of the associated process. Each process has a restart position, which is identified by the appropriate entry (P1PA, P2PA or P3PA) within the SCHPPDk table for sofchip (k) as shown by Figure 33. In order to cause a process to be
restarted, the restart address is transferred to index register XR6 (instruction 16). The program pointer (PP) belonging to the actual signal buffer as shown by Figure 32 is thereafter (instruction 17) set to the compound value ('2,0'), which means that the Action Indicator component (AI) of the program pointer is set to the value '2' indicating a 'NORMAL' signal undergoing analysis, and the pointer (PTR) part of the program pointer reset to zero (don't care value). The program pointer of the actual process is now set to a compound value
(instruction 18), where the action indicator component (AI) is set to '1', thereby indicating that the process is in transit between two waiting nodes, and the pointer component (PTR) is set to the identity of the actual signal as obtained from the RSR register. The MPU interrupts are now again enabled (instruction 19) and an interrupt signal (INT in Figure 30) with the same priority as the previously received signal (L) is now generated from the MPU to the CPU (instruction 20), whereafter the microprogram execution is terminated (instruction 21).
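The compound program pointer values used throughout these microprograms may be modelled as below; the packing of the action indicator (AI) and pointer (PTR) components into one word is an assumption, whereas the four action indicator values follow the description.

```c
#include <stdint.h>

enum action_indicator {
    AI_WAIT     = 0,  /* PTR addresses a waiting node in the process program */
    AI_TRANSIT  = 1,  /* PTR identifies the signal driving the transition    */
    AI_NORMAL   = 2,  /* signal undergoing analysis, PTR is a don't-care     */
    AI_DEFERRED = 3   /* deferred signal, PTR is a don't-care                */
};

/* Pack and unpack a compound program pointer value such as ('2,0'). */
static inline uint32_t make_pp(uint32_t ai, uint32_t ptr)
{
    return (ai << 16) | (ptr & 0xFFFFu);
}

static inline uint32_t pp_ai(uint32_t pp)  { return pp >> 16; }
static inline uint32_t pp_ptr(uint32_t pp) { return pp & 0xFFFFu; }
```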
In the normal case a process is in a waiting node when a signal directed to that process arrives, i.e. the action indicator (AI) component of the program pointer was '0', signifying that the pointer component (PTR) of the program pointer contains the address to a location within the process program. In this case the signal
identity is first read from the corresponding element of the signal buffer (instruction 22). Secondly an internal signal address register (SAR) of the MPU is set to the address of the first memory word associated with the actual waiting node by adding the content of the pointer component (PTR) of the program pointer, which gives the relative offset of the waiting node within the process program, to the start address (PD) of the process program, which start address is assumed to be given by the first entry of the sofchip descriptor table area (SCHPDk) as shown by Figure 33 (instruction 23). The memory word addressed by the SAR register contains a compound value with a task level (TL) and a signal number (SIGNUM) component as shown by Figure 34. The signal number component is now read into an internal signal number register (SNR) of the MPU (instruction 24), whereafter the SAR register is incremented (instruction 25).
The microprogram now performs a loop, whereby the contents of the signal number register is tested
(instruction 26). If no more signals remain to be tested (SNR ≤ 0), then the microprogram continues from instruction 40. Otherwise the memory word addressed by the SAR register is read and compared with the previously read signal identity (instruction 27). If a match is found the microprogram continues from instruction 31, otherwise the SAR register is incremented to point to the
next signal (instruction 28), the SNR register is decremented to reduce the number of signals left to analyze (instruction 29) and the loop is repeated
(instruction 30). When a signal match has been found, then the task address for the actual task to be performed is read from the memory addressed by the contents of the SAR register + 1 and copied into the XR6 register of the actual index register group (instruction 31) and the program pointer of the signal buffer is set to the compound value ('2,0') indicating a normal signal undergoing analysis (instruction 32). If the signal is received on the process task level (the internal MPU register TL = 0), then the microprogram sets the process program pointer (instruction 18), enables interrupts (instruction 19), generates a CPU level (L) interrupt (instruction 20) and terminates (instruction 21) as already described above. The interrupt signal (INT) to the CPU eventually causes the CPU to interrupt its currently executed program. At this point all the relevant index registers belonging to the corresponding priority level in the MCU are set up, i.e. XR0 identifies the start address of the actual sofchip descriptor table area (SCHPDk) for sofchip (k), XR1 identifies the signal buffer containing information about the received signal, XR5 contains the actual instance index and XR6 the address to the actual task program. The signal interrupt routine in the CPU
therefore only has to read the contents of the XR6 register into its Program Counter (PC) register, whereby execution of the corresponding task program can commence and proceed until the next waiting node is reached. It is to be noted that none of the XR0, XR1 and XR5 registers may be changed by any activity of the CPU during this execution. The XR6 register may be freely used for any required purpose, because it only indicates the starting address of the task program, which is relevant only in order to be able to start the execution. Returning to the signal interrupt microprogram execution, if no match between the identity of the received signal and the signals expected in a particular waiting node is found, then the task level is tested (instruction 40) in order to determine whether to discard or defer the signal. If the signal was to be received on the process level (represented by the value '0' of the internal task level counter (TL)) then the signal is to be discarded, otherwise the signal is to be deferred. In the discard case an internal microprogram subroutine for signal buffer release is called (instruction 41), giving the received signal identity in the RSR register as input parameter. Thereafter the RSR register is reset to the actual stop code (instruction 42). It is now to be noted that resetting of the RSR register to the stop code (STCD) removes the masking of any further interrupts on the same signal priority level,
i.e. if the associated signal priority queue contains further signals, then a new signal interrupt (IRL) will automatically be generated. This is prevented from taking effect until the MPU has enabled further interrupts (instruction 43) and the actual microprogram execution is terminated (instruction 44).
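The waiting node table scan of instructions 22-31 may be sketched in C as follows; the table layout assumed here, a header word followed by (signal identity, task address) pairs, is an illustration of Figure 34 rather than a reproduction of it.

```c
#include <stdint.h>

extern uint32_t main_memory[];

/* Scan the waiting node starting at 'node_addr' for the signal identity
   'sid'. Returns the associated task program address, or 0 when the
   signal is not expected in this waiting node (discard/defer case). */
uint32_t match_signal(uint32_t node_addr, uint32_t sid)
{
    uint32_t header = main_memory[node_addr];    /* TL and SIGNUM packed       */
    uint32_t signum = header & 0xFFFFu;          /* number of expected signals */
    uint32_t sar    = node_addr + 1u;            /* first (id, task) pair      */

    while (signum > 0u) {                        /* instructions 26-30         */
        if (main_memory[sar] == sid)
            return main_memory[sar + 1u];        /* instruction 31: task addr  */
        sar    += 2u;                            /* next expected signal       */
        signum -= 1u;
    }
    return 0u;                                   /* no match found             */
}
```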
While the CPU is executing a task program the MCU may handle additional signal interrupts independently of the program execution in the CPU. If these signal interrupts are directed to a process other than the one currently under execution by the CPU the signal interrupt microprogram proceeds in the way already described. However, if the signal destination identifies exactly the same process already undergoing execution in the CPU, then the test of the action indicator (instruction 15) will cause an automatic signal defer routine to be activated. This signal defer routine first copies (instruction 53) the identity of the actual signal from the RSR register (XR1) into the next available general purpose index register (exemplified by XR7), whereafter the RSR register is reset (instruction 54) to the identity of the signal buffer containing the signal which caused the currently ongoing transition, which is obtained from the pointer component (PTR) of the process program pointer.
The address to the signal buffer, which caused the ongoing transition is now calculated in the earlier
described manner (instruction 55), the program pointer of this signal buffer is read (instruction 56) and the internal task level counter (TL) of the MPU is incremented (instruction 57). The Action Indicator (AI) component of the read program pointer is now tested (instructions 58, 59 and
60). In this case the legal action indicator values are
WAIT (exemplified by '0'), TRANSIT (exemplified by '1') and NORMAL (exemplified by '2'). Normally the action indicator value would be
NORMAL, i.e. the actual signal is undergoing analysis.
In this case the new signal has to be unconditionally deferred. According to the invention a signal is deferred by setting the program pointer of the associated signal buffer to the compound value ('3,0'), i.e. the action indicator component (AI) is set to 'DEFERRED'
(instruction 45). Thereafter the link element of the signal buffer for the signal, which caused the ongoing transition in the CPU, is read (instruction 46) and tested (instruction 47). In the normal case this link element contains the stop code value (STCD), whereby the signal buffer for the signal causing the ongoing transition is linked to the deferred signal (instruction 50), whereafter the RSR register is reset (instruction 51) to the stop code value (STCD) and the microprogram execution terminated (instructions 52, 43, 44). It may occur that when a signal is deferred, previously deferred
signals already exist. In this case the link element of the ongoing signal will point to the first deferred signal and the link element of this signal to the next deferred signal, etc. The newly deferred signal is in this case linked to the previously last deferred signal (instructions 48, 49, 46, 47).
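The defer path of instructions 45-52 amounts to appending the new signal buffer to the chain of deferred signals hanging off the signal that caused the ongoing transition, as in the following C sketch; the word offsets and the AI encoding are assumptions consistent with the earlier sketches.

```c
#include <stdint.h>

#define STCD 0xFFFFu
#define CSB  16u
#define CLE  0u        /* link element offset (assumed)            */
#define CPP  1u        /* program pointer element offset (assumed) */

extern uint32_t main_memory[];
extern uint32_t SBAB;

static uint32_t buf_addr(uint32_t id) { return SBAB + id * CSB; }

/* Mark 'deferred' as DEFERRED and append it to the chain of deferred
   signals rooted in 'ongoing', the signal that caused the transition. */
void defer_signal(uint32_t ongoing, uint32_t deferred)
{
    main_memory[buf_addr(deferred) + CPP] = 3u << 16;    /* AI='DEFERRED', PTR=0 */

    uint32_t cur = ongoing;                              /* walk to chain end    */
    while (main_memory[buf_addr(cur) + CLE] != STCD)
        cur = main_memory[buf_addr(cur) + CLE];

    main_memory[buf_addr(cur) + CLE] = deferred;         /* append new signal    */
}
```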
The action indicator (AI) of the program pointer for the signal presently undergoing execution in the CPU may also indicate that the signal buffer is in a WAIT condition (instruction 58) although the process is undergoing a transition. This condition is caused by encountering a waiting node within a procedure (compare Figure 18) in a manner which will be described later in connection with the waiting node entry microprogram. In this case the signal identity is checked in the same manner as described previously (instructions 22-30). When a signal identity match is found (instruction 27), then the actual task address is read and set into the XR6 register (instruction 31) and the actual signal program pointer (action indicator) set to 'NORMAL' (instruction 32). The internal task level counter of the MPU is now greater than 0 (instruction 33), which causes the program pointer of the signal buffer for the signal which caused the ongoing process level transition to be reset to a compound value indicating that the signal buffer is in 'TRANSIT' and identifying the newly received signal as the signal undergoing transition (instruction 34). In
order to retain any possible deferred signals, the link element of the newly received signal is copied from the link element of the previous signal (instruction 35).
Finally the link element of the previous signal is reset to the newly received signal (instruction 36) whereafter the microprogram terminates by generating a CPU level (L) interrupt (instructions 37, 38, 39).
It may now be appreciated that the action indicator of the program pointer for a signal buffer may also take the value 'TRANSIT' (instruction 59). In this case the actual signal identity (which is already in a general purpose index register, for instance XR7), is now copied
(instruction 69) into the next available general purpose index register (e.g. XR8) and the first available general purpose index register (e.g. XR7) is reset to the signal buffer identity obtained from the program pointer of the previous signal (instruction 70). Thereafter the microprogram proceeds as for the case when the original process was in transit (instructions 71, 55, 56, etc.). The final case handled by the signal interrupt microprogram is the case where the action indicator tested (instructions 58, 59, 60) has an illegal value.
This is an error case and causes a restart of the process (instructions 61-68), which differs from the previously described restart only by the fact that the signal buffers for all deferred signals are released before the process restart is initiated.
Figure 39 shows a possible arrangement of the
Signal Buffer Release microprogram already referred to from the signal interrupt microprogram. The entry parameter of this microprogram identifies the signal buffer which is to be released. According to the already described functions all free signal buffers are managed by means of an idle signal buffer queue, the head and tail end of which are managed by the Idle Queue Head
(IQH) and Idle Queue Tail (IQT) registers shown as internal elements of the MPU in Figure 30. The Signal Buffer Release microprogram is a straightforward FIFO queue insert program, which calculates the address of the actual signal buffer (instruction 1), sets the link element of the signal buffer to the standard stop code (instruction 2) and either inserts the signal buffer as the last signal buffer in an existing idle queue (instructions 4-7) or as the only signal in the idle queue (instructions 8-10).
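The Signal Buffer Release microprogram of Figure 39 may be sketched in C as a plain FIFO tail insert into the idle queue; the constants are the same assumptions as before.

```c
#include <stdint.h>

#define STCD 0xFFFFu
#define CSB  16u
#define CLE  0u

extern uint32_t main_memory[];
extern uint32_t SBAB;
extern uint32_t IQH, IQT;        /* Idle Queue Head and Tail registers */

/* Return the signal buffer 'id' to the tail of the idle queue. */
void release_signal_buffer(uint32_t id)
{
    uint32_t addr = SBAB + id * CSB;        /* instruction 1               */
    main_memory[addr + CLE] = STCD;         /* instruction 2: new tail     */

    if (IQH == STCD) {                      /* idle queue currently empty  */
        IQH = id;                           /* instructions 8-10           */
        IQT = id;
    } else {                                /* append behind current tail  */
        main_memory[SBAB + IQT * CSB + CLE] = id;   /* instructions 4-7    */
        IQT = id;
    }
}
```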
When the CPU executes a task program a point is eventually reached where the task program is terminated by executing a call of a waiting node entry subroutine as illustrated by Figure 34. This subroutine eventually executes an instruction, which causes a virtual address to be asserted on the primary address bus (ABUS) so that the actual information on the data bus (DBUS) from the CPU to the MCU is transferred into a predefined index register (e.g. XR6 in Figure 29) and
simultaneously generates an interrupt to the MPU via the interrupt controller (ICNT) in the same way as a 'SEIZE' or a 'DISPATCH' instruction.
This interrupt causes a Waiting Node Entry microprogram to be executed, a possible arrangement of which is shown by Figure 40a-k. On entering this microprogram the index register SCHPR (XR0) still identifies the Descriptor
Table area (SCHPDk) belonging to the actual sofchip
(k), the RSR register (XR1) still identifies the primary signal buffer and the IXR (XR5) register contains the actual normalized instance index value. The XR6 register now, due to the above described resetting, also contains the actual address to the new waiting node. Further general purpose registers may be utilized to contain the identities of signal buffers associated with deferred signals.
The waiting node entry microprogram is started (instruction 1) by identifying the actual index register set (XRS(L)) associated with the priority (L) as provided by the corresponding bits of the virtual address asserted on the primary address bus. Thereafter the address to the primary signal buffer (i.e. the signal buffer identified by the RSR register) is calculated (instruction 2) and the signal destination obtained from this signal buffer (instruction 3). It is to be noted that one feature of the invention is that the primary signal buffer is retained during the entire transition
caused by the associated signal, thereby enabling the system internal identity of the 'own' process to be automatically established at any time. The process identity given from the old signal destination is now used to calculate the address to the process program in the same manner as in the signal interrupt microprogram
(instructions 4 and 5), whereafter the internal task level indicator (TL) within the MPU is reset to 0
(instruction 6), indicating a process level waiting node. Thereafter all interrupts are disabled to prevent other microprograms from interfering with the waiting node entry microprogram (instruction 7) and the contents of the program pointer belonging to the actual process is read (instruction 8) and tested (instruction 9). In this case the only legal value of the action indicator (AI) component of the program pointer is the 'TRANSIT' value. If the action indicator does not indicate 'TRANSIT', then the process is restarted with release of any deferred signals in the same way as for signal interrupts (instructions 13-26).
In the normal case the action indicator (AI) for the process does indicate 'TRANSIT'. In this case the program pointer of the received signal is read (instruction 10) and tested (instructions 11 and 12). The action indicator (AI) for this signal program pointer may now have the legal values 'TRANSIT' (represented by '1') and 'NORMAL' (represented by '2'). Again, if the
action indicator does not have a legal value, then the process is restarted in the same way as described above.
When the action indicator indicates a 'NORMAL' waiting node entry, then the address to the first memory word associated with the waiting node is calculated
(instruction 35), the contents of this memory word, which contains the waiting node task level (TL) and the number of expected signals (SIGNUM) as illustrated by Figure 34, are read and the value of the task level component is compared with the actual task level as given by the internal task level indicator (TL) within the MPU (instruction 36).
When the task level of the waiting node is the same or lower than the actual task level (the normal case), then the link element of the actual signal is read and retained (instruction 44) to cater for any deferred signals, whereafter the actual signal buffer is released (instruction 45) by calling the Signal Buffer Release microprogram illustrated in Figure 39. Thereafter the program pointer of the process is reset (instruction 46) to a compound value, where the action indicator component (AI) of that value indicates 'WAIT' (represented by '0') and where the pointer (PTR) component of the value indicates the relative address of the waiting node as given by the XR6 register. The RSR register is now reset to the value given by the retained link element (instruction 47). It is to be noted that
the RSR register will now either contain a stop code value (STCD) in the case that no deferred signals existed, or the signal buffer identity of the first deferred signal. The resetting of the RSR register in the first case will thereby cause the masking of any pending signal interrupts to be removed as already explained in connection with the signal interrupt microprogram. If no deferred signals exist (instruction 48), then the microprogram execution is terminated (instructions 25, 26).
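The normal waiting node entry path (instructions 44-48) may be sketched as follows; it reuses the release routine sketched above, and the program pointer packing is again an assumption.

```c
#include <stdint.h>

#define STCD    0xFFFFu
#define CSB     16u
#define CLE     0u

extern uint32_t main_memory[];
extern uint32_t SBAB;
extern uint32_t RSR;                         /* received signal register (XR1) */

void release_signal_buffer(uint32_t id);     /* Figure 39, sketched above      */

/* Normal waiting node entry: 'proc_pp_addr' is the address of the process
   program pointer, 'node_offset' the relative address of the new waiting
   node as delivered in XR6. */
void enter_waiting_node(uint32_t proc_pp_addr, uint32_t node_offset)
{
    uint32_t link = main_memory[SBAB + RSR * CSB + CLE]; /* instr. 44: keep deferred chain */
    release_signal_buffer(RSR);                          /* instr. 45: free primary buffer */
    main_memory[proc_pp_addr] = node_offset & 0xFFFFu;   /* instr. 46: AI='WAIT' (0), PTR  */
    RSR = link;            /* instr. 47: STCD unmasks interrupts, else first deferred signal */
}
```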
When a deferred signal exists, then the RSR register has already (instruction 47) been reset to identify this signal. The address to this signal is now calculated (instruction 49), whereafter the program pointers of all deferred signals are reset to the compound value ('2,0'), i.e. the action indicator components are reset to 'NORMAL' (instructions 50-53). This action removes all previous 'DEFER' conditions, i.e. enables previously deferred signals to be analyzed again. The signal identity of the first, previously deferred signal, is now tested for an identity match with one of the expected signals for the actual waiting node in the same manner as in the signal interrupt microprogram (instructions 54-62). If a match is found, then the task program address is read and copied into the XR6 register (instruction 70), whereafter either the process program pointer (on task level 0) or the previous
signal buffer program pointer (all other task levels) is set to a compound value, where the action indicator (AI) component indicates 'TRANSIT' (represented by '1') and the pointer component (PTR) identifies the actual signal (instructions 71-74). At this point the priority level
(PL) of the actual signal is read (instruction 75) and tested (instruction 76). If the signal priority is lower than or equal to the present priority level, then an interrupt signal (INT) of the actual priority (L) is generated to the CPU (instruction 24), whereafter the execution of the microprogram is terminated in the normal manner (instructions 25 and 26). If, however, the signal priority (PL) is higher than the present priority (L), then the index register set XRS(PL) associated with the signal priority (PL) is identified (instruction 77), whereafter all relevant index registers are copied from the lower level index register set to the higher level index register set (instructions 78-85). This is done in order to guarantee that any deferred signal will not be executed on a level lower than its indicated level. Finally the original RSR register is reset to the stop code value
(instruction 86), whereafter a level (PL) interrupt signal is generated to the CPU (instruction 87) and the microprogram terminated (instructions 88 and 89). If, by the signal identity test, none of the signals expected in the actual waiting node matches the actual signal, then the actual task level (TL) is tested in order to determine whether to discard or
defer the signal (instruction 63). If the signal is to be discarded then the link element of the signal is read and retained (instruction 64) and the signal buffer released by calling the Signal Buffer Release microprogram in Figure 39 (instruction 65), whereafter the RSR register is reset to the retained value of the link element (instruction 66). If no further deferred signals exist, then the microprogram is again terminated in the normal way (instructions 67, 25, 26), otherwise the address to the next signal is calculated (instruction 68) and the identity of this signal analyzed (instructions 69, 54, 55, etc.).
When a signal is to be deferred instead of discarded, then the signal program pointer is first reset to the compound value ('3,0'), thereby setting the action indicator component to 'DEFERRED' (instruction 90). The link element of the actual signal is now read (instruction 91) and tested (instruction 92). If no further deferred signals exist, then the actual RSR register is reset (instruction 116), thereby unmasking new signal interrupts of the same priority and the microprogram is terminated (instructions 117 and 118). If further waiting signals exist, then the identity of the newly deferred signal is retained in a temporary register (TSR) within the MPU (instruction 93) and the identity of the actual signal to analyze is reset
(instruction 94). The address to the previous signal is also temporarily retained (instruction 95) and the address to the new signal calculated (instruction 96).
The action indicator of the program pointer belonging to the new signal is now tested (instruction 97).
When the new signal is not deferred, then the new and the previous signals are relinked in such a way that the primary deferred signal is linked to the new signal, which is linked to the just deferred signal, which is linked to any signal previously linked to the new signal (instructions 98-100), whereafter the new signal identity is analyzed in the normal manner (instructions 101, 54, 55, etc.). However, if the new signal is already deferred, then the primary deferred signal is linked to the new signal (instruction 102), whereafter the link element of the new signal is read (instruction 103) and tested (instruction 104). If no further waiting signals exist, then the newly deferred signal is inserted as the last deferred signal (instructions 114 and 115), whereafter the RSR register is reset (instruction 116) and the microprogram terminated (instructions 117 and 118). If further waiting signals exist then each waiting signal is examined as to whether it is deferred or not (instructions 105, 106, 107 and 108), whereby this process is repeated until either a nondeferred waiting signal is encountered or no further waiting signals are found. If further
nondeferred waiting signals exist, then the actual signals are relinked so that the primary deferred signal is linked to the newly found undeferred signal, which is linked to the first analyzed deferred signal and the last analyzed deferred signal is linked to the newly deferred signal, which in turn is linked to any signal previously linked to by the newly found undeferred signal (instructions 109-112). In this way it is ensured that deferred signals are still to be treated in their order of arrival wherever applicable. Thereafter the newly found deferred signal is again analyzed in the normal manner (instructions 113, 54, 55, etc.).
When, in the original analysis of the primary signal (instruction 11), the action indicator of this signal indicates 'TRANSIT', then the internal task level indicator (TL) of the MPU is incremented (instruction 27), and the address to the primary signal retained (instruction 28). In this case one of the general purpose index registers (e.g. XR7) contains the identity of a secondary signal buffer, which identity is now used to calculate the address to the secondary signal (instruction 29), whereafter the program pointer of this signal buffer may be read (instruction 30) and tested (instructions 31 and 32). In the same way as before the only legal values of the action indicator elements are
'TRANSIT' (represented by '1') and 'NORMAL' (represented by '2'). For any other value the process is restarted
(instructions 33, 34, 13, etc.).
If the action indicator of the actual signal element indicates 'TRANSIT' (instruction 31), then the actual process is repeated one further task level down. When the action indicator indicates 'NORMAL' (instruction 32), then the address to the first memory word associated with the waiting node (instruction 119) is calculated and the task level of the waiting node read out and compared with the internal task level in the MPU (instruction 120). For waiting node task levels higher than the actual task level the same activities are performed as for the previously described primary signal analysis (instructions 37, 38, etc.). For the same or lower task level the link element of the actual signal is again read and retained (instruction 121), whereafter the actual signal buffer is released by means of a call of the Signal Buffer Release microprogram (instruction 122). Thereafter the primary signal program pointer is reset to a compound value, where the action indicator component (AI) indicates 'WAIT' (represented by '0') and the pointer component indicates the location of the next waiting node as obtained from the XR6 register (instruction 123). The link element of the primary signal is now reset to the retained link element (instruction 124) and the actual general purpose index register similarly (instruction 125), whereafter any next signal is analyzed as already described
(instructions 126, 48, etc.).
The above described arrangements of the signal interrupt and waiting node entry microprograms according to the present invention ensure that when a task program is executed, then the signal buffer for the signal which triggered the task program is held for the entire task program, i.e. until the next waiting node is entered. Thus, with reference to Figure 18, when the signal S1 is received in waiting node WX, the signal buffer for this signal is held during all three tasks T1, T2 and T3 and only released when the process enters the waiting node WY. This also means that any unused space in the signal buffer may be used for temporary storage of information within this task program. The tasks T2 and T3 consist internally of a task sequence containing an inner waiting node (WP). Hence, when the task T2 is executed, this execution consists of the execution of the subtask TP1, after which the waiting node WP is entered. Entering a waiting node on an inner task level does not cause the signal buffer on the primary level to be released but instead causes the program pointer of this signal buffer to be reset to the inner waiting node as described above (instruction 37 of the waiting node entry microprogram). Hence, when the next expected signal is received (either S3 or S4 in Figure 18) then the signal buffer for this signal is retained in addition to, and independently of, the primary signal. Hence for the task program triggered
within an outer task the signal buffer for the received inner signal is retained until either the next inner waiting node is entered or until the inner task program terminates by returning to the next outer task level. In the first case the waiting node entry microprogram handles the release of the signal buffer for the inner signal. However, in the second case an explicit signal discard microprogram must be invoked in connection with the task return. An arrangement of such a Signal Discard microprogram according to the invention is exemplified by Figure 41a and b.
Like the other microprograms the signal discard microprogram starts by identifying the actual index register group XRS(L) for the priority level (L) on which the CPU currently executes (instruction 1), whereafter the address to the primary signal buffer is calculated (instruction 2). This microprogram may only be executed on inner task levels, for which reason the task level counter (TL) of the MPU is set to 1 immediately (instruction 3). The MPU interrupts are now disabled (instruction 4) in order to prevent interference from other microprograms and the program pointer of the primary signal is read (instruction 5) and tested (instruction 6). The only legal value of the action indicator (AI) component is in this case 'TRANSIT' (represented by '1'). If the action indicator of the primary signal has any other value the microprogram is
simply terminated (instructions 11 and 11a). In the normal case the address to the second signal is calculated (instruction 7) and the program pointer of this signal is read (instruction 8) and tested (instructions 9 and 10). In this case the legal action indicator values are 'TRANSIT' (represented by '1') and 'NORMAL' (represented by '2').
If the action indicator of the secondary signal indicates 'TRANSIT' (instruction 9) then the actual signal is reset as the primary signal (instruction 12) and the task level indicator (TL) incremented (instruction 13) whereafter the next signal is analyzed in the same way (instructions 14, 7, 8, etc.).
If the action indicator of the secondary signal indicates 'NORMAL' (instruction 10) then the program pointer of the current primary signal is reset to the compound value ('2,0'), whereby the action indicator (AI) for the primary signal is reset to 'NORMAL' (instruction 15), whereafter the link element of the primary signal is reset to the value of the link element of the secondary signal (instruction 16), thereby unlinking the secondary signal from the primary signal. The signal buffer for the secondary signal is now released by calling the Signal Buffer Release microprogram in Figure 39 (instruction 17) whereafter the signal discard microprogram is terminated (instructions 17 and 18). The circuitry of Figures 26, 27, 28 and 30
combined with the microprograms exemplified by Figures
36-41 illustrates the main principles of the invention.
A system built around a Master Control Unit according to the invention forms a signal scheduling system, which has the following advantages compared with previously known similar signalling systems.
Firstly, signal routing, transfer and signal identity analysis is handled internally within the MPU and therefore does not cause an overhead in the CPU. As a result, all of the CPU time can be utilized for task processing.
Secondly, the invention adequately solves the signal defer problem.
Thirdly, all absolute address handling is handled automatically by the MPU. All data accessing by the CPU can now be specified by means of constant offset values in the instructions. This not only facilitates the code generation and compilation, thereby making efficient code generation from high level language programs possible, it also directly increases the run-time reliability of the system.
Fourthly, the Master Control Unit may be used in combination with any modern microprocessor, utilizing the adequate task program execution capabilities of these processors and nevertheless at the same time transforming the processors into efficient support machines for real time systems.
Fifthly, the same Master Control Unit may serve several CPUs, allowing these CPUs to concurrently execute different process instances.
The capabilities of a Master Control Unit according to the invention may be further extended by combining a timing facility with the signal scheduling. This timing facility consists of one or more periodically scanned time queues in the MPU, an arrangement of which will be described below. The signal header part (SH) of the signal buffers will in this case have to contain at least one time counter element (TC) as illustrated by Figure 42 and the MPU at least one Time Queue Head register (TQH), at least one Time Queue Tail register (TQT) and at least one Clock Register (CLR) as illustrated by Figure 43.
The time queues are managed by two additional microprograms, a Timed Signal Dispatch microprogram and a Time Queue Check microprogram.
A possible arrangement of a timed signal dispatch microprogram is illustrated by Figure 44. Like the normal signal dispatch microprogram the timed signal dispatch microprogram starts by identifying the index register group XRS(L) associated with the priority level (L) on which the CPU is currently executing (instruction 1) and by calculating the address to the actual signal buffer (instruction 2). It is furthermore assumed that the time counter element (TC) of the signal buffer is set
to a value corresponding with the number of time queue check program intervals to elapse before the signal may be routed to its destination. The value of the time counter element in the signal buffer is now read and added to the actual value of the Clock Register of the MPU and retained in an internal time counter (T) of the MPU (instruction 3), which value is then copied back into the time counter element of the signal buffer (instruction 4). Interrupts are now disabled (instruction 5) in order to prevent interference from other microprograms, whereafter the Time Queue Head register is tested (instruction 6). If the time queue was currently empty, then the actual signal buffer identified by the actual SSR register (XR4) is inserted as the only signal buffer in the time queue (instructions 7, 8, 9), whereafter the SSR register is reset (instruction 10) and the microprogram execution terminated (instructions 11 and
12). If the time queue was not previously empty, then the address to the first signal buffer in the time queue is calculated (instruction 13) and the time counter of this signal buffer read out (instruction 14). The difference between the previously retained time value (T) and the read time counter value is now formed modulo the range of the Clock Register (instruction 15). The obtained value of this difference is now compared with a comparison value
(CD) (instruction 16). If the difference is greater than the comparison value, then the expiration time of the actual signal is earlier than that of the present first signal in the time queue. In this case the actual signal is inserted as the first signal in the time queue and linked to the previously first signal in the queue (instructions 17 and 18), whereafter the SSR register is reset (instruction 19) and the microprogram execution is terminated (instructions 20 and 21). If the time difference is less than the comparison value in the comparison (instruction 16), then the link element of the current first signal in the time queue is read (instruction 22) and tested (instruction 23). If the link element contains the standard stop code, then the current signal buffer was the previously last element in the time queue. In this case the actual signal is inserted as the last signal in the time queue (instructions 24, 25 and 26), whereafter the SSR register is reset (instruction 27) and the microprogram execution terminated (instructions 28 and 29).
If the link element identifies a subsequent signal buffer, then the address to this signal buffer is calculated (instruction 30) and the time counter value read (instruction 31) and the time difference calculated (instruction 32) in the same way as before, whereafter the time difference is tested against the same comparison value (CD) as before (instruction 33). If now the time
difference is greater than the comparison value, then the expiration time of the actual signal is earlier than that of the currently tested signal in the time queue but later than that of the previous signal in the time queue. In this case the actual signal is therefore inserted between the two mentioned signals (instructions 34 and 35), whereafter the SSR register is reset (instruction 36) and the microprogram execution terminated (instructions 37 and 38). In the remaining case the next signal in the time queue is tested in exactly the same manner (instructions 39, 40, 22, 23, etc.).
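The ordered insertion performed by the Timed Signal Dispatch microprogram may be sketched in C as below; expiration times are absolute clock values and a modulo difference greater than the comparison value CD, taken here as half the clock range, is interpreted as "expires earlier". The clock range and word offsets are assumptions.

```c
#include <stdint.h>

#define STCD  0xFFFFu
#define CSB   16u
#define CLE   0u
#define CTC   2u                 /* time counter element offset (assumed) */
#define RANGE 0x10000u           /* clock register range (assumed)        */
#define CD    (RANGE / 2u)       /* comparison value                      */

extern uint32_t main_memory[];
extern uint32_t SBAB;
extern uint32_t TQH, TQT;        /* Time Queue Head and Tail registers    */

static uint32_t buf(uint32_t id) { return SBAB + id * CSB; }

/* Modulo "expires earlier" test between two absolute expiration times. */
static int earlier(uint32_t t, uint32_t other)
{
    return ((t - other) % RANGE) > CD;
}

void timed_dispatch(uint32_t id, uint32_t expire)
{
    main_memory[buf(id) + CTC] = expire;

    if (TQH == STCD) {                                     /* empty time queue */
        main_memory[buf(id) + CLE] = STCD;
        TQH = TQT = id;
        return;
    }
    if (earlier(expire, main_memory[buf(TQH) + CTC])) {    /* new queue head   */
        main_memory[buf(id) + CLE] = TQH;
        TQH = id;
        return;
    }
    uint32_t prev = TQH;                 /* walk until a later entry or the end */
    for (;;) {
        uint32_t next = main_memory[buf(prev) + CLE];
        if (next == STCD || earlier(expire, main_memory[buf(next) + CTC]))
            break;
        prev = next;
    }
    main_memory[buf(id) + CLE] = main_memory[buf(prev) + CLE];  /* insert after 'prev' */
    main_memory[buf(prev) + CLE] = id;
    if (main_memory[buf(id) + CLE] == STCD)
        TQT = id;                                               /* new queue tail      */
}
```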
The Timed Signal Dispatch microprogram in Figure 44 thus ensures that the signal buffers are inserted into the time queue in an order determined by their respective expiration times, so that the signal with the shortest remaining expiration time will always be at the head of the queue.
A Time Queue is scanned by a periodically executed microprogram in the MPU. A possible arrangement of such a microprogram according to the invention is shown in Figure 45. The microprogram first reads the value of the Clock Register (CLR), increments it modulo the range of the Clock Register and retains the resulting value (T) for subsequent references (instruction 1), whereafter the result is transferred back into the Clock Register (instruction 2). The Time Queue Head register
(TQH) is now tested (instruction 3) whereby the microprogram terminates if the time queue was empty (instruction 4).
For a nonempty time queue the address to the first signal buffer is calculated (instruction 5) and the time counter value read from the signal buffer (instruction 6). The difference between this time counter value and the retained clock register value (T) is now calculated modulo the range of the Clock Register (instruction 7) and the resulting difference compared with a comparison value (CD). If the actual value is less than the comparison value, then the time for the first signal in the queue has not yet expired. Because the signals are ordered in the time queue so that the signal with the shortest remaining expiration time is always the first signal in the queue none of the other signal times can have expired, i.e. the microprogram execution is terminated (instruction 4) in this case. If the actual value by the comparison (instruction 8) is greater than the comparison value (CD) due to the modulo subtraction, then the time for the signal has actually expired. In this case interrupts are disabled (instruction 9) and the first signal extracted from the time queue (instructions 10 and 11), whereafter the link element of the actual signal is reset (instruction 12), the signal priority of the actual signal (PL) read from the signal buffer (instruction 13)
and used to identify the corresponding index register set XRS(PL), whereafter the signal is either inserted as the last signal in an existing signal priority queue (instructions 16, 17, 18) or as the only signal in the signal priority queue (instructions 21 and 22). In both cases interrupts are thereafter again enabled (instruction 19 or 23), whereafter the new first signal (if any) in the time queue is tested (instruction 20 or 24, 3, 4, etc.). By means of the described extension of the MCU by a time queue capability a processing system built around an MCU according to the invention will be able to support any kind of real time system.
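The periodic Time Queue Check of Figure 45 may be sketched as follows; it advances the clock and moves every expired signal at the head of the time queue into its signal priority queue, reusing the dispatch sketch given earlier. The constants remain assumptions.

```c
#include <stdint.h>

#define STCD  0xFFFFu
#define CSB   16u
#define CLE   0u
#define CTC   2u
#define CPL   3u                 /* priority element offset (assumed) */
#define RANGE 0x10000u
#define CD    (RANGE / 2u)

extern uint32_t main_memory[];
extern uint32_t SBAB;
extern uint32_t CLR;             /* Clock Register                    */
extern uint32_t TQH, TQT;        /* Time Queue Head and Tail          */

void dispatch_signal(uint32_t ssr, uint32_t pl);  /* priority queue insert, sketched earlier */

void time_queue_check(void)
{
    CLR = (CLR + 1u) % RANGE;                        /* instructions 1-2        */

    while (TQH != STCD) {                            /* instructions 3-4        */
        uint32_t addr = SBAB + TQH * CSB;
        uint32_t tc   = main_memory[addr + CTC];
        if (((tc - CLR) % RANGE) < CD)               /* head not yet expired,   */
            break;                                   /* so nothing else is      */

        uint32_t id = TQH;                           /* extract the queue head  */
        TQH = main_memory[addr + CLE];
        if (TQH == STCD)
            TQT = STCD;
        main_memory[addr + CLE] = STCD;              /* instruction 12          */

        dispatch_signal(id, main_memory[addr + CPL]); /* move to priority queue */
    }
}
```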
Claims
1. A computer system having at least one central processor, main memory means for storage of information and a master control unit connected between the central processor(s) and a memory means, said master control unit being able both to intercept and interpret virtual addresses and commands issued by a central processor in such a way that one class of virtual addresses and commands are interpreted to be associated with reading of information from and writing of information into the main memory means and are translated into real addresses issued by the master control unit to the main memory means, and a second class of virtual addresses (either independent of or combined with specific command codes issued by a central processor) which cause, for each virtual address code, autonomous signal routing, transfer and reception activity to be performed by the master control unit, and which master control unit has means to perform signal reception independently of the central processor and means to initiate operation of the central processor when a received signal has been accepted by the master control unit, the master control unit being able to access the main memory means independently in performing such signalling activities.
2. A computer system as claimed in claim 1 wherein the master controller has a memory containing a description table having means to distinguish between accesses of data and program elements belonging to different software signalling units from each other and to translate virtual addresses issued by a CPU executing a program for one particular software signalling unit into real addresses to the corresponding locations in the main memory allocated to that software signalling unit and at the same time prohibiting a central processing unit executing a program for one particular software signalling unit from simultaneously accessing any program or data element belonging to any other software signalling unit.
3. A computer system as claimed in claim 2 wherein the master control unit permits memory areas allocated to common usage to be accessed from a central processing unit simultaneously with memory areas belonging to the software signalling unit currently undergoing execution in that central processing unit.
4. A computer system as claimed in claim 2 or claim 3 wherein communication between software signalling units is performed by software signals realised by means of a set of signal buffers, each signal buffer comprising a contiguous area in the main memory and containing a standard signal header part for the control of sending, transfer, routing and reception of a signal and also containing a signal information part for temporary storage of information carried by a signal, whereby the standard signal header contains at least a signal identity defining information element, a signal destination defining information element, a signal priority defining information element and a signal program pointer defining element, the last of which contains a pointer element and an action indicator element with the pointer element directly or indirectly identifying a location in the main memory associated with either a program or with a signal buffer and with the action indicator being able to take at least four different values, the first value of which indicates that no central processing unit is currently in the act of processing information contained in the signal buffer and that the pointer element in this case identifies the program location in the main memory from which program execution is to continue when a central processing unit next becomes active to process information contained in the signal buffer, the second value of which indicates that a central processing unit is currently in the act of or about to process information contained in the signal buffer and that the information contained in the pointer element in this case is irrelevant, the third value of which indicates that a central processing unit is currently in the act of processing information contained in a secondary signal buffer and that the information contained in the pointer element in this case identifies the secondary signal buffer, and the fourth value of which indicates that processing of information contained in the signal buffer has been deferred and that the information contained in the pointer element is irrelevant.
5. A computer system as claimed in any of claims 2 to
4 wherein a software signalling unit comprises a structure containing one or more processes and also optionally containing a data structure, each process being replicatable an arbitrary, and for each process individual, number of times, thereby forming arrays of identical but individual processes, and each individual process forming the origin and destination of the software signals sent or received by the software signalling unit to which the process belongs and also forming the origin and destination of software signals sent to and received from other processes belonging to the same software signalling unit, each individual process internally consisting of a program comprising instructions sequentially stored in a program area in the main memory and an optional data structure, whereby the same program area may be used for replicated identical processes and whereby each data structure including the optional data structure directly belonging to the software signalling unit is individually replicatable an arbitrary number of times, thereby forming arrays of data structures and whereby each data structure may internally consist of any combination of subsidiary data structures and single data elements, both the subsidiary data structures and single data elements being replicatable in the same way as their parent data structure, thereby forming arrays of subsidiary data structures or data elements, and whereby each data element is individually allocated storage space in the main memory with data elements forming arrays of consecutively allocated memory elements in the main memory, each data element belonging to a software signalling unit being uniquely identified by a fixed, unique virtual address within that software signalling unit so that this fixed virtual address may be given as part of the instruction code in the program accessing the data element and when recognized by the master control unit by means of its address translation capability causes the real location in the main memory associated with the data element to be accessed.
6. A computer system as claimed in claim 5, wherein a software signalling unit comprises a nonreplicated set of processes, a set of replicated processes and an optional data structure wherein all processes belonging to the set of replicated processes are replicated the same number of times.
7. A computer system as claimed in any of claims 2 to 6 wherein the destination of a software signal is given as a compound value with at least three components, one component of which identifies the actual software signalling unit as a whole, a second component of which identifies the actual process within the software signalling unit and additional components of which may be used to identify an individual process within an array of identical processes.
8. A computer system as claimed in claim 5, claim 6, or claim 7 when appendant to claim 6 wherein the program of each process represents a network of waiting nodes and transitions between these waiting nodes, being realised by means of task programs and waiting node tables in such a manner that each task program consists of program instructions to be sequentially executed by a central processing unit and to be terminated by an instruction or sequence of instructions causing the process to enter the next waiting node and so that each waiting node table contains information about the number of signals that a process is able to receive in that waiting node and for each signal identifies both the identity of the signal and the location of the task program associated with the reception of that particular signal in that particular waiting node, and whereby the waiting node table is not executable by any central processing unit but accessible only by the master control unit which, on arrival of any signal, either initiates the execution of the associated task program by the central processing unit if the signal is expected in the waiting node or causes the signal to be discarded if it is unexpected in that waiting node, and wherein each process is associated with a program pointer element containing a pointer element and an action indicator element with the pointer element directly or indirectly identifying a location in the main memory associated with either a program or with a signal buffer and with the action indicator being able to take at least two different values, the first value of which indicates that no central processing unit is currently in the act of executing instructions belonging to the actual process and that the pointer element in this case identifies the program location in the main memory from which program execution is to continue when a central processing unit next becomes active in order to execute a program associated with the process and the second value of which indicates that a central processing unit is currently in the act of executing instructions of the program belonging to the process and that the pointer element in this case identifies a signal buffer containing the signal that initiated this execution.
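The waiting node table of claim 8 can be read as a small, non-executable lookup structure mapping expected signal identities to task program entry points, consulted only by the master control unit. The C representation below is a sketch under that reading; the layout, table size and function name are assumptions.

```c
#include <stdint.h>
#include <stddef.h>

/* One entry: an expected signal and the task program handling it in this node. */
struct waiting_node_entry {
    uint16_t signal_identity;
    uint32_t task_program_entry;  /* main-memory location of the task program */
};

/* A waiting node table: read only by the master control unit (claim 8). */
struct waiting_node_table {
    uint16_t                  task_level;   /* nesting level, as used in claim 9 */
    uint16_t                  entry_count;  /* number of receivable signals      */
    struct waiting_node_entry entries[8];   /* table size chosen arbitrarily     */
};

/* Return the task program entry point, or 0 if the signal is unexpected
 * (in which case the master control unit discards or defers it). */
static uint32_t lookup_task(const struct waiting_node_table *t, uint16_t id)
{
    for (size_t i = 0; i < t->entry_count; i++)
        if (t->entries[i].signal_identity == id)
            return t->entries[i].task_program_entry;
    return 0;
}
```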
9. A computer system as claimed in claim 8 wherein task programs may be nested inside each other in such a manner that an instruction within a task program may contain both subsidiary task programs and subsidiary waiting node tables, whereby each waiting node table also contains information indicating the level of waiting node, which table information is utilized by the master control unit to discard unexpected signals in any waiting node on the topmost task level and to defer unexpected signals in any waiting node on a subsidiary task level.
10. A computer system as claimed in any of claims 4 to 9 wherein the pointer element identifies the starting point of a waiting node table when the action indicator contains a code indicating that no central processing unit is currently in the act of executing a program associated with the corresponding process or signal buffer.
11. A computer system as claimed in any of claims 2 to 10 wherein the master control unit contains means to simultaneously identify, locate and access any program or data element belonging to or associated with a software signalling unit and any data element belonging to the signal buffers associated with at least one received signal and at most one signal in the act of being sent.
12. A computer system as claimed in claim 11, wherein the means to identify, locate and access the program and data elements comprise a set of index registers, one of which index registers is dedicated to identifying the part of the description table area within the master control unit associated with a particular software signalling unit, one or more of which index registers are dedicated to the identification of signal buffers associated with received signals, one of which is dedicated to the identification of a signal buffer associated with a signal being sent, and at least one of which is dedicated to the identification of the index to the actual process within an array of identical processes.
13. A computer system as claimed in claim 12, wherein the master control unit contains several sets of index registers, each one capable of independently identifying, locating and accessing a software signalling unit.
14. A computer system as claimed in any of claims 11 to 13 wherein the central processing unit identifies any accessible program or data element by means of a virtual address code issued by the central processing unit to the master control unit and identifying the actual set of index registers to be used by the master control unit wherever applicable.
15. A computer system as claimed in any of claims 1 to 14 containing means within the master control unit for signal priority queue control comprising a set of registers including a signal queue head register, a signal queue tail register and a received signal register, a gate and comparator network, an interrupt control unit and a microprogram unit with a set of dedicated microprograms, and wherein the signal priority queues are realised as linked lists of signal buffers of the first-in-first-out type with one element of the signal header part of each signal buffer used as a link element, whereby the gate and comparator network generates an interrupt signal to the aforementioned interrupt control unit as soon as a signal buffer is inserted in a signal priority queue, which interrupt signal causes the interrupt control unit to initiate execution of a specific signal interrupt microprogram by the microprogram unit and whereby additional interrupt signals may be generated by decoding of specific virtual addresses issued by the central processing unit to the master control unit, which additional interrupt signals cause the microprogram unit to initiate execution of microprograms for at least waiting node entry, seizing of signal buffers, dispatching of signals and discarding of signals.
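The priority queues of claim 15 are first-in-first-out linked lists threaded through the link element of each signal header and managed by a head and a tail register per priority level. The enqueue/dequeue pair below is a minimal sketch of that discipline, modelling the two registers as plain pointers; the structure and function names are assumptions.

```c
#include <stddef.h>

/* Minimal buffer view: only the link element of the signal header matters here. */
struct sig_buf {
    struct sig_buf *link;
};

/* One priority level: the head and tail "registers" of a FIFO linked list. */
struct priority_queue {
    struct sig_buf *head;
    struct sig_buf *tail;
};

/* Insert a buffer at the tail, as the dispatch microprogram does (claim 29). */
static void enqueue(struct priority_queue *q, struct sig_buf *b)
{
    b->link = NULL;
    if (q->tail)
        q->tail->link = b;   /* append behind the current last buffer */
    else
        q->head = b;         /* queue was empty                       */
    q->tail = b;
    /* In the claimed system this insertion is what the gate and comparator
     * network detects in order to raise the signal interrupt. */
}

/* Unlink the oldest buffer, as the signal interrupt microprogram does (claim 16). */
static struct sig_buf *dequeue(struct priority_queue *q)
{
    struct sig_buf *b = q->head;
    if (b) {
        q->head = b->link;
        if (q->head == NULL)
            q->tail = NULL;
        b->link = NULL;
    }
    return b;
}
```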
16. A computer system as claimed in claim 15, wherein execution of the signal interrupt microprogram unlinks the signal buffer which caused the signal interrupt from the signal priority queue, calculates and retains all addresses to memory areas associated with or to be accessed by the process to which the signal is directed, determines whether the signal is to be received immediately, to be temporarily deferred or to be discarded, and in the case of the signal being received determines the task program entry point associated with the reception of the signal in the particular waiting node, stores this task program entry point in a location known to the central processing unit and generates an interrupt signal to the central processing unit of the same priority as the signal itself, which interrupt signal causes the central processing unit to read the task program entry point from the known location within the master control unit and to start execution from this entry point.
17. A computer system according to claim 16, wherein the identity of the received signal is inserted into a received signal register, thereby causing any further signal interrupt generated by subsequent signals of the same signal priority to be masked until the received signal register is again reset.
18. A computer system according to claim 16 or 17, wherein the locations of the program and data areas associated with the process within the signalling unit to which the actual signal is directed and which are required for the subsequent execution of an associated task program are identified and set up within the master control unit by means of the signal destination information contained in the signal header part of the signal buffer associated with the received signal and utilizing fixed special purpose data descriptor tables within the microprogram unit.
19. A computer system according to any of claims 16 to 18 wherein the contents of the action indicator associated with the identified process is analysed by means of a noninterruptable sequence of microinstructions whereby the signal is either to be received if the said action indicator indicates that no central processing unit is executing a program for the identified process, or subject to further analysis if the action indicator indicates that a central processing unit is processing information contained in a secondary signal buffer.
20. A computer system according to claim 19 wherein, in the case that a signal is to be received, the pointer element associated with the process identifies the relevant waiting node table enabling the identity of the received signal to be analysed and either the associated task program start address to be determined if the signal identity corresponds to any of the signals which are expected in that waiting node, in which case this task program address is set into a predefined memory location or index register in the master control unit subsequently to be read by the central processing unit, the action indicator of the signal buffer associated with the received signal is set to a value indicating that the contents of the signal buffer is undergoing processing and the action indicator belonging to the identified process is set to a value indicating that a central processor is currently processing information contained in a secondary signal buffer and the pointer element associated with this action indicator is set to the identity of the signal buffer containing the newly received signal and finally an interrupt signal of the same priority as the signal is generated to the central processing unit, or the signal to be discarded if the signal identity does not match with any of the identities of the signals expected in the waiting node, whereby the signal buffer is returned to the existing pool of idle signal buffers and the received signal register is reset in order to allow subsequent signals of the same priority as the discarded signal to be received.
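Claims 19 and 20 amount to a reception decision for an idle process: look the signal identity up in the process's current waiting node table, start the matching task program and chain the process to the new buffer on a hit, or discard the buffer on a miss. The fragment below sketches that decision; it builds on the struct and enum definitions assumed in the sketches after claims 4 and 8, and `raise_cpu_interrupt`, `release_buffer` and `buffer_id` are hypothetical placeholders for actions the master control unit performs in hardware or microcode.

```c
/* Assumed per-process state: the program pointer element of claim 8. */
struct process_desc {
    uint8_t  action_indicator;   /* AI_IDLE before reception, AI_SECONDARY after */
    uint32_t pointer_element;    /* waiting node table, then the buffer identity */
};

/* Hypothetical hooks for actions performed by the master control unit. */
extern void     raise_cpu_interrupt(uint8_t priority, uint32_t task_entry);
extern void     release_buffer(struct signal_buffer *b);    /* back to free pool */
extern uint32_t buffer_id(const struct signal_buffer *b);

/* Reception of a signal by an idle process, per claim 20 (sketch only). */
static void receive_in_waiting_node(struct process_desc *p,
                                    const struct waiting_node_table *node,
                                    struct signal_buffer *b)
{
    uint32_t entry = lookup_task(node, b->header.signal_identity);
    if (entry == 0) {                          /* unexpected on the top level       */
        release_buffer(b);                     /* discard: buffer back to the pool  */
        return;                                /* received signal register is reset */
    }
    b->header.action_indicator = AI_ACTIVE;    /* buffer contents being processed      */
    p->action_indicator        = AI_SECONDARY; /* CPU now works in a secondary buffer  */
    p->pointer_element         = buffer_id(b); /* chain the process to the new buffer  */
    raise_cpu_interrupt(b->header.priority, entry);
}
```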
21. A computer system according to claim 19 or 20, wherein, in the case that the signal is to be subject to further analysis, the pointer element associated with the process identifies the signal buffer associated with the first received signal still subject to processing by a central processing unit, which signal is henceforth considered to be the primary signal and whereby the action indicator associated with this primary signal is further analysed, whereby the actual signal is either to be received on a secondary task level if the action indicator of the primary signal indicates that no central processing unit is currently executing a program, to be subject to still further analysis if the said action indicator indicates that a central processing unit is processing information contained in a secondary signal buffer, or to be deferred if the said action indicator indicates that processing of the information contained in the signal buffer associated with the primary signal is in progress.
22. A computer system according to claim 21, wherein, in the case that the signal is to be subject to still further analysis, the pointer element associated with the primary signal identifies a signal buffer containing information associated with a further signal also subject to processing by a central processing unit, which signal is henceforth considered as the primary signal and whereby the action indicator of this new primary signal is analysed in the same way and with the same alternatives as any previously analysed primary signal.
23. A computer system according to claim 21 or claim 22, wherein, in the case that a signal is to be received, the pointer element belonging to the signal buffer associated with the primary signal identifies the relevant waiting node table enabling the identity of the received signal to be analysed and either the associated task program start address to be determined if the signal identity corresponds to any of the signals which are expected in that waiting node, in which case this task program address is set into a predefined memory location or index register in the master control unit subsequently to be read by the central processing unit, the action indicator of the signal buffer associated with the received signal is set to a value indicating that the contents of the signal buffer is undergoing processing and the action indicator belonging to the signal buffer associated with the primary signal is set to a value indicating that a central processor is currently processing information contained in a secondary signal buffer and the pointer element associated with this action indicator is set to the identity of the signal buffer containing the newly received signal and finally an interrupt signal of the same priority as the signal is generated to the central processing unit, or the signal to be deferred if the signal identity does not match with any of the identities of the signals expected in the waiting node.
24. A computer system according to any of claims 21, 22 or 23 wherein, if a signal is to be deferred, the signal buffer containing the signal to be deferred is chained either to the primary signal if no signal buffer containing a previously deferred signal exists or to the signal buffer containing the last previously deferred signal, in both cases by inserting the identity of the signal buffer of the newly deferred signal into the link element of the signal buffer containing the primary signal or the last previously deferred signal and inserting a unique stop code into the link element of the newly deferred signal, and wherein the action indicator belonging to the signal buffer of the newly deferred signal is set to a value indicating that processing has been deferred.
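The deferral of claim 24 appends the new buffer to the end of the chain that hangs off the primary signal's buffer, reusing the link element and terminating the chain with a stop code. A minimal self-contained sketch follows; representing the stop code as a null pointer and the indicator value as 3 are assumptions.

```c
#include <stddef.h>

#define LINK_STOP   NULL  /* assumed representation of the unique stop code */
#define AI_DEFERRED 3     /* assumed "processing deferred" indicator value  */

struct def_buf {
    struct def_buf *link;            /* link element of the signal header */
    unsigned char   action_indicator;
};

/* Chain a newly deferred signal behind the primary signal's buffer (claim 24). */
static void defer_signal(struct def_buf *primary, struct def_buf *newly_deferred)
{
    struct def_buf *last = primary;
    while (last->link != LINK_STOP)      /* find the last previously deferred one, */
        last = last->link;               /* or stay at the primary if none exists  */
    last->link = newly_deferred;         /* insert the new buffer's identity       */
    newly_deferred->link = LINK_STOP;    /* stop code marks the new end of chain   */
    newly_deferred->action_indicator = AI_DEFERRED;
}
```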
25. A computer system as claimed in any of claims 15 to 24, wherein the central processing unit when issuing a virtual address code generating a waiting node entry interrupt to the interrupt controller of the master control unit also stores the location of the waiting node entry table in the main memory associated with the waiting node to be entered into a known location or index register within the master control unit, whereafter execution of the waiting node entry microprogram initiated by the said waiting node entry interrupt by the microprogram unit of the master controller first identifies the task level of the terminated task program executed by the central processing unit and compares the identified task level with the task level stored in the waiting node table in the main memory associated with the waiting node to be entered and either returns the signal buffer associated with the signal which initiated the terminated task program to the pool of free signal buffers if the identified task level of the terminated task program is the same as or higher than the task level of the waiting node to be entered, in which case for unnested task levels the action indicator associated with the process is set to the value indicating that no central processing unit is executing, with the associated pointer element set to the location of the waiting node entry table obtained from the known location or index register within the master control unit, and for nested task levels the action indicator and pointer element of the signal buffer associated with the actual task level is set correspondingly, or sets the action indicator and pointer element associated with the signal buffer containing the signal which initiated the terminated task to the corresponding above-mentioned values, thereby increasing the task level, whereafter the microprogram is terminated if no deferred signals exist and continued with a deferred signal analysis otherwise.
26. A computer system as claimed in claim 25, wherein the task level of the terminated task program is identified by counting the number of chained signal buffers where the action indicator belonging to that signal buffer indicates that a central processing unit is processing information contained in a secondary signal buffer and where the identity of the first buffer is obtained from the pointer element of the actual process and where the identities of chained signal buffers are obtained from the link element within the signal header part of the appropriate signal buffers.
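Claim 26 determines the current task level by following the chain of buffers whose action indicators say that a central processing unit is working in a secondary buffer, starting from the process's pointer element. A sketch of that walk; the type, field names and the indicator value 2 are assumptions.

```c
#define AI_SECONDARY 2  /* assumed "processing a secondary buffer" indicator value */

struct lvl_buf {
    struct lvl_buf *link;            /* link element of the signal header */
    unsigned char   action_indicator;
};

/* Identify the task level of the terminated task program (claim 26): the first
 * buffer comes from the process's pointer element, the rest from the link
 * elements, and only buffers marked "processing a secondary buffer" count. */
static unsigned task_level(const struct lvl_buf *first)
{
    unsigned level = 0;
    for (const struct lvl_buf *b = first;
         b != NULL && b->action_indicator == AI_SECONDARY;
         b = b->link)
        level++;
    return level;
}
```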
27. A computer system as claimed in claim 25 or 26, wherein the deferred signal analysis first identifies the signal buffer containing the first previously deferred signal by means of the contents of the link element within the signal header part of the signal buffer for the signal which initiated the terminated task program, and then resets the action indicator associated with this identified signal buffer and with any signal buffer directly chained to the identified signal buffer through the link element within the identified signal buffer or indirectly chained through the link element of the directly chained or any other indirectly chained signal buffer to the value indicating that a central processing unit is processing or about to process information contained in the signal buffer, whereafter the signal contained in the signal buffer is subjected to analysis in the same manner as within the signal interrupt microprogram, however so, that if the signal is discarded the analysis proceeds with the next deferred signal if there is one, and so, that if the signal is to be deferred, the action indicator belonging to the signal buffer associated with the signal is reset to the value indicating that processing is deferred, whereafter the microprogram terminates if no further deferred signals exist, and otherwise rearranges the chain of signal buffers for the deferred signals in such a way that the first signal buffer in the chain always contains an action indicator indicating that the signal is processed or about to be processed and that the signal buffers of the other deferred signals at all times are chained in their order of arrival, whereafter the signal in the new first signal buffer in the chain is subjected to analysis in the same way until either all signals in the chain of signal buffers have been analysed and deferred again or a signal which may be received is found.
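The deferred-signal analysis of claim 27 effectively re-offers each deferred signal, in arrival order, to the waiting node just entered, stopping at the first one that can be received and leaving the remainder chained in arrival order. The loop below is a heavily simplified sketch of that behaviour; the types are assumed and `analyse`, `receive` and `discard` are hypothetical stand-ins for the microprogram's per-signal actions.

```c
#include <stddef.h>

/* Possible outcomes of re-analysing one deferred signal (claims 20, 23, 24). */
enum verdict { V_RECEIVE, V_DISCARD, V_DEFER_AGAIN };

struct dsig { struct dsig *link; };          /* link element of the header */

/* Hypothetical hooks standing in for the microprogram's per-signal actions. */
extern enum verdict analyse(struct dsig *b); /* against the new waiting node */
extern void         receive(struct dsig *b); /* as in the claim-20 sketch    */
extern void         discard(struct dsig *b); /* buffer back to the free pool */

/* Re-offer deferred signals in arrival order (claim 27); returns the chain
 * of signals that remain deferred, still in their order of arrival. */
static struct dsig *analyse_deferred(struct dsig *chain)
{
    struct dsig *kept_head = NULL, *kept_tail = NULL;
    while (chain != NULL) {
        struct dsig *b = chain;
        chain = chain->link;                 /* detach the oldest deferred signal */
        enum verdict v = analyse(b);
        if (v == V_RECEIVE) {                /* first receivable signal ends scan */
            receive(b);
            break;
        }
        if (v == V_DISCARD) {
            discard(b);
            continue;
        }
        b->link = NULL;                      /* keep it, preserving arrival order */
        if (kept_tail) kept_tail->link = b; else kept_head = b;
        kept_tail = b;
    }
    if (kept_tail) kept_tail->link = chain;  /* re-attach anything not examined */
    else           kept_head = chain;
    return kept_head;
}
```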
27. A computer system according to any of claims 15 to 26, wherein execution of the signal discard microprogram by the microprogram unit within the master control unit is initiated by an interrupt signal generated by decoding a virtual address issued by the central processor to the master control unit, whereby the microprogram returns the signal buffer associated with the last received signal to the pool of free signal buffers and thereafter raises the task level of the currently executing program in the central processing unit by resetting the action indicator of the signal buffer which initiated execution on the previous task level to the value that indicates that a central processing unit is processing information in that signal buffer.
28. A computer system according to any of claims 15 to 27, wherein execution of the signal buffer seize microprogram by the microprogram unit within the master control unit is initiated by an interrupt signal generated by decoding a virtual address issued by the central processor to the master control unit, which microprogram causes a free signal buffer to be identified and seized from the pool of free signal buffers and the identity of the seized signal buffer to be retained within a known location or index register within the master control unit.
29. A computer system according to any of claims 15 to 28, wherein execution of the signal dispatch microprogram by the microprogram unit within the master control unit is initiated by an interrupt signal generated by decoding a virtual address issued by the central processor to the master control unit, which microprogram causes the signal buffer retained in the known location or index register within the master control unit to be inserted as the last signal buffer in the signal priority queue associated with the signal priority value held in the priority level element of the signal header part of the signal buffer and managed by a signal queue head register and a signal queue tail register.
30. A computer system according to any of the claims 1 to 29, wherein the pool of free signal buffers is managed by means of a list structure within the microprogram unit of the master control unit, which list structure is only accessed by the appropriate microprograms for seizing and releasing of signal buffers within the microprogram unit itself.
31. A computer system according to any of the claims 1 to 30, wherein any discrepancy or illegal combination of conditions and action indicator values causes the associated process to be restarted.
32. A computer system according to claim 31, wherein restarting of a process is performed by returning all deferred signals to the pool of free signal buffers, resetting the action indicator to a value indicating that a process is currently processing information contained in a secondary buffer, fetching a restart location unique to the process held within the microprogram unit and storing this value in the same location or index register, which holds the address of a task program when execution of that task program is initiated, and generating an interrupt signal to the central processing unit in the same manner as for a normal signal interrupt.
33. A computer system according to any of the claims 1 to 32, wherein the microprogram unit of the master control unit also contains means for control of time queues, including a time queue head register, a time queue tail register and a clock register for each time queue, and the appropriate microprograms for management of the time queues, and wherein the signal header part of the signal buffers contains an additional time counter element to be used by the time queue management microprograms.
34. A computer system according to claim 33, wherein a timed signal dispatch microprogram within the microprogram unit within the master control unit is initiated by an interrupt signal generated by decoding a virtual address issued by the central processor to the master control unit, which microprogram causes the signal buffer containing the timed signal to be inserted in the timed signal queue in such a way that after insertion the signal associated with the previous signal buffer in the queue has an earlier expiration time than, or the same expiration time as, the newly inserted signal and the signal associated with the next signal buffer in the time queue has a later expiration time than the newly inserted signal.
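Claim 34 keeps the time queue sorted by expiration time, so a timed dispatch is an ordered insertion into a linked list. The sketch below assumes the field and type names shown and uses a plain numeric comparison; the wraparound test of claim 36 is sketched separately after that claim.

```c
#include <stdint.h>
#include <stddef.h>

struct timed_buf {
    struct timed_buf *link;     /* link element of the signal header */
    uint32_t          expires;  /* time counter element (claim 33)   */
};

struct time_queue { struct timed_buf *head, *tail; };

/* Insert so that earlier-or-equal expirations precede and later ones follow. */
static void timed_dispatch(struct time_queue *q, struct timed_buf *b)
{
    struct timed_buf **pp = &q->head;
    while (*pp && (*pp)->expires <= b->expires)  /* keep FIFO order on ties  */
        pp = &(*pp)->link;
    b->link = *pp;                               /* splice in before the     */
    *pp = b;                                     /* first later expiration   */
    if (b->link == NULL)
        q->tail = b;                             /* update the tail register */
}
```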
35. A computer system according to claim 33, wherein a time queue check microprogram is periodically executed by the microprogram unit within the master control unit, whereby the clock register is cyclically stepped one step forward whereafter no further action is taken if the time queue does not contain any signal buffer and the expiration time contained in the time counter element of the first signal buffer in the time queue is compared with the actual stepped value of the clock register if the time queue contains at least one signal buffer, in the last case of which again no further action is taken if the time for the first signal in the time queue has not yet expired and otherwise the first signal buffer is unlinked from the time queue and inserted in the signal priority queue associated with the priority level element within the signal header part of the signal buffer and the new first signal buffer, if any, is analysed for time expiration.
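The periodic check of claim 35 advances the clock one step and drains every expired buffer from the front of the sorted time queue into its signal priority queue. The sketch below reuses the time-queue types assumed after claim 34; `N_CLOCK_VALUES`, `expired` and `enqueue_by_priority` are assumptions (the wraparound test itself is sketched after claim 36).

```c
#include <stdint.h>
#include <stddef.h>

#define N_CLOCK_VALUES 65536u              /* assumed size of the clock register */

extern int  expired(uint32_t clock_value, uint32_t counter_value); /* claim-36 test  */
extern void enqueue_by_priority(struct timed_buf *b);              /* claim-15 queue */

static uint32_t clock_reg;                 /* cyclically stepped clock register */

/* Periodic time queue check (claim 35). */
static void time_queue_check(struct time_queue *q)
{
    clock_reg = (clock_reg + 1u) % N_CLOCK_VALUES;   /* step the clock forward */
    while (q->head && expired(clock_reg, q->head->expires)) {
        struct timed_buf *b = q->head;               /* unlink the first buffer */
        q->head = b->link;
        if (q->head == NULL)
            q->tail = NULL;
        b->link = NULL;
        enqueue_by_priority(b);            /* into the signal priority queue */
    }
}
```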
36. A computer system as claimed in any of claims 33, 34 or 35, wherein the clock register is able to store N different clock values and where the time counter within a signal buffer is set to an expiration time represented by the corresponding number of time queue check program intervals forward from the value of the clock register at dispatching time and where the time is considered to have expired as soon as the value of the clock register has passed the value held in the said time counter provided that the difference between these two values remains within a predefined limit.
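Claim 36 describes a wraparound comparison: with N possible clock values, a counter is considered expired once the clock has reached or passed it, provided the modular distance stays within a predefined limit. One way to realise such a test is sketched below; the value of N, the choice of N/2 as the limit, and the requirement that both values lie in the range 0 to N-1 are assumptions.

```c
#include <stdint.h>

#define N_CLOCK_VALUES 65536u                  /* assumed number of clock values */
#define EXPIRY_WINDOW  (N_CLOCK_VALUES / 2u)   /* assumed "predefined limit"     */

/* Non-zero if the clock register has reached or passed the counter value (claim 36).
 * Both arguments are assumed to lie in the range 0 .. N_CLOCK_VALUES-1. */
static int expired(uint32_t clock_value, uint32_t counter_value)
{
    /* modular distance by which the clock has moved past the counter */
    uint32_t passed_by = (clock_value - counter_value) % N_CLOCK_VALUES;
    return passed_by < EXPIRY_WINDOW;          /* expired only within the limit */
}
```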
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB848408444A GB8408444D0 (en) | 1984-04-02 | 1984-04-02 | Computer systems |
GB8408444 | 1984-04-02 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1985004499A1 (en) | 1985-10-10
Family
ID=10559035
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB1985/000138 WO1985004499A1 (en) | 1984-04-02 | 1985-04-01 | Computer interprocess signal communication system |
Country Status (6)
Country | Link |
---|---|
EP (1) | EP0178309A1 (en) |
JP (1) | JPS62500548A (en) |
AU (1) | AU4154885A (en) |
CA (1) | CA1241451A (en) |
GB (1) | GB8408444D0 (en) |
WO (1) | WO1985004499A1 (en) |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS55142476A (en) * | 1979-04-24 | 1980-11-07 | Nec Corp | Address conversion system for information processing system |
JPS56127261A (en) * | 1980-03-12 | 1981-10-05 | Hitachi Ltd | Multiprocessor system |
JPS5939764B2 (en) * | 1980-11-12 | 1984-09-26 | 株式会社日立製作所 | Interprocessor communication control device |
1984
- 1984-04-02 GB GB848408444A patent/GB8408444D0/en active Pending
1985
- 1985-04-01 AU AU41548/85A patent/AU4154885A/en not_active Abandoned
- 1985-04-01 WO PCT/GB1985/000138 patent/WO1985004499A1/en not_active Application Discontinuation
- 1985-04-01 JP JP50146185A patent/JPS62500548A/en active Pending
- 1985-04-01 EP EP19850901512 patent/EP0178309A1/en not_active Withdrawn
- 1985-04-02 CA CA000478207A patent/CA1241451A/en not_active Expired
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4325120A (en) * | 1978-12-21 | 1982-04-13 | Intel Corporation | Data processing system |
US4418396A (en) * | 1979-05-04 | 1983-11-29 | International Standard Electric Corporation | Signalling system |
Non-Patent Citations (3)
Title |
---|
Electronic Design, Volume 32, Nr. 4, February 1984 (Waseca, US) B. AKERLIND and L. STAFFORD: "OS Kernel Wields Task-Management Power", pages 205-211, see page 207, right-hand column, second paragraph - page 210, right-hand column, second paragraph * |
Elektronische Rechenanlagen, Volume 19, Nr. 6, 1977, Munich (DE) H. WETTSTEIN: "Möglichkeiten zur Erhöhung der Parallelität von Softwareabläufen", pages 269-274, see page 270, left-hand column, line 10 - right-hand column, line 13 * |
Nachrichtentechnische Zeitschrift, Volume 35, Nr. 12, December 1982, Berlin (DE) M. SCHAM: "Prozesskommunikation beim iAPX 432", pages 744-748, see page 744, right-hand column, line 1 - page 747, left-hand column, line 28 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0374338A1 (en) * | 1988-12-23 | 1990-06-27 | International Business Machines Corporation | Shared intelligent memory for the interconnection of distributed micro processors |
US5148527A (en) * | 1988-12-23 | 1992-09-15 | International Business Machines Corporation | Interface for independently establishing a link and transmitting high level commands including logical addresses from dedicated microprocessor to shared intelligent memory |
WO2010041259A2 (en) * | 2008-10-08 | 2010-04-15 | Rry - Technologies Ltd. | Device and method for disjointed computing |
WO2010041259A3 (en) * | 2008-10-08 | 2010-07-15 | Rry - Technologies Ltd. | Device and method for disjointed computing |
Also Published As
Publication number | Publication date |
---|---|
EP0178309A1 (en) | 1986-04-23 |
JPS62500548A (en) | 1987-03-05 |
AU4154885A (en) | 1985-11-01 |
GB8408444D0 (en) | 1984-05-10 |
CA1241451A (en) | 1988-08-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US4229790A (en) | Concurrent task and instruction processor and method | |
Anderson et al. | Real-time computing with lock-free shared objects | |
US4318182A (en) | Deadlock detection and prevention mechanism for a computer system | |
US7925869B2 (en) | Instruction-level multithreading according to a predetermined fixed schedule in an embedded processor using zero-time context switching | |
US4394725A (en) | Apparatus and method for transferring information units between processes in a multiprocessing system | |
JP2866241B2 (en) | Computer system and scheduling method | |
EP0330836B1 (en) | Method for multiprocessor system having self-allocating processors | |
US6934950B1 (en) | Thread dispatcher for multi-threaded communication library | |
EP1242883B1 (en) | Allocation of data to threads in multi-threaded network processor | |
US4369494A (en) | Apparatus and method for providing synchronization between processes and events occurring at different times in a data processing system | |
US6983462B2 (en) | Method and apparatus for serving a request queue | |
EP0602359A2 (en) | Architectural enhancements for parallel computer systems | |
EP0682312A2 (en) | Hardware implemented locking mechanism for parallel/distributed computer system | |
US20110289511A1 (en) | Symmetric Multi-Processor System | |
JPS60128537A (en) | Resouce access control | |
JP2003044295A (en) | Sleep queue management | |
US7103631B1 (en) | Symmetric multi-processor system | |
EP1131704B1 (en) | Processing system scheduling | |
JPS6334490B2 (en) | ||
US6701429B1 (en) | System and method of start-up in efficient way for multi-processor systems based on returned identification information read from pre-determined memory location | |
WO1985004499A1 (en) | Computer interprocess signal communication system | |
Liu et al. | Lock-free scheduling of logical processes in parallel simulation | |
Harbour | Real-time posix: an overview | |
Poledna | Optimizing interprocess communication for embedded real-time systems | |
Mackenzie et al. | UDM: user direct messaging for general-purpose multiprocessing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Designated state(s): AU BR FI JP KR NO SU US |
|
AL | Designated countries for regional patents |
Designated state(s): AT BE CH DE FR GB IT LU NL SE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1985901512 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 1985901512 Country of ref document: EP |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 1985901512 Country of ref document: EP |