WO1987003395A2 - Computer stack arrangement - Google Patents

Computer stack arrangement Download PDF

Info

Publication number
WO1987003395A2
WO1987003395A2 PCT/GB1986/000719 GB8600719W WO8703395A2 WO 1987003395 A2 WO1987003395 A2 WO 1987003395A2 GB 8600719 W GB8600719 W GB 8600719W WO 8703395 A2 WO8703395 A2 WO 8703395A2
Authority
WO
WIPO (PCT)
Prior art keywords
data
processor
memory
computer
stack
Prior art date
Application number
PCT/GB1986/000719
Other languages
French (fr)
Other versions
WO1987003395A3 (en)
Inventor
David Michael Harland
Original Assignee
Linn Products Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Linn Products Limited filed Critical Linn Products Limited
Publication of WO1987003395A2 publication Critical patent/WO1987003395A2/en
Publication of WO1987003395A3 publication Critical patent/WO1987003395A3/en

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/448 Execution paradigms, e.g. implementations of programming paradigms
    • G06F9/4482 Procedural
    • G06F9/4484 Executing subprograms
    • G06F9/4486 Formation of subprogram jump address

Definitions

  • This invention relates to computers (the term "computers" being used to denote stored-program-logic digital computers).
  • the central processor unit CPU
  • logic functions involving arbitrary recursion can only be handled at a level higher than the instruction set.
  • the memory is a one-level, heap-based structure. That is, because it is one-level, data in both backing store and core store are held in the same format, and because it is heap-based, data is retrieved on a per-value basis rather than in huge chunks, value in this context referring to some feature of a data block by which it may be identified.
  • A principal object of the present invention is to provide a substantial improvement in computer speed. Another object is to provide improved flexibility in the possible uses of the machine.
  • the invention provides a computer including a processor and a memory; the processor including an arithmetic and logic unit, a microcode control store and decoding logic, and stack means; and is characterised in that the stack means is adapted to hold both data representing microcode and data representing user-generated computational data and control information, the stack means being arranged to hold either type of data without discrimination; and in that the processor includes a plurality of registers associated with the stack means to act as stack pointers at least for microcode and for user-generated data and control information, respectively.
  • Fig 1 is a schematic block diagram of a computer embodying the invention
  • Fig 2 illustrates the organisation of the stack in Fig 1
  • Fig 3 illustrates in more detail a practical embodiment of the stack
  • Fig 4 illustrates the memory system in greater detail
  • Fig 5 illustrates a preferred data format
  • Fig 6 is a block diagram showing a preferred form of pager/indexer in more detail.
  • the computer has a main processor 10 including a main ALU 12, internal (core) store 14, stack means 16, and microcode control store 18 and decode logic 20.
  • the stack means 16 and the ALU 12 communicate with main bus 22.
  • the core store 14 operates in conjunction with an external backing store 26 (eg hard disk drive) under the control of an autonomous pager/indexer 28.
  • the stack means 16 includes a stack 30 and a plurality of registers 32a, 32b, ...
  • the stack 30 is arranged to hold data representing not only microcode processing but also high-level computational data and control information; the plurality of registers 32 being used to identify the current frame of the stack for the different classes of information.
  • the preferred embodiment operates with three levels of language, namely user language, a mid-level language such as assembly language, and microcode. Therefore the stack contains a mixture of information frames of variable length.
  • Four pointers are required to indicate the current top of the stack plus the current frame for each level of language, and each pointer requires a corresponding hardware register.
  • the core store 14, backing store 26, and pager/indexer 28 act together as a one-level, heap-based memory system in which no distinction is drawn between information currently in core store 14 and that in backing store 26.
  • the pager/indexer 28 pages individual values off disk when they are needed, and is associated with a garbage collector running in parallel with the main processor 10 to maintain maximum storage utility.
  • the pager/indexer is connected to the main bus 22 and also in parallel to the ALU 12.
  • the stores 14, 26, however, are only visible to the rest of the system via the pager/indexer 28.
  • Fig 3 shows in greater detail a practical architecture for the stack means 16 of Fig 1. This is connected to the main bus 22, but most operations within the stack means 16 are conducted in parallel by dedicated links.
  • Register 32a acts as a main stack pointer, holding an address defining the current top of the stack.
  • Registers 32b, c, d hold addresses defining the bottom of the topmost stack frame of each language.
  • each of the registers 32 can be used to address the data stack 30 via bus 34.
  • addresses can be modified under microprogram control by means of an auxiliary ALU 38 interposed in the bus 34.
  • the main stack pointer register 32a is provided with a conventional increment/decrement loop 40.
  • This arrangement of the stack means allows the machine to perform nested and recursive routines involving both microcode and levels of higher-level language.
  • a control stack 44 is provided which can be loaded with the contents of any of the registers 32a-d via select circuit 46.
  • a control stack pointer register 48 identifies the current top of this stack.
  • the CSP register 48 can be counted up and down by increment/decrement circuit 50, or loaded with a desired location from the main bus 22 via line 52.
  • Data from the control stack 44 can be read out to the main bus 22. It can also be supplied to the registers 32 via bus 36.
  • the main ALU 12 is connected to receive data from the data stack 30 and the main bus 22, and to transmit data to the main bus 22 and to the data stack 30.
  • the invention in its preferred form operates with data and control information stored in blocks of variable length, or "objects".
  • Each object comprises a tag defining the size and the type of the object followed by its components.
  • the "type" identifier can be compared by the processor against a list of operations (eg addition, subtraction) permitted for an object of that type.
  • all information held, whether in core store 14 or backing store 26, is in the same format.
  • the pager/indexer 28 comprises an autonomous processor operating on its own programs.
  • This table is a list defining object numbers currently held in core store and the physical location in core store of a specified point of the object or data block, for each of those object numbers.
  • the pager/indexer 28 addresses the component of that block specified by an index, also supplied by the CPU 10, and then either updates or retrieves that component. Data to be updated is taken off the main bus 22, and data to be retrieved is placed on that bus. In every case the pager/indexer 28 verifies that the index given specifies a data element within the bounds of that block.
  • an interrupt signal is generated which disables the system clock of the main CPU 10.
  • the CPU 10 is thus simply frozen for an indefinite length of time.
  • pager/indexer 28 searches backing store 26 for that object number, retrieves the relevant block, and loads it in core store 14, at the same time updating the table. Once this has been done, the interrupt signal is cancelled and the CPU 10 restarts from exactly where it was.
  • FIG. 5 illustrates a preferred format for the data blocks or objects.
  • An object 54 comprises a 32-bit object number 55, a tag consisting of two 32-bit words 56 and 58, and n components each of two 32-bit words.
  • the first tag word 56 contains house- keeping information 56a and a size identifier 56b defining the overall size of the object 54.
  • the second word 58 defines the object type; thus 2^32 types are possible.
  • This component is referred to hereinafter as the "object representation”, and suitably comprises the part of that object which will have the highest frequency of use. This format is preferred to provide a minimum average access time.
  • Fig 6 shows in schematic form a preferred implementation of the pager/indexer 28, which may be considered as a pager 28a and indexer 28b.
  • object number, object address, object size, object type, object representation
  • the tables are stored in buffers 62a-62e.
  • the pager 28a comprises the buffers 62 connected in parallel to the main bus 22 via switch 64, and connected to provide inputs to registers 66-76.
  • Each buffer 62 stores the relevant part of every object which is present in core store at any given time.
  • the corresponding object number and index are made available on main bus 22.
  • the index is loaded in index register 68 and the object number is passed via switch 64 in parallel to the buffers 62.
  • the object number addresses the appropriate content for that object; on the next cycle this information is fed in parallel to the registers 66, 70-76 in such manner that each register holds one separate item of information, as follows: 66, object number; 70, object base address; 72, object type; 74, object representation; 76, object size.
  • the desired object number is compared with the output of object number buffer 62a in comparator 63. If comparison occurs, this indicates that the object is in core storage; if comparison does not occur, an output is generated at 63a to disable the main CPU clock.
  • the contents of the registers 66-76 are supplied in parallel to the indexer 28b, in which desired tests are carried out separately and in parallel, the results of which are made available at the subsequent cycle, and in which the object number and index are merged or altered.
  • An address ALU 78 is connected to receive the base address of the object from address register 70 and the desired index from index register 68 in order to combine these to give as output 79 the address of the desired component.
  • An increment/decrement circuit 81 is provided to permit the index to be readily incremented or decremented, thus providing a rapid means of addressing sequential components of the same object.
  • the indexer carries out the following tests:
  • a type test circuit 80 compares the type defined in word 58 (Fig 5) with the representation to determine whether the representation is one which is valid for the stated type.
  • a comparison circuit 82 checks that the requested component index is within the size range specified for that object. If tests such as (a) and (b) indicate the presence of an invalidity, the respective circuits may be arranged to give an output which causes repetition of the command or termination of the current program. Validity checks of this general nature have been discussed in the prior art but have been little used in practice owing to the overheads they impose in serial-processing machines, which overheads are very high in relation to the probability of an error. In the present invention, however, the overhead is low since all desired validity tests are performed simultaneously in one machine cycle without using the main ALU. To this end, a separate testing circuit (such as circuits 80, 82) is required for each test.
  • Such circuits may be entirely hardware implemented for the desired test, or may comprise a separate processor programmed to effect the desired test.
  • the leading part of the object up to and including the representation is stored in the registers 62.
  • the table registers could hold only object number and address, in which case the parts of the object required for the pager/indexer 28 must be retrieved from memory.
  • Reverting to Fig 4, it was stated above that when core store is full, objects must be removed on a predetermined criterion. This could be, for example, LIFO, FIFO, or by recording frequency of use and removing on the basis of least use.
  • the blocks to be removed are determined by scanning the table to identify new and modified blocks.
  • the pager/indexer 28 classifies blocks to be removed from core store as:
  • the original in backing store must be located and overwritten with the updated data.
  • the core store 14 is preferably provided as two banks 14A and 14B arranged for use alternately. This allows one bank to be taken out of use for removal of data blocks on overflow while the system continues in operation with the other bank active, thereby minimising the processing time lost upon memory overflow.
  • the pager/indexer 28 can be selectively coupled to either of the banks 14A and 14B, the other bank being coupled with an autonomous garbage collector CPU 15 which has the dedicated function of performing the above classification, which can be performed by suitable software in a manner which will be apparent to those skilled in the art.
  • The invention is concerned with a computer in which three areas are of significance, namely (a) the stack arrangement, (b) the memory and paging system and (c) the garbage collection system. Each of these is believed to be useful in itself, but for maximum benefit all three areas will be used together. This requires a degree of hardware complexity but has the potential to improve operating speed by orders of magnitude in comparison with conventional machines.
  • Languages, including the microcode level, can be mixed on a single stack and can call upon one another in an arbitrarily nested and recursive manner.

Abstract

A computer has a CPU (10) comprising central ALU (12) operating in conjunction with stack means (16), decode logic (20) and microcode control store (18). The stack means (16) is adapted to hold a mixture of microcode and higher level languages, each with a separate stack pointer in a corresponding number of registers (32), to enable nested and recursive routines to be implemented directly. The ALU (12) is connected to backing store (26) and core store (14) only via an autonomous pager/indexer (28) in an arrangement which makes page faults invisible to the ALU. The backing store (26) and core store (14) form a one-level, heap-based structure in which data is held in a common format. A novel form of garbage collection from core store is also described.

Description

COMPUTER STACK ARRANGEMENT
This invention relates to computers (the term "computers" being used to denote stored-program-logic digital computers). In conventional computers, the central processor unit (CPU) operates its instruction set in a non-nested manner, one instruction after another. In particular, logic functions involving arbitrary recursion can only be handled at a level higher than the instruction set.
Within the CPU, there may be some measure of recursion in executing standard microcode instructions, but this is of a very limited nature and cannot "reach out" to the user during execution. A computer organised in this way has certain limitations on flexibility and speed. In particular, once a machine-language instruction has been entered in the CPU, the CPU is limited to performing that instruction, and must do so at the microcode level; if any recursive or looping algorithm requiring arbitrary computation is involved, then it is necessary to implement that algorithm at a higher level, and execute it as a machine code program.
Another limitation on the speed and flexibility of conventional computers arises from the manner in which memory is organised. It is usual to retrieve data by addressing defined locations. It is also usual to page data from backing store to core store under the control of the CPU. In such a virtual memory system, on accessing data it is necessary to test addresses to detect page faults. If a page fault occurs, it is necessary to transfer the data from backing store to core store and to subsequently restart the operation which was in process at the time of the page fault. Virtual memory systems transfer whole blocks of memory, whereas typically only a small part of this will actually be addressed.
Important preferred features of the present invention are directed at improving memory architecture. In accordance with the preferred scheme, the memory is a one-level, heap-based structure. That is, because it is one-level, data in both backing store and core store are held in the same format, and because it is heap-based, data is retrieved on a per-value basis rather than in huge chunks, value in this context referring to some feature of a data block by which it may be identified.
A principal object of the present invention is to provide a substantial improvement in computer speed. Another object is to provide improved flexibility in the possible uses of the machine.
Accordingly the invention provides a computer including a processor and a memory; the processor including an arithmetic and logic unit, a microcode control store and decoding logic, and stack means; and is characterised in that the stack means is adapted to hold both data representing microcode and data representing user-generated computational data and control information, the stack means being arranged to hold either type of data without discrimination; and in that the processor includes a plurality of registers associated with the stack means to act as stack pointers at least for microcode and for user-generated data and control information, respectively.
An embodiment of the present invention will now be described, by way of example only, with reference to the accompanying drawings, in which:-
Fig 1 is a schematic block diagram of a computer embodying the invention;
Fig 2 illustrates the organisation of the stack in Fig 1;
Fig 3 illustrates in more detail a practical embodiment of the stack;
Fig 4 illustrates the memory system in greater detail;
Fig 5 illustrates a preferred data format; and
Fig 6 is a block diagram showing a preferred form of pager/indexer in more detail.
Overview
Referring to Fig 1, the computer has a main processor 10 including a main ALU 12, internal (core) store 14, stack means 16, and microcode control store 18 and decode logic 20. The stack means 16 and the ALU 12 communicate with main bus 22.
" The core store 14 operates in conjunction with an external backing store 26 (eg har disk drive) under'the control of an autonomous pager/indexer 28.
The stack means 16 includes a stack 30 and a plurality of registers 32a, 32b, ... The stack 30 is arranged to hold data representing not only microcode processing but also high-level computational data and control information; the plurality of registers 32 being used to identify the current frame of the stack for the different classes of information.
More specifically, referring to Fig 2, the preferred embodiment operates with three levels of language, namely user language, a mid-level language such as assembly language, and microcode. Therefore the stack contains a mixture of information frames of variable length. Four pointers are required to indicate the current top of the stack plus the current frame for each level of language, and each pointer requires a corresponding hardware register. As processing migrates from level to level, using up and then releasing the main stack, the "current frame" register for the appropriate frame type must be saved and then reset. To facilitate this, a second, simpler stack is preferably employed, as will be described.
Reverting to Fig 1, the core store 14, backing store 26, and pager/indexer 28 act together as a one-level, heap-based memory system in which no distinction is drawn between information currently in core store 14 and that in backing store 26. The pager/indexer 28 pages individual values off disk when they are needed, and is associated with a garbage collector running in parallel with the main processor 10 to maintain maximum storage utility. The pager/indexer is connected to the main bus 22 and also in parallel to the ALU 12. The stores 14, 26, however, are only visible to the rest of the system via the pager/indexer 28.
Stack Architecture
Fig 3 shows in greater detail a practical architecture for the stack means 16 of Fig 1. This is connected to the main bus 22, but most operations within the stack means 16 are conducted in parallel by dedicated links.
The stack 30 is used as described above to hold mixed data. Register 32a acts as a main stack pointer, holding an address defining the current top of the stack. Registers 32b, c, d hold addresses defining the bottom of the topmost stack frame of each language.
The contents of each of the registers 32 can be used to address the data stack 30 via bus 34. Additionally, such addresses can be modified under microprogram control by means of an auxiliary ALU 38 interposed in the bus 34.
The main stack pointer register 32a is provided with a conventional increment/decrement loop 40.
However, it is also provided with a loop containing a further local ALU 41 connected to main bus 22.
This arrangement of the stack means allows the machine to perform nested and recursive routines involving both microcode and levels of higher-level language.
When the main stack is operated on in such a manner, it is necessary to save the contents of the frame register for the current level of language, to be able then to create nested frames, and to be able to reset that register in a LIFO sequence when the program retracts back through the levels of language. For this purpose, a control stack 44 is provided which can be loaded with the contents of any of the registers 32a-d via select circuit 46. A control stack pointer register 48 identifies the current top of this stack. The CSP register 48 can be counted up and down by increment/decrement circuit 50, or loaded with a desired location from the main bus 22 via line 52.
Data from the control stack 44 can be read out to the main bus 22. It can also be supplied to the registers 32 via bus 36.
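The interplay of the data stack, the frame registers and the control stack can be illustrated with a short behavioural sketch. The following Python model is not part of the patent disclosure: the class StackMeans and the methods enter_frame and exit_frame are invented names, the hardware registers are reduced to plain variables, and word formats are ignored; it simply shows how the current-frame register for a language level is saved on the control stack when a nested frame is created and restored in LIFO order when the program retracts through the levels of language.

    # Behavioural sketch of the stack means of Fig 3 (illustrative only).
    # One data stack holds frames of all language levels; the control stack
    # saves the displaced frame-register value on each nested entry.

    LEVELS = ("user", "assembly", "microcode")   # three levels of language

    class StackMeans:
        def __init__(self):
            self.data_stack = []                          # stack 30: mixed frames
            self.frame_reg = {lv: None for lv in LEVELS}  # registers 32b-d
            self.control_stack = []                       # control stack 44 (pointer 48)

        @property
        def main_sp(self):                # register 32a: current top of the stack
            return len(self.data_stack)

        def push(self, value):
            self.data_stack.append(value)

        def enter_frame(self, level):
            """Create a nested frame at the given language level."""
            # Save the displaced frame register on the control stack (LIFO),
            # then point the register at the bottom of the new frame.
            self.control_stack.append((level, self.frame_reg[level]))
            self.frame_reg[level] = self.main_sp

        def exit_frame(self):
            """Retract to the previous frame, restoring the saved register."""
            level, saved = self.control_stack.pop()
            del self.data_stack[self.frame_reg[level]:]   # release the frame
            self.frame_reg[level] = saved

    # Example: a user-level routine calls microcode, which nests recursively.
    sm = StackMeans()
    sm.enter_frame("user");      sm.push("user datum")
    sm.enter_frame("microcode"); sm.push("microword 1")
    sm.enter_frame("microcode"); sm.push("microword 2")   # recursion within microcode
    sm.exit_frame(); sm.exit_frame(); sm.exit_frame()
    assert sm.main_sp == 0 and sm.control_stack == []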
As is also shown in Fig 3, the main ALU 12 is connected to receive data from the data stack 30 and the main bus 22, and to transmit data to the main bus 22 and to the data stack 30.
It will be seen that this stack architecture allows a number of data transfers and operations to be performed at each clock cycle.
Memory System
The invention in its preferred form operates with data and control information stored in blocks of variable length, or "objects". Each object comprises a tag defining the size and the type of the object followed by its components. The "type" identifier can be compared by the processor against a list of operations (eg addition, subtraction) permitted for an object of that type. As discussed above, all information held, whether in core store 14 or backing store 26, is in the same format. Referring to Fig 4, the pager/indexer 28 comprises an autonomous processor operating on its own programs.
When the main CPU 10 calls for data from memory, this is received by the pager/indexer 28 which masks from the data received the object number identifying the desired object and compares this object number with the entries in a table which it holds and updates. This table is a list defining object numbers currently held in core store and the physical location in core store of a specified point of the object or data block, for each of those object numbers.
If the object number requested is present in the table, the pager/indexer 28 addresses the component of that block specified by an index, also supplied by the CPU 10, and then either updates or retrieves that component. Data to be updated is taken off the main bus 22, and data to be retrieved is placed on that bus. In every case the pager/indexer 28 verifies that the index given specifies a data element within the bounds of that block.
If the object number requested is not present in the table, an interrupt signal is generated which disables the system clock of the main CPU 10. The CPU 10 is thus simply frozen for an indefinite length of time. During this time, pager/indexer 28 searches backing store 26 for that object number, retrieves the relevant block, and loads it in core store 14, at the same time updating the table. Once this has been done, the interrupt signal is cancelled and
CPU 10 restarts from exactly where it was. In this way, paging from backing to core storage does not require the intervention of the main processor and therefore does not involve abandoning microprogram steps already effected. The core store will, of course, overflow from time to time. Data blocks (objects) are then removed from core storage on some predetermined criterion, as discussed below.
Detailed example of data format
Fig 5 illustrates a preferred format for the data blocks or objects. An object 54 comprises a 32-bit object number 55, a tag consisting of two 32-bit words 56 and 58, and n components each of two 32-bit words. The first tag word 56 contains housekeeping information 56a and a size identifier 56b defining the overall size of the object 54. The second word 58 defines the object type; thus 2^32 types are possible.
The object 54 when in core store is identified by using the location of the second word 58 as a base address, and a given component by the base address plus an index 1 . . . n, the first word 56 being defined by (base address) + (index = -1).
Thus the first component is identified by (base address) + (index = 1). This component is referred to hereinafter as the "object representation", and suitably comprises the part of that object which will have the highest frequency of use. This format is preferred to provide a minimum average access time.
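By way of illustration only, the addressing arithmetic of this format can be sketched in Python as follows. The helper names load_object and component, the modelling of core store as a word-addressed dictionary, and the placing of the object number two words below the base address are assumptions made for the sketch rather than details taken from the patent; word widths are abstracted away.

    # Illustrative model of the Fig 5 object format and its core-store addressing.

    core = {}   # models core store 14 as word-addressed memory

    def load_object(base_addr, obj_number, obj_type, components, housekeeping=0):
        """Lay an object out so that word 58 (the type) sits at the base address."""
        size = len(components)
        core[base_addr - 2] = obj_number             # object number 55 (position assumed)
        core[base_addr - 1] = (housekeeping, size)   # first tag word 56: fields 56a, 56b
        core[base_addr] = obj_type                   # second tag word 58: the type
        for i, comp in enumerate(components, start=1):
            core[base_addr + i] = comp               # components 1 .. n
        return base_addr

    def component(base_addr, index):
        """Address a component as (base address) + index; index -1 gives word 56."""
        _, size = core[base_addr - 1]
        if not (index == -1 or 1 <= index <= size):
            raise IndexError("index outside the bounds of this object")
        return core[base_addr + index]

    base = load_object(100, obj_number=7, obj_type="pair", components=["head", "tail"])
    assert component(base, 1) == "head"   # component 1 is the "object representation"
    assert component(base, -1)[1] == 2    # first tag word carries the size identifier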
Detailed example of pager/indexer
Fig 6 shows in schematic form a preferred implementation of the pager/indexer 28, which may be considered as a pager 28a and indexer 28b. In this preferred form, whenever an object is loaded into core store there is stored in tables the following information: object number, object address, object size, object type, and object representation.
The tables are stored in buffers 62a-62e. The pager 28a comprises the buffers 62 connected in parallel to the main bus 22 via switch 64, and connected to provide inputs to registers 66-76. The indexer
28b comprises, in principle, individual dedicated means for effecting each of a number of tests, as will be discussed below.
Each buffer 62 stores the relevant part of every object which is present in core store at any given time. When the main processor calls for a given component, the corresponding object number and index are made available on main bus 22. During one machine cycle the index is loaded in index register 68 and the object number is passed via switch 64 in parallel to the buffers 62. In the buffers 62a-e, the object number addresses the appropriate content for that object; on the next cycle this information is fed in parallel to the registers 66, 70-76 in such manner that each register holds one separate item of information, as follows:
66: object number
70: object base address
72: object type
74: object representation
76: object size
At the same time, the desired object number is compared with the output of object number buffer 62a in comparator 63. If comparison occurs, this indicates that the object is in core storage; if comparison does not occur, an output is generated at 63a to disable the main CPU clock.
On the following cycle, the contents of the registers 66-76 are supplied in parallel to the indexer 28b, in which desired tests are carried out separately and in parallel, the results of which are made available at the subsequent cycle, and in which the object number and index are merged or altered.
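The behaviour described so far, a table hit serving the request directly and a miss freezing the main processor while the object is paged in, can be sketched in Python as below. The function name access, the table layout and the simple type rules are invented for illustration; the disabling of the main CPU clock is only simulated, by performing the fetch from backing store inline before the lookup is retried.

    # Behavioural sketch of the pager 28a / indexer 28b of Fig 6 (illustrative only).

    backing_store = {}   # backing store 26: object number -> full object record
    tables = {}          # buffers 62a-e: entries for objects currently in core

    VALID_REPR = {"pair": (str,), "number": (int, float)}   # assumed type rules

    def access(obj_number, index, core):
        """Return the addressed component, paging the object in on a miss."""
        if obj_number not in tables:                   # comparator 63: no match
            # The main CPU clock would be disabled here; the pager works alone.
            obj = backing_store[obj_number]            # search backing store 26
            base = max(core, default=99) + 1           # pick a free core address
            for offset, word in enumerate(obj["components"], start=1):
                core[base + offset] = word             # load the block into core 14
            tables[obj_number] = {"base": base, "type": obj["type"],
                                  "repr": obj["components"][0] if obj["components"] else None,
                                  "size": len(obj["components"])}
            # Interrupt cancelled; the frozen CPU restarts exactly where it was.
        e = tables[obj_number]                         # registers 66-76 loaded in parallel
        # Indexer tests, conceptually carried out in parallel:
        if not 1 <= index <= e["size"]:                # bounds check (circuit 82)
            raise IndexError("component index outside object bounds")
        if e["repr"] is not None and not isinstance(e["repr"], VALID_REPR.get(e["type"], object)):
            raise TypeError("representation invalid for stated type")   # type test (circuit 80)
        return core[e["base"] + index]                 # address ALU 78: base + index

    core_store = {}
    backing_store[7] = {"type": "pair", "components": ["head", "tail"]}
    assert access(7, 2, core_store) == "tail"          # first access pages the object in
    assert access(7, 1, core_store) == "head"          # second access hits the table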
An address ALU 78 is connected to receive the base address of the object from address register 70 and the desired index from index register 68 in order to combine these to give as output 79 the address of the desired component. An increment/decrement circuit 81 is provided to permit the index to be readily incremented or decremented, thus providing a rapid means of addressing sequential components of the same object.
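For example, stepping the index register gives rapid sequential access to the components of one object without recomputing anything else. A small illustrative sketch, with the object laid out as in the earlier format example (names again assumed):

    # Sequential component access by incrementing the index register (sketch).
    base, size = 100, 3
    core = {100: "sometype", 101: "a", 102: "b", 103: "c"}   # word 58 at the base address
    index = 1
    collected = []
    while index <= size:                       # inc/dec circuit 81 steps the index register
        collected.append(core[base + index])   # address ALU 78 output: base + index
        index += 1
    assert collected == ["a", "b", "c"]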
In the example shown, the indexer carries out the following tests:
(a) a type test circuit 80 compares the type defined in word 58 (Fig 5) with the representation to determine whether the representation is one which is valid for the stated type.
(b) a comparison circuit 82 checks that the requested component index is within the size range specified for that object.
If tests such as (a) and (b) indicate the presence of an invalidity, the respective circuits may be arranged to give an output which causes repetition of the command or termination of the current program. Validity checks of this general nature have been discussed in the prior art but have been little used in practice owing to the overheads they impose in serial-processing machines, which overheads are very high in relation to the probability of an error. In the present invention, however, the overhead is low since all desired validity tests are performed simultaneously in one machine cycle without using the main ALU. To this end, a separate testing circuit (such as circuits 80, 82) is required for each test. Such circuits may be entirely hardware implemented for the desired test, or may comprise a separate processor programmed to effect the desired test.
In the particular example described above, the leading part of the object up to and including the representation is stored in the registers 62. Thus in the limiting cases where the object contains only one component (the representation) or no components, it is not necessary for the object to be physically present in either core store or backing store, since the whole of the object is defined within the table. It will be appreciated, however, that this arrangement is not essential to the invention. At the other extreme, the table registers could hold only object number and address, in which case the parts of the object required for the pager/indexer 28 must be retrieved from memory. There is a trade-off between the hardware complexity of the tables and the loading and updating thereof on the one hand, and the speed of operation of the pager 28a on the other.
Garbage collection
Reverting to Fig 4, it was stated above that when core store is full, objects must be removed on a predetermined criterion. This could be, for example, LIFO, FIFO, or by recording frequency of use and removing on the basis of least use. Preferably, however, the blocks to be removed are determined by scanning the table to identify new and modified blocks.
In any event, the pager/indexer 28 classifies blocks to be removed from core store as:
(a) data which has not been altered: this can be discarded, since the original is also in backing store.
(b) data which has been paged in and then altered: the original in backing store must be located and overwritten with the updated data.
(c) new data, which can simply be transferred to the end of the backing store.
As seen in Fig 4, the core store 14 is preferably provided as two banks 14A and 14B arranged for use alternately. This allows one bank to be taken out of use for removal of data blocks on overflow while the system continues in operation with the other bank active, thereby minimising the processing time lost upon memory overflow.
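A minimal sketch of this classification in Python, assuming simple "new" and "modified" flags are kept for each table entry (the patent does not specify how such bookkeeping is held); one bank is emptied while the other continues to serve the main processor.

    # Illustrative sketch of the overflow classification (names assumed).

    def classify(entry):
        """Classify a core-store block when the active bank overflows."""
        if entry.get("new"):
            return "append"       # (c) new data: transfer to the end of backing store
        if entry.get("modified"):
            return "write_back"   # (b) paged in then altered: overwrite the original
        return "discard"          # (a) unaltered: the original is already in backing store

    def collect(bank, backing):
        """Empty one core-store bank while the other bank stays in service."""
        for obj_number, entry in list(bank.items()):
            action = classify(entry)
            if action == "append":
                backing[obj_number] = entry["data"]    # append to backing store
            elif action == "write_back":
                backing[obj_number] = entry["data"]    # locate and overwrite the original
            del bank[obj_number]                       # the bank is freed for reuse

    bank_a = {1: {"data": "old", "modified": False},
              2: {"data": "changed", "modified": True},
              3: {"data": "fresh", "new": True}}
    backing = {1: "old", 2: "stale"}
    collect(bank_a, backing)
    assert backing == {1: "old", 2: "changed", 3: "fresh"} and bank_a == {}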
Thus, the pager/indexer 28 can be selectively coupled to either of the banks 14A and 14B, the other bank being coupled with an autonomous garbage collector CPU 15 which has the dedicated function of performing the above classification, which can be performed by suitable software in a manner which will be apparent to those skilled in the art.
Summary
The invention is concerned with a computer in which three areas are of significance, namely (a) the stack arrangement, (b) the memory and paging system and (c) the garbage collection system. Each of these is believed to be useful in itself, but for maximum benefit all three areas will be used together. This requires a degree of hardware complexity but has the potential to improve operating speed by orders of magnitude in comparison with conventional machines.
The preferred embodiment of the invention gives a number of advantages:-
(1) Languages, including the microcode level, can be mixed on a single stack and can call upon one another in an arbitrarily nested and recursive manner.
(2) Page faults are made invisible to the microcode level.
(3) The two above factors in combination allow the realisation of very high level instruction sets, which substantially increases overall performance of the machine.

Claims

1. A computer including a processor and a memory; the processor including an arithmetic and logic unit, a microcode control store and decoding logic, and stack means;
characterised in that the stack means is adapted to hold both data representing microcode and data representing user-generated computational data and control information, the stack means being arranged to hold either type of data without discrimination;
and in that the processor includes a plurality of registers associated with the stack means to act as stack pointers at least for microcode and for user-generated data and control information, respectively.
2. The computer of claim 1, in which the stack means is adapted to hold also control information from the registers selectively.
3. The computer of claim 1 or claim 2, in which the stack means comprises a data stack, a control stack, and means for selectively altering the control
stack pointer and at least one of the main stack pointers.
4. The computer of any preceding claim in which the memory comprises (a) backing store external to the processor and (b) processor memory, both of which
are arranged to hold data in the same format so that a one-level storage system is provided.
5. The computer of claim 4, in which data is stored in the form of blocks of varied and arbitrary size, each block comprising components of data and a tag,
the tag defining the length of that block and identifying it as being a member of a particular class of blocks for which only a certain range of operations is to be permitted.
6. The computer of claim 5, in which the memory
system further includes an autonomous processor operating
to page data from backing store to processor memory.
7. The computer of claim 6, in which the first-mentioned processor is stopped while the autonomous processor is engaged in paging in data from backing store to processor memory.
8. The computer of claim 6 or claim 7, in which the processor memory comprises two alternately usable memory banks, and in which a further processor engages in removal of data from one bank when it overflows while the main processor continues in operation using the other bank, data being removed selectively from said one bank according to a predetermined criterion.
9. A computer according to claim 1; characterised in that the backing store is arranged to hold information in data blocks comprising a tag and n components where n is zero or a positive number, and in that the memory further comprises a pager/indexer operable to locate a given data block in backing store, load it into an address in processor memory, and update a table defining the address for each data block in processor memory.
10. The computer of claim 9, in which data called for by the main processor causes the pager/indexer to check said table and, if the relevant data block is not present, generate a signal which disables the main processor clock until the relevant data block is loaded into processor memory.
11. The computer of claim 9 or claim 10, in which the pager/indexer is arranged to perform a plurality of validity tests relating to a number of features of a data block called for by the main processor.
12. The computer of claim 11, in which the pager/indexer includes a number of registers connected so as to be loaded in parallel with said features during one machine cycle.
13. The computer of claim 12, in which the pager/indexer further includes a plurality of validity testing means connected to said registers for effecting said tests in parallel during one subsequent machine cycle.
14. A computer including a main processor and a memory; the memory comprising a processor memory, and a backing store; characterised in that the backing store is arranged to hold information in data blocks comprising a tag and n components where n is zero or a positive number, and in that the memory further comprises a pager/indexer operable to locate a given data block in backing store, load it into an address in processor memory, and update a table defining the address for each data block in processor memory.
PCT/GB1986/000719 1985-11-25 1986-11-25 Computer stack arrangement WO1987003395A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB858528984A GB8528984D0 (en) 1985-11-25 1985-11-25 Computers
GB8528984 1985-11-25

Publications (2)

Publication Number Publication Date
WO1987003395A2 true WO1987003395A2 (en) 1987-06-04
WO1987003395A3 WO1987003395A3 (en) 1987-08-13

Family

ID=10588740

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB1986/000719 WO1987003395A2 (en) 1985-11-25 1986-11-25 Computer stack arrangement

Country Status (3)

Country Link
EP (1) EP0281561A1 (en)
GB (1) GB8528984D0 (en)
WO (1) WO1987003395A2 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB976499A (en) * 1960-03-16 1964-11-25 Nat Res Dev Improvements in or relating to electronic digital computing machines
US3277447A (en) * 1954-10-22 1966-10-04 Ibm Electronic digital computers
US3333251A (en) * 1964-11-13 1967-07-25 Ibm File storage system
US3737864A (en) * 1970-11-13 1973-06-05 Burroughs Corp Method and apparatus for bypassing display register update during procedure entry
US4056848A (en) * 1976-07-27 1977-11-01 Gilley George C Memory utilization system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3277447A (en) * 1954-10-22 1966-10-04 Ibm Electronic digital computers
GB976499A (en) * 1960-03-16 1964-11-25 Nat Res Dev Improvements in or relating to electronic digital computing machines
US3333251A (en) * 1964-11-13 1967-07-25 Ibm File storage system
US3737864A (en) * 1970-11-13 1973-06-05 Burroughs Corp Method and apparatus for bypassing display register update during procedure entry
US4056848A (en) * 1976-07-27 1977-11-01 Gilley George C Memory utilization system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
The 12th Annual Symposium on Computer Architecture, 17-19 June 1985, IEEE, (New York, US), E.F. GEHRINGER et al.: "Tagged Architecture: How Compelling are its Advantages ?", pages 162-170 see paragraph 2, "The Characteristics of Tagged Architecture" *
The 3rd Annual Symposium on Computer Architecture, 19-21 January 1976, IEEE, (New York, US), T.A. WELCH: "An Investigation of Descriptor Oriented Architecture", pages 141-146 see the whole document *

Also Published As

Publication number Publication date
GB8528984D0 (en) 1986-01-02
WO1987003395A3 (en) 1987-08-13
EP0281561A1 (en) 1988-09-14

Similar Documents

Publication Publication Date Title
US5812868A (en) Method and apparatus for selecting a register file in a data processing system
US5517651A (en) Method and apparatus for loading a segment register in a microprocessor capable of operating in multiple modes
US5008812A (en) Context switching method and apparatus for use in a vector processing system
US4794524A (en) Pipelined single chip microprocessor having on-chip cache and on-chip memory management unit
US5148544A (en) Apparatus and method for control of asynchronous program interrupt events in a data processing system
US4410941A (en) Computer having an indexed local ram to store previously translated virtual addresses
US6209085B1 (en) Method and apparatus for performing process switching in multiprocessor computer systems
EP0239181B1 (en) Interrupt requests serializing in a virtual memory data processing system
KR930004328B1 (en) Method and apparatus for executing instructions for a vector processing system
US5043867A (en) Exception reporting mechanism for a vector processor
US20060036824A1 (en) Managing the updating of storage keys
JP3663317B2 (en) Computer system
WO1987005417A1 (en) Instruction prefetch control apparatus
US20040123090A1 (en) Providing access to system management information
US5129071A (en) Address translation apparatus in virtual machine system using a space identifier field for discriminating datoff (dynamic address translation off) virtual machines
US5226132A (en) Multiple virtual addressing using/comparing translation pairs of addresses comprising a space address and an origin address (sto) while using space registers as storage devices for a data processing system
US4747044A (en) Direct execution of software on microprogrammable hardware
EP0196736B1 (en) Microprogram controlled data processing apparatus
US5430864A (en) Extending computer architecture from 32-bits to 64-bits by using the most significant bit of the stack pointer register to indicate word size
JP3170472B2 (en) Information processing system and method having register remap structure
CA1287177C (en) Microprogrammed systems software instruction undo
US5138617A (en) Method for masking false bound faults in a central processing unit
EP0297892B1 (en) Apparatus and method for control of asynchronous program interrupt events in a data processing system
WO1987003395A2 (en) Computer stack arrangement
US5822607A (en) Method for fast validation checking for code and data segment descriptor loads

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): JP US

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE CH DE FR GB IT LU NL SE

AK Designated states

Kind code of ref document: A3

Designated state(s): JP US

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): AT BE CH DE FR GB IT LU NL SE

WWE Wipo information: entry into national phase

Ref document number: 1986906892

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1986906892

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1986906892

Country of ref document: EP